48th week of 2013 patent application highlights part 62 |
Patent application number | Title | Published |
20130318260 | DATA TRANSFER DEVICE - A data transfer unit includes: a collection-interval storage unit that stores therein a collection interval set by the host computer; a data collection unit reading a collection interval stored in the collection-interval storage unit and collecting device data at the read collection interval; a transfer-interval storage unit storing therein a transfer interval set by the host computer, which is equal to or larger than the collection interval; a ring buffer accumulating and storing therein device data that are collected by the data collection unit and have not been transferred to the host computer by a data transferring unit; and a data transferring unit reading a transfer interval stored in the transfer-interval storage unit and collectively transferring device data accumulated and stored in the ring buffer to the host computer at the read transfer interval. | 2013-11-28 |
20130318261 | Multi-Computers Network Sharing System Having Plug-and-Play Interfaces - The present invention relates to a multi-computers network sharing system having plug-and-play interfaces, comprising: an Ethernet physical unit, a media accessing and controlling unit, a flow control unit, a plug-and-play interface transforming module, and a plurality of plug-and-play interfaces. In the multi-computers network sharing system of the present invention, when one of the external electronic devices is connected to any one of the plug-and-play interfaces, the flow control unit transmits the Ethernet signal accessed by the first media accessing and controlling unit to the plug-and-play interface transforming module, so that the plug-and-play interface transforming module transforms the Ethernet signal into a plug-and-play interface signal and then transmits the plug-and-play interface signal to the electronic device connected with the plug-and-play interface, such that the electronic device can connect to the Internet without using any network cables or Wi-Fi devices. | 2013-11-28 |
20130318262 | Data Transmission Method and Apparatus - The present invention provides a data transmission method and apparatus. The method includes: receiving a wireless data exchange request of a first data exchange apparatus; locally creating, according to the wireless data exchange request, a magnetic disk symbol associated with the first data exchange apparatus; and processing, through the magnetic disk symbol, data interaction between local data and data in the first data exchange apparatus corresponding to the magnetic disk symbol. By using the data transmission method and apparatus according to the present invention, wireless data transmission performed by a user between a handheld terminal and a computer is as simple and convenient as data transmission between local disks. | 2013-11-28 |
20130318263 | System and Method to Transmit Data over a Bus System - A system includes a bus system to connect a number of components in a chain-like structure. A first control device (e.g., microcontroller or microprocessor) is configured to control the components in a first mode of the system. A second control device (e.g., microcontroller or microprocessor) is configured to control a first subset of the components in a second mode of the system. | 2013-11-28 |
20130318264 | Optimized Link Training And Management Mechanism - In one embodiment, a converged protocol stack can be used to unify communications from a first communication protocol to a second communication protocol to provide for data transfer across a physical interconnect. This stack can be incorporated in an apparatus that includes a protocol stack for a first communication protocol including transaction and link layers, and a physical (PHY) unit coupled to the protocol stack to provide communication between the apparatus and a device coupled to the apparatus via a physical link. This PHY unit may include a physical unit circuit according to the second communication protocol. Other embodiments are described and claimed. | 2013-11-28 |
20130318265 | ELECTRONIC APPARATUS, SYSTEM INCLUDING ELECTRONIC APPARATUS AND RELAY APPARATUS, AND CONTROL METHOD FOR THE SAME - An electronic apparatus controls a peripheral device by using a relay apparatus. The electronic apparatus includes an interface connected to the peripheral device; a communication unit that performs communication with the relay apparatus; a receiver that receives a control signal for controlling the electronic apparatus; and a controller that, when the control signal is received, performs an operation based on the control signal, and controls the communication unit to transmit information about the performed operation and information about the peripheral device to the relay apparatus in order to control the peripheral device to perform an operation that corresponds to the performed operation. | 2013-11-28 |
20130318266 | ON-PACKAGE INPUT/OUTPUT ARCHITECTURE - An on-package interface includes a first set of single-ended transmitter circuits on a first die; the transmitter circuits are impedance matched and have no equalization. A first set of single-ended receiver circuits resides on a second die; the receiver circuits have no termination and no equalization. A plurality of conductive lines couples the first set of transmitter circuits and the first set of receiver circuits, and the lengths of the plurality of conductive lines are matched. | 2013-11-28 |
20130318267 | APPARATUS AND METHOD FOR POLLING ADDRESSES OF ONE OR MORE SLAVE DEVICES IN A COMMUNICATIONS SYSTEM - An address polling method and system for communicating unique slave address values to a master device over a shared bus. The method includes receiving a request signal from the master device requesting that a slave address from each slave device coupled to the data line be sent to the master; causing, in a serial manner, the data line to be placed in logic states corresponding to bit values in a first slave address; and upon the data line being placed in a logic state that is different from a corresponding bit value of the first slave address, determining that another slave device is placing its slave address on the data line and temporarily entering an idle state until such other slave device has finished communicating its slave address to the master device. | 2013-11-28 |
20130318268 | OFFLOADING OF COMPUTATION FOR RACK LEVEL SERVERS AND CORRESPONDING METHODS AND SYSTEMS - A distributed server system for handling multiple networked applications is disclosed. Systems can include at least one main processor; a plurality of offload processors connected to a memory bus; an arbiter connected to each of the plurality of offload processors, the arbiter configured to schedule resource priority for instructions or data received from the memory bus; and a virtual switch respectively connected to the main processor and the plurality of offload processors using the memory bus, with the virtual switch capable of receiving memory read/write data over the memory bus, and further directing at least some memory read/write data to the arbiter. | 2013-11-28 |
20130318269 | PROCESSING STRUCTURED AND UNSTRUCTURED DATA USING OFFLOAD PROCESSORS - Methods of processing structured data are disclosed that can include providing a plurality of XIMM modules connected to a memory bus in a first server, with the XIMM modules each respectively having a DMA slave module connected to the memory bus and an arbiter for scheduling tasks, with the XIMM modules further providing an in-memory database; and connecting a central processing unit (CPU) in the first server to the XIMM modules by the memory bus, with the CPU arranged to process and direct structured queries to the plurality of XIMM modules. | 2013-11-28 |
20130318270 | ARBITRATION CIRCUITRY AND METHOD FOR ARBITRATING BETWEEN A PLURALITY OF REQUESTS FOR ACCESS TO A SHARED RESOURCE - Arbitration circuitry for arbitrating between a plurality W of requests R for access to a shared resource. Included are state-bit storage storing I state bits Q and generating 2I output bits comprising the true and complement values of each stored state bit, and routing circuitry for generating a set of mask signals M from the output bits. Grant circuitry receives the set of mask signals and the plurality of requests, and grants access to the shared resource to an asserted request having regard to the priority ordering encoded by the set of mask signals. State bit update circuitry is responsive to a trigger condition to perform an update causing a change in the priority ordering encoded by the set of mask signals. The routing circuitry provides a pattern of connections such that each mask signal in the set is directly connected to one of said output bits. | 2013-11-28 |
20130318271 | CABLE HARNESS SWITCHES - In one implementation, a cable harness switch includes a plurality of input ports, a first plurality of output ports, a second plurality of output ports, and a circuit switch module. Each input port from the plurality of input ports is configured to be coupled to a network link. Each output port from the first plurality of output ports is configured to be coupled to a network link. Each output port from the second plurality of output ports is configured to be coupled to a network switch device. The circuit switch module is operatively coupled to the plurality of input ports, the first plurality of output ports, and the second plurality of output ports to define a network circuit including an input port from the plurality of input ports and an output port from the first plurality of output ports and the second plurality of output ports. | 2013-11-28 |
20130318272 | Entertainment System with Network of Docking Stations - An entertainment system comprising a media server networked with a plurality of docking stations is presented. The media server and docking stations can be networked together into a looped daisy-chained network to provide for content distribution to docked media players. The looped daisy-chained network retains connectivity or continuity when media players are undocked or when a connection is broken. In preferred embodiments, the entertainment system can be deployed within an aircraft as an in-flight entertainment system. | 2013-11-28 |
20130318273 | Wireless Communication Device and Method for Manufacturing Wireless Communication Device - The present invention provides a wireless communication device and a method for manufacturing a wireless communication device. The wireless communication device includes: an antenna; a main board, including a ground part, where the ground part is connected to the antenna; at least one matching network, connected to the ground part; a USB connector, including a shell and at least one first pin extending from the shell, where the at least one first pin is connected to the at least one matching network, and the at least one first pin corresponds one-to-one to the at least one matching network. According to the present invention, a matching network may be connected between a pin of the USB connector of the wireless communication device and the ground part of the main board, and is configured to control wireless performance of an antenna radiation system of the wireless communication device. | 2013-11-28 |
20130318274 | Scalable Portable-Computer System - A scalable portable-computer system is disclosed. A novel portable-computer comprises a cluster connectivity bus, hard-wired to the central and graphics processing units (CPU and GPU, respectively) of said portable computer. | 2013-11-28 |
20130318275 | OFFLOADING OF COMPUTATION FOR RACK LEVEL SERVERS AND CORRESPONDING METHODS AND SYSTEMS - A method is disclosed that includes writing data to predetermined physical addresses of a system memory, the data including metadata that identifies a processing type; configuring a processor module to include the predetermined physical addresses, the processor module being physically connected to a memory bus by a memory module connection; and processing the write data according to the processing type with an offload processor mounted on the processor module. | 2013-11-28 |
20130318276 | OFFLOADING OF COMPUTATION FOR RACK LEVEL SERVERS AND CORRESPONDING METHODS AND SYSTEMS - A system is disclosed that can include at least one processor module connectable to a memory bus. The processor module can include at least one memory, at least one offload processor mounted on the processor module and configured to execute operations on data received over the memory bus, to output context data to the memory, and to read context data from the memory, and hardware scheduling logic mounted on the processor module and configured to control operations of the at least one offload processor. | 2013-11-28 |
20130318277 | PROCESSING STRUCTURED AND UNSTRUCTURED DATA USING OFFLOAD PROCESSORS - A structured data processing system is disclosed that can include a plurality of XIMM modules connected to a memory bus in a first server, with the XIMM modules each respectively having a DMA slave module connected to the memory bus and an arbiter for scheduling tasks, with the XIMM modules providing an in-memory database; and a central processing unit (CPU) in the first server connected to the XIMM modules by the memory bus, with the CPU arranged to process and direct structured queries to the plurality of XIMM modules. | 2013-11-28 |
20130318278 | COMPUTING DEVICE AND METHOD FOR ADJUSTING BUS BANDWIDTH OF COMPUTING DEVICE - In a method for adjusting bus bandwidth applied on a computing device, the computing device includes a bus controller and several graphics processing units (GPUs). The bus controller establishes a data flow of each signal channel of the peripheral component interconnect express (PCI-E) bus connected to each GPU, and obtains a total data flow of the PCI-E bus connected to each GPU according to the data flow of each of the signal channels. If there is a fully-utilized GPU according to the total data flow of the PCI-E bus, the method locates an available idle signal channel of the PCI-E bus according to the data flow of each of the signal channels, and reroutes the data flow of the fully-utilized GPU to the idle signal channel using a switch of the bus controller. | 2013-11-28 |
20130318279 | Providing A Load/Store Communication Protocol With A Low Power Physical Unit - In one embodiment, a converged protocol stack can be used to unify communications from a first communication protocol to a second communication protocol to provide for data transfer across a physical interconnect. This stack can be incorporated in an apparatus that includes a protocol stack for a first communication protocol including transaction and link layers, and a physical (PHY) unit coupled to the protocol stack to provide communication between the apparatus and a device coupled to the apparatus via a physical link. This PHY unit may include a physical unit circuit according to the second communication protocol. Other embodiments are described and claimed. | 2013-11-28 |
20130318280 | OFFLOADING OF COMPUTATION FOR RACK LEVEL SERVERS AND CORRESPONDING METHODS AND SYSTEMS - Methods for handling multiple networked applications using a distributed server system are disclosed. Methods can include providing at least one main processor and a plurality of offload processors connected to a memory bus; providing an arbiter connected to each of the plurality of offload processors, the arbiter capable of scheduling resource priority for instructions or data received from the memory bus; and operating a virtual switch respectively connected to the main processor and the plurality of offload processors using the memory bus, with the virtual switch capable of receiving memory read/write data over the memory bus; and directing at least some memory read/write data to the arbiter from the virtual switch. | 2013-11-28 |
20130318281 | MEMORY SYSTEM IN WHICH EXTENDED FUNCTION CAN EASILY BE SET - According to one embodiment, a memory system, such as an SDIO card, includes a nonvolatile semiconductor memory device, a control section, a memory, an extended function section, and an extension register. The extended function section is controlled by the control section. A first command reads data from the extension register in units of given data lengths. A second command writes data to the extension register in units of given data lengths. The extension register includes a first area and a second area different from the first area; information configured to specify the type of the extended function and the controllable driver, together with address information indicating the place on the extension register to which the extended function is assigned, is recorded in the first area, and the second area includes the extended function. | 2013-11-28 |
20130318282 | MEMORY SYSTEM CAPABLE OF CONTROLLING WIRELESS COMMUNICATION FUNCTION - According to one embodiment, a memory system includes a nonvolatile semiconductor memory device, a controller, a memory, a wireless communication function section, and an extension register. The controller controls the nonvolatile semiconductor memory device. The memory serves as a work area of the controller. The wireless communication function section has a wireless communication function. The extension register is provided in the memory. The controller processes a first command to read data from the extension register, and a second command to write data to the extension register. The extension register records information specifying the type of the wireless communication function in a specific page, and address information indicating a region on the extension register to which the wireless communication function is assigned. | 2013-11-28 |
20130318283 | SPECIALIZING I/O ACCESS PATTERNS FOR FLASH STORAGE - Systems and methods for efficiently using solid-state devices are provided. Some embodiments provide for a data processing system that uses a non-volatile solid-state device as a circular log, with the goal of aligning data access patterns to the underlying, hidden device implementation, in order to maximize performance. In addition, metadata can be interspersed with data in order to align data access patterns to the underlying device implementation. Multiple input/output (I/O) buffers can also be used to pipeline insertions of metadata and data into a linear log. The observed queuing behavior of the multiple I/O buffers can be used to determine when the utilization of the storage device is approaching saturation (e.g., in order to predict excessively-long response times). Then, the I/O load on the storage device may be shed when utilization approaches saturation. As a result, the overall response time of the system is improved. | 2013-11-28 |
20130318284 | Data Storage Device and Flash Memory Control Method - A data storage device and a flash memory control method. The disclosed data storage device includes a random access memory, a flash memory and a controller. The flash memory provides a data space for data storage and an in-system-program (ISP) space stored with ISP codes. One of the ISP codes is a permanent-ISP code. The permanent-ISP code contains a look-up table showing how the ISP codes stored in the flash memory map to the random access memory. By the controller, the permanent-ISP code obtained from the flash memory is loaded into the random access memory. Based on the look-up table contained in the permanent-ISP code and loaded in the random access memory with the permanent-ISP code, subsequently requested ISP codes are obtained from the ISP codes of the flash memory and are loaded into the random access memory. | 2013-11-28 |
20130318285 | FLASH MEMORY CONTROLLER - An apparatus and method of managing the operation of a plurality of FLASH chips provides for a physical layer (PHY) interface to a FLASH memory circuit having a plurality of FLASH chips having a common interface bus. The apparatus has a PHY for controlling the voltages on the interface pins in accordance with a microprogrammable state machine. A data transfer in progress over the bus may be interrupted to perform another command to another chip on the shared bus and the data transfer may be resumed after completion of the another command. | 2013-11-28 |
20130318286 | MEASURE OF HEALTH FOR WRITING TO LOCATIONS IN FLASH - For each of a plurality of locations in flash memory, the number of pulses required to modify information stored in that location is obtained. The location having the largest number of pulses is selected from the plurality of locations, and the selected location is written to. | 2013-11-28 |
20130318287 | BRIDGING DEVICE HAVING A FREQUENCY CONFIGURABLE CLOCK DOMAIN - A composite memory device including discrete memory devices and a bridge device for controlling the discrete memory devices. A configurable clock controller receives a system clock and generates a memory clock having a frequency that is a predetermined ratio of the system clock. The system clock frequency is dynamically variable between a maximum and a minimum value, and the ratio of the memory clock frequency relative to the system clock frequency is set by loading a frequency register with a Frequency Divide Ratio (FDR) code any time during operation of the composite memory device. In response to the FDR code, the configurable clock controller changes the memory clock frequency. | 2013-11-28 |
20130318288 | METHOD AND SYSTEM FOR DATA DE-DUPLICATION - An apparatus may comprise a non-volatile random access memory to store data and a processor coupled to the non-volatile random access memory. The apparatus may further include a data de-duplication module operable on the processor to read a signature of incoming data, compare the signature to first data in the non-volatile random access memory, and flag the incoming data for discard when the signature indicates a match to the first data. Other embodiments are disclosed and claimed. | 2013-11-28 |
20130318289 | SELECTIVE ENABLEMENT OF OPERATING MODES OR FEATURES VIA HOST TRANSFER RATE DETECTION - Selective enablement of operating modes or features of a storage system via host transfer rate detection enables, in some situations, enhanced performance. For example, a Solid-State Disk (SSD) having a serial interface compatible with a particular serial interface standard selectively enables coalescing of status information for return to a host based on detecting a particular host transfer rate capability. Some hosts are not fully compliant with the particular standard, being unable to properly process the coalesced status information. The selective enablement disables status coalescing for a non-compliant host and enables status coalescing for at least some compliant hosts, without the SSD having prior knowledge of coupling to a noncompliant/compliant host. The SSD conservatively determines the host is non-compliant/compliant based on a negotiated speed of the serial interface, and selectively disables/enables status coalescing in response to the negotiated speed. | 2013-11-28 |
20130318290 | Garbage Collection Implemented in Hardware - A computing device is provided and includes a memory module, a sweep engine, a root snapshot module, and a trace engine. The memory module has a memory implemented as at least one hardware circuit. The memory module uses a dual-ported memory configuration. The sweep engine includes a stack pointer. The sweep engine is configured to send a garbage collection signal if the stack pointer falls below a specified level. The sweep engine is in communication with the memory module to reclaim memory. The root snapshot module is configured to take a snapshot of roots from at least one mutator if the garbage collection signal is received from the sweep engine. The trace engine receives roots from the root snapshot module and is in communication with the memory module to receive data. | 2013-11-28 |
20130318291 | METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR GENERATING TEST PACKETS IN A NETWORK TEST DEVICE USING VALUE LIST CACHING - Methods, systems, and computer readable media for generating test packets in a network test device using value list caching are disclosed. In one method, value lists are stored in dynamic random access memory (DRAM) of a network test device. Each value list includes values for user defined fields (UDFs) to be inserted in test packets. Portions of each value list are read into per-port caches. The UDF values are drained from the per-port caches by per-port stream engines to generate and send streams of test packets to one or more devices under test. The per-port caches are refilled with portions of the value lists from the DRAM at a rate sufficient to maintain the sending of the streams of test packets to the one or more devices under test. | 2013-11-28 |
20130318292 | CACHE MEMORY STAGED REOPEN - An apparatus is described. The apparatus includes a cache memory having two or more memory blocks and a central processing unit (CPU), coupled to the cache memory, to open a first memory block within the cache memory upon exiting from a low power state. | 2013-11-28 |
20130318293 | SEMICONDUCTOR MEMORY DEVICE, AND METHOD OF CONTROLLING THE SAME - A semiconductor device includes a memory core with a plurality of memory cells, an internal voltage generator and a low power entry circuit. The low power entry circuit receives a plurality of control signals which are provided to a command decoder, and generates a low power signal indicating a low power consumption mode where a refresh operation is prohibited. The internal voltage generator includes a detector and at least one booster circuit. The internal voltage generator, coupled to the memory core via an internal power supply line, generates a boosted internal voltage based on an external voltage and supplies the boosted internal voltage to the memory core via the internal power supply line. The internal voltage generator stops supplying the boosted internal voltage to the internal power supply line in response to the low power signal while the external voltage is supplied to the semiconductor device. | 2013-11-28 |
20130318294 | INTERNAL PROCESSOR BUFFER - One or more of the present techniques provide a compute engine buffer configured to maneuver data and increase the efficiency of a compute engine. One such compute engine buffer is connected to a compute engine which performs operations on operands retrieved from the buffer, and stores results of the operations to the buffer. Such a compute engine buffer includes a compute buffer having storage units which may be electrically connected or isolated, based on the size of the operands to be stored and the configuration of the compute engine. The compute engine buffer further includes a data buffer, which may be a simple buffer. Operands may be copied to the data buffer before being copied to the compute buffer, which may save additional clock cycles for the compute engine, further increasing the compute engine efficiency. | 2013-11-28 |
20130318295 | DISK STORAGE APPARATUS AND WRITE METHOD - According to one embodiment, a disk storage apparatus includes a write controller and a refresh controller. The write controller is configured to perform shingled write, writing data on a disk, using, as write units, data areas including groups of tracks. The refresh controller is configured to count the number of times the shingled write has been performed in a data area adjacent to the inner or outer circumference of a data area, in accordance with a weighting value set on the basis of a shingled write direction, and to instruct that a refresh process be performed when the number of times counted exceeds a threshold value. | 2013-11-28 |
20130318296 | STORAGE SYSTEM AND DATA TRANSFER METHOD OF STORAGE SYSTEM - One embodiment provides a storage system and a data transfer method of a storage system, particularly ones that can achieve higher data I/O performance even when hardware resources are limited. | 2013-11-28 |
20130318297 | NETWORK STORAGE SYSTEMS HAVING CLUSTERED RAIDS FOR IMPROVED REDUNDANCY AND LOAD BALANCING - A clustered network-based storage system includes a host server, multiple high availability system controller pairs, and multiple storage devices across multiple arrays. Two independent storage array subsystems each include a quorum drive copy and are each controlled by a HA pair, with remote volume mirroring links coupling the separate HA pairs. The host server includes a virtualization agent that identifies and prioritizes communication paths, and also determines capacity across all system nodes. A system storage management agent determines an overall storage profile across the system. The virtualization agent, storage management agent, quorum drive copies and remote volume mirroring link all operate to provide increased redundancy, load sharing, or both between the separate first and second arrays of storage devices. | 2013-11-28 |
20130318298 | MEMORY SYSTEMS AND METHODS FOR CONTROLLING THE TIMING OF RECEIVING READ DATA - Embodiments of the present invention provide memory systems having a plurality of memory devices sharing an interface for the transmission of read data. A controller can identify consecutive read requests sent to different memory devices. To avoid data contention on the interface, for example, the controller can be configured to delay the time until read data corresponding to the second read request is placed on the interface. | 2013-11-28 |
20130318299 | CHANGING POWER STATE WITH AN ELASTIC CACHE - An apparatus and associated method is provided employing data capacity determination logic. The logic dynamically changes a data storage capacity of an electronic data storage memory. The change in capacity is made in relation to a transient energy during a power state change sequence performed by the electronic data storage memory. | 2013-11-28 |
20130318300 | Byte Caching with Chunk Sizes Based on Data Type - Methods and apparatus are provided for performing byte caching using a chunk size based on the object type of the object being cached. Byte caching is performed by receiving at least one data packet from at least one network node; extracting at least one data object from the at least one data packet; identifying an object type associated with the at least one data packet; determining a chunk size associated with the object type; and storing at least a portion of the at least one data packet in a byte cache based on the determined chunk size. The chunk size of the object type can be determined, for example, by evaluating one or more additional criteria, such as network conditions and object size. The object type may be, for example, an image object type; an audio object type; a video object type; and a text object type. | 2013-11-28 |
20130318301 | Virtual Machine Exclusive Caching - Techniques, systems and an article of manufacture for caching in a virtualized computing environment. A method includes enforcing a host page cache on a host physical machine to store only base image data, and enforcing each of at least one guest page cache on a corresponding guest virtual machine to store only data generated by the guest virtual machine after the guest virtual machine is launched, wherein each guest virtual machine is implemented on the host physical machine. | 2013-11-28 |
20130318302 | CACHE CONTROLLER BASED ON QUALITY OF SERVICE AND METHOD OF OPERATING THE SAME - A cache controller includes an entry list determination module and a cache replacement module. The entry list determination module is configured to receive a quality of service (QoS) value of a process, and output a replaceable entry list based on the received QoS value. The cache replacement module is configured to write data in an entry included in the replaceable entry list. The process is one of a plurality of processes, each having a QoS value, and the replaceable entry list is one of a plurality of replaceable entry lists, each including a plurality of entries and each corresponding to one of the QoS values. The total number of entries is allocated among the processes based on their QoS values. | 2013-11-28 |
20130318303 | APPLICATION-RESERVED CACHE FOR DIRECT I/O - Described are embodiments of mediums, methods, and systems for application-reserved use of cache for direct I/O. A method for using application-reserved cache may include reserving, by one of a plurality of cores of a processor, use of a first portion of one of a plurality of levels of cache for an application executed by the one of the plurality of cores, and transferring, by the one of the plurality of cores, data associated with the application from an input/output (I/O) device of a computing device directly to the first portion of the one of the plurality of levels of the cache. Other embodiments may be described and claimed. | 2013-11-28 |
20130318304 | PROVIDING DATA TO A USER INTERFACE FOR PERFORMANCE MONITORING - A method, system, and computer readable storage medium for providing data to a user interface for performance monitoring are disclosed, in which a data definition is acquired, where the data definition is generated in response to a definition of the user interface. Data is acquired from data sources based on the data definition. The acquired data is processed based on the data definition, and the processed data is cached. | 2013-11-28 |
20130318305 | Method and Apparatus for Optimal Cache Sizing and Configuration for Large Memory Systems - A method for configuring a large hybrid memory subsystem having a large cache size in a computing system where one or more performance metrics of the computing system are expressed as an explicit function of configuration parameters of the memory subsystem and workload parameters of the memory subsystem. The computing system hosts applications that utilize the memory subsystem, and the performance metrics cover the use of the memory subsystem by the applications. A performance goal containing values for the performance metric is identified for the computing system. These values for the performance metrics are used in the explicit function of performance metrics, configuration parameters and workload parameters to calculate values for the configuration parameters that achieve the identified performance goal. The calculated values of the configuration parameters are implemented in the memory subsystem. | 2013-11-28 |
20130318306 | MACROSCALAR VECTOR PREFETCH WITH STREAMING ACCESS DETECTION - A method and system for implementing vector prefetch with streaming access detection is contemplated in which an execution unit such as a vector execution unit, for example, executes a vector memory access instruction that references an associated vector of effective addresses. The vector of effective addresses includes a number of elements, each of which includes a memory pointer. The vector memory access instruction is executable to perform multiple independent memory access operations using at least some of the memory pointers of the vector of effective addresses. A prefetch unit, for example, may detect a memory access streaming pattern based upon the vector of effective addresses, and in response to detecting the memory access streaming pattern, the prefetch unit may calculate one or more prefetch memory addresses based upon the memory access streaming pattern. Lastly, the prefetch unit may prefetch the one or more prefetch memory addresses into a memory. | 2013-11-28 |
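The streaming-pattern detection over a vector of effective addresses reduces to checking for a constant stride; a minimal sketch, with the stride check and prefetch count as assumptions:

```python
# Hypothetical sketch of streaming-access detection: if consecutive
# pointers in the address vector differ by one constant stride,
# prefetch the next few addresses along that stride.

def detect_stream(addresses):
    """Return the constant stride if the address vector is a stream, else None."""
    strides = {addresses[i + 1] - addresses[i] for i in range(len(addresses) - 1)}
    return strides.pop() if len(strides) == 1 else None

def prefetch_addresses(addresses, count=4):
    stride = detect_stream(addresses)
    if stride is None:
        return []                      # irregular pattern: no prefetch
    last = addresses[-1]
    return [last + stride * (i + 1) for i in range(count)]

addrs = [0x1000, 0x1040, 0x1080, 0x10C0]   # constant 64-byte stride
pf = prefetch_addresses(addrs, count=2)
```

A gather with irregular pointers yields no single stride, so the sketch prefetches nothing rather than polluting the cache.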
20130318307 | MEMORY MAPPED FETCH-AHEAD CONTROL FOR DATA CACHE ACCESSES - An apparatus including a tag comparison logic and a fetch-ahead generation logic. The tag comparison logic may be configured to present a miss address in response to detecting a cache miss. The fetch-ahead generation logic may be configured to select between a plurality of predefined fetch ahead policies in response to a memory access request and generate one or more fetch addresses based upon the miss address and a selected fetch ahead policy. | 2013-11-28 |
20130318308 | SCALABLE CACHE COHERENCE FOR A NETWORK ON A CHIP - Maintaining cache coherence in a System-on-a-Chip with both multiple cache coherent master IP cores (CCMs) and non-cache coherent master IP cores (NCMs). A plug-in cache coherence manager (CM), coherence logic in agents, and an interconnect are used for the SoC to provide a scalable cache coherence scheme that scales to an amount of CCMs in the SoC. The CCMs each includes at least one processor operatively coupled through the CM to at least one cache that stores data for that CCM. The CM maintains cache coherence responsive to a cache miss of a cache line on a first cache of the caches, and then broadcasts a request for an instance of the data corresponding to the cache miss of the cache line in the first cache. Each CCM maintains its own coherent cache and each NCM is configured to issue communication transactions into both coherent and non-coherent address spaces. | 2013-11-28 |
20130318309 | VIRTUALIZED DATA STORAGE IN A NETWORK COMPUTING ENVIRONMENT - Methods and systems for load balancing read/write requests of a virtualized storage system. In one embodiment, a storage system includes a plurality of physical storage devices and a storage module operable within a communication network to present the plurality of physical storage devices as a virtual storage device to a plurality of network computing elements that are coupled to the communication network. The virtual storage device comprises a plurality of virtual storage volumes, wherein each virtual storage volume is communicatively coupled to the physical storage devices via the storage module. The storage module comprises maps that are used to route read/write requests from the network computing elements to the virtual storage volumes. Each map links read/write requests from at least one network computing element to a respective virtual storage volume within the virtual storage device. | 2013-11-28 |
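The map-based routing that this abstract describes might be sketched as follows; the class and method names are hypothetical, and the sketch reduces "physical storage devices" to in-memory block dictionaries.

```python
# Hypothetical sketch: the storage module keeps a map from each network
# computing element to its virtual storage volume and routes read/write
# requests through that map.

class StorageModule:
    def __init__(self):
        self.maps = {}       # network element id -> virtual volume id
        self.volumes = {}    # virtual volume id -> {block number: data}

    def link(self, element_id, volume_id):
        """Record a map entry linking a network element to a virtual volume."""
        self.maps[element_id] = volume_id
        self.volumes.setdefault(volume_id, {})

    def write(self, element_id, block, data):
        vol = self.maps[element_id]    # route by the map, not by request content
        self.volumes[vol][block] = data

    def read(self, element_id, block):
        return self.volumes[self.maps[element_id]][block]

sm = StorageModule()
sm.link("host-a", "vvol-1")
sm.write("host-a", 7, b"payload")
```

Because routing is driven entirely by the map, rebalancing load is a matter of rewriting map entries rather than touching the requesting hosts.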
20130318310 | PROCESSOR PROCESSING METHOD AND PROCESSOR SYSTEM - A processor processing method is executed by a memory controller, and includes determining, based on a log of access of a shared resource by a first application, whether the first application running on a first processor operates normally; and causing a second processor to run a second application other than the first application upon the first application being determined to not be operating normally. | 2013-11-28 |
20130318311 | SYSTEM-ON-CHIP FOR PROVIDING ACCESS TO SHARED MEMORY VIA CHIP-TO-CHIP LINK, OPERATION METHOD OF THE SAME, AND ELECTRONIC SYSTEM INCLUDING THE SAME - An electronic system including a system-on-chip (SoC) providing access to a shared memory via a chip-to-chip link includes a memory device, a first semiconductor device, and a second semiconductor device. The first semiconductor device includes a first central processing unit (CPU) and a memory access path configured to enable access to the memory device. The second semiconductor device is configured to access the memory device via the memory access path of the first semiconductor device. The second semiconductor device is permitted to access the memory device while the memory access path is active and the first CPU is inactive, and the memory access path is configured to become active without intervention of the first CPU. | 2013-11-28 |
20130318312 | Method for High Performance Dump Data Set Creation - A method, system and computer-usable medium which provides a format in which data is written to a dump data set to allow use of Fast Replication technology for both backing up and restoring of both datasets and volumes. Such a format allows any data that can be captured at a track level to be written to the dump data set via Fast Replication. When using this methodology of backing up and restoring, backups should be made to devices that support Fast Replication technology and restoration of the data should be to devices which are capable of being the target of a Fast Replication for that backup device. | 2013-11-28 |
20130318313 | BACKUP IMAGE DUPLICATION - Various systems and methods for configuring a duplication operation. For example, a method involves specifying a duplication window, a source storage device, and a target storage device. When a duplication operation is executed, data is copied from the source storage device to the target storage device during the duplication window. The method also involves calculating a predicted duplication rate, where the predicted duplication rate is an estimate of a rate at which data can be copied from the source storage device to the target storage device. | 2013-11-28 |
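The predicted duplication rate can be illustrated with a small calculation; averaging past throughput samples is an assumption about how the estimate is formed, since the abstract does not specify it.

```python
# Hypothetical sketch: estimate whether a duplication window is long
# enough, given a predicted copy rate derived from past throughput.

def predicted_rate(samples_bytes_per_sec):
    """Naive estimate: mean of observed throughput samples (an assumption)."""
    return sum(samples_bytes_per_sec) / len(samples_bytes_per_sec)

def fits_window(data_bytes, window_secs, samples):
    """True if the data can be copied within the duplication window."""
    return data_bytes / predicted_rate(samples) <= window_secs

samples = [100e6, 120e6, 80e6]            # bytes/sec from past runs
rate = predicted_rate(samples)            # 100 MB/s
ok = fits_window(600e9, 7200, samples)    # 600 GB in a 2-hour window
```

At 100 MB/s the 600 GB copy needs 6,000 seconds, which fits inside the 7,200-second window, so the operation would be scheduled.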
20130318314 | MANAGING COPIES OF DATA ON MULTIPLE NODES USING A DATA CONTROLLER NODE TO AVOID TRANSACTION DEADLOCK - A data controller node receives a request to update data stored at the data controller node for a transaction managed by a transaction originator node. The data controller node locks the data at the data controller node and identifies copies of the data residing at other nodes. The data controller node sends a message to the other nodes to update the copy at the other nodes without locking the copy of the data at the other nodes. The data controller node determines whether an acknowledgment is received from each of the other nodes that the copy of the data is updated for the transaction, and updates the locked data at the data controller node for the transaction in response to receiving the acknowledgment from each of the other nodes. | 2013-11-28 |
20130318315 | Garbage Collection Implemented in Hardware - A method of garbage collection in a computing device is provided. The method includes providing a memory module having a memory implemented as at least one hardware circuit. The memory module uses a dual-ported memory configuration. The method includes triggering a garbage collection signal by a sweep engine of the computing device. The sweep engine is in communication with a memory module to reclaim memory. The method includes receiving the garbage collection signal by a root snapshot engine of the computing device. The method includes taking a snapshot of roots from at least one mutator by the root snapshot engine if the garbage collection signal is received. The method includes receiving roots from the root snapshot engine by a trace engine of the computing device. The trace engine is in communication with the memory module to receive data. | 2013-11-28 |
20130318316 | STORAGE DEVICE AND METHOD FOR CONTROLLING STORAGE DEVICE - A storage device includes a memory and a control device. The control device allocates a first storage region out of a plurality of storage regions to a first logical address specified by an external device. The control device stores, in the memory, first information associating the first logical address with a first physical address indicating the first storage region. The control device deletes, upon accepting a request for release of a second storage region indicated by a second physical address associated with a second logical address specified by the external device, second information associating the second logical address with the second physical address from the memory. The control device releases, when a copy process of copying the first data stored in the second storage region is unexecuted, the second storage region after finishing the copy process. | 2013-11-28 |
20130318317 | Volume Swapping of Point-In-Time Read-Only Target Volumes - A mechanism is provided for adding point-in-time copy relationships to a data processing system. A request is received to establish a first point-in-time copy relationship. Responsive to determining that a first target of the first point-in-time copy relationship is target write inhibited, that a source of the first point-in-time copy relationship is a source of a first continuous synchronous copy relationship, that a target of the first continuous synchronous copy relationship is part of a second point-in-time copy relationship, and that the source of the first point-in-time copy relationship is part of a volume swap configuration, a volume swap relationship is added between the first point-in-time target volume and the second point-in-time target volume to the volume swap configuration. Both point-in-time copy relationships are established and any continuous synchronous copy requirements of the volume swap relationship between the first point-in-time target volume and the second point-in-time target volume are disabled. | 2013-11-28 |
20130318318 | STORAGE CONTROLLER AND STORAGE CONTROL METHOD - Difference information between two snapshots from a first point-in-time snapshot, which has been copied, to an N.sup.th point-in-time snapshot, which constitutes the latest point-in-time snapshot, is acquired to a memory module. The memory module stores two or more pieces of difference information. The two or more pieces of difference information comprise difference information that shows the difference between a first point-in-time snapshot and any snapshot other than the first point-in-time snapshot of N snapshots. Copy difference information, which is information that shows the difference between the first point-in-time snapshot and a specified snapshot from among N snapshots, and which is used in copying the specified snapshot, is created on the basis of the two or more pieces of difference information. | 2013-11-28 |
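Composing copy-difference information from stored per-interval diffs might be sketched as a union of changed-block sets; representing each piece of difference information as a set of block numbers is an assumption for illustration.

```python
# Hypothetical sketch: each piece of difference information is a set of
# changed block numbers; the copy difference between the first snapshot
# and a specified later snapshot is the union of the intermediate diffs.

def copy_difference(interval_diffs):
    """interval_diffs[i] = blocks that changed between snapshot i and i+1."""
    changed = set()
    for diff in interval_diffs:
        changed |= diff
    return changed

diffs = [{1, 4}, {4, 9}, {2}]        # three snapshot intervals
to_copy = copy_difference(diffs)     # blocks to copy for the fourth snapshot
```

Only the union of changed blocks needs to be transferred when copying the specified snapshot, rather than the whole volume.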
20130318319 | SYSTEMS AND METHODS FOR MANAGING ZEROED LOGICAL VOLUME - A mechanism for zeroed logical volume management is disclosed. A method includes assigning, by a computing device, a bit value to each of the storage blocks in a data volume of an operating system. The method also includes permitting, by the computing device, data in the storage blocks of the data volume to be read if the bit value is set to 1. The method further includes preventing, by the computing device, the data in the storage blocks of the data volume from being read if the bit value is set to 0. | 2013-11-28 |
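The per-block bit check might be sketched as below; returning zeroes for a blocked read is an assumption (the abstract only says the read is prevented), and the class name is hypothetical.

```python
# Hypothetical sketch: one bit per storage block gates readability.

class ZeroedVolume:
    def __init__(self, num_blocks, block_size=4):
        self.bits = [0] * num_blocks                    # 0 = read prevented
        self.blocks = [b"\x00" * block_size] * num_blocks
        self.block_size = block_size

    def write(self, idx, data):
        self.blocks[idx] = data
        self.bits[idx] = 1          # mark the block as readable

    def read(self, idx):
        if self.bits[idx] == 1:
            return self.blocks[idx]
        return b"\x00" * self.block_size  # assumption: present unread blocks as zeroed

vol = ZeroedVolume(4)
vol.write(2, b"data")
```

The benefit is that a freshly allocated volume never has to be physically zeroed: blocks whose bit is still 0 simply cannot leak stale data.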
20130318320 | SYNCHRONIZE A PROCESS STATE - A device to store a process state to a non-volatile storage medium in response to the process state being loaded onto a memory, detect the device entering into a hibernation state, and synchronize stored content from the process state with current content of a current process state before entering the hibernation state. | 2013-11-28 |
20130318321 | BUFFER CONTROL CIRCUIT OF SEMICONDUCTOR MEMORY APPARATUS - A buffer control circuit of a semiconductor memory apparatus includes a delay unit configured to determine delay amounts for a command in response to a plurality of command latency signals, delay the command according to a clock, and generate a plurality of delayed signals; and a buffer control signal generation unit configured to receive the plurality of command latency signals and the plurality of delayed signals, and generate a buffer control signal. | 2013-11-28 |
20130318322 | Memory Management Scheme and Apparatus - A memory management apparatus includes a first controller adapted to receive an input data sequence including one or more data frames and operative: to separate each of the data frames into a payload data portion and a header portion; to store the payload data portion in at least one available memory location in a physical storage space; and to store in a logical storage space the header portion along with at least one associated index indicating where in the physical storage space the corresponding payload data portion resides. The apparatus further includes a second controller operative, as a function of a data read request, to access the physical storage space using the header portion and associated index from the logical storage space to retrieve the corresponding payload data portion and to combine the header portion with the payload data portion to generate a response to the data read request. | 2013-11-28 |
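The header/payload separation with an index back into physical storage might be sketched as follows; the class name, list-backed stores, and fixed header length are assumptions for illustration.

```python
# Hypothetical sketch: frames are split into a header and a payload; the
# payload goes to "physical" storage, and the header is kept in a
# "logical" store together with an index pointing at its payload.

class FrameStore:
    def __init__(self):
        self.physical = []   # payload data portions
        self.logical = []    # (header portion, index into physical storage)

    def store(self, frame, header_len):
        header, payload = frame[:header_len], frame[header_len:]
        self.physical.append(payload)
        self.logical.append((header, len(self.physical) - 1))

    def read(self, logical_idx):
        """Recombine header and payload to generate the read response."""
        header, idx = self.logical[logical_idx]
        return header + self.physical[idx]

fs = FrameStore()
fs.store(b"HDR1payload-a", header_len=4)
fs.store(b"HDR2payload-b", header_len=4)
```

Keeping headers in a compact logical store lets read requests be matched against headers alone, with the bulkier payloads fetched only on a hit.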
20130318323 | APPARATUS AND METHOD FOR ACCELERATING OPERATIONS IN A PROCESSOR WHICH USES SHARED VIRTUAL MEMORY - An apparatus and method are described for coupling a front end core to an accelerator component (e.g., such as a graphics accelerator). For example, an apparatus is described comprising: an accelerator comprising one or more execution units (EUs) to execute a specified set of instructions; and a front end core comprising a translation lookaside buffer (TLB) communicatively coupled to the accelerator and providing memory access services to the accelerator, the memory access services including performing TLB lookup operations to map virtual to physical addresses on behalf of the accelerator and in response to the accelerator requiring access to a system memory. | 2013-11-28 |
20130318324 | MINICORE-BASED RECONFIGURABLE PROCESSOR AND METHOD OF FLEXIBLY PROCESSING MULTIPLE DATA USING THE SAME - A minicore-based reconfigurable processor and a method of flexibly processing multiple data using the same are provided. The reconfigurable processor includes minicores, each of the minicores including function units configured to perform different operations, respectively. The reconfigurable processor further includes a processing unit configured to activate two or more function units of two or more respective minicores, among the minicores, that are configured to perform an operation of a single instruction multiple data (SIMD) instruction, the processing unit further configured to execute the SIMD instruction using the activated two or more function units. | 2013-11-28 |
20130318325 | COMPOSITE PROCESSORS - In one example, a composite processor is described. | 2013-11-28 |
20130318326 | Self-Similar Processing Network - Self-similar processing by unit processing cells may together solve a problem. A unit processing cell may include a processor, a memory and a plurality of Input/Output (IO) channels coupled to the processor. The memory may include a dictionary having one or more instructions that configure the processor to perform at least one function. The plurality of IO channels may be used to communicably couple the unit processing cell with a plurality of other unit processing cells each including their own respective dictionary. The processor may update the dictionary so that the unit processing cell builds a different dictionary from the plurality of other unit processing cells, thereby being self-similar to the plurality of other unit processing cells. | 2013-11-28 |
20130318327 | Method and apparatus for data processing - A method for processing an operating sequence of instructions of a program in a processor, wherein each instruction is represented by an assigned instruction code which comprises one execution step to be processed by the processor or a plurality of execution steps to be processed successively by the processor, includes determining an actual signature value assigned to a current execution step of the execution steps of the instruction code representing the instruction of the operating sequence; determining, in a manner dependent on an address value, a desired signature value assigned to the current execution step; and if the actual signature value does not correspond to the desired signature value, omitting at least one execution step directly available for execution and/or an execution step indirectly available for execution. | 2013-11-28 |
20130318328 | APPARATUS AND METHOD FOR SHUFFLING FLOATING POINT OR INTEGER VALUES - An apparatus and method are described for shuffling data elements from source registers to a destination register. For example, a method according to one embodiment includes the following operations: reading each mask bit stored in a mask data structure, the mask data structure containing mask bits associated with data elements of a destination register, the values usable for determining whether a masking operation or a shuffle operation should be performed on data elements stored within a first source register and a second source register; for each data element of the destination register, if a mask bit associated with the data element indicates that a shuffle operation should be performed, then shuffling data elements from the first source register and the second source register to the specified data element within the destination register; and if the mask bit indicates that a masking operation should be performed, then performing a specified masking operation with respect to the data element of the destination register. | 2013-11-28 |
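The per-element choice between shuffling and masking might be sketched as a scalar loop; treating the two source registers as one indexable pool and zeroing as the masking operation are assumptions, since the abstract leaves the masking operation unspecified.

```python
# Hypothetical sketch: when a mask bit is set, take a shuffled element
# from the sources; otherwise apply a masking operation (here, zeroing).

def masked_shuffle(src1, src2, indices, mask):
    """indices[i] selects from the concatenation of src1 and src2."""
    combined = src1 + src2
    dest = []
    for i, m in enumerate(mask):
        if m:                         # shuffle: pick the indexed source element
            dest.append(combined[indices[i]])
        else:                         # mask: zero the destination element (assumed)
            dest.append(0)
    return dest

out = masked_shuffle([1.0, 2.0], [3.0, 4.0],
                     indices=[3, 0, 2, 1], mask=[1, 1, 0, 1])
```

In hardware all destination elements are produced in parallel; the scalar loop only makes the per-element control decision explicit.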
20130318329 | CO-PROCESSOR FOR COMPLEX ARITHMETIC PROCESSING, AND PROCESSOR SYSTEM - In order to enable to quickly and efficiently execute, by one system, various modulation/demodulation/synchronous processes in a plurality of radio communication methods, a co-processor for complex arithmetic processing is provided. | 2013-11-28 |
20130318330 | PREDICTING AND AVOIDING OPERAND-STORE-COMPARE HAZARDS IN OUT-OF-ORDER MICROPROCESSORS - A method and information processing system manage load and store operations that can be executed out-of-order. At least one of a load instruction and a store instruction is executed. A determination is made that an operand store compare hazard has been encountered. An entry within an operand store compare hazard prediction table is created based on the determination. The entry includes at least an instruction address of the instruction that has been executed and a hazard indicating flag associated with the instruction. The hazard indicating flag indicates that the instruction has encountered the operand store compare hazard. When a load instruction is associated with the hazard indicating flag, the load instruction becomes dependent upon all store instructions associated with a substantially similar hazard indicating flag. | 2013-11-28 |
20130318331 | START CONTROL APPARATUS, INFORMATION DEVICE, AND START CONTROL METHOD - A CPU includes a code write unit which writes an interrupt generation code into a page in which the instructions stored in the non-volatile memory are not written, among a plurality of the pages included in an instruction area that is an area of the volatile memory into which the instructions are written, the interrupt generation code being a code for generating a software interrupt, an instruction transfer unit which transfers the instructions from the non-volatile memory to a corresponding page of the volatile memory that is a page in which the interrupt generation code generating the software interrupt is stored when the software interrupt is generated by the interrupt generation code, the instructions being to be stored in the corresponding page, and an instruction execution unit which executes the instructions stored in the instruction area, and when the interrupt generation code is executed, generates a software interrupt. | 2013-11-28 |
20130318332 | BRANCH MISPREDICTION BEHAVIOR SUPPRESSION USING A BRANCH OPTIONAL INSTRUCTION - A method for suppressing branch misprediction behavior is contemplated in which a branch-optional instruction that would cause the flow of control to branch around instructions in response to a determination that a predicate vector is null is predicted not taken. However, in response to detecting that the prediction is incorrect, misprediction behavior is inhibited. | 2013-11-28 |
20130318333 | OPERATING PROCESSORS OVER A NETWORK - A client processor can save an execution state of a process that runs on two or more secondary processors in a single file. The single file can be transferred from the client processor over a network to a host processor. The single file is configured to permit the host processor to resume processing of the suspended process. | 2013-11-28 |
20130318334 | DYNAMIC INTERRUPT RECONFIGURATION FOR EFFECTIVE POWER MANAGEMENT - Methods, apparatus, and systems for facilitating effective power management through dynamic reconfiguration of interrupts. Interrupt vectors are mapped to various processor cores in a multi-core processor, and interrupt workloads on the processor cores are monitored. When an interrupt workload for a given processor core is detected to fall below a threshold, the interrupt vectors are dynamically reconfigured by remapping interrupt vectors that are currently mapped to the processor core to at least one other processor core, such that there are no interrupt vectors mapped to the processor core after reconfiguration. The core is then enabled to be put in a deeper idle state. Similar operations can be applied to additional processor cores, effecting a collapsing of interrupt vectors onto fewer processor cores. In response to detecting cores emerging from idle states, reconfiguration of interrupt vectors can be performed to rebalance the assignment of the vectors across active cores by remapping a portion of the vectors to those cores. | 2013-11-28 |
20130318335 | COMPUTING DEVICE AND METHOD OF CAPTURING SHUTDOWN CAUSE IN SAME - A computing device includes an embedded controller. The embedded controller suspends a shutdown process in response to detection of a shutdown command, and obtains a shutdown cause according to the shutdown command. The embedded controller includes a memory. The embedded controller stores the shutdown cause in the memory, and resumes the shutdown process after the shutdown cause has been stored in the memory. A method of capturing shutdown cause in a computing device is also provided. | 2013-11-28 |
20130318336 | Method for Executing Bios Tool Program in Non-SMI Mechanism - A method for executing a Basic Input Output System (BIOS) tool program in a non-System Management Interrupt (SMI) mechanism is applicable to a computer and includes: bi-directionally transmitting, by an ACPI ASL module and a service module, a corresponding trigger signal; bi-directionally transmitting, by the service module and a driver, the trigger signal; bi-directionally transmitting, by the driver and a real-time service module of a BIOS, the trigger signal; and performing, by the BIOS, event processing according to the trigger signal to obtain a processing result, or performing, by the BIOS, a logic operation on the data to obtain operation data. | 2013-11-28 |
20130318337 | DMI REDUNDANCY IN MULTIPLE PROCESSOR COMPUTER SYSTEMS - In accordance with various aspects of the disclosure, a method and apparatus are disclosed that includes aspects of monitoring a first processor of a computer by a monitoring module for a first processor instability; determining if the first processor is stable based on the monitored first processor instability; routing operational priority to a second processor of the computer through a multiplexer module if the first processor is determined not to be stable, wherein a first interface of the first processor and a second interface of the second processor are in communication with the multiplexer module and wherein the first processor and the second processor are in communication by a processor interconnect; and operating the computer using the second processor. | 2013-11-28 |
20130318338 | Selective Management Controller Authenticated Access Control to Host Mapped Resources - An information handling system includes a host mapped general purpose input output (GPIO), a shared memory, a board management controller, and a cryptography engine. The host mapped GPIO includes a plurality of registers. The board management controller is in communication with the host mapped GPIO and with the shared memory, and is configured to control accessibility to the plurality of registers in the GPIO, and to control write accessibility of the shared memory based on a private key received from a basic input output system requesting accessibility to the plurality of registers and write accessibility of the shared memory. The cryptography engine is in communication with the board management controller, and is configured to authenticate the private key received from the board management controller. | 2013-11-28 |
20130318339 | Systems and Methods for Protecting Communications Between Nodes - Systems and methods for protecting communications between at least two nodes protect the identity of a node requesting information, provide content of communications being sent and/or obscuring a type of communications being sent. Varying degrees of protection options including encryption, intermediate node termination and direct node communications are provided. | 2013-11-28 |
20130318340 | Flexible Method for Modifying a Cipher to Enable Splitting and Zippering - A cryptographic framework embodies modular methods for securing data, both at rest and in motion, via an extensible encryption method. Key derivation and synchronization methods are defined. Using a small set of initialization values (keys), a multi-dimensional geometric form is constructed from which two or more entities (participants) may derive the same discrete set of public and secret keys. Participants can initialize a random number generation method of practically infinite non-repeating length. Furthermore, the random number generator can be used as a One Time Pad synchronized between participants, without ever exchanging said One Time Pad. Furthermore, a method is provided for ciphering and deciphering data, including a method for splitting the encrypted data into multiple files or streams and for recombining the original data. Finally, a method is provided for extending the encryption to include a practically unlimited number of external authentication factors without negatively impacting encryption performance while simultaneously increasing cryptographic strength. | 2013-11-28 |
20130318341 | Highly Scalable Architecture for Application Network Appliances - A highly scalable application network appliance is described herein. According to one embodiment, a network element includes a switch fabric, a first service module coupled to the switch fabric, and a second service module coupled to the first service module over the switch fabric. In response to packets of a network transaction received from a client over a first network to access a server of a data center having multiple servers over a second network, the first service module is configured to perform a first portion of OSI (open system interconnection) compatible layers of network processes on the packets while the second service module is configured to perform a second portion of the OSI compatible layers of network processes on the packets. The first portion includes at least one OSI compatible layer that is not included in the second portion. Other methods and apparatuses are also described. | 2013-11-28 |
20130318342 | Method and System for Generating Implicit Certificates and Applications to Identity-Based Encryption (IBE) - The invention relates to a method of generating an implicit certificate and a method of generating a private key from a public key. The implicit certificate is generated in three phases. The public key may be an entity's identity or derived from an entity's identity. Only the owner of the public key possesses complete information to generate the corresponding private key. No authority is required to, nor able to, generate an entity's private key. | 2013-11-28 |
20130318343 | SYSTEM AND METHOD FOR ENABLING UNCONFIGURED DEVICES TO JOIN AN AUTONOMIC NETWORK IN A SECURE MANNER - A method in an example embodiment includes creating an initial information package for a device in a domain of a network environment when the device is unconfigured. The method further includes communicating the initial information package to a signing authority, receiving an authorization token from the signing authority, and sending the authorization token to the unconfigured device, where the unconfigured device validates the authorization token based on a credential in the unconfigured device. In more specific embodiments, the initial information package includes a unique device identifier of the unconfigured device and a domain identifier of the domain. In further embodiments, the signing authority creates the authorization token by applying an authorization signature to the unique device identifier and the domain identifier. In other embodiments, the method includes receiving an audit history report of the unconfigured device and applying a policy to the device based on the audit history report. | 2013-11-28 |
20130318344 | SYSTEM AND METHOD FOR PROCESSING ENCODED MESSAGES FOR EXCHANGE WITH A MOBILE DATA COMMUNICATION DEVICE - A system and method are provided for pre-processing encrypted and/or signed messages at a host system before the message is transmitted to a wireless mobile communication device. The message is received at the host system from a message sender. There is a determination as to whether any of the message receivers has a corresponding wireless mobile communication device. For each message receiver that has a corresponding wireless mobile communication device, the message is processed so as to modify the message with respect to one or more encryption and/or authentication aspects. The processed message is transmitted to a wireless mobile communication device that corresponds to the first message receiver. The system and method may include post-processing messages sent from a wireless mobile communications device to a host system. Authentication and/or encryption message processing is performed upon the message. The processed message may then be sent through the host system to one or more receivers. | 2013-11-28 |
20130318345 | MULTI-TUNNEL VIRTUAL PRIVATE NETWORK - Systems and methods for controlling Quality-of-Service ("QoS") in a Virtual Private Network ("VPN") in a transport network | 2013-11-28 |
20130318346 | OBTAINING TARGETED SERVICES USING A UNIQUE IDENTIFICATION HEADER (UIDH) - A system is configured to receive, from a user device, a request for content; obtain, based on receiving the request, an identifier for a subscriber associated with the system and a key; encode the identifier and the key to create a unique identifier; store the unique identifier in the request to create a modified request; provide the modified request to a content provider identified by the request; receive, from the content provider, the content and targeted content, the targeted content being associated with the unique identifier and conforming to an attribute of the subscriber; and provide, to the user device, the content and the targeted content. | 2013-11-28 |
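The request modification described here (encode a subscriber identifier and key into an opaque value, store it in the request, forward the modified request) can be sketched like this. The header name and the HMAC-based encoding are assumptions; the abstract only requires that identifier and key be combined into a unique identifier:

```python
# Minimal sketch of injecting a UIDH-style unique identifier into an
# outgoing request. Header name and KDF are illustrative assumptions.
import base64
import hashlib
import hmac

NETWORK_KEY = b"operator-side-secret"  # assumed key held by the system

def make_uidh(subscriber_id: str) -> str:
    """Encode subscriber identifier + key into an opaque unique identifier."""
    mac = hmac.new(NETWORK_KEY, subscriber_id.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(mac[:15]).decode()

def modify_request(headers: dict, subscriber_id: str) -> dict:
    """Store the unique identifier in the request to create a modified request."""
    out = dict(headers)
    out["X-UIDH"] = make_uidh(subscriber_id)  # header name is an assumption
    return out

req = modify_request({"Host": "content.example"}, "subscriber-123")
assert "X-UIDH" in req and req["Host"] == "content.example"
```

Because the identifier is stable per subscriber, the content provider can associate targeted content with it without learning the raw subscriber identity.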
20130318347 | PRIVATE DATA SHARING SYSTEM - A novel architecture for a data sharing system (DSS) is disclosed and seeks to ensure the privacy and security of users' personal information. In this type of network, a user's personally identifiable information is stored and transmitted in an encrypted form, with few exceptions. The only key with which that encrypted data can be decrypted, and thus viewed, remains in the sole possession of the user and the user's friends/contacts within the system. This arrangement ensures that a user's personally identifiable information cannot be examined by anyone other than the user or his friends/contacts. This arrangement also makes it more difficult for the web site or service hosting the DSS to exploit its users' personally identifiable information. Such a system facilitates the encryption, storage, exchange and decryption of personal, confidential and/or proprietary data. | 2013-11-28 |
20130318348 | SYSTEM AND METHOD FOR PROCESSING TRANSACTIONS - Embodiments of the invention include methods, systems, and computer-readable media for processing transactions involving sensitive information, such as a credit card number. Embodiments include a first server authenticating a second server based on a security token and determining whether the security token is expired. Based on the results, the first server may request a transaction token associated with sensitive information. The first server may encrypt the transaction token using a public key of the second server. The first server may send the encrypted transaction token as a parameter to a URL, wherein the URL is configured to cause a browser on a client to send, to the second server, a request for the page and the encrypted transaction token. | 2013-11-28 |
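The last step of this abstract (encrypt the transaction token with the second server's public key and pass it as a URL parameter for the client's browser to relay) can be sketched as follows. Textbook RSA with tiny primes stands in for the second server's real key pair; every value is illustrative and nowhere near secure:

```python
# Sketch of passing an encrypted transaction token as a URL parameter.
from urllib.parse import urlencode, urlparse, parse_qs

# Toy RSA key pair "belonging to" the second server (assumed values).
p, q = 999983, 1000003
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))        # second server's private exponent

token = 123456                            # stand-in transaction token
cipher = pow(token, e, n)                 # first server encrypts with public key

# First server builds the URL the client's browser will follow.
url = "https://second.example/pay?" + urlencode({"tok": cipher})

# Second server extracts the parameter and recovers the token.
received = int(parse_qs(urlparse(url).query)["tok"][0])
assert pow(received, d, n) == token
```

Routing the ciphertext through the browser means the client never sees the sensitive value in the clear, which is the point of tokenizing it on the first server.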
20130318349 | PROCESSING OF COMMUNICATION DEVICE SIGNATURES FOR USE IN SECURING NOMADIC ELECTRONIC TRANSACTIONS - A method for execution in a communication device, which comprises receiving a first data set and a second data set over a first communication path; receiving a series of requests over a local communication path different from the first communication path; responding to a first one of the requests by releasing a first response including the first data set over the local communication path; and responding to a second one of the requests by releasing a second response including the second data set over the local communication path. | 2013-11-28 |
20130318350 | INFORMATION PROCESSING APPARATUS AND METHOD, RECORDING MEDIUM AND PROGRAM - The present invention relates to an information processing apparatus allowing proper communication with a communication partner in accordance with a communication time of the communication partner. | 2013-11-28 |
20130318351 | SIMILARITY DEGREE CALCULATION SYSTEM, SIMILARITY DEGREE CALCULATION APPARATUS, COMPUTER PROGRAM, AND SIMILARITY DEGREE CALCULATION METHOD - Based on an encrypted feature vector (comparison ciphertext) encrypted with a public key of a decryption apparatus and an encrypted feature vector (target ciphertext) encrypted with the public key of the decryption apparatus, and a random number (temporary key) generated by a random number generation unit (temporary key generation unit), an encrypted random similarity degree calculation unit (interim similarity degree ciphertext calculation unit) performs calculation for calculating a similarity degree in a first stage, with two encrypted feature vectors kept encrypted, thereby calculating a second challenge. The decryption apparatus decrypts the second challenge with a secret key sk of the decryption apparatus, and performs calculation for calculating the similarity degree in a second stage with a result of the decryption kept encrypted with the temporary key, thereby calculating a second response. A plaintext similarity degree extraction unit (similarity degree calculation unit) decrypts the second response with the temporary key, thereby calculating a similarity degree. | 2013-11-28 |
20130318352 | COMMUNICATION SETUP METHOD AND WIRELESS CONNECTION DEVICE - A method of setting up wireless communication between a client device and a wireless connection device, the method including: establishing non-limited, temporary communication between devices; obtaining an identifier assigned to a client device or an identifier assigned to connection between the client device and a wireless connection device; limiting a device accessing the temporary communication by using the obtained identifier; causing the client device to receive a file for communication settings for the wireless connection device; establishing encrypted communication in conformity with a predetermined protocol; and causing information on communication settings to be exchanged via the encrypted communication. | 2013-11-28 |
20130318353 | Method for Creating and Installing a Digital Certificate - The invention comprises a method of creating a certificate based on the contents of another certificate. The certificate is then automatically installed and configured on the server where it will be used. A further enhancement automatically requests and installs the certificate prior to an existing certificate's expiration. | 2013-11-28 |
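The enhancement described here (automatically request and install a replacement before the existing certificate expires, based on the contents of the old one) can be sketched as a renewal check. The `Cert` class, the 30-day window, and `request_certificate()` are illustrative stand-ins, not the patented implementation:

```python
# Sketch of certificate auto-renewal prior to expiration.
from dataclasses import dataclass, replace
from datetime import datetime, timedelta, timezone

RENEW_WINDOW = timedelta(days=30)  # assumed renewal threshold

@dataclass
class Cert:
    subject: str
    not_after: datetime

def request_certificate(template: Cert) -> Cert:
    """Create a new certificate based on the contents of another (stand-in)."""
    return replace(template, not_after=template.not_after + timedelta(days=365))

def maybe_renew(cert: Cert, now: datetime) -> Cert:
    """Request a replacement when the existing certificate nears expiry."""
    if cert.not_after - now <= RENEW_WINDOW:
        return request_certificate(cert)  # automatic install would follow
    return cert

now = datetime(2013, 11, 28, tzinfo=timezone.utc)
old = Cert("www.example.com", now + timedelta(days=10))
new = maybe_renew(old, now)
assert new.not_after > old.not_after   # inside the window: renewed
assert maybe_renew(new, now) is new    # fresh certificate left untouched
```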
20130318354 | METHOD FOR GENERATING A CERTIFICATE - The invention relates to a method for generating a certificate for signing electronic documents by means of an ID token | 2013-11-28 |
20130318355 | METHOD FOR MANAGING CONTENT ON A SECURE ELEMENT CONNECTED TO AN EQUIPMENT - The invention concerns a method for managing content on a secure element connected to an equipment, this content being managed on the secure element from a distant administrative platform. According to the invention, the method consists in: establishing, at the level of the administrative platform, a secure channel between the equipment and the administrative platform, using session keys generated by the secure element and transmitted to the equipment; transmitting to the administrative platform a request to manage content of the secure element; and verifying at the level of the administrative platform that this request originates from the same secure element that generated the session keys and, if so, authorizing the management or, if not, forbidding it. | 2013-11-28 |
20130318356 | DISTRIBUTION OF DIGITAL CONTENT PROTECTED BY WATERMARK-GENERATING PASSWORD - A receiver receives digital content scrambled using a control word and a user code for the scrambled content. A user inputs the user code that is forwarded to a code extractor that generates the control word and a user identifier from it. The control word is sent to a descrambler, a watermark information generator and a visible watermark insertion unit. The descrambler descrambles the scrambled content using the control word, an invisible watermark insertion unit inserts invisible watermark information obtained from the watermark information generator into the descrambled content and the visible watermark insertion unit inserts the user identifier as a visible watermark. Also provided are a corresponding method for processing digital content and a method and a device for generating the user code. | 2013-11-28 |
20130318357 | System and Method for Secure Software Update - A secure software update provides an update utility with an update definition, a private encryption key and a public signature key to a target device. A software update package is prepared on portable media that includes an executable update program, a checksum for the program that is encrypted with a symmetrical key, an encrypted symmetrical key that is encrypted with a public encryption key and a digital signature prepared with a private signature key. The update process authenticates the digital signature, decrypts the symmetrical key using the private encryption key, and decrypts the checksum using the symmetrical key. A new checksum is generated for the executable update program and compared to the decrypted checksum. If inconsistencies are detected during the update process, the process is terminated. Otherwise, the software update can be installed with a relatively high degree of assurance against corruption, viruses and third party interference. | 2013-11-28 |
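The verification chain in this abstract (authenticate the signature, decrypt the symmetric key, decrypt the stored checksum, recompute and compare) can be sketched end to end. HMAC and XOR stand in for the asymmetric and symmetric primitives, which the abstract does not name; every key value is illustrative:

```python
# Sketch of the update-package checks. XOR/HMAC are toy stand-ins for
# the real symmetric cipher and digital-signature scheme.
import hashlib
import hmac

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

SIGN_KEY = b"signature-key"      # stand-in for the signature key pair
PRIV_ENC_KEY = b"device-key"     # stand-in for the device's private key

program = b"#!/bin/sh\necho updating..."
sym_key = b"0123456789abcdef"

# Package preparation (publisher side): encrypted checksum, encrypted
# symmetric key, and a signature over the whole package.
enc_checksum = xor(hashlib.sha256(program).digest(), sym_key)
enc_sym_key = xor(sym_key, PRIV_ENC_KEY)
signature = hmac.new(SIGN_KEY, program + enc_checksum + enc_sym_key,
                     hashlib.sha256).digest()

# Update process (device side): any failed check terminates the update.
assert hmac.compare_digest(signature, hmac.new(
    SIGN_KEY, program + enc_checksum + enc_sym_key, hashlib.sha256).digest())
recovered_key = xor(enc_sym_key, PRIV_ENC_KEY)
stored = xor(enc_checksum, recovered_key)
assert stored == hashlib.sha256(program).digest()  # checksums match: install
```

A single flipped byte in `program` breaks the final comparison, which is the corruption/tampering guarantee the abstract describes.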
20130318358 | APPARATUS FOR GENERATING SECURE KEY USING DEVICE AND USER AUTHENTICATION INFORMATION - A secure key generating apparatus comprising: an ID calculating unit receiving a first primitive ID from a first storage device and calculating a first media ID (a unique identifier of the first storage device) from the first primitive ID; a user authentication information providing unit providing user authentication information for authenticating the current user; and a secure key generating unit for generating a first Secure Key using both the first media ID and the first user's authentication information. The Secure Key is used to encrypt/decrypt content stored in the first storage device. The secure key generating unit generates a different Secure Key using a second media ID of a second storage device, and generates another different Secure Key using a second user's authentication information. Only the first Secure Key can be used to decrypt encrypted content stored in the first storage device that was encrypted using the first Secure Key. | 2013-11-28 |
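The key property above (changing either the media ID or the user's authentication information yields a different Secure Key) can be sketched with a standard KDF. PBKDF2-SHA256 is an assumed choice; the abstract only requires that both inputs contribute to the key:

```python
# Sketch of deriving a per-device, per-user content key from a storage
# device's media ID and the user's authentication information.
import hashlib

def secure_key(media_id: bytes, user_auth: bytes) -> bytes:
    # KDF choice and iteration count are illustrative assumptions.
    return hashlib.pbkdf2_hmac("sha256", user_auth, media_id, 10_000)

k1 = secure_key(b"media-id-A", b"alice-credential")
assert k1 == secure_key(b"media-id-A", b"alice-credential")  # reproducible
assert k1 != secure_key(b"media-id-B", b"alice-credential")  # other device
assert k1 != secure_key(b"media-id-A", b"bob-credential")    # other user
```

Binding the key to both inputs means content copied to a second device, or accessed by a second user, cannot be decrypted, matching the abstract's last sentence.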
20130318359 | SYSTEMS AND METHODS FOR VERIFYING UNIQUENESS IN ANONYMOUS AUTHENTICATION - A method for anonymous authentication by an electronic device is described. The method includes obtaining biometric data. The method also includes generating a token. The method also includes blinding the token to produce a blinded token. The method also includes sending the blinded token and biometric information based on the biometric data to a verifier. The method also includes receiving a signature of the blinded token from the verifier if corresponding biometric information is not stored by the verifier. | 2013-11-28 |
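The blind/unblind step in this abstract (the verifier signs the blinded token without ever seeing the token itself) can be sketched with a textbook RSA blind signature. The tiny primes and the token value are illustrative; this is an assumed instantiation, since the abstract does not specify the signature scheme:

```python
# Sketch of anonymous token signing via a textbook RSA blind signature.
import secrets

p, q = 999983, 1000003             # toy primes; real keys are far larger
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # verifier's private signing exponent

token = 424242                     # token generated by the device

# Device blinds the token with a random, invertible factor r.
while True:
    r = secrets.randbelow(n - 2) + 2
    try:
        r_inv = pow(r, -1, n)
        break
    except ValueError:             # r not invertible mod n (very unlikely)
        continue
blinded = (token * pow(r, e, n)) % n

# Verifier signs the blinded token; it never learns `token`.
blinded_sig = pow(blinded, d, n)

# Device unblinds: (token^d * r) * r^-1 = token^d, a valid signature.
sig = (blinded_sig * r_inv) % n
assert pow(sig, e, n) == token
```

Because the verifier only ever sees `blinded`, it can refuse to sign duplicates (the uniqueness check) without linking the signature back to the device.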