8th week of 2013 patent application highlights part 46
Patent application number | Title | Published |
20130046885 | System and Method for Performing Capacity Planning for Enterprise Applications - A system and method for capacity planning in enterprise networks are provided, including identifying bottlenecks and removing or replacing the bottleneck device. The device utilization for one or more network devices is measured or read from measured data. A relative load is calculated from the device utilization data, and device utilization is compared to a device threshold to determine the bottleneck device. A method is also provided for determining network utilizations, network populations, and relative response times based on only limited measurable device usage data. | 2013-02-21 |
20130046886 | Controlling a Network Connection Status Indicator - This disclosure describes techniques for restricting activity of a status indicator if a received data unit is determined to be a protocol control unit that is selected for filtering. In one embodiment, a method is described that comprises receiving a data unit from a network, determining whether the received data unit is a protocol control unit, and restricting activity of a status indicator if the received data unit is determined to be the protocol control unit, or allowing activity of the status indicator if the received data unit is determined to be data other than the protocol control unit. | 2013-02-21 |
20130046887 | NETWORK CAPACITY PLANNING FOR MULTIPLE INSTANCES OF AN APPLICATION - Data representing application deployment attributes, network topology, and network performance attributes based on a reduced set of element attributes is utilized to simulate application deployment. The data may be received directly from a user, from a program that models a network topology or application behavior, or from a wizard that infers the data based on an interview process. The simulation may be based on application deployment attributes including application traffic pattern, application message sizes, network topology, and network performance attributes. The element attributes may be determined from a lookup table of element operating characteristics that may contain element maximum and minimum boundary operating values utilized to interpolate other operating conditions. Application response time may be derived using an iterative analysis based on multiple instances of one or more applications, wherein either a predetermined number of iterations is used or iteration continues until a substantially steady state of network performance is achieved. | 2013-02-21 |
20130046888 | METHOD AND DEVICE FOR MANAGING DEVICES IN DEVICE MANAGEMENT SYSTEM - A method for managing devices in a device management system includes: sending, by a server, target device condition information to a gateway; and sending, by the server, management information for a target device to the gateway, and triggering the gateway to determine the target device according to the target device condition information and send the management information to the target device. According to a trigger of the server and the target device condition information sent by the server, the gateway searches for the target device; and according to a trigger of the server, the gateway sends the management information sent by the server to the target device. Embodiments of the present disclosure also provide a server and a gateway in a device management system. Thereby, target devices of a given type can be managed in batches. | 2013-02-21 |
20130046889 | METHODS AND APPARATUSES FOR SCHEDULING USERS IN WIRELESS NETWORKS - In a method for scheduling a set of active users for transmission in a wireless network, a plurality of scheduling metrics are calculated based on system state information for the wireless network, and the set of active users are scheduled for transmission according to the candidate transmission schedule corresponding to the maximum scheduling metric from among the calculated scheduling metrics. Each of the plurality of scheduling metrics corresponds to a candidate transmission schedule among a plurality of candidate transmission schedules. | 2013-02-21 |
20130046890 | ACTIVITY-BASED BLOCK MANAGEMENT OF A CLUSTERED FILE SYSTEM USING CLIENT-SIDE BLOCK MAPS - A technique for operating a client node in a clustered file system includes allocating a number of blocks during a first time window and tracking the number of blocks allocated during the first time window. The technique further includes transmitting a block allocation request to a server node of the clustered file system for a number of requested blocks in response to a number of free blocks in a client-side block map reaching a first threshold value. In this case, the number of the requested blocks is based on the number of blocks allocated by the client node during the first time window. | 2013-02-21 |
20130046891 | MOVING A PARTITION BETWEEN COMPUTERS - In an embodiment, a request is received that requests to move a first partition from a source computer to a destination computer. In response to the request, charging is halted for a resource used by the first partition at the source computer while the first partition is executing at the source computer. In response to the request, a resource is allocated to a second partition at the destination computer. In response to the request, use of the resource is charged at the destination computer. In response to the request, execution of the second partition is started at the destination computer. | 2013-02-21 |
20130046892 | METHOD AND APPARATUS OF CLUSTER SYSTEM PROVISIONING FOR VIRTUAL MACHINE ENVIRONMENT - A method relates to provisioning a cluster system in a virtual machine environment in a storage system. The storage system has a plurality of hosts, a fabric network, a storage array, and a management server. The method includes inputting information on a first cluster system to be defined, the information including selecting a scale unit wherein the first cluster system is to be defined. An inventory database including resource information for the selected scale unit is provided. Virtual I/O (“vIO”) information is provided. The vIO information assigns each of the hosts selected for the first cluster system a vIO device, at least one virtual computer network address, and at least one virtual storage network address. A first cluster definition for the first cluster system in the selected scale unit is created using the vIO information. | 2013-02-21 |
20130046893 | SYSTEM AND METHOD FOR TRANSFER OF AN APPLICATION STATE BETWEEN DEVICES - To enable continuous execution of an application, a system and method for transferring an application state is provided. A gesture corresponding to a transfer act is detected by a gesture detection module in a first device executing the application. The first device communicates with a registration and relay server to determine eligible transfer recipients based on criteria such as location and/or devices that are currently executing the application. A transfer recipient is selected and platform-independent application state Data Transfer Objects (DTOs) are generated that describe the state of execution on the first device. The application state DTOs are transferred via the server to the recipient device, which enacts the application state DTOs to continue the execution of the application on the recipient device. Because the application state DTOs are platform independent, the application state can be transferred to almost any device that is able to execute the application. | 2013-02-21 |
20130046894 | MODEL-DRIVEN REST CONSUMPTION FRAMEWORK - The present disclosure describes methods, systems, and computer program products for implementing web services. One method includes identifying a REST service for integration with a business application, identifying a set of metadata associated with the REST service, and generating a REST client proxy object associated with the REST service for use in consuming the REST service with the business application, where an instantiation of the REST client proxy object is consumable via the business application. In some instances, the method may include consuming the REST service using an instantiation of the generated REST client proxy object associated with the REST service. Further, the identified set of metadata associated with the REST service may include a service structure document and a metadata document. Generating the REST client proxy object may include generating at least one business configuration object and/or at least one authentication proxy artifact associated with the REST service. | 2013-02-21 |
20130046895 | ANCILLARY SERVICES NETWORK APPARATUS - An apparatus for providing Ancillary Services to an ISO controls multiple controllable resources over a network in a cost-effective manner. The apparatus comprises a central server computer and multiple controllers communicative with the central server computer over a network. Each controller is located at a resource site and controls one or more resource devices. The central server computer has a memory with a program which executes the following operations: receives a services request signal from the ISO, such as an Ancillary Services request, which requests a change in power consumed or supplied by the resources on the network; calculates the lowest-cost means of delivering the requested Ancillary Services; and sends setpoint control signals to the controller at each resource site where a change is needed, requesting a change in the operation of each resource device. | 2013-02-21 |
20130046896 | SCALABLE TRANSCODING FOR STREAMING AUDIO - Systems and techniques for capturing audio and delivering the audio in digital streaming media formats are disclosed. Several aspects of the systems and techniques operate in a cloud computing environment where computational power is allocated, utilized, and paid for entirely on demand. The systems and techniques enable a call to be made directly from a virtual machine out to a Public Switched Telephone Network (PSTN) via a common Session Initiation Protocol (SIP) to PSTN Breakout service, and the audio to be delivered onward to one or more Content Delivery Networks (CDNs). An audio call capture interface is also provided to initiate and manage the digital streaming media formats. | 2013-02-21 |
20130046897 | SIP COMMUNICATION PROTOCOL - The present invention provides an improved SIP communication protocol. A NAT (Network Address Translator) traversal method is added before the SIP communication protocol, i.e., a client-to-client (C2C) module function is added to improve the function of the SIP communication protocol, so as to solve the problem that RTP (Real-time Transport Protocol) packets cannot traverse a NAT firewall to achieve C2C communication after SIP (Session Initiation Protocol) is ended in VoIP. The major content of the present invention is to conduct a plurality of detections before the SIP communication protocol, so as to predict the allocation rules of the port number by the C2C module, and open the RTP channel for C2C. | 2013-02-21 |
20130046898 | SYSTEM WITH MULTIPLE NETWORK PROTOCOL SUPPORT - The present invention provides a system with multiple network protocol support. The system includes: a first memory, the first memory comprising program instructions for processing upper and lower layers of the network protocol; a first processor, where the first processor processes the upper layers of the network protocol for a data packet according to the program instructions in the first memory; and a second processor, where the second processor processes lower layers of the network protocol for the data packet according to the program instructions in the first memory. When the network protocol is changed, instructions for the new protocol are fetched from a second memory and placed in the first memory. Thus, the hardware of the system need not be redesigned when changing protocols, and the same on-system unit is used to implement each protocol. This increases flexibility, provides cost effectiveness, and increases the reliability of the system. | 2013-02-21 |
20130046899 | IPV6 LAN-SIDE ADDRESS ASSIGNMENT POLICY - Techniques are presented for assigning a network address to a computing device. In one embodiment, a network device such as a router may be responsible for assigning a network address (e.g., an IP address) to a connected computing device. For example, in IPv6, the network device provides the computing device with a 64-bit prefix that the computing device then uses to generate a 128-bit unique IP address. The network device typically receives this prefix from another server located in the WAN. In case of a communication failure with the WAN, the network device may be unable to attain the correct prefix. Instead of assigning a random prefix that may cause a conflict if the computing device uses the incorrect prefix on the WAN, the network device may assign a different IP address using a different communication protocol—e.g., IPv4. The computing device can then use IPv4 to access both the LAN and the WAN without risking a conflict. | 2013-02-21 |
20130046900 | DYNAMIC TRANSACTION PROTOCOL UPGRADES - Including support for advanced protocols in propagation information transferred between applications. Transaction managers associated with the applications communicate with each other to complete a transaction. Rather than communicating using a standard protocol, embodiments of the invention enable a first transaction manager to identify advanced protocols supported by the first transaction manager to a second transaction manager using existing propagation tokens. The second transaction manager selects one of the supported protocols to communicate with the first transaction manager to complete the transaction. | 2013-02-21 |
20130046901 | SYSTEM AND METHOD FOR STREAM PROCESSING - A method, computer program product, and system for de-centralized stream processing is provided. The method may include providing a plurality of processing nodes, each of said processing nodes configured to transmit and receive a stream of data. The method may further include adding one or more new processing nodes to the computing system. The method may also include determining a source node based upon, at least in part, an activation level being above or below a particular threshold. The method may additionally include, for each of the one or more added processing nodes, automatically determining an appropriate role based upon, at least in part, a neighboring processing node. | 2013-02-21 |
20130046902 | PROCEDURE AND DEVICE FOR TRANSMISSION OF MULTIMEDIA DIGITAL DATA - A multimedia digital data transmission device which may respond to a request message from a second segment of a second operational data stream in fast forward or rewind mode (trick mode), associated with a first data stream of a selected multimedia digital content transmitted from a client device ( | 2013-02-21 |
20130046903 | SYSTEM AND METHOD FOR STREAM PROCESSING UTILIZING TOTIPOTENT MORPHOGENIC STEM CELLS - A method, computer program product, and system for de-centralized stream processing is provided. The method may include providing a plurality of processing nodes, each of said processing nodes configured to transmit and receive a stream of data. The method may further include restricting a subset of the plurality of processing nodes from differentiating into a role. The method may also include identifying a failure at one of the processing nodes and replacing the failed node with one of the processing nodes from the restricted subset. | 2013-02-21 |
20130046904 | MANAGEMENT PROCESSORS, METHODS AND ARTICLES OF MANUFACTURE - Example management processors, methods and articles of manufacture are disclosed. A disclosed example management processor includes a network card interface to communicatively couple the management processor to an operating environment, and a request processor to forward a received external management request to the operating environment via the network card interface, and to combine response information received from the operating environment with response information generated at the management processor. | 2013-02-21 |
20130046905 | FIBRE CHANNEL INPUT/OUTPUT DATA ROUTING SYSTEM AND METHOD - A method of performing an input/output (I/O) processing operation includes obtaining information relating to an I/O operation at a channel subsystem in the host computer system, the channel subsystem including at least one channel having a channel processor and a local channel memory, generating addressing information and forwarding the addressing information to a network interface between the channel subsystem and at least one I/O device, the addressing information specifying a location in the local channel memory. The method also includes forwarding an I/O command message to the at least one I/O device via the network interface, receiving a data transfer request from the network interface that includes the addressing information, accessing one of a plurality of address control words (ACWs), each ACW specifying an address of a location in a host computer memory, and routing the data transfer request to the host memory location specified in the ACW. | 2013-02-21 |
20130046906 | HIERARCHICAL MULTI-TENANCY SUPPORT FOR HOST ATTACHMENT CONFIGURATION THROUGH RESOURCE GROUPS - Exemplary method, system, and computer program embodiments for hierarchical multi-tenancy support for configuration of a plurality of host attachments through a plurality of resource groups in a computing storage environment are provided. In one embodiment, multiple data storage subsystems are configured with multiple operators for configuration and management of multiple host attachments to multiple logical volumes. A logical operator is designated with the responsibility of designating authority to a host attachment operator and the ability to configure multiple logical volumes. Limited authority is provided for the host attachment operator to configure multiple volume groups and multiple host ports to a specific user. | 2013-02-21 |
20130046907 | MEDIA SHARING DEVICE - A media sharing device includes a data bridge device and two switching control modules. The data bridge device has two terminals connected to USB interface ports of two computers and provides bi-directional transmission of media of displayed image, keyboard, cursor, and sound of the computers in USB data format between the computers. Switching control modules are mounted in the computers and are activated by an associated activation device to switch the controlling side and controlled side of the computers. The controlling side computer transmits data of displayed image, keyboard, cursor, and sound to the controlled side computer for computer display, executing the displayed image, keyboard, cursor, and sound supplied from the controlling side computer, or the activation device of the controlled side computer is activated to issue an instruction to the controlling side computer to switch the controlling side and the controlled side of the computers. | 2013-02-21 |
20130046908 | SCALABLE METHOD AND APPARATUS TO CONFIGURE A LINK - Disclosed herein are reconfigurable ports and methods for doing the same. | 2013-02-21 |
20130046909 | Method and Apparatus of Master-to-Master Transfer of Data on a Chip and System on Chip - A system on chip and associated method facilitates transfer of data between two or more master blocks through a bus on chip. The system creates a direct path for data transferring from a master port of a bus to another master port of the same bus. The bus includes a plurality of signals used to transfer data, address or control information between two or several blocks on chip. The behavior of bus connector block is controlled according to the destination of data coming from a master port. The system includes a master-connector-slave arrangement that enables the direct data communication between two or several master blocks, without taking any slave blocks as the data buffer. A bus connector block is configured to manage bus arbitrating and address decoding, and particularly to create the direct data path between master blocks. | 2013-02-21 |
20130046910 | METHOD FOR MANAGING A PROCESSOR, LOCK CONTENTION MANAGEMENT APPARATUS, AND COMPUTER SYSTEM - A method for managing a processor includes: obtaining an online request of a processor of a computer system; collecting lock contention information of the computer system if a lock contention status flag indicates a non-lock thrashing status; determining whether the computer system is in a lock thrashing status according to the lock contention information; and accepting the online request if it is determined that the computer system is in a non-lock thrashing status. By using the management method according to embodiments of the present application, processor performance degradation and a waste of idle processor resources that are caused when the computer system is in a lock thrashing status are prevented, thereby improving utilization efficiency of processor resources and promoting overall performance of the computer system. | 2013-02-21 |
20130046911 | STORAGE CONTROL APPARATUS - An aspect of the invention is a storage control apparatus, comprising a plurality of processors, a memory, an I/O device coupled to a storage device, a virtualization module that allocates a first processor to a first guest and a second processor to a second guest from among the plurality of processors, and an interrupt control module that receives an interrupt from the I/O device and transmits the interrupt to any one of the plurality of processors, wherein the virtualization module comprises, a state detection module that detects at least one of a state of the first guest and a state of the first processor, and an interrupt delivery destination control module that switches the interrupt with respect to the first processor to the second processor when at least one of the state of the first guest and the state of the first processor becomes a predetermined state. | 2013-02-21 |
20130046912 | METHODS OF MONITORING OPERATION OF PROGRAMMABLE LOGIC - Disclosed is a method of monitoring operation of programmable logic for a streaming processor, the method comprising: generating a graph representing the programmable logic to be implemented in hardware, the graph comprising nodes and edges connecting nodes in the graph; inserting, on each edge, monitoring hardware to monitor flow of data along the edge. Also disclosed is a method of monitoring operation of programmable logic for a streaming processor, the method comprising: generating a graph representing the programmable logic to be implemented in hardware, the graph comprising nodes and edges connecting the nodes in the graph; inserting, on at least one edge, data-generating hardware arranged to receive data from an upstream node and generate data at known values having the same flow control pattern as the received data for onward transmission to a connected node. | 2013-02-21 |
20130046913 | MULTIMEDIA STORAGE CARD SYSTEM - A multimedia storage card system includes a memory card; a dynamic switch coupled electrically and communicatively to the memory card; a first accessor coupled electrically and communicatively to the dynamic switch for accessing the memory card, thereby storing data into and retrieving data from the memory card; and a second accessor coupled electrically and communicatively to the dynamic switch. Upon receipt of a first access signal transmitted from the second accessor, the dynamic switch determines whether the first accessor is in an idle condition. Upon detecting that the first accessor is in the idle condition, the dynamic switch is switched to and in communication link with the second accessor, thereby transmitting the first access signal to the memory card and enabling the second accessor to access the memory card in order to store data into and retrieve data from the memory card. | 2013-02-21 |
20130046914 | CONNECTOR ASSEMBLY - A connector assembly includes first to fourth groups of holes set on a motherboard, first and second peripheral component interconnection express (PCIe) slots, and a number of switches. When the second group of holes are connected to the fourth group of holes, the signals at the second group of holes are transmitted to the second group of pins of the second PCIe slot through the switches and the fourth group of holes. | 2013-02-21 |
20130046915 | Scalable and Configurable System on a Chip Interrupt Controller - Embodiments include a system and method for an interrupt controller that propagates interrupts to a subsystem in a system-on-a-chip (SOC). Interrupts are provided to an interrupt controller that controls access of interrupts to a particular subsystem in the SOC that includes multiple subsystems. Each subsystem in the SOC generates multiple interrupts to other subsystems in the SOC. The interrupt controller processes multiple interrupts and generates an interrupt output. The interrupt output is then transmitted to a particular subsystem. | 2013-02-21 |
20130046916 | FIBRE ADAPTER FOR A SMALL FORM-FACTOR PLUGGABLE UNIT - The disclosure is directed at a fibre adapter for use with small form factor pluggable (SFP) devices comprising a set of cages for receiving the SFP devices and a switch for interconnecting inputs and outputs of the set of cages. | 2013-02-21 |
20130046917 | FLASH MEMORY CONTROLLER - A flash memory controller includes a recording medium and a processing circuit. When the amount of stored data in a flash memory module is less than a first threshold, the processing circuit controls a read and write circuit of the flash memory module to program a target data block using program threshold voltages within a first voltage range so as to write data into the target data block. When the amount of stored data in the flash memory module is greater than a second threshold, the processing circuit controls the read and write circuit to program the target data block using program threshold voltages within a second voltage range so as to write data into the target data block, wherein the second threshold is greater than the first threshold and the first voltage range is less than 50% of the second voltage range. | 2013-02-21 |
20130046918 | METHOD WRITING META DATA WITH REDUCED FREQUENCY - A method of writing meta data in a semiconductor storage device in relation to a maximum number of written meta data pages N. The method stores write data in a buffer and loads meta data in a meta memory, writes the write data to the storage medium, and updates the meta data. The updated meta data is stored upon determining the number of written meta data pages in an updated meta data region, and a meta data write operation is performed only when the maximum number of written meta data pages N is exceeded. | 2013-02-21 |
20130046919 | NON-VOLATILE MEMORY SYSTEM - In one embodiment, a memory system includes a memory device with a first memory and a second memory, and a controller configured to control storing of data in the memory device. The controller is configured to control an (N− | 2013-02-21 |
20130046920 | NONVOLATILE MEMORY SYSTEM WITH MIGRATION MANAGER - Disclosed is a memory system that includes a nonvolatile memory having a main region and a cache region, and a memory controller having a migration manager that manages a migration operation moving data from the cache region to the main region by referencing a Most Recently Used/Least Recently Used (MRU/LRU) list. | 2013-02-21 |
20130046921 | METHOD OF CONFIGURING NON-VOLATILE MEMORY FOR A HYBRID DISK DRIVE - A system, method and machine-readable medium are provided to configure a non-volatile memory (NVM) including a plurality of NVM modules, in a system having a hard disk drive (HDD) and an operating system (O/S). In response to a user selection of a hybrid drive mode for the NVM, the plurality of NVM modules are ranked according to speed performance. Boot portions of the O/S are copied to a highly ranked NVM module, or a plurality of highly ranked NVM modules, and the HDD and the highly ranked NVM modules are assigned as a logical hybrid drive of the computer system. Ranking each of the plurality of NVM modules can include carrying out a speed performance test. This approach can provide hybrid disk performance using conventional hardware, or enhance performance of an existing hybrid drive, while taking into account relative performance of available NVM modules. | 2013-02-21 |
20130046922 | CONTENT ADDRESSABLE MEMORY AND METHOD OF SEARCHING DATA THEREOF - The present invention discloses a content addressable memory (CAM) and a method of searching data thereof. The method includes generating a hash index data item from a received input data item; searching the cache for presence of a row tag of the RAM data row corresponding to the hash index data item; in response to presence, searching the RAM for a RAM data item corresponding to the input data item according to the corresponding row tag of the RAM data row; in response to absence, searching the RAM for a RAM data item corresponding to the input data item by using the hash index data item; and in response to finding a RAM data item corresponding to the input data item in the RAM, outputting data corresponding to the RAM data item. The method can accelerate data search in the CAM. | 2013-02-21 |
20130046923 | MEMORY SYSTEM AND METHOD FOR PASSING CONFIGURATION COMMANDS - A memory system is provided. In the system, there are first and second sets of dynamic random access memories (DRAMs) and a system register. Each DRAM has at least a first and a second addressable mode register, where the binary address of the second mode register is the inverted binary address of the first mode register. The system register has an input configured to be coupled to a controller, an output coupled to the first set of DRAMs via first address lines and an inverted output coupled to the second set of DRAMs via second address lines. The system register is configured to receive mode register set commands including address bits and configuration bits at the input and to output the mode register set commands non-inverted via the output to the first set of DRAMs and in inverted form via the inverted output to the second set of DRAMs. | 2013-02-21 |
20130046924 | Mechanisms To Accelerate Transactions Using Buffered Stores - In one embodiment, the present invention includes a method for executing a transactional memory (TM) transaction in a first thread, buffering a block of data in a first buffer of a cache memory of a processor, and acquiring a write monitor on the block to obtain ownership of the block at an encounter time in which data at a location of the block in the first buffer is updated. Other embodiments are described and claimed. | 2013-02-21 |
20130046925 | Mechanisms To Accelerate Transactions Using Buffered Stores - In one embodiment, the present invention includes a method for executing a transactional memory (TM) transaction in a first thread, buffering a block of data in a first buffer of a cache memory of a processor, and acquiring a write monitor on the block to obtain ownership of the block at an encounter time in which data at a location of the block in the first buffer is updated. Other embodiments are described and claimed. | 2013-02-21 |
20130046926 | EDRAM REFRESH IN A HIGH PERFORMANCE CACHE ARCHITECTURE - A method for implementing embedded dynamic random access memory (eDRAM) refreshing in a high performance cache architecture. The method includes receiving a memory access request, via a cache controller, from a memory refresh requestor, the memory access request for a memory address range in a cache memory. The method also includes detecting that the cache memory located at the memory address range is available to receive the memory access request and sending the memory access request to a memory request interpreter. The method further includes receiving the memory access request from the cache controller, determining that the memory access request is a request to refresh contents of the memory address range in the cache memory, and refreshing data in the memory address range. | 2013-02-21 |
20130046927 | Memory Management Unit Tag Memory with CAM Evaluate Signal - A method and data processing system for accessing an entry in a memory array by placing a tag memory unit … | 2013-02-21 |
20130046928 | Memory Management Unit Tag Memory - A method and data processing system for accessing an entry in a memory array by placing a tag memory unit … | 2013-02-21 |
20130046929 | INTERFACE MODULE, COMMUNICATION APPARATUS, AND COMMUNICATION METHOD - An interface module includes ports; a first memory that stores identifiers indicating processing operations for data blocks associated with the ports; a content-addressable memory that stores keys, each including at least one port and one identifier; a second memory that stores processing information associated with the keys and indicating processing operations for data blocks; an action code circuit that, when a data block has been received, obtains, from the first memory, an identifier set for a port that has received the data block; a generation circuit that generates a key from the port that has received the data block and the identifier obtained by the action code circuit; and a judgment circuit that judges how to process the received data block in accordance with a piece of the processing information associated with the generated key obtained by searching the content-addressable memory using the key generated by the generation circuit. | 2013-02-21 |
20130046930 | OPTIMIZING LOCATIONS OF DATA ACCESSED BY CLIENT APPLICATIONS INTERACTING WITH A STORAGE SYSTEM - A method for optimizing locations of physical data accessed by one or more client applications interacting with a storage system, with the storage system comprising at least two redundancy groups having physical memory spaces and data bands. Each of the data bands corresponds to physical data stored on several of the physical memory spaces. A virtualized logical address space includes client data addresses utilizable by the one or more client applications. A storage controller is configured to map the client data addresses onto the data bands, such that a mapping is obtained, wherein the one or more client applications can access physical data corresponding to the data bands. | 2013-02-21 |
20130046931 | OPTIMIZING LOCATIONS OF DATA ACCESSED BY CLIENT APPLICATIONS INTERACTING WITH A STORAGE SYSTEM - A method for optimizing locations of physical data accessed by one or more client applications interacting with a storage system, with the storage system comprising at least two redundancy groups having physical memory spaces and data bands. Each of the data bands corresponds to physical data stored on several of the physical memory spaces. A virtualized logical address space includes client data addresses utilizable by the one or more client applications. A storage controller is configured to map the client data addresses onto the data bands, such that a mapping is obtained, wherein the one or more client applications can access physical data corresponding to the data bands. | 2013-02-21 |
20130046932 | INDICATION OF A DESTRUCTIVE WRITE VIA A NOTIFICATION FROM A DISK DRIVE THAT EMULATES BLOCKS OF A FIRST BLOCK SIZE WITHIN BLOCKS OF A SECOND BLOCK SIZE - A disk drive receives a request to write at least one block of a first block size, wherein the disk drive is configured to store blocks of a second block size that is larger in size than the first block size, and wherein the disk drive stores via emulation a plurality of emulated blocks of the first block size in each block of the second block size. The disk drive generates a read error, in response to reading a selected block of the second block size in which the at least one block of the first block size is to be written via the emulation. The disk drive performs a destructive write of selected emulated blocks of the first block size that caused the read error to be generated. The disk drive writes the at least one block of the first block size in the selected block of the second block size. The disk drive sends a notification to indicate the performing of the destructive write. | 2013-02-21 |
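The emulation geometry above can be made concrete with a small sketch. The 512-byte and 4096-byte sizes are an assumption (a common 512e arrangement), not figures stated in the application.

```python
# Sketch of block-size emulation: emulated blocks of a small first size
# live inside physical blocks of a larger second size, so a small-block
# address resolves to (physical block, slot within it).

FIRST_SIZE = 512       # assumed emulated block size
SECOND_SIZE = 4096     # assumed physical block size
PER_BLOCK = SECOND_SIZE // FIRST_SIZE  # emulated blocks per physical block

def locate(emulated_lba: int):
    """Return (physical block index, slot within it) for an emulated block."""
    return emulated_lba // PER_BLOCK, emulated_lba % PER_BLOCK

# Writing emulated block 13 touches physical block 1, slot 5; if reading
# that physical block fails, only the failing slots need the destructive
# write before the new data is placed and the notification sent.
print(locate(13))  # -> (1, 5)
```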
20130046933 | STORING DATA IN ANY OF A PLURALITY OF BUFFERS IN A MEMORY CONTROLLER - A memory controller containing one or more ports coupled to a buffer selection logic and a plurality of buffers. Each buffer is configured to store write data associated with a write request and each buffer is also coupled to the buffer selection logic. The buffer selection logic is configured to store write data associated with a write request from at least one of the ports in any of the buffers based on a priority of the buffers for each one of the ports. | 2013-02-21 |
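The per-port buffer priority described above can be sketched as follows. The buffer count and the particular priority orders are illustrative assumptions.

```python
# Sketch of buffer selection by per-port priority: each port prefers the
# buffers in its own order, and write data lands in the first free one.

BUFFERS = 4
PORT_PRIORITY = {0: [0, 1, 2, 3], 1: [3, 2, 1, 0]}  # assumed orders

buffers = [None] * BUFFERS

def store_write(port: int, data):
    """Place write data in the highest-priority free buffer for this port."""
    for b in PORT_PRIORITY[port]:
        if buffers[b] is None:
            buffers[b] = data
            return b
    raise RuntimeError("no free buffer")

print(store_write(0, "w0"))  # port 0 takes buffer 0
print(store_write(1, "w1"))  # port 1 takes buffer 3
```

Because the ports prefer opposite ends of the buffer array, light traffic on both ports never contends for the same buffer.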
20130046934 | SYSTEM CACHING USING HETEROGENOUS MEMORIES - A caching circuit includes tag memories for storing tagged addresses of a first cache. On-chip data memories are arranged in the same die as the tag memories, and the on-chip data memories form a first sub-hierarchy of the first cache. Off-chip data memories are arranged in a different die as the tag memories, and the off-chip data memories form a second sub-hierarchy of the first cache. Sources (such as processors) are arranged to use the tag memories to service first cache requests using the first and second sub-hierarchies of the first cache. | 2013-02-21 |
20130046935 | SHARED COPY CACHE ACROSS NETWORKED DEVICES - A copy cache feature that can be shared across networked devices is provided. Content added to copy cache through a “copy”, a “like”, or similar command through one device may be forwarded to a server providing cloud-based services to a user and/or another device associated with the user such that the content can be inserted into the same or other files on other computing devices by the user. In addition to seamless movement of copy cache content across devices, the content may be made available in a context-based manner and/or sortable manner. | 2013-02-21 |
20130046936 | DATA PROCESSING SYSTEM OPERABLE IN SINGLE AND MULTI-THREAD MODES AND HAVING MULTIPLE CACHES AND METHOD OF OPERATION - Systems and methods are disclosed for a computer system that includes a first load/store execution unit | 2013-02-21 |
20130046937 | TRANSACTIONAL MEMORY SYSTEM WITH EFFICIENT CACHE SUPPORT - A computer implemented method for use by a transaction program for managing memory access to a shared memory location for transaction data of a first thread, the shared memory location being accessible by the first thread and a second thread. A string of instructions to complete a transaction of the first thread are executed, beginning with one instruction of the string of instructions. It is determined whether the one instruction is part of an active atomic instruction group (AIG) of instructions associated with the transaction of the first thread. A cache structure and a transaction table which together provide for entries in an active mode for the AIG are located if the one instruction is part of an active AIG. The next instruction is executed under a normal execution mode in response to determining that the one instruction is not part of an active AIG. | 2013-02-21 |
20130046938 | QoS-Aware Scheduling - In an embodiment, a memory controller includes multiple ports. Each port may be dedicated to a different type of traffic. In an embodiment, quality of service (QoS) parameters may be defined for the traffic types, and different traffic types may have different QoS parameter definitions. The memory controller may be configured to schedule operations received on the different ports based on the QoS parameters. In an embodiment, the memory controller may support upgrade of the QoS parameters when subsequent operations are received that have higher QoS parameters, via sideband request, and/or via aging of operations. In an embodiment, the memory controller is configured to reduce emphasis on QoS parameters and increase emphasis on memory bandwidth optimization as operations flow through the memory controller pipeline. | 2013-02-21 |
20130046939 | COUPLED LOCK ALLOCATION AND LOOKUP FOR SHARED DATA SYNCHRONIZATION IN SYMMETRIC MULTITHREADING ENVIRONMENTS - In a shared memory process different threads may attempt to access a shared data variable in a shared memory. Locks are provided to synchronize access to shared data variables. Each lock is allocated to have a location in the shared memory relative to the instance of shared data that the lock protects. A lock may be allocated to be adjacent to the data that it protects. Lock resolution is facilitated because the memory location of a lock can be determined from an offset with respect to the data variable that is being protected by the lock. | 2013-02-21 |
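The offset-based lock resolution described above can be sketched with a flat byte array standing in for shared memory. The 8-byte data word and the fixed lock offset are illustrative assumptions.

```python
# Sketch of coupled lock allocation: each shared variable is allocated
# alongside its lock at a fixed offset, so the lock's location can be
# derived directly from the variable's address.

LOCK_OFFSET = 8  # assumed: lock byte sits 8 bytes after the data word

memory = bytearray(64)  # stand-in for the shared memory region

def alloc_shared(addr: int, value: int) -> None:
    """Store an 8-byte value at addr; its lock lives at addr + LOCK_OFFSET."""
    memory[addr:addr + 8] = value.to_bytes(8, "little")
    memory[addr + LOCK_OFFSET] = 0  # 0 = unlocked

def lock_addr(data_addr: int) -> int:
    """Lock resolution: the lock location is a fixed offset from the data."""
    return data_addr + LOCK_OFFSET

def acquire(data_addr: int) -> bool:
    """Take the lock protecting the variable at data_addr, if free."""
    a = lock_addr(data_addr)
    if memory[a] == 0:
        memory[a] = 1
        return True
    return False

alloc_shared(0, 42)
print(acquire(0))   # True: lock was free
print(acquire(0))   # False: already held
```

No lookup table is needed: any thread that knows a variable's address can compute its lock's address, which is the facilitation the abstract describes.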
20130046940 | OPTIMIZATION OF MEMORY BY TAILORED GENERATION OF RUNTIME STRUCTURES - Data structures used to store data in an enterprise resource planning (ERP) system may be configured and custom-generated in a configuration mode of the ERP system, where a subset of selectable data fields may be selected to avoid allocating space and resources to unused data fields. The data structures may then be generated in the configuration mode to eliminate the unused data fields at runtime. This in turn saves space and resources that would otherwise be allocated but not used. In ERP systems, substantial space and computing resources may be saved by allocating space and resources only to those data fields that a specific customer intends to use. | 2013-02-21 |
20130046941 | WRITE CIRCUIT, READ CIRCUIT, MEMORY BUFFER AND MEMORY MODULE - The present invention provides a write circuit, a read circuit, a memory buffer and a memory module. The write circuit includes: a data collecting unit, a first check unit, a data restoring unit, a first check data generating unit, a first adjusting unit and a write unit; the read circuit includes: a data read unit, a second check unit, an output data generating unit, a second check data generating unit, a second adjusting unit and an output unit; the memory buffer includes the write circuit and the read circuit; the memory module includes the memory buffer and multiple memory chips connected to the memory buffer. Advantages of the present invention are that data can be exchanged with a memory controller in a low-power manner, and that data transmitted based on conversion control data can be read out of or written into a DDR4 memory chip. | 2013-02-21 |
20130046942 | CONTROLLER FOR STORAGE DEVICES AND METHOD FOR CONTROLLING STORAGE DEVICES - A controller is connectable to a host system and a plurality of storage devices. A monitor unit monitors operating status of a plurality of storage devices and sets the operating status of the storage devices in a status table. Upon receiving a write command from the host system, a command responding unit receives write data sent from the host system within a certain period of time after the write command, holds the write data received in a buffer memory, instructs a timer to start counting, sets a write destination for data in the status table, outputs a control signal that gives an instruction to write data to the storage device of the write destination, and returns a write completion response corresponding to the write command to the host system when receiving the deadline notification from the timer. | 2013-02-21 |
20130046943 | STORAGE CONTROL SYSTEM AND METHOD, AND REPLACING SYSTEM AND METHOD - A row buffer | 2013-02-21 |
20130046944 | STORAGE APPARATUS AND ADDITIONAL DATA WRITING METHOD - Deduplicated backup data of a plurality of generations are aggregated and stored. | 2013-02-21 |
20130046945 | STORAGE APPARATUS AND STORAGE APPARATUS CONTROL METHOD - A selector calculates a difference between the number of write operations of a first storage medium and that of a second storage medium and takes the difference as a first difference. Further, the selector calculates a difference between the number of write operations of the first storage medium and that of a third storage medium after copying the data within the second storage medium to the third storage medium, and takes the difference as a second difference. Then, the selector selects the second storage medium as a target of replacement when the second difference is larger than the first difference. A setting changer copies the data stored in the second storage medium selected as a target of replacement to the third storage medium, and changes the setting of the second storage medium to a spare and the setting of the third storage medium to a data write destination. | 2013-02-21 |
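The replacement test above reduces to a comparison of two write-count gaps. A minimal sketch, with illustrative write counts (the absolute-difference reading of "difference" is an assumption):

```python
# Sketch of the selector's replacement decision: swap medium 2 for spare
# medium 3 only if doing so widens the write-count gap to medium 1.

def should_replace(writes_m1: int, writes_m2: int, writes_m3_after_copy: int) -> bool:
    first_diff = abs(writes_m1 - writes_m2)            # gap today
    second_diff = abs(writes_m1 - writes_m3_after_copy) # gap after the copy
    return second_diff > first_diff

# Copying to a lightly used spare widens the gap, so replacement proceeds:
print(should_replace(10_000, 9_800, 2_000))  # True
# A spare as worn as the current medium does not, so no replacement:
print(should_replace(10_000, 2_000, 9_800))  # False
```

Widening the gap keeps two media from approaching their write-endurance limits at the same time, which is presumably the failure mode the selector is guarding against.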
20130046946 | STORAGE APPARATUS, CONTROL APPARATUS, AND DATA COPYING METHOD - A determining unit selects one storage device each from storage devices of an external storage apparatus and storage devices of a storage apparatus to which the determining unit belongs. At this point, based on a copy request, the determining unit preferentially selects, within each of the external storage apparatus and the storage apparatus, a storage device including a larger number of logical volumes (LVs) which belong to copy unexecuted LV pairs compared to other storage devices therein. Further, the determining unit determines, as a copy execution target, a copy unexecuted LV pair in which an LV provided in one of the selected two storage devices is a copy source and an LV provided in the other storage device is a copy destination. A copy unit copies data stored in the copy source LV, which belongs to the determined LV pair, to the copy destination LV of the LV pair. | 2013-02-21 |
20130046947 | Mechanisms To Accelerate Transactions Using Buffered Stores - In one embodiment, the present invention includes a method for executing a transactional memory (TM) transaction in a first thread, buffering a block of data in a first buffer of a cache memory of a processor, and acquiring a write monitor on the block to obtain ownership of the block at an encounter time in which data at a location of the block in the first buffer is updated. Other embodiments are described and claimed. | 2013-02-21 |
20130046948 | METHOD FOR REPLICATING A LOGICAL DATA STORAGE VOLUME - Replicated data storage units are autonomously identified and assembled into generationally related data storage volumes. A data storage manager, implementing a re-signaturing process executed at defined intervals or manually initiated on a server or client system connected to the storage area network, scans the collection of visible data storage units to identify those related as a data storage volume. Each replicated data storage unit includes metadata that embeds an identification of the replicated data storage unit and volume accessible to the data storage manager. To assemble a set of replicated data storage units into a generational volume, the data storage unit metadata is rewritten to establish a unique data storage volume identity including information to associate the data storage volume in a lineage with the source data storage volume. | 2013-02-21 |
20130046949 | MAPPING IN A STORAGE SYSTEM - A system and method for maintaining a mapping table in a data storage subsystem. A data storage subsystem supports multiple mapping tables. Records within a mapping table are arranged in multiple levels which may be logically ordered by time. Each level stores pairs of a key value and a pointer value. New records are inserted in a created new (youngest) level. All levels other than the youngest may be read only. In response to detecting a flattening condition, a data storage controller is configured to identify a group of two or more adjacent levels of the plurality of levels for flattening which are logically adjacent in time. A new level is created and one or more records stored within the group are stored in the new level, in response to detecting each of the one or more records stores a unique key among keys stored within the group. | 2013-02-21 |
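The flattening step above can be sketched with each level modeled as a dict of key-to-pointer records, ordered oldest to youngest. This is a simplification under the assumption that, for duplicate keys, the youngest record is authoritative; the application's unique-key condition is folded into that rule.

```python
# Sketch of mapping-table level flattening: merge a group of adjacent
# (logically time-adjacent) levels into a single new level.

def flatten(levels, lo, hi):
    """Merge levels[lo:hi] into one level. Iterating oldest-first means
    younger entries overwrite older ones, so lookups are unchanged."""
    merged = {}
    for level in levels[lo:hi]:   # oldest first ...
        merged.update(level)      # ... younger records win on duplicate keys
    return levels[:lo] + [merged] + levels[hi:]

levels = [{"a": 1, "b": 2}, {"b": 3}, {"c": 4}]   # oldest -> youngest
levels = flatten(levels, 0, 2)
print(levels)  # fewer levels, same lookup results
```

After flattening, a lookup walks fewer levels, which is the point of triggering it when the level count grows.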
20130046950 | PRIORITY BASED DEPOPULATION OF STORAGE RANKS - Exemplary method, system, and computer program product embodiments for priority based depopulation of ranks in a computing storage environment are provided. In one embodiment, by way of example only, multiple ranks selected for depopulation are prioritized. The highest priority rank of the multiple ranks is depopulated to a target rank. Additional system and computer program product embodiments are disclosed and provide related advantages. | 2013-02-21 |
20130046951 | PARALLEL DYNAMIC MEMORY ALLOCATION USING A NESTED HIERARCHICAL HEAP - One embodiment of the present invention sets forth a technique for dynamically allocating memory using a nested hierarchical heap. A lock-free mechanism is used to access to a hierarchical heap data structure for allocating and deallocating memory from the heap. The heap is organized as a series of levels of fixed-size blocks, where all blocks at given level are the same size. At each lower level of the hierarchy, a collection of N blocks in the lower level equals the size of a single block at the level above. When a thread requests an allocation, one or more blocks at only one level are allocated to the thread. When threads are finished using an allocation, each thread deallocates the respective allocated blocks. When all of the blocks for a level have been deallocated, defragmentation is performed at that level. | 2013-02-21 |
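The geometry of the nested heap above can be sketched directly: block sizes form a geometric series, and an allocation picks the deepest level whose blocks still fit the request. The fan-out N and top-level size are illustrative assumptions, and the lock-free machinery is omitted.

```python
# Sketch of the nested hierarchical heap's level structure: N blocks at
# one level together equal one block at the level above.

N = 4              # assumed children per block
TOP_SIZE = 4096    # assumed bytes in a level-0 block

def block_size(level: int) -> int:
    """All blocks at a given level share one size."""
    return TOP_SIZE // (N ** level)

def level_for(request: int) -> int:
    """Pick the deepest level whose block size still fits the request,
    so allocations waste as little of a block as possible."""
    level = 0
    while block_size(level + 1) >= request and block_size(level + 1) >= 1:
        level += 1
    return level

print(block_size(0), block_size(1), block_size(2))  # 4096 1024 256
print(level_for(300))  # -> 1: 1024-byte blocks are the smallest that fit
```

Because every block at a level is the same size, freeing is symmetric: once all N children of a parent block are deallocated, that parent can be defragmented back into a whole block at its level.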
20130046952 | Administering Thermal Distribution Among Memory Modules With Call Stack Frame Size Management - Administering thermal distribution among memory modules in a computing system that includes temperature sensors, where each temperature sensor measures temperature of a memory module and thermal distribution is effected by: determining, in real-time by a user-level application in dependence upon the temperature measurements of the temperature sensors, whether a memory module is overheated; if a memory module is overheated and if a current call stack frame is stored on the overheated memory module, increasing, by the user-level application, a size of the current call stack frame to fill remaining available memory space on the overheated memory module, ensuring a subsequent call stack frame is stored on a different memory module. | 2013-02-21 |
20130046953 | System And Method For Storing Data In A Virtualized High Speed Memory System With An Integrated Memory Mapping Table - A system and method for providing high-speed memory operations is disclosed. The technique uses virtualization of memory space to map a virtual address space to a larger physical address space wherein no memory bank conflicts will occur. The larger physical address space is used to prevent memory bank conflicts from occurring by moving the virtualized memory addresses of data being written to memory to a different location in physical memory that will eliminate a memory bank conflict. A changeable mapping table that maps the virtualized memory addresses to physical memory addresses is stored in the same memory system. | 2013-02-21 |
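The conflict-avoidance idea above can be sketched with a toy mapping table. The bank counts are illustrative assumptions; the real design keeps the table in the memory system itself and handles rows, not single slots.

```python
# Sketch of virtualized-address remapping: physical memory has one more
# bank than the virtual space needs, so a write that would collide with
# a concurrent read's bank is steered to a spare bank instead.

BANKS = 5           # assumed physical banks (one spare)
VIRTUAL_SLOTS = 4   # assumed virtual address slots

# mapping[v] = (bank, row); start with a simple one-bank-per-slot layout
mapping = {v: (v, 0) for v in range(VIRTUAL_SLOTS)}

def write(v: int, busy_bank: int) -> int:
    """Write virtual slot v while busy_bank is serving a read: if v
    currently maps to the busy bank, remap it to an unused bank."""
    bank, row = mapping[v]
    if bank == busy_bank:
        used = {b for b, _ in mapping.values()}
        free = next(b for b in range(BANKS) if b not in used)
        mapping[v] = (free, row)        # update the mapping table
    return mapping[v][0]

print(write(2, busy_bank=2))  # conflict: remapped to the spare bank 4
print(write(1, busy_bank=2))  # no conflict: bank 1 is untouched
```

The read and the write then land on different banks in the same cycle, so neither stalls; the cost is the extra bank and the table update.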
20130046954 | MULTI-THREADED DFA ARCHITECTURE - Disclosed is an architecture, system and method for performing multi-thread DFA descents on a single input stream. An executer performs DFA transitions from a plurality of threads each starting at a different point in an input stream. A plurality of executers may operate in parallel to each other and a plurality of thread contexts operate concurrently within each executer to maintain the context of each thread which is state transitioning. A scheduler in each executer arbitrates instructions for the thread into an at least one pipeline where the instructions are executed. Tokens may be output from each of the plurality of executers to a token processor which sorts and filters the tokens into dispatch order. | 2013-02-21 |
20130046955 | Local Computation Logic Embedded in a Register File to Accelerate Programs - A system and methods for improving performance of a central processing unit. The central processing unit system includes: a pipeline configured to receive an instruction; and a register file partitioned into one or more subarrays, where (i) the register file includes one or more computation elements and (ii) the one or more computation elements are directly connected to the one or more subarrays. | 2013-02-21 |
20130046956 | SYSTEMS AND METHODS FOR HANDLING INSTRUCTIONS OF IN-ORDER AND OUT-OF-ORDER EXECUTION QUEUES - Systems and methods are disclosed that can include a processor having an instruction unit, a decode/issue unit, a first execution queue configured to provide instructions of a first instruction type to a first execution unit, and a second execution queue configured to provide instructions of a second instruction type to a second execution unit. A first instruction (IMUL) of the second instruction type is received. The first instruction is decoded by the decode/issue unit to determine operands of the first instruction. The operands of the first instruction are determined to include a dependency on a second instruction (Id) of the first instruction type stored in a first entry of the first execution queue. The first instruction is stored in a first entry of the second execution queue. In response to determining that the operands of the first instruction include the dependency on the second instruction: a synchronization indicator corresponding to the first instruction in a second entry of the first execution queue is set immediately adjacent the first entry of the first execution queue, which indicates that the first instruction is stored in another execution queue. A synchronization pending indicator is set in the first entry of the second execution queue to indicate that the first instruction has a corresponding synchronization indicator stored in another execution queue. | 2013-02-21 |
20130046957 | SYSTEMS AND METHODS FOR HANDLING INSTRUCTIONS OF IN-ORDER AND OUT-OF-ORDER EXECUTION QUEUES - Processing systems and methods are disclosed that can include an instruction unit which provides instructions for execution by the processor; a decode/issue unit which decodes instructions received from the instruction unit and issues the instructions; and a plurality of execution queues coupled to the decode/issue unit, wherein each issued instruction from the decode/issue unit can be stored into an entry of at least one queue of the plurality of execution queues. The plurality of queues can comprise an independent execution queue, a dependent execution queue, and a plurality of execution units coupled to receive instructions for execution from the plurality of execution queues. The plurality of execution units can comprise a first execution unit, coupled to receive instructions from the dependent execution queue and the independent execution queue which have been selected for execution. When a multi-cycle instruction at a bottom entry of the dependent execution queue is selected for execution, it may not be removed from the dependent execution queue until a result is received from the first execution unit. When a multi-cycle instruction at a bottom entry of the independent execution queue is selected for execution, it can be removed from the independent execution queue without waiting to receive a result from the first execution unit. | 2013-02-21 |
20130046958 | Systems and Methods for Local Iteration Adjustment - Various embodiments of the present invention provide systems and methods for data processing. As an example, a data processing circuit is disclosed that includes: a data decoder circuit and a local iteration adjustment circuit. The data decoder circuit is operable to perform a number of local iterations on a decoder input to yield a data output. The local iteration adjustment circuit is operable to generate a limit on the number of local iterations performed by the data decoder circuit. | 2013-02-21 |
20130046959 | METHOD AND APPARATUS FOR PERFORMING LOGICAL COMPARE OPERATION - A method and apparatus for including in a processor instructions for performing logical-comparison and branch support operations on packed or unpacked data. In one embodiment, instruction decode logic decodes instructions for an execution unit to operate on packed data elements including logical comparisons. A register file including 128-bit packed data registers stores packed single-precision floating point (SPFP) and packed integer data elements. The logical comparisons may include comparison of SPFP data elements and comparison of integer data elements and setting at least one bit to indicate the results. Based on these comparisons, branch support actions are taken. Such branch support actions may include setting the at least one bit, which in turn may be utilized by a branching unit in response to a branch instruction. Alternatively, the branch support actions may include branching to an indicated target code location. | 2013-02-21 |
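The compare-then-branch pattern in the abstract above can be sketched at a high level. The lane-wise equality semantics and the all-lanes flag are illustrative assumptions; the actual instruction operates on 128-bit packed registers.

```python
# Sketch of a packed logical compare with branch support: compare lanes
# element-wise, fold the result into a single flag bit, and let that
# flag steer a subsequent branch.

def packed_compare_all_equal(a, b):
    """Compare packed elements lane by lane; the flag is set (True)
    only if every lane matched."""
    lanes = [x == y for x, y in zip(a, b)]
    return lanes, all(lanes)

lanes, flag = packed_compare_all_equal([1, 2, 3, 4], [1, 2, 3, 4])
if flag:                       # branch support: the flag drives control flow
    result = "taken"
else:
    result = "not taken"
print(lanes, result)
```

Folding a whole-vector comparison into one flag is what lets a single branch instruction act on the result, rather than testing each lane separately.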
20130046960 | METHOD AND APPARATUS FOR PERFORMING LOGICAL COMPARE OPERATION - A method and apparatus for including in a processor instructions for performing logical-comparison and branch support operations on packed or unpacked data. In one embodiment, instruction decode logic decodes instructions for an execution unit to operate on packed data elements including logical comparisons. A register file including 128-bit packed data registers stores packed single-precision floating point (SPFP) and packed integer data elements. The logical comparisons may include comparison of SPFP data elements and comparison of integer data elements and setting at least one bit to indicate the results. Based on these comparisons, branch support actions are taken. Such branch support actions may include setting the at least one bit, which in turn may be utilized by a branching unit in response to a branch instruction. Alternatively, the branch support actions may include branching to an indicated target code location. | 2013-02-21 |
20130046961 | SPECULATIVE MEMORY WRITE IN A PIPELINED PROCESSOR - An apparatus generally having an interface circuit and a processor. The interface circuit may have a queue and a connection to a memory. The processor may have a pipeline. The processor is generally configured to (i) place an address in the queue in response to processing a first instruction in a first stage of the pipeline, (ii) generate a flag by processing a second instruction in a second stage of the pipeline, the second instruction may be processed in the second stage after the first instruction is processed in the first stage, and (iii) generate a signal based on the flag in a third stage of the pipeline. The third stage may be situated in the pipeline after the second stage. The interface circuit is generally configured to cancel the address from the queue without transferring the address to the memory in response to the signal having a disabled value. | 2013-02-21 |
20130046962 | Operating a Pipeline Flattener in a Semiconductor Device - A semiconductor device comprising a processor having a pipelined architecture and a pipeline flattener and a method for operating a pipeline flattener in a semiconductor device are provided. The processor comprises a pipeline having a plurality of pipeline stages and a plurality of pipeline registers that are coupled between the pipeline stages. The pipeline flattener comprises a plurality of trigger registers for storing a trigger, wherein the trigger registers are coupled between the pipeline stages. | 2013-02-21 |
20130046963 | ACCESS TO CONTEXT INFORMATION IN A HETEROGENEOUS APPLICATION ENVIRONMENT - Various embodiments of systems and methods to provide access to context information in a heterogeneous application environment are described herein. The context information of a source application is received. The context information is based on the execution of the source application. Further, the context information is stored in one or more context vectors of a global context unit, the one or more context vectors corresponding to the source application and one or more target applications. Furthermore, access to the context information of the global context unit is provided for the one or more target applications upon receiving invoking access indication from the one or more target applications. Also, the source application and the one or more target applications are integrated with the global context unit. | 2013-02-21 |
20130046964 | SYSTEM AND METHOD FOR ZERO PENALTY BRANCH MIS-PREDICTIONS - A system and method may execute a branch instruction in a program. The branch instruction may be received defining a plurality of different possible instruction paths. Instructions for an initial predefined one of the paths may be automatically retrieved from a program memory while the correct path is being determined. If the initial path is determined to be correct, the instructions retrieved for the initial path may continue to be processed and if a different path is determined to be correct, instructions from a stored reserve of instructions may be processed for the different path to supply the program with enough correct path instructions to run the program at least until the program retrieves the correct path instructions from the program memory to recover from taking the incorrect path. The system and method may recover from taking the incorrect path with zero computational penalty. | 2013-02-21 |
20130046965 | SYSTEM AND METHOD OF DERIVING APPROPRIATE TARGET OPERATING ENVIRONMENT - The present invention relates to a configurable, parameter-driven system and method for providing an appropriate target operating environment based on user-specific needs and enterprise objectives. The configuration parameters can be changed to account for newer computing environment solutions that could appear, and can also be tailored for enterprise-specific needs. The method fingerprints the end users based on characteristics and requirements to derive user needs and enterprise criteria. The method is systematic, flexible, and amenable to change in a varying enterprise environment. | 2013-02-21 |
20130046966 | Preloader - This disclosure describes techniques and/or apparatuses for reducing the total time used to boot up a computer and load applications onto the computer. | 2013-02-21 |
20130046967 | Proactive Power Management Using a Power Management Unit - Embodiments of the present disclosure provide systems and methods for proactively managing power in a device. A power management unit (PMU) receives information from various subsystems of a device and estimates the total power required by each subsystem of the device. Based on this information, the PMU can predict power requirements for a particular subsystem or for one or more application(s) to execute. Based on this prediction, the PMU can reconfigure the subsystems so that the device executes more efficiently given the current battery life of the device. Proactive power management advantageously gives the PMU the capability to predict power needs of various subsystems of a device so that the power supplied to these subsystems can be managed in an intelligent way before battery resources are exhausted. | 2013-02-21 |
20130046968 | Automobile Data Transmission - A device transmits automobile data to a server in a communication network. The device records the automobile data obtained from a plurality of sensors installed in the automobile. The device transmits a random access preamble on a first plurality of subcarriers of an uplink carrier to a base station, when a pre-defined condition is met. The device encrypts the automobile data using a first encryption key and transmits the encrypted automobile data to a server via a base station. The base station decrypts the automobile data before forwarding it to the server. | 2013-02-21 |
20130046969 | METHODS FOR DECRYPTING, TRANSMITTING AND RECEIVING CONTROL WORDS, RECORDING MEDIUM AND CONTROL WORD SERVER TO IMPLEMENT THESE METHODS - A method of transmitting control words to terminals that are mechanically and electronically independent of one another includes transmitting, to a terminal, an absent control word in response to a request from the terminal that contains a cryptogram corresponding to the absent control word, for the terminal, selectively determining a number of additional control words to be transmitted to the terminal as a function of a probability that security of the additional control words is compromised, and transmitting, to the terminal, in addition to the absent control word, the determined number of additional control words to enable the terminal to descramble at least one additional cryptoperiod of the multimedia content in addition to the cryptoperiod of the multimedia content that can be descrambled using the absent control word. | 2013-02-21 |
20130046970 | PERIPHERAL APPARATUS, INFORMATION PROCESSING APPARATUS, COMMUNICATION CONTROL METHOD, AND STORAGE MEDIUM - A peripheral apparatus is communicably connected to a management apparatus. The management apparatus manages information of jobs in services provided from a providing apparatus via a network to execute processing of the jobs. The peripheral apparatus includes a communication unit. The communication unit transmits, in a series of processes in the services, checking information used to determine whether there is any job in the management apparatus to the management apparatus by a communication method that does not execute encryption. The communication unit transmits, in the series of processes in the services, other information different from the checking information to the management apparatus by a communication method that executes encryption. | 2013-02-21 |
20130046971 | AUTHENTICATION METHOD, SYSTEM AND DEVICE - An authentication method, system and device are provided by the embodiments of the present invention. Said method includes the following steps: an Application Server (AS) receives an AS access request, which carries a user identifier, transmitted by a User Equipment (UE); the AS generates a key generation request based on the user identifier and transmits it to a network side; the AS receives the key transmitted by the network side, and authenticates the UE according to the key. In the present invention, generating the key between a terminal without a card and the AS is implemented, and the AS authenticates the UE using the generated key, and the security of the data transmission is improved. | 2013-02-21 |
20130046972 | Using A Single Certificate Request to Generate Credentials with Multiple ECQV Certificates - A method and apparatus are disclosed for using a single credential request (e.g., registered public key or ECQV certificate) to obtain a plurality of credentials in a secure digital communication system having a plurality of trusted certificate authority CA entities and one or more subscriber entities A. In this way, entity A can be provisioned onto multiple PKI networks by leveraging a single registered public key or implicit certificate as a credential request to one or more CA entities to obtain additional credentials, where each additional credential can be used to derive additional public key-private key pairs for the entity A. | 2013-02-21 |
20130046973 | FACILITATING ACCESS OF A DISPERSED STORAGE NETWORK - A method begins by a dispersed storage (DS) processing module generating a temporary public-private key pair, a restricted use certificate, and a temporary password for a device. The method continues with the DS processing encoding a temporary private key to produce a set of encoded private key shares and encoding the restricted use certificate to produce a set of encoded certificate shares. The method continues with the DS processing module outputting the set of encoded private key shares and the set of encoded certificate shares to a set of authentication units. The method continues with the DS processing module outputting the temporary password to the device such that, when the device retrieves the set of encoded private key shares and the set of encoded certificate shares, the device is able to recapture the temporary private key and the restricted use certificate for accessing a dispersed storage network (DSN). | 2013-02-21 |
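The share-encoding step above can be sketched with n-of-n XOR secret sharing: the temporary private key is split into encoded shares distributed to the authentication units, and the device recombines all of them to recapture the key. The patent's dispersed-storage encoding is likely a threshold scheme; plain XOR splitting is a deliberate simplification.

```python
import secrets

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_shares(secret: bytes, n: int):
    """Split `secret` into n shares; all n are required to recombine it."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:            # last share = secret XOR all random shares
        last = _xor(last, s)
    return shares + [last]

def recombine(shares):
    """XOR every share together to recover the original secret."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = _xor(out, s)
    return out
```

Any n-1 shares are statistically independent of the secret, which is why the authentication units individually learn nothing about the temporary private key.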
20130046974 | DYNAMIC SYMMETRIC SEARCHABLE ENCRYPTION - Described herein is an efficient, dynamic Symmetric Searchable Encryption (SSE) scheme. A client computing device includes a plurality of files and a dictionary of keywords. An index is generated that indicates, for each keyword and each file, whether a file includes a respective keyword. The index is encrypted and transmitted (with encryptions of the files) to a remote repository. The index is dynamically updateable at the remote repository, and can be utilized to search for files that include keywords in the dictionary without providing the remote repository with information that identifies content of the file or the keyword. | 2013-02-21 |
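A minimal sketch of a dynamic searchable index in the spirit of the abstract (not the patented scheme): the client maps an HMAC-derived "search token" of each keyword to a masked list of matching file ids, so the server can answer a query given only the token, without learning the keyword or file contents. The token derivation, the toy keystream masking, and the update mechanism are all illustrative assumptions.

```python
import hashlib
import hmac

def search_token(master_key: bytes, keyword: str) -> bytes:
    """Deterministic per-keyword token; the server never sees the keyword itself."""
    return hmac.new(master_key, b"tok|" + keyword.encode(), hashlib.sha256).digest()

def _keystream(tok: bytes, n: int) -> bytes:
    # Expand the token into n keystream bytes via counter-mode hashing (toy construction).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(tok + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def build_index(master_key, files):
    """files: dict file_id -> set of keywords. Returns token -> masked id list."""
    index = {}
    for file_id, keywords in files.items():
        for kw in keywords:
            tok = search_token(master_key, kw)
            blob = bytes(a ^ b for a, b in zip(file_id.encode(), _keystream(tok, len(file_id))))
            index.setdefault(tok, []).append(blob)
    return index

def search(index, tok):
    """Server-side lookup; whoever holds `tok` can unmask the matching ids."""
    return [bytes(a ^ b for a, b in zip(blob, _keystream(tok, len(blob)))).decode()
            for blob in index.get(tok, [])]

def add_file(index, master_key, file_id, keywords):
    """Dynamic update: merge a new file's postings into the existing index."""
    for tok, blobs in build_index(master_key, {file_id: keywords}).items():
        index.setdefault(tok, []).extend(blobs)
```

The dynamic-update path (`add_file`) is what distinguishes this family of schemes from static SSE: new files are folded into the encrypted index at the remote repository without rebuilding it.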
20130046975 | SYSTEM, METHOD, AND PROGRAM FOR INFORMATION MANAGEMENT - A system and method of decrypting are provided. The method includes grouping domain data of the domain for authorized parties; encrypting a group of leaves in the grouped data, which has a tree structure, using a common key; generating first public data; obtaining a common key by decrypting the first public data using a secret key of a link creator and decrypting the groups using the common key and the secret key; generating a table of propagating records; generating second public data by encrypting the table using a common key; obtaining a common key by decrypting the first public data and the second public data using a secret key; and generating a view by decrypting data received from a method for the link creator using the common key obtained by decrypting the first public data and the second public data using the secret key. | 2013-02-21 |
20130046976 | System and Method for Accessing Private Networks - A system and method are provided for using a mobile device to authenticate access to a private network. The mobile device may operate to receive a challenge from an authentication server, the challenge having been generated according to a request to access a private network; obtain a private value; use the private value, the challenge, and a private key to generate a response to the challenge; and send the response to the authentication server. An authentication server may operate to generate a challenge; send the challenge to a mobile device; receive a response from the mobile device, the response having been generated by the mobile device using a private value, the challenge, and a private key; verify the response; and confirm verification of the response with a VPN gateway to permit a computing device to access a private network. | 2013-02-21 |
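The challenge-response exchange above can be sketched as follows: the server issues a random challenge; the device combines a private value, the challenge, and a private key (modelled here as an HMAC key) into a response the server verifies. The HMAC construction is an assumption for illustration; the patent does not specify the response function.

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> bytes:
    """Authentication server: generate a fresh random challenge."""
    return secrets.token_bytes(16)

def device_response(private_key: bytes, private_value: bytes, challenge: bytes) -> bytes:
    """Mobile device: bind the private value and challenge under the private key."""
    return hmac.new(private_key, private_value + challenge, hashlib.sha256).digest()

def server_verify(private_key: bytes, private_value: bytes,
                  challenge: bytes, response: bytes) -> bool:
    """Server: recompute the expected response and compare in constant time."""
    expected = device_response(private_key, private_value, challenge)
    return hmac.compare_digest(expected, response)
```

On successful verification, the server would then confirm the result with the VPN gateway so the computing device is granted access.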
20130046977 | SECURE STREAMING CONTAINER - A system and method for securely streaming encrypted digital media content out of a digital container to a user's media player. This streaming occurs after the digital container has been delivered to the user's machine and after the user has been authorized to access the encrypted content. The user's operating system and media player treat the data stream as if it were being delivered over the Internet (or other network) from a streaming web server. However, no Internet connection is required after the container has been delivered to the user, and the data stream suffers no quality loss due to network traffic or web server access problems. Encrypted content files are decrypted and fed to the user's media player in real time and are never written to the user's storage device. This process makes unauthorized copying of the digital content contained in the digital container virtually impossible. | 2013-02-21 |
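The delivery mechanism described above can be sketched as a loopback-only HTTP server that decrypts container content on the fly and streams it to a local media player, so plaintext never touches disk. A single-byte XOR stands in for the real cipher, and all names are illustrative assumptions.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

KEY = 0x5A  # toy cipher key; a real container would use a proper cipher
ENCRYPTED_CONTENT = bytes(b ^ KEY for b in b"pretend media stream")

class ContainerStreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()
        # Decrypt and stream in real time; plaintext is never written to storage.
        for i in range(0, len(ENCRYPTED_CONTENT), 8):
            chunk = ENCRYPTED_CONTENT[i:i + 8]
            self.wfile.write(bytes(b ^ KEY for b in chunk))

    def log_message(self, *args):  # keep the sketch quiet
        pass

def serve_once():
    """Bind to loopback on an ephemeral port; a media player would GET this URL."""
    server = HTTPServer(("127.0.0.1", 0), ContainerStreamHandler)
    threading.Thread(target=server.handle_request, daemon=True).start()
    return f"http://127.0.0.1:{server.server_port}/"
```

Because the server binds only to 127.0.0.1, the player sees an ordinary web stream while no network connection beyond the local machine is involved.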
20130046978 | REPLICATION SERVER SELECTION METHOD - A method for a client computer to find a network address of a server computer by searching for the network address using a backup search procedure if the address of the server computer cannot be identified using a primary search procedure. The primary and backup search procedures can be performed in parallel, and multiple backup search procedures can be performed to identify the address of the server computer. Alternatively, the primary and backup search procedures can be performed in serial, wherein the backup search procedure is performed only when the primary search procedure does not identify the address of the server computer. | 2013-02-21 |
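Both strategies from the abstract can be sketched directly. In the serial variant the client tries the primary procedure first and falls back through backups; in the parallel variant all procedures run at once and the first non-empty result wins. The procedure names and return convention (address string or `None`) are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def resolve_server(primary, backups):
    """Serial strategy: each procedure returns an address or None; first hit wins."""
    for procedure in [primary, *backups]:
        address = procedure()
        if address is not None:
            return address
    return None

def resolve_parallel(procedures):
    """Parallel strategy: run all procedures at once, take the first non-None result."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(p) for p in procedures]
        for fut in as_completed(futures):
            if fut.result() is not None:
                return fut.result()
    return None

# Illustrative procedures: a cache lookup that misses and a directory-style fallback.
def cache_lookup():
    return None           # primary procedure fails to identify the address

def directory_lookup():
    return "10.0.0.42"    # backup procedure succeeds
```

The serial form avoids wasted queries when the primary usually succeeds; the parallel form trades extra traffic for lower worst-case latency.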
20130046979 | PROTECTING THE INFORMATION ENCODED IN A BLOOM FILTER USING ENCODED BITS OF DATA - Illustrated is a system and method that includes identifying data stored as an entry in a list. The system and method also includes truncating the entry to create a truncated entry. It further includes transforming the truncated entry into a hash, the hash used to set an index position value within a Bloom filter. The system and method also includes an interface module to transmit the Bloom filter. | 2013-02-21 |
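The described pipeline (truncate each list entry, hash the truncated form, set index positions in a Bloom filter) can be sketched as below. The truncation length, number of hash functions, and filter size are illustrative choices, not values from the patent.

```python
import hashlib

class TruncatingBloomFilter:
    def __init__(self, size=1024, hashes=3, truncate_to=8):
        self.size = size
        self.hashes = hashes
        self.truncate_to = truncate_to
        self.bits = bytearray(size)

    def _positions(self, entry: str):
        truncated = entry[: self.truncate_to]        # step 1: truncate the entry
        for i in range(self.hashes):                 # step 2: derive k hashes
            digest = hashlib.sha256(f"{i}|{truncated}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, entry):
        for pos in self._positions(entry):           # step 3: set index positions
            self.bits[pos] = 1

    def __contains__(self, entry):
        return all(self.bits[pos] for pos in self._positions(entry))
```

Note that truncation deliberately collapses entries sharing a prefix onto the same positions, which adds ambiguity on top of the Bloom filter's usual false positives and helps protect the encoded information.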
20130046980 | HOME NODE-B APPARATUS AND SECURITY PROTOCOLS - A method for authenticating a home node B/home evolved node B (H(e)NB) with a network is disclosed. The method includes securely storing H(e)NB location information in a Trusted Environment (TrE); and securely sending the stored H(e)NB location information to the network via the TrE. | 2013-02-21 |
20130046981 | SECURE PROVISIONING OF INTEGRATED CIRCUITS AT VARIOUS STATES OF DEPLOYMENT, METHODS THEREOF - An integrated circuit is provisioned after the integrated circuit has been sold and integrated into a customer's product. During provisioning, the integrated circuit is booted in a secure manner using a security value, such as a cryptographic key, owned by a manufacturer of the integrated circuit, or by a purchaser of the integrated circuit, to establish a secure communications channel with a provisioning server. Once the secure communications channel is established, the integrated circuit can be provisioned with a security value that is owned by the purchaser of the integrated circuit and the manufacturer's security value is disabled. | 2013-02-21 |
20130046982 | APPARATUS AND METHOD FOR SUPPORTING FAMILY CLOUD IN CLOUD COMPUTING SYSTEM - A method and an apparatus for effective data sharing between users in a cloud computing system are provided. The cloud computing system includes a first cloud hub, installed by a user, and a User Equipment (UE). The first cloud hub provides a cloud service to a UE connected via public cloud access and to a UE connected through a public personal cloud system installed by a service provider. The UE subscribes to the first cloud hub as its main cloud and inquires as to data stored in the first cloud hub. | 2013-02-21 |
20130046983 | AUTHENTICATION METHOD AND DEVICE, AUTHENTICATION CENTRE AND SYSTEM - An authentication method and device, authentication centre and system are provided. The method comprises: receiving at least one access request and obtaining sub-key information from the access request; generating a group key according to the obtained sub-key information, and interacting with the network side according to the group key to perform the group authentication. The solution addresses the network load caused by one-to-one authentication in the prior art, implements authentication of multiple nodes at one time, reduces consumption of network resources and the network load on the server, is well suited to authenticating terminal nodes in the Internet of Things, and can greatly improve the availability of services in the Internet of Things. | 2013-02-21 |
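The aggregation step can be sketched as follows: sub-key information extracted from each node's access request is combined into a single group key, so the network side authenticates the whole group in one exchange. The combining function (sorted concatenation hashed with SHA-256) and the request format are assumptions for illustration, not the patent's construction.

```python
import hashlib

def extract_sub_key(access_request: dict) -> bytes:
    """Pull the sub-key material out of one node's access request (toy format)."""
    return bytes.fromhex(access_request["sub_key"])

def group_key(sub_keys) -> bytes:
    """Combine all sub-keys into one group key, independent of arrival order."""
    material = b"".join(sorted(sub_keys))
    return hashlib.sha256(material).digest()
```

Sorting before hashing makes the group key deterministic regardless of the order in which the nodes' requests arrive, which matters when many IoT nodes authenticate in one batch.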
20130046984 | Establishing a Secured Communication Session - The present invention relates to a method for establishing a secured communication session in a communication system between a user using an untrusted device and a server. According to the present invention, the user first obtains an authentication algorithm and an encryption algorithm and then creates a session key. Next, the user obtains a public key of the server and sends a personal identity number to the server for authentication using the authentication algorithm, the personal identity number being encrypted using the encryption algorithm and the public key of the server. The user also sends the session key to the server for encryption purposes between the user and the server, the session key being encrypted using the encryption algorithm and the public key of the server. | 2013-02-21 |
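A toy, runnable model of this handshake: the client encrypts its PIN and a fresh session key under the server's public key, and the server recovers both with its private key. Textbook RSA with tiny parameters stands in for the encryption algorithm; it is purely illustrative and insecure, and a real deployment would use padded RSA or an ECIES-style scheme.

```python
import secrets

# Toy server key pair: n = 3233 (61 * 53), e = 17, d = 2753.
PUBLIC = (3233, 17)
PRIVATE = (3233, 2753)

def pk_encrypt(public, m: int) -> int:
    n, e = public
    return pow(m, e, n)

def pk_decrypt(private, c: int) -> int:
    n, d = private
    return pow(c, d, n)

def client_hello(public, pin: int):
    """Client: send the PIN for authentication plus a fresh session key,
    both encrypted under the server's public key."""
    session_key = secrets.randbelow(public[0] - 2) + 1
    return pk_encrypt(public, pin), pk_encrypt(public, session_key), session_key

def server_accept(private, enc_pin, enc_session_key, expected_pin):
    """Server: authenticate the PIN, then adopt the session key for the session."""
    if pk_decrypt(private, enc_pin) != expected_pin:
        return None
    return pk_decrypt(private, enc_session_key)
```

After this exchange both sides hold the same session key, which can then protect the rest of the session even though the client device itself is untrusted.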