36th week of 2011 patent application highlights part 50 |
Patent application number | Title | Published |
20110219121 | RESILIENT ROUTING FOR SESSION INITIATION PROTOCOL BASED COMMUNICATION SYSTEMS - Resilient routing management approaches are provided based on primary/backup and failover/failback relationships in a clustered network environment, where each user and/or resource is assigned to a primary cluster and at least one backup cluster. A distributed handover mechanism enables global knowledge of primary/backup relationships between clusters and their assigned users or resources. | 2011-09-08 |
20110219122 | REMOTE CONTENT CLASSIFICATION AND TRANSMISSION USING MULTIPLE TRANSPORT CHANNELS - In various embodiments, methods and systems are disclosed for the implementation of multiple transport channels between the client and server. Each of the channels may be adapted to efficiently communicate data for a particular data type and thus be particularly well suited for its data-element characteristics and the detected link characteristics between the client and server. | 2011-09-08 |
20110219123 | NETWORK FIREWALL AND NAT TRAVERSAL FOR TCP AND RELATED PROTOCOLS - A message passing protocol allows two clients to establish a connection even when the clients are behind different NAT devices such as NAT firewalls. Beneficially, the protocol does not require that either client has knowledge of where the other client is located (e.g., behind the same NAT device or behind a different NAT device). When two clients want to establish a connection, the clients exchange identifying information with each other by passing the information through a rendezvous server. Based on the identifying information, each client determines and sends a plurality of synchronization packets to a number of different predicted addresses. When synchronization packets reach the actual addresses of both devices, a connection can be established between the clients. | 2011-09-08 |
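The predicted-address step of the NAT traversal above can be sketched in Python. This is a minimal illustration, not the patented method: it assumes a NAT that allocates external ports roughly sequentially from the last port the rendezvous server observed, and the function name and window size are hypothetical.

```python
def predict_endpoints(peer_ip, last_seen_port, window=8):
    """Guess the peer's next external (ip, port) pairs, assuming a NAT
    that allocates ports roughly sequentially from the last observed one."""
    candidates = []
    for delta in range(window):
        port = last_seen_port + delta
        if 0 < port <= 65535:          # keep only valid port numbers
            candidates.append((peer_ip, port))
    return candidates

# Each client would send a synchronization packet to every candidate;
# a connection can be established once a packet lands on the real endpoint.
probes = predict_endpoints("203.0.113.7", 62000, window=4)
```

In practice the prediction strategy would depend on the NAT's observed allocation behavior; sequential allocation is only one common case.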
20110219124 | SYSTEM AND METHOD FOR TWO WAY COMMUNICATION AND CONTROLLING CONTENT IN A WEB BROWSER - A system and method for connected devices over a network includes: receiving, by an address registration server, a communication from a host device and a communication from an endpoint device; determining whether the host device and the endpoint device are connected to a single local network and whether the host device and the endpoint device are each executing a compatible application; and facilitating a network connection between the endpoint device and the host device over the local network by providing a private network address of the endpoint device to the host device. | 2011-09-08 |
20110219125 | Endoscopy device with integrated RFID and external network capability - A unit of equipment designed for use in endoscopic surgery includes radio frequency identification (RFID) circuitry and a network interface. The RFID circuitry can be used to store information of various types, such as component usage tracking information, user preferences, usage logs, error logs, device settings, etc. The network interface allows the unit to communicate over an external network with a remote server. Information, such as information stored in the RFID circuitry or in a separate memory, may be sent over the network to a desired destination, such as a server operated by the manufacturer of the equipment. | 2011-09-08 |
20110219126 | MOBILE TERMINAL, FORWARDING INTERMEDIATE NODE AND MOBILE COMMUNICATIONS SYSTEM | 2011-09-08 |
20110219127 | Method and Apparatus for Selecting Network Services - An approach is provided for selecting a network server. An apparatus comprises at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause, at least in part, the apparatus to load, from one or more network servers, configuration information of one or more network servers used by a service provider network. The apparatus is also caused to select a network server in the service provider network based at least in part on at least one of the network server latency and the network server load. The apparatus is further caused to set the network server as the default network server used for at least one of a current and a future session on one or more user equipment. | 2011-09-08 |
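The latency-and-load selection described above can be illustrated with a short sketch. The weighting scheme, server names, and field names here are assumptions for illustration; the abstract only says selection is based on at least one of latency and load.

```python
def select_default_server(servers, latency_weight=0.5, load_weight=0.5):
    """Pick the server with the lowest weighted combination of measured
    latency (milliseconds) and reported load (0.0-1.0 fraction of capacity)."""
    def score(s):
        # Scale load to a comparable range before combining with latency.
        return latency_weight * s["latency_ms"] + load_weight * s["load"] * 100
    return min(servers, key=score)

servers = [
    {"name": "sip1.example.net", "latency_ms": 40, "load": 0.90},
    {"name": "sip2.example.net", "latency_ms": 55, "load": 0.20},
]
# The selected server would then be set as the default for future sessions.
default = select_default_server(servers)
```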
20110219128 | FAST VIRTUAL CONCATENATION SETUP - The invention is directed to optimizing the setup of VCAT connections using (largest) CCAT containers so as to minimize the number of cross-connection commands needed to enable data transfer. A system and method are provided for enhancing VCAT networks to include faster service restoration rates and faster connection setup times. One embodiment includes expanding available VCAT timeslots to include available CCAT timeslots. A routing and signaling control module alerts a source network element, internal network elements and a destination network element that the data transmission includes VCAT payloads rather than the expected CCAT payloads. By issuing this alert, the routing and signaling control module instructs an end-point monitoring function to overlook any mismatch between the expected CCAT rate and the received VCAT traffic. Otherwise, if the mismatch is not overlooked, the end-point monitoring function will squelch the received VCAT traffic, which terminates the data communication. | 2011-09-08 |
20110219129 | SYSTEM AND METHOD FOR CONNECTING NETWORK SOCKETS BETWEEN APPLICATIONS - A system and method for establishing communication over a network includes devices, instructions, and/or operations for: executing a browser application within a web browser, the web browser including a security mechanism for restricting access to and from the browser application; receiving, by the browser application, a private network address of an endpoint device; establishing a first network socket connection and a second network socket connection between the browser application and an application; and sending loss-sensitive network traffic over the first network socket connection and loss-tolerant network traffic over the second network socket connection. | 2011-09-08 |
20110219130 | SYSTEM AND METHOD FOR TWO WAY COMMUNICATION AND CONTROLLING CONTENT IN A GAME - A method for two way communication and control of a game may include executing, by a host device in communication with a computer network, a game application within a web browser. A communication channel is established over the computer network and between the game application and a controller application running on an endpoint device. Data is sent over the communication channel for controlling and playing the game application. A system for two way communication and control of a game is also provided. | 2011-09-08 |
20110219131 | SYSTEM AND METHOD FOR TWO WAY COMMUNICATION AND CONTROLLING A REMOTE APPARATUS - A system for controlling a remotely controlled apparatus includes a remotely controlled apparatus executing a program for controlling its actions. A first network connection may be established between the remotely controlled apparatus and a network hub device configured to extend the range of a computer network. A second network connection may be established between the network hub device and an endpoint device executing an application for controlling the remotely controlled apparatus. The endpoint device may send data over the first and second network connections for controlling the remotely controlled apparatus. An application executing within a web browser may process the data. A method for controlling a remotely controlled apparatus is also provided. | 2011-09-08 |
20110219132 | METHOD, SYSTEM AND APPARATUS FOR CONFIGURING A DEVICE FOR INTERACTION WITH A SERVER - A method, system and apparatus for configuring a device for interaction with a server are provided. An intermediation infrastructure mediates registration traffic between any of a plurality of application servers hosting a server-side application and any of a plurality of computing devices executing a client-side application that corresponds to the server-side application. The intermediation infrastructure receives account registration information, including an account identifier and a server identifier, from an application server that is hosting an account. Any one of the computing devices can access the intermediation infrastructure using the account identifier and thereby determine the server identifier, and thereafter direct communications with the application server can be effected. | 2011-09-08 |
20110219133 | REGISTER CLUSTERING IN A SIP-BASED NETWORK - In one embodiment, a method can include: receiving a request for service in a first edge proxy; applying a hash function to a source address of an endpoint; and forwarding the request to a second edge proxy in response to a first result of the hash function, or servicing the request in the first edge proxy in response to a second result of the hash function. | 2011-09-08 |
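The hash-and-forward behavior in the abstract above can be sketched as follows. This is an illustrative assumption, not the patented embodiment: the proxy names are hypothetical, and FNV-1a stands in for whatever hash function the cluster would agree on (a deterministic hash is used deliberately, since every proxy must compute the same result for a given source address).

```python
def hash_addr(addr):
    # 32-bit FNV-1a: a simple deterministic string hash, so all proxies
    # in the cluster map the same source address to the same owner.
    h = 0x811C9DC5
    for b in addr.encode():
        h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF
    return h

def owning_proxy(source_address, proxies):
    """Map an endpoint's source address to the edge proxy responsible for it."""
    return proxies[hash_addr(source_address) % len(proxies)]

def handle_register(request_source, local_proxy, proxies):
    owner = owning_proxy(request_source, proxies)
    if owner == local_proxy:
        return "service locally"          # first hash result: we own it
    return f"forward to {owner}"          # second hash result: another proxy owns it

proxies = ["edge-a", "edge-b", "edge-c"]
```

A consistent-hashing variant would reduce reshuffling when proxies join or leave, but the simple modulo shown matches the two-outcome decision the abstract describes.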
20110219134 | Method and Arrangement for Controlling Sessions in a Communication Network - A method and an apparatus in a multimedia network node | 2011-09-08 |
20110219135 | INFORMATION PROCESSING DEVICE, COMMUNICATION ADDRESS PROVIDING SYSTEM, METHOD AND PROGRAM USED FOR SAME - An information processing device includes: communication address providing means | 2011-09-08 |
20110219136 | INTELLIGENT AUDIO AND VISUAL MEDIA HANDLING - Methods, apparatus, and articles of manufacture for transmitting data. A first device defining a preferred language may be configured to receive a media stream from a second device. The second device may be configured to make public broadcasts in a plurality of languages to the first device and other devices. The second device interrupts the media stream at the first device only during transmission of the public broadcast in the preferred language. | 2011-09-08 |
20110219137 | PEER-TO-PEER LIVE CONTENT DELIVERY - A peer-to-peer live content delivery system and method enables peer-to-peer sharing of live content such as, for example, streaming video or audio. Nodes receive broadcasts of available data from neighboring nodes and determine which data blocks to request. Nodes receiving requests for data determine whether or not to accept the requests and provide the requested blocks when accepted. To enable sharing of live content, sharing of data blocks is constrained such that a node attempts to receive a particular data block prior to a playback deadline for the data block. This allows a node to continuously provide an output stream of the received data such as, for example, an output of live video content to a display. | 2011-09-08 |
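The deadline-constrained block selection described above can be sketched as a small scheduling function. Earliest-deadline-first ordering is an assumption for illustration; the abstract only requires that blocks be requested before their playback deadlines.

```python
def choose_requests(available, now, already_have, max_requests=4):
    """Pick which data blocks to request from neighbors.

    `available` maps block_id -> playback_deadline. Blocks whose deadline
    has already passed are useless for live playback, so they are skipped;
    the rest are requested earliest-deadline-first.
    """
    candidates = [
        (deadline, block_id)
        for block_id, deadline in available.items()
        if block_id not in already_have and deadline > now
    ]
    candidates.sort()  # earliest deadline first
    return [block_id for _, block_id in candidates[:max_requests]]

# Blocks 3 and 5 are still playable; block 1's deadline has passed.
picks = choose_requests({1: 10.0, 3: 12.5, 5: 11.0}, now=10.5, already_have=set())
```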
20110219138 | APPARATUS AND METHOD FOR PROVIDING STREAMING SERVICE IN A DATA COMMUNICATION NETWORK - Provided is an apparatus and method for providing an adaptive streaming service based on a Moving Picture Experts Group (MPEG) file in a data communication network. An MPEG file format defined to support an adaptive streaming service between a server and a client is processed to support an adaptive streaming service between a client and a client. An MPEG file format for a particular message used in a procedure for the adaptive streaming service between the server and the client and a procedure for the adaptive streaming service between the clients is newly defined. In particular, as to the newly defined MPEG file format, necessary information for supporting the adaptive streaming service between clients is defined. As the MPEG file format, the Transport Stream file formats defined in the MPEG-2 and MPEG-4 transport standards are both considered. | 2011-09-08 |
20110219139 | USING END-TO-END CREDIT FLOW CONTROL TO REDUCE NUMBER OF VIRTUAL LANES IMPLEMENTED AT LINK AND SWITCH LAYERS - A method and circuit for implementing enhanced transport layer flow control, and a design structure on which the subject circuit resides, are provided. The transport layer provides multiple virtual lanes to application layers, and provides buffering and credit control for the multiple virtual lanes. A source transport layer sends a credit request message to a destination transport layer for transmission of outstanding packets. The packets are sent only responsive to the credit request being granted by the destination transport layer. The respective switch and link layers are constructed to support only a single virtual lane, regardless of how many virtual lanes are supported at the application and transport layers. As a result, the routing, buffering, and flow control at the respective switch and link layers are simplified. | 2011-09-08 |
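The request/grant credit exchange above can be modeled with a minimal sketch. The class and its buffer-slot sizing are hypothetical illustrations of end-to-end credit flow control in general, not the patented circuit.

```python
class CreditGate:
    """Destination-side credit pool: a source must be granted credits
    before transmitting, so the destination's buffers can never overflow."""
    def __init__(self, buffer_slots):
        self.free = buffer_slots

    def request(self, packets):
        """Grant up to `packets` credits, bounded by free buffer space."""
        granted = min(packets, self.free)
        self.free -= granted
        return granted

    def release(self, packets):
        """Return credits once the destination has drained the packets."""
        self.free += packets

dest = CreditGate(buffer_slots=4)
granted = dest.request(6)   # source asked to send 6; only 4 buffers are free
```

Because transmission is gated entirely by this end-to-end exchange, the intermediate switch and link layers need no per-lane buffering of their own, which is the simplification the abstract claims.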
20110219140 | SYSTEM AND METHOD FOR NETWORK BANDWIDTH SIZING - The present invention relates to a system and method for network bandwidth sizing at a branch level by considering both sizing for throughput and sizing for response times. The invention provides a system and method for bandwidth allocation by performing network capacity planning by applying a model based on Approximate Mean Value Analysis (AMVA). The invention also relates to a system and method for bandwidth allocation by performing network capacity planning, especially for large enterprises having heavy workloads and diverse applications, by applying a model based on Approximate Mean Value Analysis (AMVA). | 2011-09-08 |
20110219141 | Modification of Small Computer System Interface Commands to Exchange Data with a Networked Storage Device Using AT Attachment Over Ethernet - A process executed by a computing device uses commands having a first format to exchange data through a network with a storage device configured to execute commands having a second format. A storage device controller identifies a command type associated with a command received from the process and identifies one or more physical memory addresses associated with the command. The storage device controller identifies a command having a second format associated with the received command and generates a network request including the command having the second format, the one or more physical memory addresses, a device identifier associated with the storage device and a tag. The network request is transmitted through a network to the storage device which executes the command having the second format. For example, an AoE request including an ATA command is generated from a received SCSI command. | 2011-09-08 |
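The SCSI-to-ATA translation in the example above (a SCSI command becoming an ATA command inside an AoE request) can be sketched as follows. This is a simplified illustration: a real AoE request is an Ethernet frame (EtherType 0x88A2) with major/minor shelf-and-slot addressing, and a dict stands in for it here. The opcode pairs shown (SCSI READ(10) 0x28 to ATA READ DMA EXT 0x25, SCSI WRITE(10) 0x2A to ATA WRITE DMA EXT 0x35) are standard values, but the mapping table is deliberately minimal.

```python
SCSI_TO_ATA = {
    0x28: 0x25,  # SCSI READ(10)  -> ATA READ DMA EXT
    0x2A: 0x35,  # SCSI WRITE(10) -> ATA WRITE DMA EXT
}

def build_aoe_request(scsi_opcode, lba, sector_count, device_id, tag):
    """Translate a SCSI opcode to its ATA equivalent and wrap it in a
    simplified AoE-style request (dict in place of the real frame)."""
    ata_command = SCSI_TO_ATA[scsi_opcode]
    return {
        "ata_command": ata_command,
        "lba": lba,                # starting logical block address
        "sectors": sector_count,
        "device": device_id,       # identifies the target AoE storage device
        "tag": tag,                # lets the controller match the reply
    }

req = build_aoe_request(0x28, lba=2048, sector_count=8, device_id=(1, 0), tag=0x42)
```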
20110219142 | Path Selection In Streaming Video Over Multi-Overlay Application Layer Multicast - A method and a tool based on achievable bandwidth as a metric are provided for selecting paths for overlay construction in an application layer multicast system. An in-band bandwidth probing tool according to the invention can estimate achievable bandwidth, i.e., the data throughput that can be realized between two peers over the transport protocol employed. The tool can determine the amount of extra bandwidth available in the target network path so that excess data traffic can be diverted from congested path without causing new congestion in the target path. | 2011-09-08 |
20110219143 | PATH CALCULATION ORDER DECIDING METHOD, PROGRAM AND CALCULATING APPARATUS - A path calculation order deciding method that is implemented by a calculating apparatus | 2011-09-08 |
20110219144 | SYSTEMS AND METHODS FOR COMPRESSION OF DATA FOR BLOCK MODE ACCESS STORAGE - Methods and systems for creating, reading, and writing compressed data for use with a block mode access storage. The compressed data are packed into a plurality of compressed units and stored in a storage logical unit (LU). One or more corresponding compressed units may be read and/or updated with no need of restoring the entire storage logical unit while maintaining a de-fragmented structure of the LU. | 2011-09-08 |
20110219145 | NETWORK INTERFACE AND PROTOCOL - A communication interface for providing an interface between a data link and a data processor, the data processor being capable of supporting an operating system and a user application, the communication interface being arranged to: support a first queue of data received over the link and addressed to a logical data port associated with a user application; support a second queue of data received over the link and identified as being directed to the operating system; and analyse data received over the link and identified as being directed to the operating system or the data port to determine whether that data meets one or more predefined criteria, and if it does meet the criteria transmit an interrupt to the operating system. | 2011-09-08 |
20110219146 | VIRTUAL SOFTWARE APPLICATION DEPLOYMENT CONFIGURATIONS - Configuration items for a software application can be automatically and/or manually discovered, and the application can be packaged to form a virtual application package. A deployment configuration can include settings for the configuration items. The deployment configuration can be set after packaging the software application. For example, a selected configuration item in the deployment configuration may be changed in response to user input. The virtual application package can be deployed to instantiate the application one or more times, and the deployment configuration can be applied in the instantiated application. | 2011-09-08 |
20110219147 | Method And System For Determining Characteristics Of An Attached Ethernet Connector And/Or Cable - A connector comprising a storage device that stores configuration information, may be coupled to a twisted pair cable and may communicate the configuration information to a host device via a corresponding connector. The configuration information may comprise characteristics, features and/or configurations of the connector and/or the cable, for example, wire gauge, safety information, cable category, verification of testing, inner shielding, outer shielding, no shielding, type of use, and/or country of manufacture. The storage device may comprise an EPROM. The configuration information may be communicated utilizing one or more configured pins. The corresponding connector may sense and/or read the configuration information from the connector. The corresponding connector may be mechanically ganged and/or communicatively coupled to other connectors that are integrated in the host device. A single controller may control acquisition of configuration information. A data rate for communicating via the connector and/or cable may be determined based on the configuration information. | 2011-09-08 |
20110219148 | Method for implementing and application of a secure processor stick (SPS) - Systems and methods for implementing a secure processor stick are described. In one aspect, a system for implementing a secure processor stick with a computer comprises: a secure processor stick, including: a processor; a memory coupled to said processor; a smart chip coupled to said processor, said smart chip storing data for implementing a secure environment; and an operating system adapted to run on said memory and said processor, wherein said operating system is adapted to provide a secure environment for display on a computer using said data. | 2011-09-08 |
20110219149 | SYSTEMS AND METHODS FOR MANAGING I/O THROUGHPUT FOR LARGE SCALE COMPUTING SYSTEMS - Systems and methods for managing I/O throughput for large scale computing systems are provided. In one embodiment, an operating system for a computer system having a processor, a memory and at least one data storage device is provided. The operating system comprises: an operating system kernel; at least one filesystem controlling access to the at least one data storage device; and a toolkit module installed within the operating system kernel. The toolkit module monitors input/output (I/O) calls communicated via a datapath between at least one software application being executed on the processor and the filesystem. The toolkit module inserts one or more tools into the datapath, the one or more tools each executing a predefined function based on observation of a first set of the I/O calls being communicated in the datapath. | 2011-09-08 |
20110219150 | DMA ENGINE CAPABLE OF CONCURRENT DATA MANIPULATION - Disclosed is a method and device for concurrently performing a plurality of data manipulation operations on data being transferred via a Direct Memory Access (DMA) channel managed by a DMA controller/engine. A Control Data Block (CDB) that controls where the data is retrieved from, delivered to, and how the plurality of data manipulation operations are performed may be fetched by the DMA controller. A CDB processor operating within the DMA controller may read the CDB and set up the data reads, data manipulation operations, and data writes in accord with the contents of the CDB. Data may be provided from one or more sources and data/modified data may be delivered to one or more destinations. While data is being channeled through the DMA controller, the DMA controller may concurrently perform a plurality of data manipulation operations on the data, such as, but not limited to: hashing, HMAC, fill pattern, LFSR, EEDP check, EEDP generation, XOR, encryption, and decryption. The data modification engines that perform the data manipulation operations may be implemented on the DMA controller such that the use of memory during data manipulation operations uses local RAM so as to avoid a need to access external memory during data manipulation operations. | 2011-09-08 |
20110219151 | Generation of a Formatted Unique Device Identifier From an AT Attachment Serial Number - A Network Address Authority (“NAA”) identifier associated with a storage device is generated from an Advanced Technology Attachment (“ATA”) serial number, or other identifier, associated with the storage device. The ATA serial number is received from the storage device and used to generate a unique string having a predefined length. In one embodiment, a hash function is applied to the ATA serial number to produce a unique value from the ATA serial number and a portion of the unique value, such as the least significant three bytes, is used as the string having the predefined length. Additional identifying data is combined with the predefined length string and reformatted to generate the NAA identifier. For example, an eight-byte data packet including a four-bit type identifier, a three-byte OUI and the three-byte predefined length string is generated and subsequently used to identify the storage device to processes or devices. | 2011-09-08 |
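The hash-then-pack construction described above can be sketched in Python. Assumptions are flagged explicitly: SHA-1 stands in for the unspecified hash function, the OUI bytes are a placeholder, and the abstract only accounts for 52 of the 64 bits (4-bit type + 3-byte OUI + 3-byte hash), so the remaining 12 bits are zero-filled here.

```python
import hashlib

def make_naa_identifier(ata_serial, oui=b"\x00\x1B\x21", naa_type=0x5):
    """Derive an 8-byte NAA-style identifier from an ATA serial number.

    Layout (an assumption where the abstract is silent):
    bits 63-60 type nibble | bits 59-36 OUI | bits 35-12 hash bytes | bits 11-0 zero
    """
    digest = hashlib.sha1(ata_serial.encode()).digest()
    unique = digest[-3:]                      # least-significant three bytes of the hash
    value = (naa_type << 60) \
            | (int.from_bytes(oui, "big") << 36) \
            | (int.from_bytes(unique, "big") << 12)
    return value.to_bytes(8, "big")

naa = make_naa_identifier("WD-WCC4N0123456")
```

Because the serial number is hashed rather than truncated directly, serials longer than three bytes still yield well-distributed identifiers, at the (small) cost of possible hash collisions.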
20110219152 | DATA TRANSFER CONTROL APPARATUS - In a data transfer control apparatus, a transfer start address and a transfer size are acquired from a peripheral circuit. A command is issued in response to an activation signal from the peripheral circuit. When data transfer is performed between the main memory unit and the peripheral circuit, completion of issuance of all of commands corresponding to the transfer start address and transfer size is detected. The transfer size is retained until the end of data transfer. A next command is issued prior to completion of data transfer for one command, and a next activation signal is received upon detection of completion of issuance of all of the commands corresponding to the one transfer start address and transfer size. Next transfer start address and transfer size are acquired upon detection of completion of issuance of all of the commands corresponding to the one transfer start address and transfer size. | 2011-09-08 |
20110219153 | SYSTEMS AND METHODS FOR COMPRESSION OF DATA FOR BLOCK MODE ACCESS STORAGE - Systems and methods for creating, reading, and writing compressed data for use with a block mode access storage. The compressed data are packed into a plurality of compressed units and stored in a storage logical unit (LU). One or more corresponding compressed units may be read and/or updated with no need of restoring the entire storage logical unit while maintaining a de-fragmented structure of the LU. | 2011-09-08 |
20110219154 | ABSTRACT PROTOCOL INDEPENDENT DATA BUS - An abstraction layer (e.g., transport) between consumer logic (e.g., presentation) and provider logic (e.g., business) that makes composition of, for example, many presentation technologies to many business logic data providers possible without imposing strict interface boundaries to each. The abstraction layer can be an abstract transport data model bus that provides serialization, transformation, and transport services. A core concept of the data access library implementation is a transmittable data object based on a flexible property bag data structure and abstract type system. Pluggable data providers declare the associated data model, and pluggable consumer clients declare the data model consumed (a many-to-many implementation). In other words, declarative (codeless) combinations of front ends and back ends are employed. Moreover, the abstraction layer is hidden from the developer. | 2011-09-08 |
20110219155 | PROCESSING SYSTEM AND METHOD FOR TRANSMITTING DATA - A method for exchanging data between first and second functional units includes the following steps. In a first handshake procedure, data is exchanged corresponding to a communication thread selected by the first functional unit, while independently in a second handshake procedure, information relating to a status of at least one communication thread is exchanged from the second to the first functional unit. The information enables the first functional unit to anticipate the possibility of exchanging data for the at least one communication thread. | 2011-09-08 |
20110219156 | BUS ARBITRATION APPARATUS AND METHOD - A bus arbitration apparatus according to this invention appropriately arbitrates bus rights of use between a plurality of masters and a plurality of slaves so as to efficiently perform requested data transfer. An arbiter A | 2011-09-08 |
20110219157 | DATA PROCESSING DEVICE, SEMICONDUCTOR INTEGRATED CIRCUIT DEVICE, AND ABNORMALITY DETECTION METHOD - A data processing device for detecting the abnormal operation of a CPU is provided. The data processing device comprises a CPU, an interrupt counter, and a counter-abnormal-value detection circuit. The interrupt counter increments a count value based on an interrupt start signal which is outputted in response to an interrupt signal indicative of an interrupt request to the CPU and which indicates that the interrupt request has been accepted, and decrements the count value based on an end-of-interrupt signal which indicates that processing corresponding to the interrupt has completed. The counter-abnormal-value detection circuit detects abnormalities by comparing the count value with a predetermined value. | 2011-09-08 |
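The increment/decrement counter check above is simple enough to model directly. The class, threshold, and method names are illustrative assumptions; the abstract describes the counter and comparison in hardware.

```python
class InterruptWatch:
    """Track interrupt nesting: +1 when an interrupt request is accepted,
    -1 when its processing completes. A count outside the expected range
    signals an abnormal CPU state."""
    def __init__(self, max_expected_depth=8):
        self.count = 0
        self.max_depth = max_expected_depth

    def on_interrupt_start(self):   # interrupt start signal
        self.count += 1

    def on_interrupt_end(self):     # end-of-interrupt signal
        self.count -= 1

    def abnormal(self):
        # Negative: an end with no matching start; too large: runaway nesting,
        # e.g. interrupt handlers that never complete.
        return self.count < 0 or self.count > self.max_depth

w = InterruptWatch(max_expected_depth=2)
w.on_interrupt_start()
w.on_interrupt_end()
```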
20110219158 | STORAGE ASSEMBLY, A PHYSICAL EXPANDER AND A METHOD - A storage assembly includes a physical expander for connection in use to two or more SCSI initiators, and two or more storage devices, wherein the expander is controlled such that it presents plural virtual expanders. A method for connecting two or more storage devices to two or more SCSI initiators within a storage assembly, includes providing a physical expander for connection in use to the two or more SCSI initiators, and two or more storage devices, and controlling the single expander such that it presents plural virtual expanders. | 2011-09-08 |
20110219159 | USB DONGLE DEVICE AND OPERATION METHOD THEREOF, DONGLE EXPANDED DEVICE CONNECTED TO USB DONGLE DEVICE - A universal serial bus (USB) dongle device and method support a connection to a dongle expanded device to perform high speed and multiple communications while observing a USB standard form. The USB dongle device includes a body and a plug formed in an edge of one side of the body. The plug includes a basic connection unit corresponding to a USB basic connection port and an expansion connection unit surrounding the basic connection unit, in which at least one signal line is formed. | 2011-09-08 |
20110219160 | FAST TWO WIRE INTERFACE AND PROTOCOL FOR TRANSFERRING DATA - An apparatus and method for exchanging data between devices. An interface between at least two devices features a serial clock line coupled to each device and a bidirectional serial data line coupled to each device. A delay relative to the clock signal is added to an edge of an output enable signal to prevent a collision between devices when control of the data line is switched. Multiple masters and slaves may be connected to the interface. | 2011-09-08 |
20110219161 | SYSTEM AND METHOD FOR PROVIDING ADDRESS DECODE AND VIRTUAL FUNCTION (VF) MIGRATION SUPPORT IN A PERIPHERAL COMPONENT INTERCONNECT EXPRESS (PCIE) MULTI-ROOT INPUT/OUTPUT VIRTUALIZATION (IOV) ENVIRONMENT - The present invention is a method for providing address decode and Virtual Function (VF) migration support in a Peripheral Component Interconnect Express (PCIE) multi-root Input/Output Virtualization (IOV) environment. The method may include receiving a Transaction Layer Packet (TLP) from the PCIE multi-root IOV environment. The method may further include comparing a destination address of the TLP with a plurality of base address values stored in a Content Addressable Memory (CAM), each base address value being associated with a Virtual Function (VF), each VF being associated with a Physical Function (PF). The method may further include when a base address value included in the plurality of base address values matches the destination address of the TLP, providing the matching base address value to the PCIE multi-root IOV environment by outputting from the CAM the matching base address value. The method may further include constructing a requestor ID for the VF associated with the matching base address value, the requestor ID being based upon the output matching base address value and a bus number for a PF which owns the CAM. | 2011-09-08 |
20110219162 | Adaptive-Allocation Of I/O Bandwidth Using A Configurable Interconnect Topology - Apparatus and methods allocate I/O bandwidth of an electrical component, such as an IC, by configuring an I/O interface into various types of interfaces. In an embodiment of the present invention, an I/O interface is configured into either a bi-directional contact, unidirectional contact (including either a dedicated transmit or dedicated receive contact) or a maintenance contact used in a maintenance or calibration mode of operation. The I/O interface is periodically reconfigured to optimally allocate I/O bandwidth responsive to system parameters, such as changing data workloads in the electronic components. System parameters include, but are not limited to: 1) number of transmit-receive bus turnarounds; 2) number of transmit and/or receive data packets; 3) user selectable setting; 4) number of transmit and/or receive commands; 5) direct requests from one or more electronic components; 6) number of queued transactions in one or more electronic components; 7) transmit burst-length setting; 8) duration or cycle count of bus commands and control strobes, such as address/data strobe, write enable, chip select, data valid, data ready; 9) power and/or temperature of one or more electrical components; 10) information from executable instructions, such as a software application or operating system; 11) multiple statistics over respective periods of time to determine if using a different bandwidth allocation would result in better performance. The importance of a system parameter may be weighted over time in an embodiment of the present invention. | 2011-09-08 |
20110219163 | USB 3 Bridge With Embedded Hub - A bridge device for connecting a USB 3 host device with a plurality of downstream, non-USB 3 mass storage devices, such as SATA or PATA devices. The bridge device comprises an embedded hub having a plurality of internal USB 3 devices. The internal USB 3 devices do not have a physical USB 3 interface. The bridge device also has at least one downstream physical non-USB 3 device, to which a mass storage device may be attached. The internal USB 3 devices enable the host device to be presented with a plurality of USB 3 devices. This, in turn, allows transfer to the plurality of downstream physical non-USB 3 devices, via the internal USB 3 devices at an increased rate. The bridge may also include a downstream physical USB 3 interface. This can allow multiple bridge devices to be connected together in a cascade. | 2011-09-08 |
20110219164 | I/O SYSTEM AND I/O CONTROL METHOD - Virtual Functions (VFs) | 2011-09-08 |
20110219165 | PORTABLE COMPUTER - The portable computer includes a PCIe controller, a DisplayPort connector, and a combination switch. The DisplayPort connector includes a hot plug pin. The combination switch is connected between the PCIe controller and the DisplayPort connector. The combination switch includes a selecting pin electronically connected to the hot plug pin. When the DisplayPort connector is electronically coupled to a discrete graphics card using PCIe, the hot plug pin sends a hot plug voltage signal to the selecting pin, and the combination switch electronically connects the DisplayPort connector to the PCIe controller after receiving the signal. | 2011-09-08 |
20110219166 | USB CONTROLLER AND EXECUTION METHOD THEREOF - A universal serial bus (USB) controller and an execution method thereof are presented. The USB controller stores settings of different sensors in an external memory, or stores modified program code there when the originally stored program has bugs. When the stored configurations are executed, the program section to be executed is dynamically loaded into the random access memory (RAM) of the USB controller, so as to reduce the required size of the RAM, thereby providing a large program modification space and preventing the entire chip (the USB controller) from being enlarged by an excessively large RAM. | 2011-09-08 |
20110219167 | NON-VOLATILE HARD DISK DRIVE CACHE SYSTEM AND METHOD - A non-volatile hard disk drive cache system is coupled between a processor and a hard disk drive. The cache system includes a control circuit, a non-volatile memory and a volatile memory. The control circuit causes a subset of the data stored in the hard disk drive to be written to the non-volatile memory. In response to a request to read data from the hard disk drive, the control circuit first determines if the requested read data are stored in the non-volatile memory. If so, the requested read data are provided from the non-volatile memory. Otherwise, the requested read data are provided from the hard disk drive. The volatile memory is used as a write buffer and to store disk access statistics, such as the disk drive locations that are most frequently read, which are used by the control circuit to determine which data to store in the non-volatile memory. | 2011-09-08 |
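The read path and statistics-driven promotion described in the abstract above can be sketched roughly as follows; the class and method names are illustrative stand-ins, not taken from the patent.

```python
class NVCacheSystem:
    """Sketch of the read path above: serve from the non-volatile
    cache on a hit, otherwise fall back to the hard disk drive."""

    def __init__(self, disk):
        self.disk = disk            # address -> data (stands in for the HDD)
        self.nv_cache = {}          # non-volatile memory: cached subset
        self.read_counts = {}       # volatile memory: disk access statistics

    def read(self, address):
        self.read_counts[address] = self.read_counts.get(address, 0) + 1
        if address in self.nv_cache:          # hit: serve from NV memory
            return self.nv_cache[address]
        return self.disk[address]             # miss: serve from the drive

    def promote_hot_data(self, top_n=2):
        # Copy the most frequently read locations into the NV cache,
        # mirroring the control circuit's use of access statistics.
        hot = sorted(self.read_counts, key=self.read_counts.get, reverse=True)
        for addr in hot[:top_n]:
            self.nv_cache[addr] = self.disk[addr]
```

The real controller would also buffer writes in the volatile memory; this sketch covers only the read and promotion paths.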
20110219168 | Flash Memory Hash Table - Implementations and techniques for flash memory-type hash tables are generally disclosed. | 2011-09-08 |
20110219169 | Buffer Pool Extension for Database Server - Aspects of the subject matter described herein relate to a buffer pool for a database system. In aspects, secondary memory such as solid state storage is used to extend the buffer pool of a database system. Thresholds such as hot, warm, and cold for classifying pages based on access history of the pages may be determined via a sampling algorithm. When a database system needs to free space in a buffer pool in main memory, a page may be evicted to the buffer pool in secondary memory or other storage based on how the page is classified and conditions of the secondary memory or other storage. | 2011-09-08 |
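The hot/warm/cold classification and eviction decision above might look like the following sketch; the numeric thresholds are arbitrary placeholders for values the patent derives via its sampling algorithm.

```python
def classify_page(access_count, hot_threshold=10, warm_threshold=3):
    """Classify a page by its access history into hot/warm/cold.
    Thresholds here are illustrative stand-ins."""
    if access_count >= hot_threshold:
        return "hot"
    if access_count >= warm_threshold:
        return "warm"
    return "cold"

def evict_target(page_class, ssd_has_space):
    # Warm pages go to the buffer-pool extension on secondary memory
    # (e.g., solid state storage) when it has room; cold pages and
    # overflow fall back to ordinary storage.
    if page_class == "warm" and ssd_has_space:
        return "ssd_buffer_pool"
    return "disk"
```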
20110219170 | Method and Apparatus for Optimizing the Performance of a Storage System - Methods and apparatuses for optimizing the performance of a storage system comprise a FLASH storage system, a hard drive storage system, and a storage controller. The storage controller is adapted to receive READ and WRITE requests from an external host, and is coupled to the FLASH storage system and the hard drive storage system. The storage controller receives a WRITE request from an external host containing data and an address, forwards the received WRITE request to the FLASH storage system and associates the address provided in the WRITE request with a selected alternative address, and provides an alternative WRITE request, including the selected alternative address and the data received in the WRITE request, to the hard drive storage system, wherein the alternative address is selected to promote sequential WRITE operations within the hard drive storage system. | 2011-09-08 |
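A minimal sketch of the address-remapping idea above: each host WRITE lands at its logical address in the flash storage system, while the hard-drive copy is redirected to the next sequential alternative address. All names and structures are illustrative, not from the patent.

```python
class SequentializingController:
    """Storage controller sketch: mirror every WRITE to flash and
    redirect the HDD copy so drive writes stay sequential."""

    def __init__(self):
        self.flash = {}
        self.hdd = {}
        self.remap = {}        # host address -> alternative HDD address
        self.next_seq = 0      # next sequential slot on the drive

    def write(self, address, data):
        self.flash[address] = data            # forward to the flash system
        alt = self.next_seq                   # select alternative address
        self.next_seq += 1
        self.remap[address] = alt             # associate host addr with it
        self.hdd[alt] = data                  # sequential WRITE on the HDD
```

Even when the host writes to scattered addresses, the drive sees a strictly increasing address sequence.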
20110219171 | VIRTUAL CHANNEL SUPPORT IN A NONVOLATILE MEMORY CONTROLLER - A controller uses N dedicated ports to receive N signals from N non-volatile memories independent of each other, and uses a bus in a time shared manner to transfer data to and from the N non-volatile memories. The controller receives from a processor, multiple operations to perform data transfers, and stores the operations along with a valid bit set active by the processor. When a signal from a non-volatile memory is active indicating its readiness and when a corresponding operation has a valid bit active, the controller starts performance of the operation. When the readiness signal becomes inactive, the controller internally suspends the operation and starts performing another operation on another non-volatile memory whose readiness signal is active and for which an operation is valid. A suspended operation may be resumed any time after the corresponding readiness signal becomes active and on operation completion the valid bit is set inactive. | 2011-09-08 |
20110219172 | NON-VOLATILE MEMORY ACCESS METHOD AND SYSTEM, AND NON-VOLATILE MEMORY CONTROLLER - A non-volatile memory access method and system, and a non-volatile memory controller are provided for accessing a plurality of physical blocks in a non-volatile memory chip, and each physical block has a plurality of physical pages. The method includes determining whether there is enough space in a first physical block to write a plurality of specific physical pages when data stored in one of the specific physical pages are to be updated; and writing valid data and data to be updated into the first physical block when the first physical block has enough space to write the specific physical pages. | 2011-09-08 |
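The space check and combined write of valid and updated pages might be sketched as below; the block and page structures are our own illustration, not the patent's.

```python
def update_page(blocks, first_block, specific_pages, page_index, new_data):
    """If the first physical block has room for all the specific
    pages, write the still-valid pages plus the updated page into it.
    Returns True on success, False if there is not enough space."""
    capacity = blocks[first_block]["capacity"]
    used = len(blocks[first_block]["pages"])
    if capacity - used < len(specific_pages):
        return False                          # not enough free pages
    for i, data in enumerate(specific_pages):
        # Write the updated data for the target page, valid data otherwise.
        blocks[first_block]["pages"].append(
            new_data if i == page_index else data)
    return True
```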
20110219173 | SEMICONDUCTOR MEMORY SYSTEM - According to one embodiment, there is provided a semiconductor memory system including a controller and a memory unit. The controller includes a generation unit, an association unit, a retaining unit, an encoding/decoding unit, and a determination unit. When the access request information is managed, the encoding/decoding unit performs an encoding or decoding process using the obfuscation information retained in the retaining unit, without the generation unit generating new obfuscation information. When the access request information is not managed, the encoding/decoding unit performs the encoding or decoding process after the generation unit generates obfuscation information based on the access request information. | 2011-09-08 |
20110219174 | Non-Volatile Memory and Method with Phased Program Failure Handling - In a memory with block management system, program failure in a block during a time-critical memory operation is handled by continuing the programming operation in a breakout block. Later, at a less critical time, the data recorded in the failed block prior to the interruption is transferred to another block, which could also be the breakout block. The failed block can then be discarded. In this way, when a defective block is encountered during programming, it can be handled without loss of data and without exceeding a specified time limit by having to transfer the stored data in the defective block on the spot. This error handling is especially critical for a garbage collection operation so that the entire operation need not be repeated on a fresh block during a critical time. Subsequently, at an opportune time, the data from the defective block can be salvaged by relocation to another block. | 2011-09-08 |
20110219175 | STORAGE CAPACITY STATUS - In one embodiment of the present invention, a memory device is disclosed that includes memory organized into blocks, each block having a status associated therewith and all of the blocks of the nonvolatile memory collectively having a capacity status, and a display for showing the capacity status even when no power is being applied to the display. | 2011-09-08 |
20110219176 | FILE-COPYING APPARATUS OF PORTABLE STORAGE MEDIA - The present invention provides a portable file-copying apparatus which includes a first connecting unit, a second connecting unit, and a control unit. The first connecting unit can receive a first portable storage medium which includes an original file. The second connecting unit can receive a second portable storage medium. Furthermore, the control unit is connected to the first connecting unit, the second connecting unit, and a memory. The control unit stores the original file in the memory and copies the file to the second portable storage medium in accordance with a control signal. | 2011-09-08 |
20110219177 | MEMORY SYSTEM AND CONTROL METHOD THEREOF - A memory system includes a nonvolatile memory including blocks as data erase units, a measuring unit which measures an erase time at which data in each block is erased, a block controller having a block table which associates a state value indicating one of a free state and a used state with the erase time for each block, a detector which detects blocks in which rewrite has collectively occurred within a short period, a first selector which selects a free block having an old erase time as a first block, a second selector which selects a block in use having an old erase time as a second block, and a leveling unit which moves data in the second block to the first block if the first block is included in the blocks detected by the detector. | 2011-09-08 |
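The selector-and-leveling logic above can be sketched roughly as follows, assuming a simple mapping of block id to (state, erase time); the function and parameter names are ours, not the patent's.

```python
def level_wear(blocks, hot_blocks):
    """Pick the free block with the oldest erase time (first block)
    and the in-use block with the oldest erase time (second block).
    If the detector flagged the free block as one where rewrites
    collectively occurred within a short period, return the move
    (second -> first); otherwise return None.

    `blocks` maps block id -> (state, erase_time), where state is
    "free" or "used"; `hot_blocks` is the detector's result set."""
    free = [(t, b) for b, (s, t) in blocks.items() if s == "free"]
    used = [(t, b) for b, (s, t) in blocks.items() if s == "used"]
    if not free or not used:
        return None
    first = min(free)[1]     # oldest-erased free block
    second = min(used)[1]    # oldest-erased block in use
    if first in hot_blocks:  # detector flagged collective rewrites
        return (second, first)
    return None
```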
20110219178 | ERASE BLOCK DATA SPLITTING - A Flash memory device, system, and data handling routine is detailed with a distributed erase block sector user/overhead data scheme that splits the user data and overhead data and stores them in differing associated erase blocks. The erase blocks of the Flash memory are arranged into associated erase block pairs in “super blocks” such that when user data is written to/read from the user data area of a sector of an erase block of the super block pair, the overhead data is written to/read from the overhead data area of a sector of the other associated erase block. This data splitting enhances fault tolerance and reliability of the Flash memory device. | 2011-09-08 |
20110219179 | FLASH MEMORY DEVICE AND FLASH MEMORY SYSTEM INCLUDING BUFFER MEMORY - A flash memory device includes a flash memory and a buffer memory. The flash memory is divided into a main region and a spare region. The buffer memory is a random access memory and has the same structure as the flash memory. In addition, the flash memory device further includes control means for mapping an address of the flash memory applied from a host so as to divide a structure of the buffer memory into a main region and a spare region and for controlling the flash memory and the buffer memory to store data of the buffer memory in the flash memory or to store data of the flash memory in the buffer memory. | 2011-09-08 |
20110219180 | FLASH MEMORY DEVICE WITH MULTI-LEVEL CELLS AND METHOD OF WRITING DATA THEREIN - In one aspect, a method of writing data in a flash memory system is provided. The flash memory system forms an address mapping pattern according to a log block mapping scheme. The method includes determining a writing pattern of data to be written in a log block, and allocating one of SLC and MLC blocks to the log block in accordance with the writing pattern of the data. | 2011-09-08 |
20110219181 | PRE-FETCHING DATA INTO A MEMORY - Systems and methods for pre-fetching of data in a memory are provided. By pre-fetching stored data from a slower memory into a faster memory, the amount of time required for data retrieval and/or processing may be reduced. First, data is received and pre-scanned to generate a sample fingerprint. Fingerprints stored in a faster memory that are similar to the sample fingerprint are identified. Data stored in the slower memory associated with the identified stored fingerprints is copied into the faster memory. The copied data may be compared to the received data. Various embodiments may be included in a network memory architecture to allow for faster data matching and instruction generation in a central appliance. | 2011-09-08 |
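A toy sketch of the fingerprint-driven pre-fetch above, using small Hamming distance over integers as a stand-in for whatever similarity test the patent actually employs; all names are hypothetical.

```python
def prefetch(sample_fp, fast_index, slow_store, fast_cache, max_distance=2):
    """Find stored fingerprints similar to the sample fingerprint and
    copy the associated data from the slow memory into the fast one.

    fast_index: fingerprint (int) -> data key, kept in fast memory
    slow_store: data key -> data, kept in slow memory
    fast_cache: data key -> data, the fast memory being populated"""
    for fp, key in fast_index.items():
        # "Similar" here means few differing bits -- an assumption.
        if bin(fp ^ sample_fp).count("1") <= max_distance:
            fast_cache[key] = slow_store[key]
    return fast_cache
```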
20110219182 | SYSTEM AND METHOD FOR MANAGING SELF-REFRESH IN A MULTI-RANK MEMORY - Multi-rank memories and methods for self-refreshing multi-rank memories are disclosed. One such multi-rank memory includes a plurality of ranks of memory and self-refresh logic coupled to the plurality of ranks of memory. The self-refresh logic is configured to refresh a first rank of memory that is in a self-refresh state when a second rank of memory, not in a self-refresh state, is refreshed in response to a non-self-refresh refresh command for the second rank of memory. | 2011-09-08 |
20110219183 | SUB-AREA FCID ALLOCATION SCHEME - Certain embodiments of the present disclosure generally relate to allocating a sub-area of Fibre Channel addresses (FCIDs) to a device. A range of addresses may be assigned to the device using a mask address, where the most significant bits represent a mask and the least significant bits represent a sub-range of FCIDs available to be assigned to the device. Therefore, routing information may be stored efficiently in a Ternary Content Addressable Memory (TCAM) by storing a single entry in the TCAM for each sub-area of FCIDs allocated to a device, instead of storing an entry for each FCID. The single entry may indicate the mask address and the width of the mask. | 2011-09-08 |
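The sub-area mask arithmetic over 24-bit FCIDs can be illustrated as follows; the helper names are hypothetical, and the matching mimics what a single TCAM entry would do in hardware.

```python
def fcid_entry(base_fcid, mask_width):
    """Build one TCAM-style entry for a sub-area of 24-bit FCIDs:
    the top `mask_width` bits are fixed, the remaining low bits form
    the sub-range of addresses allocated to the device."""
    mask = ((1 << mask_width) - 1) << (24 - mask_width)
    return base_fcid & mask, mask

def matches(entry, fcid):
    # A single entry covers every FCID in the sub-range, so only one
    # entry per sub-area needs to be stored instead of one per FCID.
    base, mask = entry
    return (fcid & mask) == base
```

With a 16-bit mask, one entry stands in for 256 individual FCIDs.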
20110219184 | SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR PROVIDING HIGH AVAILABILITY METADATA ABOUT DATA - In one embodiment, a method includes receiving metadata corresponding to data on a removable storage device/medium, storing the metadata to a metadata repository that is not on the removable storage device/medium, associating an identifier with the stored metadata (the identifier corresponding to the removable storage medium/device), and storing the identifier to the metadata repository. According to another embodiment, a computer program product includes a computer readable storage medium having computer readable program code embodied therewith. The computer readable program code comprises computer readable program code configured to: receive metadata corresponding to data on a removable storage device/medium, store the metadata to a metadata repository, associate an identifier corresponding to the removable storage device/medium with the stored metadata, and store the identifier to the metadata repository. Other methods, systems, and devices are presented as well. | 2011-09-08 |
20110219185 | OPTIMIZING EXECUTION OF I/O REQUESTS FOR A DISK DRIVE IN A COMPUTING SYSTEM - An I/O Optimizer receives an I/O request specifying a plurality of disk blocks of the disk drive for access. A plurality of I/O sub-requests is determined from the I/O request, each I/O sub-request specifying a set of one or more adjacent disk blocks of the plurality of disk blocks along the same cylinder. A plurality of execution sequences for performing the plurality of I/O sub-requests is determined. For each of the plurality of execution sequences, a total estimated execution time for performing the I/O sub-requests according to the execution sequence is calculated. One of the plurality of execution sequences for performing the I/O sub-requests is selected based, at least in part, on the total estimated execution times for the plurality of execution sequences. A disk drive controller is instructed to perform the I/O sub-requests according to the selected execution sequence. | 2011-09-08 |
20110219186 | SYSTEMS AND METHODS FOR COMPRESSION OF DATA FOR BLOCK MODE ACCESS STORAGE - Systems and methods for creating, reading, and writing compressed data for use with a block mode access storage. The compressed data are packed into a plurality of compressed units and stored in a storage logical unit (LU). One or more corresponding compressed units may be read and/or updated without restoring the entire storage logical unit, while maintaining the de-fragmented structure of the LU. | 2011-09-08 |
20110219187 | CACHE DIRECTORY LOOKUP READER SET ENCODING FOR PARTIAL CACHE LINE SPECULATION SUPPORT - In a multiprocessor system, with conflict checking implemented in a directory lookup of a shared cache memory, a reader set encoding permits dynamic recordation of read accesses. The reader set encoding includes an indication of a portion of a line read, for instance by indicating boundaries of read accesses. Different encodings may apply to different types of speculative execution. | 2011-09-08 |
20110219188 | CACHE AS POINT OF COHERENCE IN MULTIPROCESSOR SYSTEM - In a multiprocessor system, a conflict checking mechanism is implemented in the L2 cache memory. Different versions of speculative writes are maintained in different ways of the cache. A record of speculative writes is maintained in the cache directory. Conflict checking occurs as part of directory lookup. Speculative versions that do not conflict are aggregated into an aggregated version in a different way of the cache. Speculative memory access requests do not go to main memory. | 2011-09-08 |
20110219189 | STORAGE SYSTEM AND REMOTE COPY CONTROL METHOD FOR STORAGE SYSTEM - A storage system maintains consistency of the stored contents between volumes even when a plurality of remote copying operations are executed asynchronously. A plurality of primary storage control devices and a plurality of secondary storage control devices are connected by a plurality of paths, and remote copying is performed asynchronously between respective first volumes and second volumes. Write data transferred from the primary storage control device to the secondary storage control device is held in a write data storage portion. Update order information, including write times and sequential numbers, is managed by update order information management portions. An update control portion collects update order information from each update order information management portion, determines the time at which update of each second volume is possible, and notifies each update portion. By this means, the stored contents of each second volume can be updated up to the time at which update is possible. | 2011-09-08 |
20110219190 | CACHE WITH RELOAD CAPABILITY AFTER POWER RESTORATION - A method and apparatus for repopulating a cache are disclosed. At least a portion of the contents of the cache are stored in a location separate from the cache. Power is removed from the cache and is restored some time later. After power has been restored to the cache, it is repopulated with the portion of the contents of the cache that were stored separately from the cache. | 2011-09-08 |
20110219191 | READER SET ENCODING FOR DIRECTORY OF SHARED CACHE MEMORY IN MULTIPROCESSOR SYSTEM - In a parallel processing system with speculative execution, conflict checking occurs in a directory lookup of a cache memory that is shared by all processors. In each case, the same physical memory address will map to the same set of that cache, no matter which processor originated that access. The directory includes a dynamic reader set encoding, indicating what speculative threads have read a particular line. This reader set encoding is used in conflict checking. A bitset encoding is used to specify particular threads that have read the line. | 2011-09-08 |
20110219192 | PERFORMING A DATA WRITE ON A STORAGE DEVICE - A method of performing a data write on a storage device comprises instructing a device driver for the device to perform a write to the storage device, registering the device driver as a transaction participant with a transaction co-ordinator, executing a flashcopy of the storage device, performing the write on the storage device, and performing a two-phase commit between device driver and transaction co-ordinator. Preferably, the method comprises receiving an instruction to perform a rollback, and reversing the data write according to the flashcopy. In a further refinement, a method of scheduling a flashcopy of a storage device comprises receiving an instruction to perform a flashcopy, ascertaining the current transaction in relation to the device, registering the device driver for the device as a transaction participant in the current transaction with a transaction co-ordinator, receiving a transaction complete indication from the co-ordinator, and executing the flashcopy for the device. | 2011-09-08 |
20110219193 | PROCESSOR AND MEMORY CONTROL METHOD - A processor and a memory management method are provided. The processor includes a processor core, a cache which transceives data to/from the processor core via a single port, and stores the data accessed by the processor core, and a Scratch Pad Memory (SPM) which transceives the data to/from the processor core via at least one of a plurality of multi ports. | 2011-09-08 |
20110219194 | DATA RELAYING APPARATUS AND METHOD FOR RELAYING DATA BETWEEN DATA - A data relaying apparatus and method capable of relaying data in a highly efficient manner. Data of a predetermined read-ahead size is acquired from the storage apparatus, starting at a top address indicated by a data read request, and temporarily stored as temporary storage data. Each time a subsequent data read request is made, data of a transmission data size corresponding to the type of the subsequent data read request is read out sequentially from the top position of the temporary storage data and relayed to a data processing apparatus. | 2011-09-08 |
20110219195 | PRE-FETCHING OF DATA PACKETS - Some of the embodiments of the present disclosure provide a method comprising receiving a data packet, and storing the received data packet in a memory; generating a descriptor for the data packet, the descriptor including information for fetching at least a portion of the data packet from the memory; and in advance of a processing core requesting the at least a portion of the data packet to execute a processing operation on the at least a portion of the data packet, fetching the at least a portion of the data packet to a cache based at least in part on information in the descriptor. Other embodiments are also described and claimed. | 2011-09-08 |
20110219196 | MEMORY HUB WITH INTERNAL CACHE AND/OR MEMORY ACCESS PREDICTION - A computer system includes a memory hub for coupling a processor to a plurality of synchronous dynamic random access memory (“SDRAM”) devices. The memory hub includes a processor interface coupled to the processor and a plurality of memory interfaces coupled to respective SDRAM devices. The processor interface is coupled to the memory interfaces by a switch. Each of the memory interfaces includes a memory controller, a cache memory, and a prediction unit. The cache memory stores data recently read from or written to the respective SDRAM device so that it can be subsequently read by processor with relatively little latency. The prediction unit prefetches data from an address from which a read access is likely based on a previously accessed address. | 2011-09-08 |
20110219197 | Memory Controllers, Systems, and Methods Supporting Multiple Request Modes - A memory system includes a memory controller with a plurality N of memory-controller blocks, each of which conveys independent transaction requests over external request ports. The request ports are coupled, via point-to-point connections, to from one to N memory devices, each of which includes N independently addressable memory blocks. All of the external request ports are connected to respective external request ports on the memory device or devices used in a given configuration. The number of request ports per memory device and the data width of each memory device changes with the number of memory devices such that the ratio of the request-access granularity to the data granularity remains constant irrespective of the number of memory devices. | 2011-09-08 |
20110219198 | MEMORY CONTROL SYSTEM AND METHOD - A memory control system includes a first queue unit, a second queue unit, a first transforming unit, a second transforming unit, an arbiter and a control unit. The first queue unit temporarily stores multiple first request instructions. The second queue unit temporarily stores multiple second request instructions. The first transforming unit selectively re-assigns memory addresses corresponding to these first request instructions. The second transforming unit selectively re-assigns memory addresses corresponding to these second request instructions. The arbiter performs immediate scheduling of the first request instructions and the second request instructions to the memory. The control unit compares bandwidths of the first request instructions with bandwidths of the second request instructions, and controls the first transforming unit and the second transforming unit to perform re-assigning operations or not according to compared results. | 2011-09-08 |
20110219199 | VOLUME COHERENCY VERIFICATION FOR SEQUENTIAL-ACCESS STORAGE MEDIA - A method for determining volume coherency is disclosed herein. Upon completing a first write job to a volume partition, the method makes a copy of a volume change reference (VCR) value associated with the volume. The VCR value is configured to change in a non-repeating manner each time content on the volume is modified. Prior to initiating a second write job to the volume partition, the method retrieves the copy and compares the copy to the VCR value. If the copy matches the VCR value, the method determines that a logical object on the partition was not modified between the first and second write jobs. If the copy does not match the VCR value, the method determines that the logical object on the partition was modified between the first and second write jobs. A corresponding system and computer program product are also disclosed herein. | 2011-09-08 |
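The VCR compare-before-write check might be modeled as below, using a monotonically increasing counter as one possible non-repeating change reference; class and function names are illustrative.

```python
import itertools

class Volume:
    """Volume whose change reference (VCR) changes in a non-repeating
    way each time content on the volume is modified."""
    def __init__(self):
        self._vcr = itertools.count()
        self.vcr = next(self._vcr)

    def modify(self):
        self.vcr = next(self._vcr)

def write_job(volume):
    # On completing the first write job, keep a copy of the VCR value.
    return volume.vcr

def unmodified_since(volume, saved_vcr):
    # Before the second write job: a match means nothing on the
    # volume changed between the two jobs.
    return volume.vcr == saved_vcr
```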
20110219200 | SYSTEM AND METHOD TO ARCHIVE EMAIL MESSAGES IN A SOFTWARE AS A SERVICE SYSTEM - A system includes a client machine or first server, and a second server. The first server is coupled via a network connection to the second server. The second server is configured to provide an electronic mail service. The first server includes a processor configured to receive an electronic mail message from the second server via a network browser rendered on the client machine, to apply an archive policy to the electronic mail message, and to store the electronic mail message in a computer data storage medium coupled to the first server. | 2011-09-08 |
20110219201 | COPY ON WRITE STORAGE CONSERVATION SYSTEMS AND METHODS - Systems and methods for copy on write storage conservation are presented. In one embodiment, a copy on write storage conservation method includes creating and mounting a snapshot; monitoring interest in the snapshot; initiating a copy on write discard process before a backup or replication is complete; and deleting the snapshot when the backup or replication is complete. In one embodiment the method also includes marking a file as do-not-copy-on-write. In one embodiment, the copy on write discard process includes discarding copy on write data when a corresponding read of the file in the snapshot is successful. The copy on write discard process can be initiated at a variety of levels (e.g., a file level, an extent level, a block level, etc.). | 2011-09-08 |
20110219202 | SPEICHERMEDIUM MIT UNTERSCHIEDLICHEN ZUGRIFFSMÖGLICHKEITEN / MEMORY MEDIUM HAVING DIFFERENT WAYS OF ACCESSING - The invention provides a portable memory medium with a memory area and a memory management system for managing the memory area, wherein different options for access to the memory area are provided. The memory management system comprises a configuration command, the execution of which causes an activation of one of at least two different activatable memory configurations. | 2011-09-08 |
20110219203 | METHOD AND DEVICE FOR TEMPERATURE-BASED DATA REFRESH IN NON-VOLATILE MEMORIES - The invention relates to a method comprising measuring the temperature of at least one location of a non-volatile memory; determining if said temperature measurement indicates that the data retention time of data stored at said at least one location is reduced below a threshold; and re-writing said data to said non-volatile memory in response to a positive determination. | 2011-09-08 |
20110219204 | GPU SUPPORT FOR GARBAGE COLLECTION - A system and method for efficient garbage collection. A general-purpose central processing unit (CPU) partitions an allocated heap according to a generational garbage collection technique. The generations are partitioned into fixed size cards. The CPU marks indications of qualified dirty cards during application execution since the last garbage collection. When the CPU detects that a next garbage collection start condition is satisfied, the CPU sends the special processing unit (SPU) a notification requesting determination of one or more card root addresses, each card root address corresponding to one of the marked indications. The SPU has a single instruction multiple data (SIMD) parallel architecture and may be a graphics processing unit (GPU). The SPU may utilize the parallel architecture of its SIMD core to simultaneously compute multiple card root addresses. The SPU then sends these addresses to the CPU for use in a garbage collection algorithm. | 2011-09-08 |
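The CPU-side card marking and the card-root-address computation (performed in parallel on the SPU's SIMD core in the patent, but serially in this sketch) can be illustrated as follows; the card size and names are stand-ins.

```python
CARD_SIZE = 512  # bytes per fixed-size card -- an illustrative value

def mark_dirty(card_table, address):
    """CPU side: mark the card covering a mutated heap address."""
    card_table[address // CARD_SIZE] = True

def card_root_addresses(card_table, heap_base=0):
    """SPU side: turn each marked card index into the card's starting
    heap address (the 'card root address' handed back to the CPU).
    The patent computes these in parallel; here we loop serially."""
    return [heap_base + idx * CARD_SIZE
            for idx, dirty in sorted(card_table.items()) if dirty]
```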
20110219205 | Distributed Data Storage System Providing De-duplication of Data Using Block Identifiers - An access request including a client address for data is received. A metadata server determines a mapping between the client address and storage unit identifiers for the data. Each of the one or more storage unit identifiers uniquely identifies content of a storage unit and the metadata server stores mappings on storage unit identifiers that are referenced by client addresses. The one or more storage unit identifiers are sent to one or more block servers. The one or more block servers service the request using the one or more storage unit identifiers where the one or more block servers store information on where a storage unit is stored on a block server for a storage unit identifier. Also, multiple client addresses associated with a storage unit with a same storage unit identifier are mapped to a single storage unit stored in a storage medium for a block server. | 2011-09-08 |
20110219206 | DISPOSITION INSTRUCTIONS FOR EXTENDED ACCESS COMMANDS - A computer system that generates a disposition instruction and an associated access command directed to a block of data at a logical address is described. The disposition instruction and the access command are communicated to a memory system in the computer system via a communication link. Note that the memory system includes different types of memory having different performance characteristics, and the disposition instruction is generated based on the different performance characteristics. In response to the access command, the memory system accesses the block of data at the logical address in a first type of memory in the different types of memory. Furthermore, based on the disposition instruction, the memory system moves the block of data to a second type of memory in the different types of memory to facilitate subsequent accesses to the block of data. | 2011-09-08 |
20110219207 | RECONFIGURABLE PROCESSOR AND RECONFIGURABLE PROCESSING METHOD - A reconfigurable processor for efficiently performing a vector operation, and a method of controlling the reconfigurable processor are provided. The reconfigurable processor designates at least one of a plurality of processing elements as a vector lane based on vector lane configuration information, and allocates a vector operation to the designated vector lane. | 2011-09-08 |
20110219208 | MULTI-PETASCALE HIGHLY EFFICIENT PARALLEL SUPERCOMPUTER - A Multi-Petascale Highly Efficient Parallel Supercomputer provides 100 petaOPS-scale computing at decreased cost, power and footprint, and allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, with each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis, and preferably enabling adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that maximizes the throughput of packet communications between nodes and minimizes latency. | 2011-09-08 |
20110219209 | DYNAMIC ATOMIC BITSETS - Embodiments of the present invention provide techniques, including systems, methods, and computer readable medium, for dynamic atomic bitsets. A dynamic atomic bitset is a data structure that provides a bitset that can grow or shrink in size as required. The dynamic atomic bitset is non-blocking, wait-free, and thread-safe. | 2011-09-08 |
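A minimal, single-threaded sketch of the grow-as-needed bitset semantics described above. A real dynamic atomic bitset would replace the plain word updates below with compare-and-swap loops to stay non-blocking, wait-free, and thread-safe; all identifiers here are invented for illustration:

```python
WORD_BITS = 64  # bits are packed into fixed-width words

class DynamicBitset:
    """Bitset whose backing word array grows on demand as higher bits are set."""
    def __init__(self):
        self.words = [0]

    def _ensure(self, index: int):
        word = index // WORD_BITS
        while len(self.words) <= word:   # grow the backing array as required
            self.words.append(0)

    def set(self, index: int):
        self._ensure(index)
        self.words[index // WORD_BITS] |= 1 << (index % WORD_BITS)

    def clear(self, index: int):
        if index // WORD_BITS < len(self.words):
            self.words[index // WORD_BITS] &= ~(1 << (index % WORD_BITS))

    def get(self, index: int) -> bool:
        word = index // WORD_BITS
        return word < len(self.words) and bool((self.words[word] >> (index % WORD_BITS)) & 1)
```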
20110219210 | System Core for Transferring Data Between an External Device and Memory - Details of a highly cost effective and efficient implementation of a manifold array (ManArray) architecture and instruction syntax for use therewith are described herein. Various aspects of this approach include the regularity of the syntax, the relative ease with which the instruction set can be represented in database form, the ready ability with which tools can be created, the ready generation of self-checking codes and parameterized test cases. Parameterizations can be fairly easily mapped and system maintenance is significantly simplified. | 2011-09-08 |
20110219211 | CPU CORE UNLOCKING DEVICE APPLIED TO COMPUTER SYSTEM - A CPU core unlocking device applied to a computer system is provided. The core unlocking device includes a CPU having a plurality of signal terminals and a core unlocking executing unit having a plurality of GPIO ports connected with the corresponding signal terminals of the CPU. The GPIO ports of the core unlocking executing unit generate and transmit a combination of core unlocking signals to the signal terminals of the CPU to unlock the CPU core. | 2011-09-08 |
20110219212 | System and Method of Processing Hierarchical Very Long Instruction Packets - A system and method of processing a hierarchical very long instruction word (VLIW) packet is disclosed. In a particular embodiment, a method of processing instructions is disclosed. The method includes receiving a hierarchical VLIW packet of instructions and decoding an instruction from the packet to determine whether the instruction is a single instruction or whether the instruction includes a subpacket that includes a plurality of sub-instructions. The method also includes, in response to determining that the instruction includes the subpacket, executing each of the sub-instructions. | 2011-09-08 |
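The decode-and-dispatch step described above can be sketched as follows, modeling a subpacket simply as a nested list of sub-instructions; the representation is an assumption for illustration:

```python
def execute(instruction, run):
    """Dispatch one slot of a hierarchical VLIW packet: a slot is either a
    single instruction or a subpacket whose sub-instructions all execute."""
    if isinstance(instruction, list):      # slot decodes to a subpacket
        for sub in instruction:
            execute(sub, run)
    else:                                  # slot is a single instruction
        run(instruction)

def process_packet(packet, run):
    """Process every slot of a hierarchical VLIW packet."""
    for slot in packet:
        execute(slot, run)
```

For example, a packet containing a single `add`, a subpacket of `mul` and `sub`, and a single `st` executes all four operations.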
20110219213 | INSTRUCTION CRACKING BASED ON MACHINE STATE - A method, information processing system, and computer program product manage instruction execution based on machine state. At least one instruction is received. The at least one instruction is decoded. A current machine state is determined in response to the decoding. The at least one instruction is organized into a set of units of operation based on the current machine state that has been determined. The set of units of operation is executed. | 2011-09-08 |
20110219214 | Microprocessor having novel operations - A processor. The processor includes a first register for storing a first packed data, a decoder, and a functional unit. The decoder has a control signal input. The control signal input is for receiving a first control signal and a second control signal. The first control signal is for indicating a pack operation. The second control signal is for indicating an unpack operation. The functional unit is coupled to the decoder and the register. The functional unit is for performing the pack operation and the unpack operation using the first packed data. The processor also supports a move operation. | 2011-09-08 |
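A scalar sketch of what pack and unpack do to the elements of a packed register, assuming unsigned saturation on pack and pairwise interleaving on unpack; both are illustrative choices, not details confirmed by the abstract:

```python
def pack(elements, dst_bits=8):
    """Pack operation: narrow each source element to dst_bits, saturating
    values that do not fit (unsigned saturation, one common convention)."""
    limit = (1 << dst_bits) - 1
    return [min(e, limit) for e in elements]

def unpack(low, high):
    """Unpack operation: interleave corresponding elements of two packed registers."""
    out = []
    for a, b in zip(low, high):
        out += [a, b]
    return out
```

Hardware implementations operate on all elements of a packed register in parallel; the loops above only model the per-element behavior.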
20110219215 | ATOMICITY: A MULTI-PRONGED APPROACH - In a multiprocessor system with speculative execution, atomicity can be approached in several fashions. One approach is to have atomic instructions that achieve multiple functions and are guaranteed to complete. Another approach is to have blocks of code that are grouped to succeed or fail together. A system can incorporate more than one such approach. In implementing more than one approach, the system may prioritize one over another. When conflict detection is done through a directory lookup in cache memory, atomic instructions and atomicity related operations may be implemented in a cache data array access pipeline in that cache memory. This implementation may include feedback to the pipeline for implementing multiple functions within an atomic instruction and also for cascading atomic instructions. | 2011-09-08 |
20110219216 | Mechanism for Performing Instruction Scheduling based on Register Pressure Sensitivity - A mechanism for performing instruction scheduling based on register pressure sensitivity is disclosed. A method of embodiments of the invention includes performing a preliminary register pressure minimization on program points during a compilation process of a software program running on a virtual machine of a computer system. The method further includes calculating a register pressure at each of the program points, detecting an instruction to be scheduled, and performing instruction scheduling of the instruction based on a current register pressure at a current scheduling point and potential register pressures at subsequent scheduling points. | 2011-09-08 |
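The scheduling decision described above can be sketched as a greedy choice over ready instructions; the `(name, defs, kills)` instruction representation and the tie-breaking rule are assumptions for illustration, not the mechanism claimed by the application:

```python
def pick_next(ready, current_pressure, limit):
    """Choose a ready instruction by the register pressure it would produce:
    prefer candidates that keep pressure at or under the limit, breaking
    ties by the smallest resulting pressure."""
    def resulting(instr):
        _, defs, kills = instr           # registers defined / last-used here
        return current_pressure + len(defs) - len(kills)
    return min(ready, key=lambda i: (resulting(i) > limit, resulting(i)))
```

An instruction that kills a live register can be scheduled to relieve pressure before one that defines new registers and would exceed the limit.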
20110219217 | System on Chip Breakpoint Methodology - A system-on-chip (SoC) with a debugging methodology. The system-on-chip (SoC) includes a central processing unit (CPU) and multiple computing elements connected to the CPU. The CPU is configured to program the computing elements with task descriptors and the computing elements are configured to receive the task descriptors and to perform a computation based on the task descriptors. The task descriptors include a field which specifies a breakpoint state of the computing element. A system level event status register (ESR) attaches to and is accessible by the CPU and the computing elements. Each of the computing elements has a comparator configured to compare the present state of the computing element to the breakpoint state. The computing element is configured to drive a breakpoint event to the event status register (ESR) if the present state of the computing element is the breakpoint state. Each of the computing elements has a halt logic unit operatively attached thereto, wherein the halt logic unit is configured to halt operation of the computing element. The ESR is configurable to drive a breakpoint event to the halt logic units to halt at least one of the computing elements other than the computing element driving the breakpoint event. | 2011-09-08 |
20110219218 | DISTRIBUTED ORDER ORCHESTRATION SYSTEM WITH ROLLBACK CHECKPOINTS FOR ADJUSTING LONG RUNNING ORDER MANAGEMENT FULFILLMENT PROCESSES - A computer-readable medium, computer-implemented method, and system are provided. In one embodiment, a rollback checkpoint for a step in an executable process is established, and the executable process is executed. A change request is received, and the step with the established rollback checkpoint is adjusted. Any subsequent steps of the executable process are also adjusted. | 2011-09-08 |
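A minimal sketch of the rollback-checkpoint semantics, assuming a change request re-executes the checkpointed step and every subsequent step; all names are invented for illustration:

```python
class Process:
    """Executable process whose steps can carry rollback checkpoints; on a
    change request, the checkpointed step and all subsequent steps re-run."""
    def __init__(self, steps):
        self.steps = steps          # ordered list of (name, action) pairs
        self.checkpoints = set()

    def set_checkpoint(self, name):
        self.checkpoints.add(name)  # establish a rollback checkpoint for a step

    def run(self, log, start=0):
        for name, action in self.steps[start:]:
            log.append(name)
            action()

    def change_request(self, name, log):
        if name not in self.checkpoints:
            raise ValueError("no rollback checkpoint established for " + name)
        start = next(i for i, (n, _) in enumerate(self.steps) if n == name)
        self.run(log, start)        # adjust the step and all subsequent steps
```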
20110219219 | SEMICONDUCTOR INTEGRATED CIRCUIT AND REGISTER ADDRESS CONTROLLER - This invention provides a semiconductor integrated circuit comprising a register map that establishes correspondence between a register accessed by a CPU and an address which specifies the register, wherein the register map includes a plurality of register maps in which assignments of address bits are rearranged in correspondence with each of a plurality of modes, and wherein any of the register maps is selected from the plurality of register maps according to the respective modes. | 2011-09-08 |
20110219220 | Link Stack Repair of Erroneous Speculative Update - Whenever a link address is written to the link stack, the prior value of the link stack entry is saved, and is restored to the link stack after a link stack push operation is speculatively executed following a mispredicted branch. This condition is detected by maintaining a count of the total number of uncommitted link stack write instructions in the pipeline, and a count of the number of uncommitted link stack write instructions ahead of each branch instruction. When a branch is evaluated and determined to have been mispredicted, the count associated with it is compared to the total count. A discrepancy indicates a link stack write instruction was speculatively issued into the pipeline after the mispredicted branch instruction, and pushed a link address onto the link stack. The prior link address is restored to the link stack from the link stack restore buffer. | 2011-09-08 |
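The counting scheme in this abstract can be sketched as follows; the class and method names are invented for illustration, and only the discrepancy check and restore buffer (not the pipeline itself) are modeled:

```python
class LinkStackMonitor:
    """Detects a speculative link stack push after a mispredicted branch by
    comparing a per-branch count of uncommitted link stack writes against
    the pipeline-wide total."""
    def __init__(self):
        self.total_uncommitted = 0   # all uncommitted link stack writes
        self.branch_counts = {}      # branch id -> writes ahead of that branch
        self.restore_buffer = None   # prior top-of-stack value saved on a push

    def on_link_write(self, prior_top):
        self.total_uncommitted += 1
        self.restore_buffer = prior_top   # save the overwritten entry

    def on_branch_issue(self, branch_id):
        # snapshot the number of uncommitted link writes ahead of this branch
        self.branch_counts[branch_id] = self.total_uncommitted

    def needs_restore(self, mispredicted_branch_id) -> bool:
        # a discrepancy means a link write issued after the branch,
        # so the saved entry must be restored to the link stack
        return self.total_uncommitted > self.branch_counts[mispredicted_branch_id]
```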