Patent application number | Description | Published |
20100016008 | PRIORITIZATION OF GROUP COMMUNICATIONS AT A WIRELESS COMMUNICATION DEVICE - An embodiment is directed to switching between server-arbitrated group communication sessions at an access terminal (AT) within a wireless communications system. The AT participates in a first group communication session when it receives an announce message announcing a second group communication session, acquires priority levels for the first and/or second group communication sessions and determines whether to switch between sessions based on the priority level(s). In another embodiment, the AT participates in a given processing task (e.g., a gaming session, a voice call, a group session, etc.) when it receives an alert that relates to a group communication session. If the alert is specially configured to force the given access terminal to drop the given processing task, the AT drops the given processing task irrespective of whether the AT later joins the announced group communication session. | 01-21-2010 |
20100246468 | REGULATING THE SCOPE OF SERVICE GEOGRAPHICALLY IN WIRELESS NETWORKS - In an embodiment, a network communication entity obtains a location associated with an access terminal that is attempting to participate in a communication service, determines whether the obtained location satisfies a relationship with a defined location region, the defined location region establishing a first level of service restriction for the communication service within the defined location region and establishing at least a second level of service restriction for the communication service outside of the defined location region, and restricts the access terminal in accordance with the first or second level of service restriction for the communication service based on the determination. The network communication entity may correspond to the access terminal, an access network or an application server. If the access terminal detects a current, imminent or future service restriction, the given access terminal can initiate handoff to another service mechanism and/or inform the user of the service restriction. | 09-30-2010 |
20100248742 | REGULATING THE SCOPE OF SERVICE GEOGRAPHICALLY IN WIRELESS NETWORKS BASED ON PRIORITY - A network communication entity (e.g., an access terminal, access network and/or application server) obtains a location associated with a given access terminal that is attempting to participate in a given communication service, obtains a priority level of the given access terminal, determines a given level of service restriction for the given access terminal's participation in the given communication service based on the obtained location and the obtained priority level and restricts the given access terminal's participation in the given communication service based on the given level of service restriction. In an example, the priority levels can be established such that low-priority access terminals obtain a first level of service restriction within a defined location region, and a second level of service restriction outside of the defined location region, whereas high-priority access terminals obtain the first level of service restriction both inside and outside of the defined location region. | 09-30-2010 |
20100248771 | SELECTIVELY ALLOCATING DATA CHANNEL RESOURCES TO WIRELESS COMMUNICATION DEVICES WITHIN A WIRELESS COMMUNICATIONS SYSTEM - In an embodiment, an access network (AN) receives a request to allocate a given data channel (e.g., a traffic channel (TCH), Quality-of-Service, an Internet Protocol (IP) address, etc.) to a given wireless communication device (e.g., a call originator, a call target, etc.). The AN determines whether the given data channel is available for allocation to the wireless communication device. If the AN determines that the given data channel is not available for allocation to the wireless communication device, the AN determines a priority score to be associated with the received request. The AN initiates one of a plurality of data channel acquisition procedures (e.g., a preemption procedure, a queuing procedure, etc.) if the determined priority score is above at least one priority score threshold, each of the plurality of data channel acquisition procedures configured to obtain the given data channel in order to service the received request. | 09-30-2010 |
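The allocation logic described in 20100248771 can be sketched in a few lines: when the channel is busy, the request's priority score is compared against thresholds to pick an acquisition procedure. This is an illustrative sketch, not code from the filing; the threshold values and the names `PREEMPT_THRESHOLD`, `QUEUE_THRESHOLD`, and `acquire_channel` are assumptions.

```python
# Illustrative sketch (assumptions, not from the filing): choosing a data
# channel acquisition procedure from a request's priority score, in the
# spirit of application 20100248771. Threshold values are hypothetical.

PREEMPT_THRESHOLD = 80   # hypothetical: scores above this may preempt
QUEUE_THRESHOLD = 50     # hypothetical: scores above this may be queued

def acquire_channel(channel_available: bool, priority_score: int) -> str:
    """Return the action the access network takes for a channel request."""
    if channel_available:
        return "allocate"    # channel is free: grant it directly
    if priority_score > PREEMPT_THRESHOLD:
        return "preempt"     # take the channel from a lower-priority user
    if priority_score > QUEUE_THRESHOLD:
        return "queue"       # wait for the channel to free up
    return "reject"          # below every threshold: deny the request
```

A call such as `acquire_channel(False, 90)` would select the preemption procedure, while a mid-range score falls through to queuing.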
Patent application number | Description | Published |
20080318610 | SYSTEM AND METHOD FOR SHARING MEDIA IN A GROUP COMMUNICATION AMONG WIRELESS COMMUNICATION DEVICES - A system, method, and wireless communication device for sharing media in a group communication among a plurality of wireless communications devices, such as among a Push-to-Talk (PTT) group. A wireless communication device that is a member of the communication group can send group-directed media, such as graphics, multimedia and applications, to other members of the communication group, either during an ongoing PTT communication, or independently therefrom. In one embodiment, a group communication computer device stores information on communication groups on the wireless communication network that includes the member wireless communication devices of one or more communication groups, and, upon receiving group-directed media from a sending wireless communication device, sends the group-directed media either directly to the other member wireless communication devices of the communication group or stores the group-directed media such that the other member devices can access and download the group-directed media. | 12-25-2008 |
20090296904 | Setting Up A Communication Session Within A Wireless Communications System - In an embodiment, an originating communication device within a wireless communications system sends a call request to a server to initiate a communication session with a target communication device, and also sends, along with the call request, a session description request, the session description request requesting alerting data to be sent to the target communication device from the server in addition to a call announce message for announcing the communication session to the at least one target communication device, the alerting data describing a nature of the communication session. The server sends the call announce message and the alerting data to the target communication device. The target communication device receives the call announce message and the alerting data, notifies a user of the target communication device of the announced communication session and outputs the alerting data to the user of the target communication device in conjunction with the notification. | 12-03-2009 |
20090325620 | METHOD AND APPARATUS FOR RETRIEVING DATA FROM ONE OR MORE WIRELESS COMMUNICATION DEVICES - A system, method, and apparatus that retrieve data from one or more wireless communication devices that are at least members of a communication group without user intervention are disclosed. In one embodiment, a request for data is sent to an intermediate device, which forwards the request to a target wireless device. The target wireless device determines if the request is allowed and preferably responds with the requested data or a failure notice. The location of the requested data may be known to the requesting device and included in the request, or it may be known to the target wireless device, which locates the data upon receiving the request. In another embodiment the requesting device and the target wireless device directly communicate without the intervention of an intermediate device. The data requested can help the requesting device or its user determine if and when to initiate group communications. | 12-31-2009 |
20100179880 | SYSTEM AND METHOD FOR PURCHASING GOODS OR SERVICES USING A GROUP COMMUNICATION FROM A WIRELESS COMMUNICATION DEVICE - A system and method for purchasing a good and/or service through a group communication, such as a push-to-talk communication, to one or more sellers. A seller list for the desired good or service is created using a group communication server or another computer device and sent to the wireless communication device of the user, either at the time the user wireless communication device requests to send a group communication to order a good and/or service, or prior thereto. The user may then send a group communication from the wireless communication device to the seller list requesting a desired good and/or service, and any answering seller will communicate back to the requesting wireless communication device to bid to provide the requested good and/or service. | 07-15-2010 |
20100190478 | SYSTEM AND METHOD FOR PUSH-TO-SHARE FILE DISTRIBUTION WITH PREVIEWS - A system and method for transmitting previews for media objects that are shared in a group communication, such as a push-to-talk session, are disclosed. Media objects can be stored at a media server and/or an originating device. A preview for the media object can be generated by the originating device and transmitted during a group session. The preview can contain metadata. The preview and metadata can be used by a recipient to determine whether the user wants to download the media object. | 07-29-2010 |
20100190518 | SECONDARY DATA TRANSMISSION IN A GROUP COMMUNICATION TRANSMISSION DATA STREAM - A system, method, and wireless communication device that allow the transmission of secondary data in a group-communication data stream between wireless communication devices across a wireless communication network. The wireless communication device selectively transmits at least group-directed voice communication data to other members of a communication group, such as a push-to-talk (PTT) group, in a communication channel having a limited bandwidth thereof, and can selectively transmit secondary data in the same communication channel. A group-communication server preferably receives the voice communication data and secondary data and selectively transmits at least the voice communication data to other member wireless communication devices of the communication group. In one embodiment, the wireless communication device reduces the data size of the voice communication data to a second data size that is less than the bandwidth of the communication channel such that secondary data can be transmitted within the communication channel. | 07-29-2010 |
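The core mechanism in 20100190518 is budgeting a fixed-bandwidth channel so voice and secondary data share one frame: the voice payload is reduced to a second, smaller size when the combined payload would not fit. The sketch below is an assumption-laden illustration, not the filing's implementation; the byte budget and the name `pack_frame` are hypothetical.

```python
# Illustrative sketch (assumptions, not from the filing): making room for
# secondary data in a fixed-bandwidth PTT channel, per the idea in
# application 20100190518. The per-frame byte budget is hypothetical.

CHANNEL_BUDGET = 1000  # hypothetical per-frame byte budget

def pack_frame(voice_bytes: int, secondary_bytes: int) -> tuple[int, int]:
    """Return (voice_size, secondary_size) actually sent this frame."""
    if voice_bytes + secondary_bytes <= CHANNEL_BUDGET:
        return voice_bytes, secondary_bytes
    # Reduce the voice payload to a second, smaller size so the
    # secondary data fits alongside it in the same channel.
    reduced_voice = CHANNEL_BUDGET - secondary_bytes
    if reduced_voice < 0:
        # Secondary data alone exceeds the budget: send voice only.
        return min(voice_bytes, CHANNEL_BUDGET), 0
    return reduced_voice, secondary_bytes
```

So a frame of 1000 voice bytes plus 200 secondary bytes would be sent as 800 voice bytes and 200 secondary bytes under this hypothetical budget.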
Patent application number | Description | Published |
20140337234 | SYSTEMS AND METHODS FOR SECURE COMMUNICATION - In some embodiments, fast and secure communication can be achieved (e.g., in a fueling environment payment system) with systems and methods that validate an authentication request based on one or more pre-validated cryptographic keys. | 11-13-2014 |
20150143116 | SYSTEMS AND METHODS FOR CONVENIENT AND SECURE MOBILE TRANSACTIONS - Systems and methods for conducting convenient and secure mobile transactions between a payment terminal and a mobile device, e.g., in a fueling environment, are disclosed herein. In some embodiments, the payment terminal and the mobile device conduct a mutual authentication process that, if successful, produces a session key which can be used to encrypt sensitive data to be exchanged between the payment terminal and the mobile device. Payment and loyalty information can be securely communicated from the mobile device to the payment terminal using the session key. This can be done automatically, without waiting for the user to initiate a transaction, to shorten the overall transaction time. The transaction can also be completed without any user interaction with the mobile device, increasing the user's convenience since the mobile device can be left in the user's pocket, purse, vehicle, etc. | 05-21-2015 |
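The session-key step in 20150143116 can be illustrated with a deliberately simplified sketch: both sides derive the same key from a shared credential plus fresh nonces, standing in for the mutual-authentication exchange the abstract describes. This is an assumption, not the filing's protocol; a real deployment would use a proper authenticated key exchange, and all names here are hypothetical.

```python
# Illustrative sketch (assumptions, not from the filing): both parties
# deriving an identical session key after mutual authentication, in the
# spirit of application 20150143116. HMAC over fresh nonces stands in
# for the real key-agreement protocol.

import hashlib
import hmac
import os

def derive_session_key(shared_secret: bytes, terminal_nonce: bytes,
                       device_nonce: bytes) -> bytes:
    """Each side computes the same 32-byte key from the secret and nonces."""
    return hmac.new(shared_secret, terminal_nonce + device_nonce,
                    hashlib.sha256).digest()

secret = b"pre-provisioned secret"        # hypothetical shared credential
n_term, n_dev = os.urandom(16), os.urandom(16)

# Payment terminal and mobile device independently derive identical keys,
# which can then encrypt the payment and loyalty data they exchange.
k_terminal = derive_session_key(secret, n_term, n_dev)
k_device = derive_session_key(secret, n_term, n_dev)
```

Because the nonces are fresh per session, each transaction gets a distinct key even though the underlying credential is fixed.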
Patent application number | Description | Published |
20100052888 | INFORMATION DISPLAY SYSTEMS AND METHODS FOR HYBRID VEHICLES - Information display systems capable of iconically representing the components of a hybrid powertrain, and methods thereof. In operation, the information display systems indicate the specific powertrain components in the hybrid system that are active in various hybrid operational modes (e.g., electric launch, blended torque, etc.). In particular, active components are highlighted (i.e., increased intensity) by the display and non-active components are faded (i.e., decreased intensity). In one embodiment, the vehicle wheels are depicted with a static intensity in-between that of the active components and the non-active components. This allows the vehicle operator to clearly see which components are active during each hybrid system mode, and to gain a simplified picture of hybrid system behavior during normal operation at a glance. | 03-04-2010 |
20100057280 | INFORMATION DISPLAY SYSTEMS AND METHODS FOR HYBRID VEHICLES - An information display system is provided that presents the configuration of a vehicle's axles along with one or more other powertrain components in iconic format to the vehicle operator. The information display system may also present information that allows the driver to increase fuel efficiency. | 03-04-2010 |
20100057281 | INFORMATION DISPLAY SYSTEMS AND METHODS FOR HYBRID VEHICLES - Information display systems present information that allows the driver to increase fuel efficiency of a vehicle, such as a hybrid vehicle. In one example, the systems present information in a manner that allows the driver to maximize the time that the hybrid vehicle is able to operate in electric launch mode. One manner is employing a graphical display that generates visual indicators and easily understood graphical representations that display the actual fuel efficiency currently being achieved in comparison to the driver's application of the throttle. As a result, the driver may be able to modify driving habits in order to keep the hybrid vehicle in electric launch mode for as long as possible. | 03-04-2010 |
20110209092 | GRAPHICAL DISPLAY WITH SCROLLABLE GRAPHICAL ELEMENTS - Aspects of the disclosed subject matter are directed to a graphical display that efficiently conveys information to a vehicle operator. In accordance with one embodiment, a method is provided that presents scrollable graphical elements on a shared screen area. More specifically, the method includes assigning a priority level to scrollable graphical elements that convey vehicle readings on the graphical display. Then, the one or more scrollable graphical elements are rendered on the graphical display at locations that change locations periodically. When an abnormal vehicle reading is identified, the method dynamically assigns an enhanced priority level to the scrollable graphical element that is configured to convey the abnormal vehicle reading. If the scrollable graphical element is currently assigned an off-screen location, the method causes the scrollable graphical element to be rendered. | 08-25-2011 |
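The priority scheme in 20110209092 boils down to ranking gauges and promoting one to an on-screen slot when its reading turns abnormal. The sketch below illustrates that idea only; the slot count, the enhanced priority value, and the name `visible_elements` are assumptions, not details from the filing.

```python
# Illustrative sketch (names and values are assumptions): promoting a
# scrollable element to the screen when its reading turns abnormal, per
# the idea in application 20110209092.

VISIBLE_SLOTS = 2        # hypothetical number of on-screen positions
ABNORMAL_PRIORITY = 100  # hypothetical enhanced priority level

def visible_elements(elements: dict[str, int],
                     abnormal: set[str]) -> list[str]:
    """elements maps name -> base priority; abnormal readings are boosted."""
    effective = {
        name: ABNORMAL_PRIORITY if name in abnormal else prio
        for name, prio in elements.items()
    }
    # Highest effective priority wins the limited on-screen slots.
    ranked = sorted(effective, key=effective.get, reverse=True)
    return ranked[:VISIBLE_SLOTS]

gauges = {"oil_pressure": 10, "coolant_temp": 20, "battery": 30}
```

Normally the two highest-priority gauges occupy the screen; an abnormal oil-pressure reading reassigns that gauge an enhanced priority, forcing it into a visible slot.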
Patent application number | Description | Published |
20090055596 | MULTI-PROCESSOR SYSTEM HAVING AT LEAST ONE PROCESSOR THAT COMPRISES A DYNAMICALLY RECONFIGURABLE INSTRUCTION SET - A multi-processor system comprises at least one host processor, which may comprise a fixed instruction set, such as the well-known x86 instruction set. The system further comprises at least one co-processor, which comprises dynamically reconfigurable logic that enables the co-processor's instruction set to be dynamically reconfigured. In this manner, the at least one host processor and the at least one dynamically reconfigurable co-processor are heterogeneous processors having different instruction sets. Further, cache coherency is maintained between the heterogeneous host and co-processors. And, a single executable file may contain instructions that are processed by the multi-processor system, wherein a portion of the instructions are processed by the host processor and a portion of the instructions are processed by the co-processor. | 02-26-2009 |
20090064095 | COMPILER FOR GENERATING AN EXECUTABLE COMPRISING INSTRUCTIONS FOR A PLURALITY OF DIFFERENT INSTRUCTION SETS - A software compiler is provided that is operable for generating an executable that comprises instructions for a plurality of different instruction sets as may be employed by different processors in a multi-processor system. The compiler may generate an executable that includes a first portion of instructions to be processed by a first instruction set (such as a first instruction set of a first processor in a multi-processor system) and a second portion of instructions to be processed by a second instruction set (such as a second instruction set of a second processor in a multi-processor system). Such executable may be generated for execution on a multi-processor system that comprises at least one host processor, which may comprise a fixed instruction set, such as the well-known x86 instruction set, and at least one co-processor, which comprises dynamically reconfigurable logic that enables the co-processor's instruction set to be dynamically reconfigured. | 03-05-2009 |
20090070553 | DISPATCH MECHANISM FOR DISPATCHING INSTRUCTIONS FROM A HOST PROCESSOR TO A CO-PROCESSOR - A dispatch mechanism is provided for dispatching instructions of an executable from a host processor to a heterogeneous co-processor. According to certain embodiments, cache coherency is maintained between the host processor and the heterogeneous co-processor, and such cache coherency is leveraged for dispatching instructions of an executable that are to be processed by the co-processor. For instance, in certain embodiments, a designated portion of memory (e.g., “UCB”) is utilized, wherein a host processor may place information in such UCB and the co-processor can retrieve information from the UCB (and vice-versa). The UCB may thus be used to dispatch instructions of an executable for processing by the co-processor. In certain embodiments, the co-processor may comprise dynamically reconfigurable logic which enables the co-processor's instruction set to be dynamically changed, and the dispatching operation may identify one of a plurality of predefined instruction sets to be loaded onto the co-processor. | 03-12-2009 |
20090177843 | MICROPROCESSOR ARCHITECTURE HAVING ALTERNATIVE MEMORY ACCESS PATHS - The present invention is directed to a system and method which employ two memory access paths: 1) a cache-access path in which block data is fetched from main memory for loading to a cache, and 2) a direct-access path in which individually-addressed data is fetched from main memory. The system may comprise one or more processor cores that utilize the cache-access path for accessing data. The system may further comprise at least one heterogeneous functional unit that is operable to utilize the direct-access path for accessing data. In certain embodiments, the one or more processor cores, cache, and the at least one heterogeneous functional unit may be included on a common semiconductor die (e.g., as part of an integrated circuit). Embodiments of the present invention enable improved system performance by selectively employing the cache-access path for certain instructions while selectively employing the direct-access path for other instructions. | 07-09-2009 |
20100115233 | DYNAMICALLY-SELECTABLE VECTOR REGISTER PARTITIONING - The present invention is directed generally to dynamically-selectable vector register partitioning, and more specifically to a processor infrastructure (e.g., co-processor infrastructure in a multi-processor system) that supports dynamic setting of vector register partitioning to any of a plurality of different vector partitioning modes. Thus, rather than being restricted to a fixed vector register partitioning mode, embodiments of the present invention enable a processor to be dynamically set to any of a plurality of different vector partitioning modes. For instance, different vector register partitioning modes may be employed for different applications being executed by the processor, and/or different vector register partitioning modes may even be employed for use in processing different vector oriented operations within a given application being executed by the processor, in accordance with certain embodiments of the present invention. | 05-06-2010 |
20100115237 | CO-PROCESSOR INFRASTRUCTURE SUPPORTING DYNAMICALLY-MODIFIABLE PERSONALITIES - A co-processor is provided that comprises one or more application engines that can be dynamically configured to a desired personality. For instance, the application engines may be dynamically configured to any of a plurality of different vector processing instruction sets, such as a single-precision vector processing instruction set and a double-precision vector processing instruction set. The co-processor further comprises a common infrastructure that is common across all of the different personalities, such as an instruction decode infrastructure, memory management infrastructure, system interface infrastructure, and/or scalar processing unit (that has a base set of instructions). Thus, the personality of the co-processor can be dynamically modified (by reconfiguring one or more application engines of the co-processor), while the common infrastructure of the co-processor remains consistent across the various personalities. | 05-06-2010 |
20130332711 | SYSTEMS AND METHODS FOR EFFICIENT SCHEDULING OF CONCURRENT APPLICATIONS IN MULTITHREADED PROCESSORS - Systems and methods which provide a modular processor framework and instruction set architecture designed to efficiently execute applications whose memory access patterns are irregular or non-unit stride are disclosed. A hybrid multithreading framework (HMTF) of embodiments provides a framework for constructing tightly coupled, chip-multithreading (CMT) processors that contain specific features well-suited to hiding latency to main memory and executing highly concurrent applications. The HMTF of embodiments includes an instruction set designed specifically to exploit the high degree of parallelism and concurrency control mechanisms present in the HMTF hardware modules. The instruction format implemented by a HMTF of embodiments is designed to give the architecture, the runtime libraries, and/or the application ultimate control over how and when concurrency between thread cache units is initiated. For example, one or more bits of the instruction payload may be designated as a context switch bit (CTX) for expressly controlling context switching. | 12-12-2013 |
20150143350 | MULTISTATE DEVELOPMENT WORKFLOW FOR GENERATING A CUSTOM INSTRUCTION SET RECONFIGURABLE PROCESSOR - Systems and methods which implement workflows for providing reconfigurable processor core algorithms operable with associated capabilities using description files, thereby facilitating the development and generation of instruction sets for use with reconfigurable processors, are shown. Embodiments implement a multistage workflow in which program code is parsed into custom instructions and corresponding capability descriptions for generating reconfigurable processor loadable instruction sets. The multistage workflow of embodiments includes a hybrid threading compiler operable to compile input program code into custom instructions using a hardware timing agnostic approach. A timing manager of the multistage workflow of embodiments utilizes capabilities information provided in association with the custom instructions generated by the hybrid threading compiler to impose hardware timing on the custom instructions. A framework generator and hardware description language compiler are also included in the multistage workflow of embodiments. | 05-21-2015 |
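The UCB dispatch mechanism of 20090070553 can be pictured as a coherent shared mailbox: the host writes a dispatch record naming an instruction set, and the co-processor retrieves it, reconfigures if needed, and executes. The sketch below is a loose software analogy under stated assumptions; the record layout, the "vector-add" personality, and the function names are all hypothetical, and a shared `dict` stands in for cache-coherent memory.

```python
# Illustrative sketch (assumptions, not from the filings): dispatching work
# from a host to a co-processor through a shared "UCB" memory region, in
# the spirit of application 20090070553. A dict stands in for the
# cache-coherent designated portion of memory.

ucb = {"request": None, "result": None}   # hypothetical shared region

def host_dispatch(instruction_set: str, operands: list[int]) -> None:
    """Host places a dispatch record in the UCB for the co-processor."""
    ucb["request"] = {"iset": instruction_set, "operands": operands}

def coprocessor_poll() -> None:
    """Co-processor retrieves the request, (re)configures, and executes."""
    req = ucb["request"]
    if req is None:
        return
    # A real co-processor would load the named predefined instruction set
    # here; we simulate a single "personality" with one vector operation.
    if req["iset"] == "vector-add":
        ucb["result"] = sum(req["operands"])
    ucb["request"] = None             # mark the mailbox empty again

host_dispatch("vector-add", [1, 2, 3])
coprocessor_poll()
```

The same mailbox works in both directions, which is how the abstract's "and vice-versa" exchange of information would look in this toy model.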
Patent application number | Description | Published |
20100036997 | MULTIPLE DATA CHANNEL MEMORY MODULE ARCHITECTURE - The present invention is directed generally to systems and methods which provide a memory module having multiple data channels that are independently accessible (i.e., a multi-data channel memory module). According to one embodiment, the multi-data channel memory module enables a plurality of independent sub-cache-block accesses to be serviced simultaneously. In addition, the memory architecture also supports cache-block accesses. For instance, multiple ones of the data channels may be employed for servicing a cache-block access. In one embodiment a DIMM architecture that comprises multiple data channels is provided. Each data channel supports a sub-cache-block access, and multiple ones of the data channels may be used for supporting a cache-block access. The plurality of data channels to a given DIMM may be used simultaneously to support different, independent memory access operations. | 02-11-2010 |
20100037024 | MEMORY INTERLEAVE FOR HETEROGENEOUS COMPUTING - A memory interleave system for providing memory interleave for a heterogeneous computing system is provided. The memory interleave system effectively interleaves memory that is accessed by heterogeneous compute elements in different ways, such as via cache-block accesses by certain compute elements and via non-cache-block accesses by certain other compute elements. The heterogeneous computing system may comprise one or more cache-block oriented compute elements and one or more non-cache-block oriented compute elements that share access to a common main memory. The cache-block oriented compute elements access the memory via cache-block accesses (e.g., 64 bytes, per access), while the non-cache-block oriented compute elements access memory via sub-cache-block accesses (e.g., 8 bytes, per access). A memory interleave system is provided to optimize the interleaving across the system's memory banks to minimize hot spots resulting from the cache-block oriented and non-cache-block oriented accesses of the heterogeneous computing system. | 02-11-2010 |
20100220742 | SYSTEM AND METHOD FOR ROUTER QUEUE AND CONGESTION MANAGEMENT - In a multi-QOS level queuing structure, packet payload pointers are stored in multiple queues and packet payloads in a common memory pool. Algorithms control the drop probability of packets entering the queuing structure. Instantaneous drop probabilities are obtained by comparing measured instantaneous queue size with calculated minimum and maximum queue sizes. Non-utilized common memory space is allocated simultaneously to all queues. Time averaged drop probabilities follow a traditional Weighted Random Early Discard mechanism. Algorithms are adapted to a multi-level QOS structure, floating point format, and hardware implementation. Packet flow from a router egress queuing structure into a single egress port tributary is controlled by an arbitration algorithm using a rate metering mechanism. The queuing structure is replicated for each egress tributary in the router system. | 09-02-2010 |
20120079177 | MEMORY INTERLEAVE FOR HETEROGENEOUS COMPUTING - A memory interleave system for providing memory interleave for a heterogeneous computing system is provided. The memory interleave system effectively interleaves memory that is accessed by heterogeneous compute elements in different ways, such as via cache-block accesses by certain compute elements and via non-cache-block accesses by certain other compute elements. The heterogeneous computing system may comprise one or more cache-block oriented compute elements and one or more non-cache-block oriented compute elements that share access to a common main memory. The cache-block oriented compute elements access the memory via cache-block accesses (e.g., 64 bytes, per access), while the non-cache-block oriented compute elements access memory via sub-cache-block accesses (e.g., 8 bytes, per access). A memory interleave system is provided to optimize the interleaving across the system's memory banks to minimize hot spots resulting from the cache-block oriented and non-cache-block oriented accesses of the heterogeneous computing system. | 03-29-2012 |
20150206561 | MULTIPLE DATA CHANNEL MEMORY MODULE ARCHITECTURE - The present invention is directed generally to systems and methods which provide a memory module having multiple data channels that are independently accessible (i.e., a multi-data channel memory module). According to one embodiment, the multi-data channel memory module enables a plurality of independent sub-cache-block accesses to be serviced simultaneously. In addition, the memory architecture also supports cache-block accesses. For instance, multiple ones of the data channels may be employed for servicing a cache-block access. In one embodiment a DIMM architecture that comprises multiple data channels is provided. Each data channel supports a sub-cache-block access, and multiple ones of the data channels may be used for supporting a cache-block access. The plurality of data channels to a given DIMM may be used simultaneously to support different, independent memory access operations. | 07-23-2015 |
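The instantaneous drop probability described in 20100220742 follows the classic Weighted Random Early Discard shape: no drops below a minimum queue size, certain drop above a maximum, and a linear ramp in between. The sketch below shows that standard ramp; the parameter values are assumptions, and the filing's floating-point and multi-QOS adaptations are not modeled.

```python
# Illustrative sketch (parameter values are assumptions): the instantaneous
# drop probability compared against min/max queue sizes, as described in
# application 20100220742, following the standard WRED ramp.

def drop_probability(queue_size: float, min_th: float, max_th: float,
                     max_p: float = 1.0) -> float:
    """Drop probability for the current (instantaneous) queue size."""
    if queue_size <= min_th:
        return 0.0                    # under the minimum: accept everything
    if queue_size >= max_th:
        return 1.0                    # over the maximum: drop everything
    # Linear ramp from 0 at min_th up to max_p at max_th.
    return max_p * (queue_size - min_th) / (max_th - min_th)
```

With thresholds of 20 and 80 packets, a queue holding 50 packets sits halfway up the ramp and drops arriving packets with probability 0.5.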