Patent application number | Description | Published |
20100118041 | Shared virtual memory - Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings. | 05-13-2010 |
20100122264 | Language level support for shared virtual memory - Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU. | 05-13-2010 |
20110153957 | SHARING VIRTUAL MEMORY-BASED MULTI-VERSION DATA BETWEEN THE HETEROGENEOUS PROCESSORS OF A COMPUTER PLATFORM - A computer system may comprise a computer platform and input-output devices. The computer platform may include a plurality of heterogeneous processors comprising a central processing unit (CPU) and a graphics processing unit (GPU) and a shared virtual memory supported by a physical private memory space of at least one heterogeneous processor or a physical shared memory shared by the heterogeneous processors. The CPU (producer) may create shared multi-version data and store such shared multi-version data in the physical private memory space or the physical shared memory. The GPU (consumer) may acquire or access the shared multi-version data. | 06-23-2011 |
20120023296 | Recording Dirty Information in Software Distributed Shared Memory Systems - A page table entry dirty bit system may be utilized to record dirty information for a software distributed shared memory system. In some embodiments, this may improve performance without substantially increasing overhead because the dirty bit recording system is already available in certain processors. By providing extra bits, coherence can be obtained with respect to all the other uses of the existing page table entry dirty bits. | 01-26-2012 |
20120279503 | Breathing Apparatus With Ultraviolet Light Emitting Diode - A breathing apparatus according to embodiments of the invention includes a facemask portion sized to cover a lower portion of a wearer's face. The facemask portion includes a flow chamber defined by a support layer and a cover. The flow chamber has a first opening disposed near a first end of the flow chamber and a second opening disposed near a second end of the flow chamber. At least one light emitting diode configured to emit light having a peak wavelength in the ultraviolet range is disposed between the first opening and the second opening in the flow chamber. | 11-08-2012 |
20130061240 | TWO WAY COMMUNICATION SUPPORT FOR HETEROGENEOUS PROCESSORS OF A COMPUTER PLATFORM - A computer system may comprise a computer platform and input-output devices. The computer platform may include a plurality of heterogeneous processors comprising a central processing unit (CPU) and a graphics processing unit (GPU), for example. The GPU may be coupled to a GPU compiler and a GPU linker/loader and the CPU may be coupled to a CPU compiler and a CPU linker/loader. The user may create a shared object in an object oriented language and the shared object may include virtual functions. The shared object may be fine grain partitioned between the heterogeneous processors. The GPU compiler may allocate the shared object to the CPU and may create a first and a second enabling path to allow the GPU to invoke virtual functions of the shared object. Thus, the shared object that may include virtual functions may be shared seamlessly between the CPU and the GPU. | 03-07-2013 |
20130173894 | SHARING VIRTUAL FUNCTIONS IN A SHARED VIRTUAL MEMORY BETWEEN HETEROGENEOUS PROCESSORS OF A COMPUTING PLATFORM - A computing platform may include heterogeneous processors (e.g., CPU and a GPU) to support sharing of virtual functions between such processors. In one embodiment, a CPU side vtable pointer used to access a shared object from the CPU | 07-04-2013 |
20130235083 | Information Processing Method, Method For Driving Image Collection Unit And Electrical Device - An information processing method and an electrical device are described. The information processing method is applied to an electrical device having at least a processing unit; the electrical device has a plurality of usage modes and further includes a plurality of sensing units. The method includes acquiring, by the processing unit, data collected by the plurality of the sensing units, and judging whether the electrical device is in a first usage mode according to the acquired data collected by the plurality of the sensing units, wherein the first usage mode is one of the plurality of the usage modes. With the present method, the goals of integrating a plurality of usage modes into one electrical device and of efficiently judging the usage mode corresponding to the current application scene are achieved. | 09-12-2013 |
20140306972 | Language Level Support for Shared Virtual Memory - Embodiments of the invention provide language support for CPU-GPU platforms. In one embodiment, code can be flexibly executed on both the CPU and GPU. CPU code can offload a kernel to the GPU. That kernel may in turn call preexisting libraries on the CPU, or make other calls into CPU functions. This allows an application to be built without requiring the entire call chain to be recompiled. Additionally, in one embodiment data may be shared seamlessly between CPU and GPU. This includes sharing objects that may have virtual functions. Embodiments thus ensure the right virtual function gets invoked on the CPU or the GPU if a virtual function is called by either the CPU or GPU. | 10-16-2014 |
20140375662 | SHARED VIRTUAL MEMORY - Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings. | 12-25-2014 |
20150019825 | SHARING VIRTUAL MEMORY-BASED MULTI-VERSION DATA BETWEEN THE HETEROGENEOUS PROCESSORS OF A COMPUTER PLATFORM - A computer system may comprise a computer platform and input-output devices. The computer platform may include a plurality of heterogeneous processors comprising a central processing unit (CPU) and a graphics processing unit (GPU) and a shared virtual memory supported by a physical private memory space of at least one heterogeneous processor or a physical shared memory shared by the heterogeneous processors. The CPU (producer) may create shared multi-version data and store such shared multi-version data in the physical private memory space or the physical shared memory. The GPU (consumer) may acquire or access the shared multi-version data. | 01-15-2015 |
20150113255 | SHARING VIRTUAL FUNCTIONS IN A SHARED VIRTUAL MEMORY BETWEEN HETEROGENEOUS PROCESSORS OF A COMPUTING PLATFORM - A computing platform may include heterogeneous processors (e.g., CPU and a GPU) to support sharing of virtual functions between such processors. In one embodiment, a CPU side vtable pointer used to access a shared object from the CPU | 04-23-2015 |
20150123978 | Shared Virtual Memory - Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings. | 05-07-2015 |
20150212832 | TECHNIQUES FOR DYNAMICALLY REDIRECTING DEVICE DRIVER OPERATIONS TO USER SPACE - Various embodiments are generally directed an apparatus and method for configuring an execution environment in a user space for device driver operations and redirecting a device driver operation for execution in the execution environment in the user space including copying instructions of the device driver operation from the kernel space to a user process in the user space. In addition, the redirected device driver operation may be executed in the execution environment in the user space. | 07-30-2015 |
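A recurring idea across the shared-virtual-memory filings above is a producer (the CPU) publishing multi-version data in a shared region that a consumer (the GPU) later acquires. The toy Python sketch below models only that producer/consumer versioning pattern; the class and method names are hypothetical illustrations, not the patented implementation:

```python
class SharedRegion:
    """Toy model of shared multi-version data: the producer stores
    immutable snapshots, and the consumer acquires the most recently
    released version rather than racing on a single mutable copy."""

    def __init__(self):
        self._versions = {}   # version number -> data snapshot
        self._released = 0    # highest version visible to consumers

    def produce(self, data):
        """Producer (CPU) publishes a new version of the shared data."""
        version = self._released + 1
        self._versions[version] = data
        self._released = version    # release: make this version visible
        return version

    def acquire(self):
        """Consumer (GPU) acquires the latest released version."""
        return self._versions[self._released]


region = SharedRegion()
region.produce([1, 2, 3])        # CPU writes version 1
region.produce([1, 2, 3, 4])     # CPU writes version 2
print(region.acquire())          # GPU reads the latest: [1, 2, 3, 4]
```

In the abstracts, the backing store for such a region may be a processor's physical private memory or a physically shared memory; this sketch abstracts both behind an ordinary Python dictionary.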
Patent application number | Description | Published |
20080268787 | Methods and Apparatus for Service Acquisition in a Broadcast System - Methods and apparatus for service acquisition in a broadcast system. In an aspect, a method includes detecting whether a loss of service has occurred, and initiating acquisition attempts during an aggressive acquisition phase if a loss of service has occurred, wherein a backoff time interval between successive acquisition attempts is constant or increased, and wherein the aggressive acquisition phase ends when service acquisition is achieved or a selected number of acquisition attempts have been performed. An apparatus includes interface logic configured to detect whether a loss of service has occurred, and processing logic configured to initiate acquisition attempts during an aggressive acquisition phase if a loss of service has occurred, wherein a backoff time interval between successive acquisition attempts is constant or increased, and wherein the aggressive acquisition phase ends when service acquisition is achieved or a selected number of acquisition attempts have been performed. | 10-30-2008 |
20090019460 | APPLICATION PROGRAMMING INTERFACE (API) FOR HANDLING ERRORS IN PACKETS RECEIVED BY A WIRELESS COMMUNICATIONS RECEIVER - Packets of information may be received in accordance with a protocol stack having a first portion ( | 01-15-2009 |
20090019461 | APPLICATION PROGRAMMING INTERFACE (API) FOR RESTORING A DEFAULT SCAN LIST IN A WIRELESS COMMUNICATIONS RECEIVER - A signal may be received in accordance with a protocol stack having a first portion ( | 01-15-2009 |
20090197604 | METHODS AND APPARATUS FOR RF HANDOFF IN A MULTI-FREQUENCY NETWORK - Methods and apparatus for RF handoff in a multi-frequency network. A method includes generating a handoff table that comprises RF channels of current and neighboring local operations infrastructures (LOIs) carrying the same content as a current RF channel; detecting a handoff event; disqualifying one or more of the RF channels from the handoff table based on disqualification criteria; selecting a selected RF channel from remaining RF channels in the handoff table that have not been disqualified; and performing a handoff from the current RF channel to the selected RF channel. Another method includes detecting a handoff event; identifying a start of a handoff time interval; determining if RSSI measurements are available at the start of the handoff time interval for RF channels carrying desired content; and performing a handoff to a selected RF channel having a greatest RSSI measurement. | 08-06-2009 |
20100067416 | RE-PROGRAMMING MEDIA FLOW PHONE USING SPEED CHANNEL SWITCH TIME THROUGH SLEEP TIME LINE - A multicast wireless telecommunication system reprograms a preset sleep time line to an earlier point to wake the ASIC at the right moment to obtain the new channel OIS, preventing the screen of the device display from going black and saving power by avoiding extra wake-up time and extra going-to-sleep cycles. | 03-18-2010 |
20100091695 | METHOD AND APPARATUS FOR OPTIMIZING IDLE MODE STAND-BY TIME IN A MULTICAST SYSTEM - Methods and apparatus for optimizing idle mode stand-by time in a wireless device operable in a multicast system are disclosed. In order to maximize or optimize the stand-by time for idle mode, a time line is determined in the wireless device for decoding overhead information symbol (OIS) data received in one or more superframes. Based on the determined time line, an offset time period can be determined for setting an idle mode timer period used by the wireless device to decode the OIS information. By offsetting the timer period, a wireless device can be ensured to wake up and be prepared to latch OIS information before the start of a superframe boundary, thus minimizing the wake-up time of the device operating in an idle mode and, in turn, optimizing stand-by time. | 04-15-2010 |
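The service-acquisition entry (20080268787) describes an "aggressive acquisition phase": repeated acquisition attempts with a constant or increasing backoff interval, ending on success or after a fixed number of attempts. A minimal Python sketch of that retry schedule, with hypothetical names and with the intervals recorded instead of slept so the logic is easy to inspect:

```python
def aggressive_acquisition(try_acquire, max_attempts=5,
                           base_backoff=0.1, growth=2.0):
    """Aggressive acquisition phase: retry until success or the attempt
    budget is exhausted. The interval between attempts is constant
    (growth=1.0) or increasing (growth>1.0). Returns (succeeded, intervals);
    a real receiver would sleep for each interval before retrying."""
    backoff = base_backoff
    intervals = []
    for attempt in range(max_attempts):
        if try_acquire():
            return True, intervals
        if attempt < max_attempts - 1:
            intervals.append(backoff)   # back off before the next attempt
            backoff *= growth
    return False, intervals


# Service comes back on the third attempt:
outcomes = iter([False, False, True])
ok, waits = aggressive_acquisition(lambda: next(outcomes))
print(ok, waits)   # True [0.1, 0.2]
```

The phase-ending conditions mirror the abstract exactly: success, or a selected number of attempts performed.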
Patent application number | Description | Published |
20140115169 | OPERATING GROUP RESOURCES IN SUB-GROUPS AND NESTED GROUPS - The present invention provides a method, a group server, and an apparatus for operating a group resource; a member resource operation request sent to a member device carries an operation request identifier, so that the member device that the member resource belongs to determines, according to the operation request identifier, whether operation request identifiers stored by the member device include the operation request identifier, and processes the member resource operation request according to a determination result. Therefore, repeated processing or cyclic processing of the member resource operation request may be avoided. | 04-24-2014 |
20150074280 | OPERATING GROUP RESOURCES IN SUB-GROUPS AND NESTED GROUPS - The present invention provides a method, a group server, and an apparatus for operating a group resource; a member resource operation request sent to a member device carries an operation request identifier, so that the member device that the member resource belongs to determines, according to the operation request identifier, whether operation request identifiers stored by the member device include the operation request identifier, and processes the member resource operation request according to a determination result. Therefore, repeated processing or cyclic processing of the member resource operation request may be avoided. | 03-12-2015 |
20150257170 | METHOD AND APPARATUS FOR GROUP MANAGEMENT DURING MACHINE-TO-MACHINE COMMUNICATION - The present invention provides a method and an apparatus for group management during M2M communication. The method for group management during M2M communication includes receiving a group creation request sent by a requesting apparatus and carrying a group type of a group requested to be created, checking consistency between member types of members in the group and the group type and setting a consistency check flag of the group according to a consistency check result, and returning a group creation response that carries the consistency check result to the requesting apparatus. | 09-10-2015 |
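The group-resource entries (20140115169, 20150074280) hinge on one mechanism: each member resource operation request carries an operation request identifier, and a member device compares it against the identifiers it has already stored to avoid repeated or cyclic processing. A toy Python sketch of that idempotency check (class and method names are hypothetical):

```python
class MemberDevice:
    """Toy model of a group member device that processes each operation
    request at most once, keyed by the carried operation request identifier."""

    def __init__(self):
        self._seen_ids = set()   # operation request identifiers already handled
        self.processed = []      # operations actually executed

    def handle(self, request_id, operation):
        """Process a member resource operation request, or ignore a repeat."""
        if request_id in self._seen_ids:
            return "duplicate-ignored"   # avoids repeated/cyclic processing
        self._seen_ids.add(request_id)
        self.processed.append(operation)
        return "processed"


device = MemberDevice()
print(device.handle("op-1", "update"))   # processed
print(device.handle("op-1", "update"))   # duplicate-ignored
```

With nested groups, the same request can reach a device along several paths (directly and via a sub-group), which is why the dedup key travels with the request rather than being derived by each group server.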
Patent application number | Description | Published |
20140074793 | SERVICE ARCHIVE SUPPORT - Embodiments of the present invention are directed to systems and methods for providing archival support for one or more services provided by a cloud infrastructure system. One such method comprises receiving a message corresponding to an archive trigger event, and determining based on the message one or more services subscribed to by a customer of a cloud infrastructure system which are to be archived. The method further comprises sending an instruction to the one or more services to archive customer information, and storing each archive in an archive directory accessible to the customer. | 03-13-2014 |
20140075031 | SEPARATION OF POD PROVISIONING AND SERVICE PROVISIONING - A method for POD provisioning and service provisioning is disclosed. The method may comprise storing, by a cloud infrastructure system, subscription order information from a customer identifying a service from a set of cloud services provided by the cloud infrastructure system, the cloud infrastructure system comprising one or more computing devices, wherein the subscription order information includes customer-specific configuration. Additionally, the method may comprise determining, by a computing device from the one or more computing devices, a service associated with the subscription order information. Moreover, the method may comprise mapping a pre-provisioned anonymous deployment to the subscription order information, wherein the pre-provisioned anonymous deployment is specifically pre-provisioned for the determined service. Furthermore, the method may comprise creating, by a computing device from the one or more computing devices, a service instance specifically for the customer by configuring the pre-provisioned anonymous deployment with the customer-specific configuration. | 03-13-2014 |
20140075033 | SERVICE ASSOCIATION MODEL - Enabling associations between cloud services in a computer network cloud infrastructure system is described. Cloud services can include infrastructure as a service (IAAS) storage and processing services, platform as a service (PAAS) database and Java services, and software as a service (SAAS) customer resource management services. Associations between the services can include automatically sharing security certificate-based keys and tokens or otherwise sharing data. Upon subscribing to a cloud system through an automated system, a user is prompted to select allowable associations between the selected services. The services are then provisioned and the user-selected associations are enabled. | 03-13-2014 |
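The separation described in 20140075031 — pods pre-provisioned anonymously per service type, then bound to a customer's subscription order and configured on demand — can be sketched as a simple pool. All names below are hypothetical illustrations of the mapping step, not the cloud infrastructure system's actual interfaces:

```python
class DeploymentPool:
    """Toy model separating pod provisioning from service provisioning:
    anonymous deployments are created ahead of time per service type,
    then one is mapped to a subscription order and given the
    customer-specific configuration."""

    def __init__(self):
        self._pool = {}   # service type -> queue of anonymous deployments

    def pre_provision(self, service_type, deployment_id):
        """Stage an anonymous deployment before any customer orders it."""
        self._pool.setdefault(service_type, []).append(deployment_id)

    def bind(self, service_type, customer_config):
        """Map a pre-provisioned deployment to a subscription order and
        configure it, yielding a customer-specific service instance."""
        deployment_id = self._pool[service_type].pop(0)
        return {"deployment": deployment_id, "config": customer_config}


pool = DeploymentPool()
pool.pre_provision("database", "pod-7")        # done ahead of demand
instance = pool.bind("database", {"customer": "acme"})
print(instance)   # {'deployment': 'pod-7', 'config': {'customer': 'acme'}}
```

The point of the split is latency: the expensive pod creation happens before the order arrives, so service provisioning reduces to a cheap mapping-plus-configuration step.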