Patent application number | Description | Published |
20090221368 | Method and system for creating a shared game space for a networked game - Techniques for creating a shared virtual space based on one or more real-world spaces are disclosed. Representations of the real-world spaces are combined in different ways to create a shared virtual game space within which each person's real-world movements are interpreted to create a shared feeling of physical proximity and physical interaction with other people on the network. One or more video cameras in one real-world area are provided to generate video data capturing the users as well as their environment. The shared virtual space is created in reference to the respective real-world spaces, which may be combined in various ways. Depending on a particular application, the shared virtual space is embedded with various virtual objects and representative objects. Together with various rules and scoring mechanisms, such a shared virtual space may be used in a videogame played by multiple players, within which players' movements are interpreted to create a shared feeling of physical proximity and physical interaction with other players on the network. | 09-03-2009 |
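The abstract does not disclose how real-world rooms are combined; as a minimal sketch, one common way to place each player's room into a shared space is a per-room affine transform. The function name and parameters below are illustrative assumptions, not taken from the patent:

```python
def to_shared_space(local_pos, room_origin, room_scale, room_offset):
    # Map a player's position in their own room into the shared game space.
    # room_origin anchors the room's local coordinates; room_scale normalizes
    # rooms of different sizes; room_offset places each real-world room in a
    # distinct region of the shared space so rooms can be tiled or overlapped.
    return tuple((p - o) * room_scale + d
                 for p, o, d in zip(local_pos, room_origin, room_offset))
```

For example, a player standing at (2, 3) in a room anchored at (1, 1), scaled by 0.5, and offset to region (10, 0) lands at (10.5, 1.0) in the shared space.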
20090221374 | Method and system for controlling movements of objects in a videogame - Techniques for controlling movements of an object in a videogame are disclosed. At least one video camera is used at a location where a player plays the videogame; the video camera captures various movements of the player. A designated device (e.g., a game console or computer) is configured to process the video data to derive the movements of the player and cause the object to respond to those movements. When the designated device receives video data from more than one location, players at the respective locations can play a networked videogame that may be built upon a shared space representing some or all of the real-world spaces of the locations. The videogame is embedded with objects, some of which respond to the movements of the players and interact with other objects in accordance with the rules of the videogame. | 09-03-2009 |
20090288064 | Method and apparatus for non-disruptive embedding of specialized elements - Techniques for non-disruptive embedding of specialized elements are disclosed. In one aspect of the techniques, an ontology is defined to specify an application domain. A programming interface (API) is also provided for a developer to create raw features. A module is thus provided for at least one form of statistical analysis within the ontology. The module is configured automatically in a computing device with the API in response to a system consistent with the ontology, wherein the system has no substantial requirement for specialized knowledge of that form of statistical analysis, and the module has no substantial requirement for specialized knowledge of particular functions provided by the system. | 11-19-2009 |
20110043443 | Systems and methods for utilizing personalized motion control in virtual environment - Techniques for controlling motions using motion recognizers generated in advance by users are described. According to one embodiment, motion recognizers created by end users are utilized to control virtual objects displayed in a virtual environment. By manipulating one or more motion sensitive devices, end users can control what the objects do in the virtual environment. Motion signals from each of the motion sensitive devices are recognized in accordance with the motion recognizers created in advance by the users. One or more of the motion signals are at the same time utilized to tune the motion recognizers or create additional motion recognizers. As a result, the motion recognizers are constantly updated to become more accommodating to the user(s). | 02-24-2011 |
20110044501 | Systems and methods for personalized motion control - End users, unskilled in the art, generate motion recognizers from example motions, without substantial programming, without limitation to any fixed set of well-known gestures, and without limitation to motions that occur substantially in a plane or are substantially predefined in scope. From example motions for each class of motion to be recognized, a system automatically generates motion recognizers using machine learning techniques. Those motion recognizers can be incorporated into an end-user application, with the effect that when a user of the application supplies a motion, the motion recognizers will recognize it as an example of one of the known classes of motion. Motion recognizers incorporated into an end-user application can also be tuned to improve recognition rates for subsequent motions, allowing end-users to add new example motions. | 02-24-2011 |
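The abstract names machine learning over example motions without specifying the technique. A common approach to this kind of example-based gesture recognition (assumed here for illustration; the patent may use something different) is nearest-neighbour classification under dynamic time warping (DTW), which tolerates motions performed at different speeds:

```python
import math

def dtw_distance(a, b):
    # Dynamic time warping distance between two motion signals, where each
    # signal is a list of sensor samples, e.g. (x, y, z) acceleration tuples.
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip a sample of a
                                 D[i][j - 1],      # skip a sample of b
                                 D[i - 1][j - 1])  # match samples
    return D[n][m]

class MotionRecognizer:
    """Nearest-neighbour classifier over user-supplied example motions."""
    def __init__(self):
        self.examples = []  # list of (label, samples) pairs

    def add_example(self, label, samples):
        # Adding examples is how an end user "teaches" a new motion class.
        self.examples.append((label, samples))

    def recognize(self, samples):
        # Return the label of the closest stored example.
        return min(self.examples,
                   key=lambda e: dtw_distance(e[1], samples))[0]
```

Tuning as described in the abstract then amounts to calling `add_example` with newly recognized motions, so the stored examples track the user's actual style.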
20110109548 | Systems and methods for motion recognition with minimum delay - Techniques for performing motion recognition with minimum delay are disclosed. A processing unit is provided to receive motion signals from at least one motion sensing device, where the motion signals describe motions made by a user. The processing unit is configured to access a set of prototypes included in a motion recognizer to generate corresponding recognition signals from the motion signals, without having considered one or more of the prototypes in the motion recognizer completely. Movements of at least one of the objects in a virtual interactive environment are responsive to the recognition signals, such that feedback from the motions to control that object is immediate and substantially correct no matter how much of the motion signals has been received. | 05-12-2011 |
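The key idea, emitting a recognition result before the motion is complete, can be sketched without the patent's specific prototype mechanism. This hedged illustration substitutes a simple per-sample prefix distance: after every incoming sample, the partial motion is scored against the equal-length prefix of each prototype and a best guess is emitted immediately:

```python
import math

def prefix_distance(prototype, partial):
    # Average per-sample distance between the partial motion and the
    # equal-length prefix of the prototype; the prototype need not be
    # fully consumed, which is what makes early recognition possible.
    n = min(len(prototype), len(partial))
    return sum(math.dist(prototype[i], partial[i]) for i in range(n)) / n

class EarlyRecognizer:
    """Emits a best-guess label after every incoming sample."""
    def __init__(self, prototypes):
        self.prototypes = prototypes  # list of (label, samples) pairs
        self.partial = []

    def feed(self, sample):
        # Append the new sample and immediately return the current best
        # guess, so feedback is available however little has been received.
        self.partial.append(sample)
        return min(self.prototypes,
                   key=lambda p: prefix_distance(p[1], self.partial))[0]
```

The early guesses may be revised as more samples arrive, which matches the abstract's claim of feedback that is immediate and substantially (not perfectly) correct.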
20110112996 | Systems and methods for motion recognition using multiple sensing streams - Techniques for motion recognition using multiple data streams are disclosed. Multiple data streams from inertial sensors as well as non-inertial sensors are received to derive a motion recognition signal from motion recognizers. These motion recognizers are originally constructed from a training set of motion signals and may be updated with the received sensing signals. In one aspect, the multiple data streams are converted to device-independent motion signals to which the motion recognizers are applied, providing a generalized motion recognition capability. | 05-12-2011 |
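The conversion to device-independent motion signals is not spelled out in the abstract. One plausible ingredient, assumed here purely for illustration, is resampling each stream to a fixed length so that sensors with different sampling rates produce comparable signals:

```python
def resample(samples, target_len=32):
    # Linearly interpolate a stream of sample tuples to a fixed number of
    # samples, making the signal independent of the device's sampling rate.
    if len(samples) == 1:
        return samples * target_len
    out = []
    step = (len(samples) - 1) / (target_len - 1)
    for i in range(target_len):
        t = i * step
        j = min(int(t), len(samples) - 2)  # left neighbour index
        f = t - j                          # interpolation fraction
        out.append(tuple(a + (b - a) * f
                         for a, b in zip(samples[j], samples[j + 1])))
    return out

def to_device_independent(streams):
    # streams: dict mapping sensor name -> list of sample tuples, possibly
    # captured at different rates; resample each onto a common time base.
    return {name: resample(s) for name, s in streams.items()}
```

After this step, recognizers trained on one device's streams can at least be applied to another device's streams of the same sensor types, which is the generalization the abstract describes.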
20120169887 | Method and system for head tracking and pose estimation - Techniques for performing accurate and automatic head pose estimation are disclosed. According to one aspect of the techniques, head pose estimation is integrated with a scale-invariant head tracking method along with facial features detected from a located head in images. Thus the head pose estimation works efficiently even when there are large translational movements resulting from the head motion. Various computation techniques are used to optimize the process of estimation so that the head pose estimation can be applied to control one or more objects in a virtual environment and virtual character gaze control. | 07-05-2012 |
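The abstract combines scale-invariant tracking with facial features but gives no formulas. As a rough, hedged sketch (not the patented computation), in-plane roll and an approximate yaw can be read off two eye positions and a nose position, with the inter-ocular distance as the scale-invariant denominator:

```python
import math

def estimate_yaw_roll(left_eye, right_eye, nose):
    # left_eye, right_eye, nose: (x, y) image coordinates of detected
    # facial features. Roll is the in-plane tilt of the eye line; yaw is
    # approximated from how far the nose deviates from the eye midpoint,
    # normalized by the inter-ocular distance so the estimate does not
    # change when the head moves toward or away from the camera.
    ex = right_eye[0] - left_eye[0]
    ey = right_eye[1] - left_eye[1]
    roll = math.atan2(ey, ex)
    interocular = math.hypot(ex, ey)  # scale-invariant denominator
    mid_x = (left_eye[0] + right_eye[0]) / 2
    yaw = math.asin(max(-1.0, min(1.0, 2 * (nose[0] - mid_x) / interocular)))
    return yaw, roll
```

A frontal face (nose directly below the eye midpoint, level eyes) yields yaw and roll of zero; the normalization keeps both estimates stable under the large translational movements the abstract mentions.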
20120208639 | Remote control with motion sensitive devices - Techniques for using a variety of motion sensitive signals to remotely control an existing electronic device or system are described. Output signals from a motion sensitive device may be in a different form from those of a pre-defined controlling device. According to one aspect of the present invention, a controlled device is designed to respond to signals from a touch screen or touch-screen-like signals. The output signals from a motion sensitive device serve as motion sensitive inputs to a controlled device and are converted into touch-screen-like signals that are coupled to the controlled device or programs being executed in the controlled device, subsequently causing the behavior of the controlled device to change or respond thereto, without reconfiguration of the applications running on the controlled device. | 08-16-2012 |
20120256835 | Motion control used as controlling device - Techniques for using a motion sensitive device as a controller are disclosed. A motion controller serves as an input/control device to control an existing electronic device (a.k.a. controlled device) previously configured to take inputs from a pre-defined controlling device. The signals from the motion controller are in a different form from those of the pre-defined controlling device. According to one aspect of the present invention, the controlled device is designed to respond to signals from a pre-defined controlling device (e.g., a touch-screen device). The inputs from the motion controller are converted into touch-screen-like signals that are then sent to the controlled device or programs being executed in the controlled device to cause the behavior of the controlled device to change or respond thereto, without reconfiguration of the applications running on the controlled device. | 10-11-2012 |
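Neither of the two abstracts above specifies the motion-to-touch conversion. A minimal sketch, under the assumption (not from the patents) that gyroscope angular rates are integrated into a virtual touch point, might look like:

```python
class MotionToTouch:
    """Converts gyroscope rates into touch-screen-like drag events."""
    def __init__(self, width, height, sensitivity=200.0):
        # Start the virtual touch point at the screen center; sensitivity
        # scales radians/second of device rotation into pixels of travel.
        self.x, self.y = width / 2, height / 2
        self.w, self.h = width, height
        self.k = sensitivity

    def update(self, yaw_rate, pitch_rate, dt):
        # Integrate angular rates over the timestep dt into screen
        # coordinates, clamped to the screen bounds, and emit an event in
        # the form the touch-driven application already understands.
        self.x = min(self.w, max(0.0, self.x + yaw_rate * self.k * dt))
        self.y = min(self.h, max(0.0, self.y + pitch_rate * self.k * dt))
        return ("touch_move", round(self.x), round(self.y))
```

Because the output is an ordinary touch event, the applications on the controlled device run unmodified, which is the central claim of both abstracts.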
20140320691 | Method and system for head tracking and pose estimation - Techniques for performing accurate and automatic head pose estimation are disclosed. According to one aspect of the techniques, head pose estimation is integrated with a scale-invariant head tracking method along with facial features detected from a located head in images. Thus the head pose estimation works efficiently even when there are large translational movements resulting from the head motion. Various computation techniques are used to optimize the process of estimation so that the head pose estimation can be applied to control one or more objects in a virtual environment and virtual character gaze control. | 10-30-2014 |
20140342830 | Method and system for providing backward compatibility - Techniques for providing compatibility between two different game controllers are disclosed. When a new or more advanced controller is introduced, it is important that such a new controller works with a system originally configured for an existing or old controller. The new controller may provide more functionalities than the old one does. In some cases, the new controller provides more sensing signals than the old one does. The new controller is configured to work with the system to transform the sensing signals therefrom to masquerade as though they were coming from the old controller. The transforming of the sensing signals comprises: replicating operational characteristics of the old controller, and relocating virtually the sensing signals to appear as though the sensing signals were generated from inertial sensors located in a certain location in the new controller responsive to a certain location of the inertial sensors in the old controller. | 11-20-2014 |
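The virtual relocation of inertial sensor signals described above follows from rigid-body kinematics: an accelerometer at a different point on the same rigid controller measures the body-point acceleration plus Euler and centripetal terms. The function below is an illustrative sketch of that standard relation, a_old = a_new + alpha x r + omega x (omega x r), not the patent's exact transform:

```python
def cross(u, v):
    # Cross product of two 3-vectors given as tuples.
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def relocate_accel(a_new, omega, alpha, r):
    # Acceleration that an accelerometer at offset r (meters) from the new
    # controller's sensor would measure, given the new sensor's reading
    # a_new, the controller's angular velocity omega (rad/s), and angular
    # acceleration alpha (rad/s^2), all in the controller's body frame.
    centripetal = cross(omega, cross(omega, r))  # omega x (omega x r)
    euler = cross(alpha, r)                      # alpha x r
    return tuple(a + e + c for a, e, c in zip(a_new, euler, centripetal))
```

For a controller spinning at 1 rad/s about its z-axis, a virtual sensor 1 m out along x picks up a 1 m/s^2 centripetal term pointing back toward the axis, even when the physical sensor at the rotation center reads zero; this is the kind of signal the old controller's firmware would expect from its own sensor placement.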