Motion planning or control

Subclass of:

345 - Computer graphics processing and selective visual display systems

345418000 - COMPUTER GRAPHICS PROCESSING

345473000 - Animation

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Entries
Document - Title - Date
20100156912 - MOTION SYNTHESIS METHOD - A motion synthesis method includes: analyzing a character's gait in motion capture data; creating motion capture data at different speeds having the analyzed gait; and storing the motion capture data at different speeds in a motion capture database. The method further includes: designating restrictions of a sketch including a trajectory and a speed tag of a desired motion; searching and extracting motion capture data corresponding to the speed tag from the motion capture database; and creating a motion satisfying the trajectory through synthesis by blending the motion capture data extracted from the motion capture database. - 06-24-2010
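The pipeline this abstract describes — store gait clips captured at several speeds, look up clips by speed tag, and blend them — can be sketched as follows. The database layout, scalar "frames", and function names are illustrative assumptions, not from the patent.

```python
def synthesize(db, target_speed):
    """Blend the two stored gait clips whose capture speeds bracket
    target_speed (linear blend, equal-length clips). `db` maps a
    capture speed to a list of per-frame values; a real system would
    blend full skeletal poses, not scalars."""
    speeds = sorted(db)
    lo = max(s for s in speeds if s <= target_speed)
    hi = min(s for s in speeds if s >= target_speed)
    if lo == hi:                      # exact match in the database
        return list(db[lo])
    w = (target_speed - lo) / (hi - lo)
    return [(1 - w) * a + w * b for a, b in zip(db[lo], db[hi])]
```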
20110193867 - METHOD AND APPARATUS FOR PRODUCING DYNAMIC EFFECT OF CHARACTER CAPABLE OF INTERACTING WITH IMAGE - A method for producing motion effects of a character capable of interacting with a background image in accordance with the characteristics of the background image is provided, including extracting the characteristics of the background image; determining a character to be provided with the motion effects in the background in accordance with the extracted characteristics of the background image; recognizing external signals including a user input; determining the motion of the character in accordance with the characteristics of the background image and the recognized external signals; and reproducing an animation for executing the motion of the character in the background image. - 08-11-2011
20130083037 - MOVING A DISPLAY OBJECT WITHIN A DISPLAY FRAME USING A DISCRETE GESTURE - A method, system, and computer program product for moving objects such as a display window about a display frame by combining classical mechanics of motion. A window nudging method commences by receiving a discrete user interface gesture from a human interface device such as a mouse click or a keystroke, and based on the discrete user interface gesture, instantaneously accelerating the window object to an initial velocity. Once the window is in motion, the method applies a first animation to animate the window object using realistic motion changes. Such realistic motion changes comprise a friction model that combines sliding friction with fluid friction to determine frame-by-frame changes in velocity. The friction model that combines sliding friction with fluid friction can be applied to any object in the display frame. Collisions between one object and another object or between one object and its environment are modeled using a critically-damped spring model. - 04-04-2013
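A minimal sketch of the kind of frame-by-frame friction model the abstract combines: sliding friction removes a fixed amount of speed per second, while fluid friction removes an amount proportional to the current speed. The coefficients and function names are invented for illustration.

```python
def step_velocity(v, dt, mu_slide=200.0, mu_fluid=1.5):
    """One frame of the combined friction model: sliding (kinetic)
    friction subtracts a constant amount of speed, fluid (viscous)
    drag subtracts an amount proportional to speed."""
    speed = abs(v)
    speed -= mu_slide * dt           # sliding friction
    speed -= mu_fluid * speed * dt   # fluid drag
    return max(speed, 0.0) * (1 if v >= 0 else -1)

def animate(v0, dt=1 / 60):
    """Slide an object given an initial 'nudge' velocity until friction
    stops it; returns the total distance travelled."""
    v, x = v0, 0.0
    while abs(v) > 1e-3:
        x += v * dt
        v = step_velocity(v, dt)
    return x
```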
20130033500 - DYNAMIC COLLISION AVOIDANCE FOR CROWD SIMULATION OVER STRUCTURED PATHS THAT INTERSECT AT WAYPOINTS - One embodiment of the invention sets forth a technique for identifying and avoiding impending collisions between moving objects in an animation. Paths traversed by the moving objects intersect at pre-determined intersection points. As a moving object approaches an intersection point, a collision avoidance module determines whether the object is on course to collide with another moving object also approaching the intersection point. If a collision is detected, then the collision avoidance module modifies the speed of the moving object to avoid the collision. - 02-07-2013
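The speed adjustment described can be sketched with one-dimensional "arrival windows" at the shared waypoint: if the intervals during which the two agents occupy the waypoint overlap, one agent is slowed until they no longer do. The window model, slow-down factor, and all names are assumptions for illustration.

```python
def arrival_window(dist, speed, clearance=1.0):
    """Approximate the time interval during which an agent occupies
    the waypoint: arrival time plus/minus clearance/speed."""
    t = dist / speed
    half = clearance / speed
    return (t - half, t + half)

def avoid_collision(dist_a, speed_a, dist_b, speed_b,
                    clearance=1.0, slow=0.9):
    """If the two agents' occupancy windows at the shared waypoint
    overlap, repeatedly slow agent B until they are disjoint, and
    return B's adjusted speed. Assumes dist_b > clearance."""
    a = arrival_window(dist_a, speed_a, clearance)
    while True:
        b = arrival_window(dist_b, speed_b, clearance)
        if b[0] > a[1] or b[1] < a[0]:   # windows disjoint: no collision
            return speed_b
        speed_b *= slow                   # modify speed to avoid collision
```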
20130033501 - SYSTEM AND METHOD FOR ANIMATING COLLISION-FREE SEQUENCES OF MOTIONS FOR OBJECTS PLACED ACROSS A SURFACE - Embodiments of the invention set forth a technique for animating objects placed across a surface of a graphics object. A CAD application receives a set of motions and initially applies a different motion in the set of motions to each object placed across the surface of the graphics object. The CAD application calculates bounding areas of each object according to the current motion applied thereto, which are subsequently used by the CAD application to identify collisions that are occurring or will occur between the objects. Identified collisions are cured by identifying valid motions in the set of motions that can be applied to a colliding object and then calculating bounding areas for the valid motions to select a valid motion that, when applied to the object, does not cause the object to collide with any other objects. - 02-07-2013
20100134501 - DEFINING AN ANIMATION OF A VIRTUAL OBJECT WITHIN A VIRTUAL WORLD - A method of defining an animation of a virtual object within a virtual world, wherein the animation comprises performing, at each of a series of time points, an update that updates values for object attributes of the virtual object, the method comprising: allowing a user to define the update by specifying, on a user interface, a structure representing the update, wherein the structure comprises a plurality of items and one or more connections between respective items, wherein each item represents a respective operation that may be performed when performing the update and wherein a connection between two items represents that data output by the operation represented by one of those two items is input to the operation represented by the other of those two items; allowing the user to specify that the structure comprises one or more items in a predetermined category, the predetermined category being associated with a predetermined process such that an item belongs to the predetermined category if performing the respective operation represented by that item requires execution of the predetermined process, wherein said predetermined process may be executed at most a predetermined number of times at each time point; and applying one or more rules that (a) restrict how the user may specify the structure to ensure that performing the defined update does not require execution of the predetermined process more than the predetermined number of times, (b) do not require the user to specify that an item in the predetermined category is at a particular location within the structure relative to other items and (c) do not require the user to explicitly specify which operations need to be performed before execution of the predetermined process when performing the update nor which operations need to be performed after execution of the predetermined process when performing the update. - 06-03-2010
20100134500 - APPARATUS AND METHOD FOR PRODUCING CROWD ANIMATION - An apparatus for producing crowd animation includes: a user-input controller for receiving from a user level of detail (LOD) of each individual in a picture of the crowd animation; a simulation controller for performing simulation of the crowd animation for a specific time period to update simulation information on each individual; and a display controller for displaying the picture of the crowd animation by using display information corresponding to the LOD of each individual, the display information being selected among the simulation information. The LOD of each individual indicates: displaying the individual with location information thereof only; displaying the individual with the location and model information thereof; or displaying the individual with the location, model and motion information thereof. The simulation information includes location information, model information and motion information of each individual. - 06-03-2010
20130057556 - Avatars in Social Interactive Television - Virtual environments are presented on displays along with multimedia programs to permit viewers to participate in a social interactive television environment. The virtual environments include avatars that are created and maintained in part using continually updated animation data that may be captured from cameras that monitor viewing areas in a plurality of sites. User input from the viewers may be processed in determining which viewers are presented in instances of the virtual environment. Continually updating the animation data results in avatars accurately depicting a viewer's facial expressions and other characteristics. Presence data may be collected and used to determine when to capture background images from a viewing area that may later be subtracted during the capture of animation data. Speech recognition technology may be employed to provide callouts within a virtual environment. - 03-07-2013
20090267951 - Method for rendering fluid - A method for rendering fluid is provided. First, state information of a plurality of fluid particles is provided, wherein the state information records whether the fluid particles are located above or under a fluid surface and the interactions between the fluid particles and a terrain or the dynamic objects. Then, whether to render the fluid particles in a direction facing a viewer or in a direction parallel to the flow direction is determined according to whether the fluid particles are located above or under the fluid surface. Next, the fluid particles are rendered as a plurality of two-dimensional metaballs according to the interactions between the fluid particles and the terrain or the dynamic objects, and these metaballs are stacked to reconstruct the fluid. - 10-29-2009
20110012903 - SYSTEM AND METHOD FOR REAL-TIME CHARACTER ANIMATION - A method for generating a motion sequence of a character object in a rendering application. The method includes selecting a first motion clip associated with a first motion class and selecting a second motion clip associated with a second motion class, where the first and second motion clips are stored in a memory. The method further includes generating a registration curve that temporally and spatially aligns one or more frames of the first motion clip with one or more frames of the second motion clip, and rendering the motion sequence of the character object by blending the one or more frames of the first motion clip with one or more frames of the second motion clip based on the registration curve. One advantage of techniques described herein is that they provide for creating motion sequences having multiple motion types while minimizing or even eliminating motion artifacts at the transition points. - 01-20-2011
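A registration curve aligns clips both in time and space; the crossfade below shows only the temporal blending half, on scalar "frames", with the weight ramp, clip representation, and function name all chosen for illustration rather than taken from the patent.

```python
def blend_transition(clip_a, clip_b, overlap):
    """Crossfade the last `overlap` frames of clip_a with the first
    `overlap` frames of clip_b — a much-simplified stand-in for
    blending along a registration curve (no spatial alignment).
    Frames are plain numbers; real frames hold full poses."""
    out = list(clip_a[:-overlap])
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)   # blend weight ramps toward clip_b
        out.append((1 - w) * clip_a[len(clip_a) - overlap + i]
                   + w * clip_b[i])
    out.extend(clip_b[overlap:])
    return out
```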
20090267950 - FIXED PATH TRANSITIONS - A computer implemented method, apparatus, and computer program product for fixed path transitions in a virtual universe environment. In one embodiment, tracking data that identifies a location of an avatar in relation to a range of an object in a virtual universe is received. The range comprises a viewable field. In response to the tracking data indicating an occurrence of a trigger condition associated with a fixed path rule, a fixed path defined by the fixed path rule is identified. A speed of movement and an orientation of the object associated with the fixed path rule is identified. Movement of the object along the fixed path defined by the fixed path rule is initiated. The object then moves along the fixed path at the identified speed and with the orientation associated with the fixed path rule. - 10-29-2009
20090033667 - Method and Apparatus to Facilitate Depicting an Object in Combination with an Accessory - At a personally portable wireless two-way communicator ( - 02-05-2009
20090046102 - METHOD AND APPARATUS FOR SPAWNING PROJECTED AVATARS IN A VIRTUAL UNIVERSE - The present invention provides a computer implemented method and apparatus to project a projected avatar associated with an avatar in a virtual universe. A computer receives a command to project the avatar, the command having a projection point. The computer transmits a request to place a projected avatar at the projection point to a virtual universe host. The computer renders a tab associated with the projected avatar. - 02-19-2009
20090009522 - INFORMATION PROCESSING APPARATUS AND METHOD - In order to improve the operability of a walk-through system using panorama photography images, the system is provided with a view calculating unit for calculating view information in accordance with a user instruction from an operation unit, the view information including view position information and view direction information; a panorama image storing unit for storing a plurality of panorama images; a path storing unit for storing path information of the panorama images; an advancable path calculating unit for calculating advancable path information at a next dividing point in accordance with the view information and the path information; and an image generating unit for generating a cut-out image from the panorama image in accordance with the view information, generating a sign figure representative of the advancable path in accordance with the advancable path information, and synthesizing the cut-out image and the sign figure to generate a display image. - 01-08-2009
20130162654 - System and method for hiding latency in computer software - A system and method hides latency in the display of a subsequent user interface by animating the exit of the current user interface and animating the entrance of the subsequent user interface, causing continuity in the display of the two user interfaces. During either or both animations, information used to produce the user interface, animation of the entrance of the subsequent user interface, or both may be retrieved or processed or other actions may be performed. - 06-27-2013
20110043529 - INTERACTIVE ANIMATION - An interactive animation environment. The interactive animation environment includes at least one user-controlled object, and the animation method for providing this environment includes determining a position of the user-controlled object, defining a plurality of regions about the position, detecting a user input to move the position of the user-controlled object, associating the detected user input to move the position of the user-controlled object with a region in the direction of movement, and providing an animation of the user-controlled object associated with the mapped region. A system and controller for implementing the method are also disclosed. A computer program and computer program product for implementing the invention are further disclosed. - 02-24-2011
20130162655 - Systems and Methods for Creating, Displaying, and Using Hierarchical Objects with Nested Components - Methods involving the creation and use of nested components with hierarchical objects are disclosed. One exemplary method comprises displaying a container symbol and defining a movement for the container symbol. The method further comprises defining a nested object within the container symbol, i.e. on a coordinate space associated with the container symbol rather than the general canvas area, and defining a movement for the nested object. Either or both of the movements may involve an inverse kinematics procedure based movement of a hierarchical object, e.g., movement of a bone that causes a shape or rigid body to move. For example, a container symbol could display a car and include a nested hierarchical object that is used to define a person within the car. The movement of the car and the movement of the person can be defined separately by a developer. - 06-27-2013
20110298810 - MOVING-SUBJECT CONTROL DEVICE, MOVING-SUBJECT CONTROL SYSTEM, MOVING-SUBJECT CONTROL METHOD, AND PROGRAM - A moving-subject control device controls a motion of a moving subject based on motion data indicating the motion of the moving subject, and includes an input unit which receives an input of attribute information indicating an attribute of the moving subject, a generation unit which generates motion data for a user based on the attribute information the input of which is received by the input unit, as motion data for controlling a motion of a moving subject for the user generated based on the attribute information input by the user of the moving-subject control device, and a control unit which varies the motion of the moving subject for the user based on the motion data for the user generated by the generation unit. - 12-08-2011
20100182328 - METHOD TO ANIMATE ON A COMPUTER SCREEN A VIRTUAL PEN WHICH WRITES AND DRAWS - A method to animate on a computer screen a virtual pen which writes and draws on a virtual blackboard in order to simulate a real pen writing on a real blackboard. Graphemes and drawings ( - 07-22-2010
20090153568 - LOCOMOTION GENERATION METHOD AND APPARATUS FOR DIGITAL CREATURE - A locomotion generation method for a digital creature includes: imaging and capturing movements of a creature placed on a base plate having a printed pattern; extracting body position information, body posture information, leg posture information, and footprint information of the creature by analyzing captured images; and generating creature movement by applying inverse kinematics to the body position information, the body posture information, the leg posture information, and the footprint information of the creature. The movements of the creature are imaged and captured by using two or more cameras without camera calibration. - 06-18-2009
20100085364 - Foot Roll Rigging - A system and method enables animators to efficiently pose character models' feet. An initial foot model position is received. The initial foot model position specifies a foot model contact point. One or more foot roll parameters are specified that change the relative angle between at least a portion of the foot model and an initial orientation of an alignment plane. Foot roll parameters specify the rotation of the foot model around foot model contact points. Foot roll parameters can include heel roll, ball roll, and toe roll, which specify the rotation of the foot model around contact points on the heel, ball, and toe, respectively, of a foot model. To maintain the position of the foot model contact point, the foot model position is adjusted based on the foot roll parameter. The repositioned foot model is realigned with the alignment plane, which restores contact at the foot model contact point. - 04-08-2010
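Rotating the foot about a contact point keeps that point fixed by construction, which is one way to realize the "adjust the position to maintain contact" step the abstract describes. The 2-D points and names here are simplifying assumptions; a rig would rotate a 3-D skeleton.

```python
import math

def roll_about_contact(points, contact, angle):
    """Rotate 2-D foot points by `angle` (radians) about `contact`;
    the contact point itself maps to itself, so contact is preserved."""
    c, s = math.cos(angle), math.sin(angle)
    out = []
    for (x, y) in points:
        dx, dy = x - contact[0], y - contact[1]
        out.append((contact[0] + c * dx - s * dy,
                    contact[1] + s * dx + c * dy))
    return out
```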
20100194763 - User Interface for Controlling Animation of an Object - A user can control the animation of an object via an interface that includes a control area and a user-manipulable control element. In one embodiment, the control area includes an ellipse, and the user-manipulable control element includes an arrow. In another embodiment, the control area includes an ellipse, and the user-manipulable control element includes two points on the circumference of the ellipse. In yet another embodiment, the control area includes a first rectangle, and the user-manipulable control element includes a second rectangle. In yet another embodiment, the user-manipulable control element includes two triangular regions, and the control area includes an area separating the two regions. - 08-05-2010
20100079467 - TIME DEPENDENT VIRTUAL UNIVERSE AVATAR RENDERING - Methods, devices, program products and systems are disclosed for displaying multiple virtual universe avatar states. Each of a plurality of avatar states of a first avatar of a first virtual universe user is stored in a storage medium as a function of a time of each state. A first avatar is displayed in a current state to a second user of an engaging second avatar, the engaging instigating a selecting and a retrieving of a subset of the plurality of states from the storage medium, each of the subset states different from each other and the current state. Selected subset states are visually displayed to the second user, each of the displayed states visually distinct from another and the current state. The first avatar current state is stored in the storage medium associated with the engagement. - 04-01-2010
20090295809 - Real-Time Goal-Directed Performed Motion Alignment For Computer Animated Characters - A method for real-time, goal-directed performed motion alignment for computer animated characters. A sequence of periodic locomotion may be seamlessly aligned with an arbitrarily placed and rotated non-periodic performed motion. A rendering application generates a sampling of transition locations for transition from a locomotion motion space to a performed motion space. The sampling is parameterized by control parameters of the locomotion motion space. Based on the location and rotation of a goal location at which the performed motion is executed, a particular transition location may be selected to define a motion plan to which a performed motion sequence may then be appended. Advantageously, by utilizing a look-up of pre-computed values for the control parameters of the motion plan, the rendering application may minimize the computational cost of finding the motion plan to move the character to a location to transition to a performed motion. - 12-03-2009
20090295808 - Real-Time Goal-Directed Performed Motion Alignment For Computer Animated Characters - A method for real-time, goal-directed performed motion alignment for computer animated characters. A sequence of periodic locomotion may be seamlessly aligned with an arbitrarily placed and rotated non-periodic performed motion. A rendering application generates a sampling of transition locations for transition from a locomotion motion space to a performed motion space. The sampling is parameterized by control parameters of the locomotion motion space. Based on the location and rotation of a goal location at which the performed motion is executed, a particular transition location may be selected to define a motion plan to which a performed motion sequence may then be appended. Advantageously, by utilizing a look-up of pre-computed values for the control parameters of the motion plan, the rendering application may minimize the computational cost of finding the motion plan to move the character to a location to transition to a performed motion. - 12-03-2009
20120293518 - DETERMINE INTENDED MOTIONS - It may be desirable to apply corrective data to aspects of a captured image or the user-performed gesture for display of a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, that is generated based on the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confining virtual movement to defined spaces, or the like. - 11-22-2012
20100201693 - SYSTEM AND METHOD FOR AUDIENCE PARTICIPATION EVENT WITH DIGITAL AVATARS - A system and method for capturing the voice and motion of a user and mapping the captured voice and motion to an avatar is disclosed. Other aspects include displaying the avatar in the virtual world of a movie or animation chosen by the user. - 08-12-2010
20100060650 - MOVING IMAGE PROCESSING METHOD, MOVING IMAGE PROCESSING PROGRAM, AND MOVING IMAGE PROCESSING DEVICE - A moving image processing method includes: an operation item setting step of setting operation items to be operated on the moving image; a time interval setting step of setting which time interval in a reproducing period of the moving image should be defined as an interval in which the operation items that have been set are executable; a display area setting step of setting display areas for images for operations corresponding to the operation items that have been set; an image combining step of combining the images for operations corresponding to the operation items that have been set with the respective frame images, in accordance with the time interval setting step and the display area setting step; and an associating step of associating, with each combined frame image, information concerning the display areas of the images for operations in the frame image and information concerning the operation items, and storing each combined frame image and the associated information. - 03-11-2010
20100060649 - AVOIDING NON-INTENTIONAL SEPARATION OF AVATARS IN A VIRTUAL WORLD - A method for avoiding non-intentional separation of avatars in a virtual world may include detecting a first avatar seeking to enter a first location and determining if a second avatar is related to the first avatar based on a first predetermined rule. The method may also include determining that the first and second avatars are seeking to enter the first location together. The method may further include determining whether to allow the first avatar and the second avatar to enter the first location based on a second predetermined rule. - 03-11-2010
20100060648 - Method for Determining Valued Excursion Corridors in Virtual Worlds - A computer implemented method, computer program product, and a data processing system determine an excursion corridor within a virtual environment. A time-stamped snapshot of a location of at least one avatar within the virtual universe is recorded. An avatar tracking data structure is then updated. The avatar tracking data structure provides a time-based history of avatar locations within the virtual universe. A weighted density map is generated. The weighted density map is then correlated with virtual object locations. Each virtual object location corresponds to a virtual object. Excursion corridors are identified. The excursion corridor identifies frequently taken routes between the virtual object locations. Waypoints are identified. Each waypoint corresponds to a virtual object. Each waypoint is an endpoint for one of the excursion corridors. - 03-11-2010
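The density-map step can be sketched by bucketing position snapshots into grid cells and counting visits; cells visited often approximate the frequently taken routes. Grid cells, the threshold rule, and the function names are illustrative assumptions, not the patent's actual weighting scheme.

```python
from collections import Counter

def density_map(snapshots, cell=1.0):
    """Bucket (x, y) avatar position snapshots into grid cells of
    size `cell` and count visits per cell — a minimal density map."""
    counts = Counter()
    for (x, y) in snapshots:
        counts[(int(x // cell), int(y // cell))] += 1
    return counts

def corridors(counts, threshold):
    """Cells visited at least `threshold` times approximate the
    frequently taken routes (excursion corridors)."""
    return {cell for cell, n in counts.items() if n >= threshold}
```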
20110267358 - ANIMATING A VIRTUAL OBJECT WITHIN A VIRTUAL WORLD - A method of animating a virtual object within a virtual world, wherein the virtual object comprises a plurality of object parts, wherein for a first object part there is one or more associated second object parts, the method comprising: at an animation update step: specifying a target frame in the virtual world; and applying control to the first object part, wherein the control is arranged such that the application of the control in isolation to the first object part would cause a movement of the first object part in the virtual world that reduces a difference between a control frame and the target frame, the control frame being a frame at a specified position and orientation in the virtual world relative to the first object part, wherein applying control to the first object part comprises moving the one or more second object parts within the virtual world to compensate for the movement of the first object part in the virtual world caused by applying the control to the first object part. - 11-03-2011
20080278497 - PROCESSING METHOD FOR CAPTURING MOVEMENT OF AN ARTICULATED STRUCTURE - The invention concerns a method of obtaining simulated parameters ( - 11-13-2008
20080316213 - TOPOLOGY NAVIGATION AND CHANGE AWARENESS - An apparatus and method are described for displaying a topological graph that allows a user to navigate through a history of previous topology displays to increase the user's understanding and awareness of the state of the topology. In a preferred embodiment, a topology display mechanism receives state changes to a topology of a computer network and stores a sequence of graphs that reflect the changes that are made to the topology. The topology display mechanism also allows the user to step through the sequence of stored topology graphs using "video" type controls to change the display of the topology graphs. In other embodiments, the topology display mechanism displays the changes in the topology as a sequence of graphs that form an animation to give the user a graphical visualization of the changes from one topology graph in the sequence to the next. - 12-25-2008
20080284784 - Image processing device, method, and program, and objective function - An image processing device that models, based on a plurality of frame images being results of time-sequential imaging of an object in motion, a motion of the object using a three-dimensional (3D) body configured by a plurality of parts is disclosed. The device includes: acquisition means for acquiring the frame images being the imaging results; estimation means for computing a first matrix of coordinates of a joint of the 3D body and a second matrix of coordinates of each of the parts of the 3D body, and generating a first motion vector; computing means for computing a second motion vector; and determination means for determining the 3D body. - 11-20-2008
20080273038 - LOOPING MOTION SPACE REGISTRATION FOR REAL-TIME CHARACTER ANIMATION - A method for generating a looping motion space for real-time character animation may include determining a plurality of motion clips to include in the looping motion space and determining a number of motion cycles performed by a character object depicted in each of the plurality of motion clips. A plurality of looping motion clips may be synthesized from the motion clips, where each of the looping motion clips depicts the character object performing an equal number of motion cycles. Additionally, a starting frame of each of the plurality of looping motion clips may be synchronized so that the motion cycles in each of the plurality of looping motion clips are in phase with one another. By rendering an animation sequence using multiple passes through the looping motion space, an animation of the character object performing the motion cycles may be extended for an arbitrary length of time. - 11-06-2008
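Synchronizing starting frames so that cycles are in phase can be illustrated by rotating each looping clip to a common phase landmark — here, the frame with the cycle's peak value. Scalar clips and the peak heuristic are illustrative assumptions; real clips hold poses and phase is estimated from the motion itself.

```python
def synchronize(clips):
    """Rotate each looping clip (a list of numbers) so that every
    clip starts at its cycle's peak value, putting all cycles in
    phase with one another."""
    def rotate_to_peak(clip):
        k = clip.index(max(clip))
        return clip[k:] + clip[:k]   # looping clip: rotation is seamless
    return [rotate_to_peak(c) for c in clips]
```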
20090091575 - Method and apparatus for animating the dynamics of hair and similar objects - Animating strands (such as long hair), for movies, videos, etc. is accomplished using computer graphics by use of differential algebraic equations. Each strand is subject to simulation by defining its motion path, then evaluating dynamic forces acting on the strand. Collision detection with any objects is performed, and collision response forces are evaluated. Then for each frame a differential algebraic equations solver is invoked to simulate the strands. - 04-09-2009
20090160862 - Method and Apparatus for Encoding/Decoding - The present invention relates to a multimedia data encoding/decoding method and apparatus. The encoding method includes generating a data area including a plurality of media data areas; generating a plurality of track areas corresponding to the plurality of media data areas, respectively; and generating an animation area including at least one of grouping information on an animation effect, opacity effect information, size information on an image to which the animation effect is to be applied, and geometrical transform effect information. According to the present invention, the multimedia data encoding/decoding method and apparatus has an effect of being capable of constructing a slide show by only a small amount of multimedia data. Thus, the time taken to process and transmit the multimedia data can be reduced. - 06-25-2009
20090128568 - VIRTUAL VIEWPOINT ANIMATION - In one aspect, images of an event are obtained from a first video camera and a second camera, where the second camera captures images at a higher resolution than the first video camera. A particular image of interest is identified from the images obtained by the first video camera, e.g., based on an operator's command. A corresponding image which has been obtained by the second camera is then identified. The second image is used to depict virtual viewpoints which differ from the real viewpoints of the first and second camera, such as by combining data from a textured 3D model of the event with data from the second image. In another aspect, a presentation includes images from a first camera, followed by an animation of different virtual viewpoints, followed by images from a second camera which has a different real viewpoint of the event than the first camera. - 05-21-2009
20090184969Rigless retargeting for character animation - Motion may be transferred between portions of two characters if those portions have a minimum topological similarity. The portions or structures of the source and target character topologies may be represented as one or more descriptive files comprised of a hierarchy of data objects including portion identifiers and functionality descriptors associated with portions of the respective source or target topology. To transfer motion between the source and target characters, the motion associated with the portions or structures of the source character identified by a subset of source portion identifiers having corresponding target portion identifiers is determined. This motion is retargeted to and attached to the corresponding portions or structures of the target character. As a result, the animation of the portions of the target character effectively animates the target character with motion that is similar to that of the source character.07-23-2009
20090251471GENERATION OF ANIMATED GESTURE RESPONSES IN A VIRTUAL WORLD - Responding to gestures made by third parties in a virtual world by receiving a gesture from a first avatar directed to at least one second avatar. For at least one second avatar, a reply gesture may be selected that corresponds to the received gesture. The reply gesture may be output for communication to the first avatar.10-08-2009
20090295807Real-Time Goal-Directed Performed Motion Alignment For Computer Animated Characters - A method for real-time, goal-directed performed motion alignment for computer animated characters. A sequence of periodic locomotion may be seamlessly aligned with an arbitrarily placed and rotated non-periodic performed motion. A rendering application generates a sampling of transition locations for transition from a locomotion motion space to a performed motion space. The sampling is parameterized by control parameters of the locomotion motion space. Based on the location and rotation of a goal location at which the performed motion is executed, a particular transition location may be selected to define a motion plan to which a performed motion sequence may then be appended. Advantageously, by utilizing a look-up of pre-computed values for the control parameters of the motion plan, the rendering application may minimize the computational cost of finding the motion plan to move the character to a location to transition to a performed motion.12-03-2009
20090201299Pack Avatar for Shared Inventory in a Virtual Universe - Generally speaking, systems, methods and media for providing a pack avatar for sharing inventory in a virtual universe are disclosed. Embodiments of a method may include receiving a request to create a pack avatar carrying one or more shared inventory items in a virtual universe and creating a pack avatar based on the received requests. Embodiments may include rendering the pack avatar in the virtual universe. Embodiments may also include, in response to receiving a request from a virtual universe user to borrow one or more shared inventory items carried by the pack avatar, accessing the one or more requested shared inventory items and rendering the one or more requested shared inventory items in the virtual universe. Further embodiments may include associating the pack avatar with a user and moving the pack avatar within the virtual universe.08-13-2009
20110221755BIONIC MOTION - A camera that can sense motion of a user is connected to a computing system (e.g., video game apparatus or other type of computer). The computing system determines an action corresponding to the sensed motion of the user and determines a magnitude of the sensed motion of the user. The computing system creates and displays an animation of an object (e.g., an avatar in a video game) performing the action in a manner that is amplified in comparison to the sensed motion by a factor that is proportional to the determined magnitude. The computing system also creates and outputs audio/visual feedback in proportion to a magnitude of the sensed motion of the user.09-15-2011
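The proportional amplification described in the entry above can be sketched as scaling the sensed motion vector by a factor that grows with its magnitude; the function and parameter names below are illustrative, not taken from the patent:

```python
import math

def amplify_motion(sensed_delta, base_gain=1.0):
    # Scale a sensed 2D motion vector by a factor proportional to its
    # magnitude, so larger user motions are exaggerated more on the avatar.
    magnitude = math.hypot(*sensed_delta)
    factor = 1.0 + base_gain * magnitude
    return tuple(c * factor for c in sensed_delta)
```

With `base_gain=1.0`, a motion of magnitude 5 is amplified six-fold, while a negligible motion passes through nearly unchanged.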
20090207176FAST OCEANS AT NEAR INFINITE RESOLUTION - The surface of a body of water can be animated by deconstructing a master wave model into several layer models and then reconstructing the layer models to form an optimized wave model. A wave model is obtained, which describes the wave surfaces in a body of water. The wave model is comprised of a range of wave model frequencies over a given area. Primary, secondary, and tertiary layer models are constructed based on portions of the wave model frequencies. An optimized wave model is constructed by combining the primary, secondary, and tertiary layer models. A wave surface point location is determined within the given area. A wave height value is computed for the wave surface point location using the optimized wave model. The wave height value that is associated with the surface point location is stored.08-20-2009
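The layered wave model above can be sketched as summing sinusoidal frequency bands: splitting the master spectrum into primary, secondary, and tertiary layers and recombining them yields the same height as the full model. All band parameters here are invented for illustration:

```python
import math

def wave_height(x, t, bands):
    # Sum sinusoidal contributions; each band is (amplitude, frequency, speed).
    return sum(a * math.sin(f * x - f * s * t) for a, f, s in bands)

primary = [(1.0, 0.2, 1.0)]    # low-frequency swell
secondary = [(0.3, 1.5, 0.8)]  # mid-frequency chop
tertiary = [(0.05, 8.0, 0.5)]  # fine ripples
optimized = primary + secondary + tertiary  # recombined wave model
```

Because the combination is a plain sum, evaluating the optimized model at a surface point gives the same height as evaluating the three layers separately and adding them.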
20100013838COMPUTER SYSTEM AND MOTION CONTROL METHOD - A control target such as a virtual actor is moved naturally and smoothly using only a small amount of data, and the data settings required for the motion are made efficiently.01-21-2010
20100156911TRIGGERING ANIMATION ACTIONS AND MEDIA OBJECT ACTIONS - A request may be received to trigger an animation action in response to reaching a bookmark during playback of a media object. In response to the request, data is stored defining a new animation timeline configured to perform the animation action when playback of the media object reaches the bookmark. When the media object is played back, a determination is made as to whether the bookmark has been encountered. If the bookmark is encountered, the new animation timeline is started, thereby triggering the specified animation action. An animation action may also be added to an animation timeline that triggers a media object action at a location within a media object. When the animation action is encountered during playback of the animation timeline, the specified media object action is performed on the associated media object.06-24-2010
20100182329INFORMATION STORAGE MEDIUM, SKELETON MOTION CONTROL DEVICE, AND SKELETON MOTION CONTROL METHOD - A skeleton motion control device that controls the motion of a skeleton model in which a parent bone and a child bone are linked via a joint. A movable range setting section sets a movable range of the joint on a projection plane, the projection plane being a plane that is orthogonal to an axis that connects a center point of a sphere and a focus that is a given point on the sphere surface, the center point of the sphere being the joint. A coordinate transformation section projects a point on the sphere surface onto the projection plane based on the focus, the point indicating the direction of the child bone. A skeleton motion calculation section calculates the direction of the child bone with respect to the parent bone within the movable range set by the movable range setting section, based on the position of the point projected onto the projection plane.07-22-2010
20100259547WEB PLATFORM FOR INTERACTIVE DESIGN, SYNTHESIS AND DELIVERY OF 3D CHARACTER MOTION DATA - Systems and methods are described for animating 3D characters using synthetic motion data generated by motion models in response to a high level description of a desired sequence of motion provided by an animator. In a number of embodiments, the synthetic motion data is streamed to a user device that includes a rendering engine and the user device renders an animation of a 3D character using the streamed synthetic motion data. In several embodiments, an animator can upload a custom model of a 3D character or a custom 3D character is generated by the server system in response to a high level description of a desired 3D character provided by the user and the synthetic motion data generated by the generative model is retargeted to animate the custom 3D character.10-14-2010
20090079745System and method for intuitive interactive navigational control in virtual environments - A human-computer-interface design scheme makes possible an interactive, intuitive user navigation system that allows the user to indicate an intended direction and speed for traversing the virtual environment simply by positioning a tracker appropriately within its operating space. The interface system contains information about the boundary and center of an arbitrarily defined static zone within the operating space of the tracker. If the tracker is positioned inside this static zone, the system interprets it as meaning that no traverse is intended. When the user decides to move in a particular direction, he just needs to move the tracker outside the static zone in that direction, and the computer can calculate the intended traverse vector as the vector from the center of the static zone to the position of the tracker. The further the tracker is positioned from the static zone, the greater the speed of the intended traverse.03-26-2009
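The static-zone scheme above amounts to a dead zone around a center point; a minimal sketch, assuming a 2D tracker layout and illustrative names:

```python
import math

def traverse_vector(tracker_pos, zone_center, zone_radius):
    # Vector from the static-zone center to the tracker; zero while the
    # tracker stays inside the zone (no traverse intended).
    dx = tracker_pos[0] - zone_center[0]
    dy = tracker_pos[1] - zone_center[1]
    if math.hypot(dx, dy) <= zone_radius:
        return (0.0, 0.0)
    return (dx, dy)
```

Speed grows with distance because the returned vector lengthens as the tracker moves further from the zone.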
20100238182CHAINING ANIMATIONS - In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, that may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.09-23-2010
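One of the simplest blending techniques the entry above alludes to is linear interpolation of joint positions across the transition between a pre-canned animation and captured motion; this sketch assumes poses are lists of (x, y) joint tuples:

```python
def blend_pose(precanned, captured, alpha):
    # Linearly interpolate each joint position between the pre-canned pose
    # (alpha = 0) and the pose mapped from captured motion (alpha = 1).
    return [tuple((1.0 - alpha) * p + alpha * c for p, c in zip(pj, cj))
            for pj, cj in zip(precanned, captured)]
```

Sweeping `alpha` from 0 to 1 over a few frames around the transition point produces the smoother handoff the entry describes.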
20090040231Information processing apparatus, system, and method thereof - An information processing apparatus includes a bio-information obtaining unit configured to obtain bio-information of a subject; a kinetic-information obtaining unit configured to obtain kinetic information of the subject; and a control unit configured to determine an expression or movement of an avatar on the basis of the bio-information obtained by the bio-information obtaining unit and the kinetic information obtained by the kinetic-information obtaining unit and to perform a control operation so that the avatar with the determined expression or movement is displayed.02-12-2009
20090322763Motion Capture Apparatus and Method - Provided are an apparatus and a method of effectively creating real-time movements of a three-dimensional virtual character by use of a small number of sensors. More specifically, the motion capture method, which maps movements of a human body into a skeleton model to generate movements of a three-dimensional (3D) virtual character, includes measuring the distance between a reference position and a portion of the human body at which a measurement sensor is positioned, as well as the rotation angles of that portion, and estimating relative rotation angles and position coordinates of each portion of the human body by use of the measured distance and rotation angles.12-31-2009
20090109228TIME-DEPENDENT CLIENT INACTIVITY INDICIA IN A MULTI-USER ANIMATION ENVIRONMENT - A method for managing a multi-user animation platform is disclosed. A three-dimensional space within a computer memory is modeled. An avatar of a client is located within the three-dimensional space, the avatar being graphically represented by a three-dimensional figure within the three-dimensional space. The avatar is responsive to client input commands, and the three-dimensional figure includes a graphical representation of client activity. The client input commands are monitored to determine client activity. The graphical representation of client activity is then altered according to an inactivity scheme when client input commands are not detected. Following a predetermined period of client inactivity, the inactivity scheme varies non-repetitively with time.04-30-2009
20090109229REDUCING A DISPLAY QUALITY OF AN AREA IN A VIRTUAL UNIVERSE TO CONSERVE COMPUTING RESOURCES - Described herein are processes and devices that reduce a display quality of an area of a virtual universe to conserve computing resources. One of the devices described is a virtual resource conserver. The virtual resource conserver determines, or selects, an area in the virtual universe. A computing resource processes data for presenting the area in the virtual universe. The virtual resource conserver evaluates significance factors about the area to determine a significance of how the area is being used, or an extent to which an area is being viewed by an avatar. The virtual resource conserver reduces a display quality of the area based on the significance of how the area is being used or viewed. The virtual resource conserver thus reduces usage of the computing resource.04-30-2009
20110128292DYNAMICS-BASED MOTION GENERATION APPARATUS AND METHOD - A dynamics-based motion generation apparatus includes: a dynamics model conversion unit for automatically converting character model data into dynamics model data of a character to be subjected to a dynamics simulation; a dynamics model control unit for modifying the dynamics model data and adding or modifying an environment model; a dynamics motion conversion unit for automatically converting reference motion data of the character, which has been created by using the character model data, into dynamics motion data through the dynamics simulation by referring to the dynamics model data and the environment model; and a motion editing unit for editing the reference motion data to decrease a gap between reference motion data and dynamics motion data. The apparatus further includes a robot motion control unit for controlling a robot by inputting preset torque values to related joint motors of the robot by referring to the dynamics motion data.06-02-2011
20100302258Inverse Kinematics for Motion-Capture Characters - A method for a computer system comprising receiving a displacement for a first object model surface from a user determined in response to a first physical motion captured pose, determining a weighted combination of a first displacement group and a second displacement group from the displacement, wherein the first displacement group is determined from displacements between the first object model surface and a second object model surface, wherein the second object model surface is determined from displacements between a second physical motion captured pose, wherein the second displacement group is determined from displacements between the first object model surface and a third object model surface, wherein the third object model surface is determined from a third physical motion captured pose, determining a fourth object model surface from the first object model surface and the weighted combination, and displaying the fourth object model surface to the user on a display.12-02-2010
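The weighted combination of displacement groups in the entry above can be sketched as a per-vertex blend added to a base surface; the flat (x, y) vertex layout and names are assumptions for brevity:

```python
def blend_surface(base, groups, weights):
    # Combine per-vertex displacement groups with weights and add them to
    # the base surface vertices (each surface: a list of (x, y) tuples).
    return [
        tuple(b + sum(w * g[i][k] for g, w in zip(groups, weights))
              for k, b in enumerate(v))
        for i, v in enumerate(base)
    ]
```

Each displacement group would hold the vertex offsets between the first object model surface and one of the other captured-pose surfaces, and the weights set their relative contributions.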
20100302257Systems and Methods For Applying Animations or Motions to a Character - A virtual character such as an on-screen object, an avatar, an on-screen character, or the like may be animated using a live motion of a user and a pre-recorded motion. For example, a live motion of a user may be captured and a pre-recorded motion such as a pre-recorded artist-generated motion, a pre-recorded motion of the user, and/or a programmatically controlled transformation may be received. The live motion may then be applied to a first portion of the virtual character and the pre-recorded motion may be applied to a second portion of the virtual character such that the virtual character may be animated with a combination of the live and pre-recorded motions.12-02-2010
20090153569Method for tracking head motion for 3D facial model animation from video stream - A head motion tracking method for three-dimensional facial model animation, the head motion tracking method includes acquiring initial facial motion to be fit to an image of a three-dimensional model from an image inputted by a video camera; creating a silhouette of the three-dimensional model and projecting the silhouette; matching the silhouette created from the three-dimensional model with a silhouette acquired by a statistical feature point tracking scheme; and obtaining a motion parameter for the image of the three-dimensional model through motion correction using a texture to perform three-dimensional head motion tracking. In accordance with the present invention, natural three-dimensional facial model animation based on a real image acquired with a video camera can be performed automatically, thereby reducing time and cost.06-18-2009
20120147014METHOD FOR EXTRACTING PERSONAL STYLES AND ITS APPLICATION TO MOTION SYNTHESIS AND RECOGNITION - Disclosed is a method for automatically extracting personal styles from captured motion data. The inventive method employs wavelet analysis to decompose the captured motion vectors of different actors into wavelet coefficients, and thus forms a feature vector by optimization selection, which is used later for identification purposes. When the inventive method is applied to process animation frames, the performance can be evaluated by grouping and classification matrix without any correlation with the type of the motion. Also, even if the type of the motion is not stored in the database in advance, the motions of the actor can still be recognized by a learning module regardless of the type of the motions.06-14-2012
20100277483Method and system for simulating character - A method and system for simulating a character is provided. The method of simulating a character includes: optimizing motion data by using a displacement mapping and a Proportional Derivative (PD) control; and performing controller training by using the optimized motion data and controlling a motion of the character. In this instance, the optimizing includes: generating a target motion by using the displacement mapping between an input motion and a displacement parameter; and generating a simulated motion by using the target motion and an objective function.11-04-2010
20110148886METHOD AND SYSTEM FOR RECEIVING AN INDEXED LOOK-UP TABLE AND VIEWING A VECTOR ANIMATION SEQUENCE - A method for interactively viewing a vector animation sequence, including receiving an indexed look-up table that stores a plurality of local vector objects associated with tile regions of a first vector image, indicating a request for a desired portion of a second vector image, for display at a specified resolution, determining tile regions of a pre-processed vector image, wherein the pre-processed vector image includes a plurality of tile regions and a plurality of local vector objects, each local vector object being associated with one of the tile regions, requesting at least one tile region of the pre-processed vector image from a server computer, receiving local vector objects and local vector object indices, extracting local vector objects from the indexed look-up table according to the local vector object indices, and generating the desired portion of the second vector image using the received local vector objects and the extracted local vector objects.06-23-2011
20080309671AVATAR EYE CONTROL IN A MULTI-USER ANIMATION ENVIRONMENT - In a multi-participant modeled virtual reality environment, avatars are modeled beings that include moveable eyes creating the impression of an apparent gaze direction. Control of eye movement may be performed autonomously using software to select and prioritize targets in a visual field. Sequence and duration of apparent gaze may then be controlled using automatically determined priorities. Optionally, user preferences for object characteristics may be factored into determining priority of apparent gaze. Resulting modeled avatars are rendered on client displays to provide more lifelike and interesting avatar depictions with shifting gaze directions.12-18-2008
20080204458SYSTEM AND METHOD FOR TRANSFORMING DISPERSED DATA PATTERNS INTO MOVING OBJECTS - A motion-based method and system for rapidly identifying the presence of spatially dispersed or interwoven patterns in data and their deviation from a test model for the pattern includes transforming dispersed patterns into a single concentrated moving object, for which there is a characteristic, identifiable motion signature. The method may be used with data sets containing sharp peaks, such as frequency spectra, and other data sets. A roadmap of basic motion signatures is provided for reference, including multiple harmonic series, separation of odd and even harmonics, missing modes, sidebands and inharmonic patterns. The system and method may also be used with data stored in arrays and volumes. It remaps such data to show both high-resolution information and long range trends simultaneously for applications in nanoscale imaging.08-28-2008
20130120404Animation Keyframing Using Physics - An animation-authoring environment includes a graphical user interface usable by a user to define an initial key frame, including one or more scene entities with one or more respective physics properties. The authoring environment generates a sequence of extrapolated frames from the initial key frame by using a physics simulation to extrapolate respective motion paths for scene entities in the key frame and configuring each frame in the generated sequence to depict each such scene entity at a successive location along its respective extrapolated motion path. The authoring environment may then produce a movie comprising the sequence of frames.05-16-2013
20130120405ANIMATION CREATION AND MANAGEMENT IN PRESENTATION APPLICATION PROGRAMS - An animation timeline is analyzed to determine one or more discrete states. Each discrete state includes one or more animation effects. The discrete states represent scenes of a slide in a slide presentation. The concept of scenes allows users to view a timeline of scenes, open a scene, and directly manipulate objects in the scene to author animations. The animations can include motion path animation effects, which can be directly manipulated utilizing a motion path tweening method. To aid in direct manipulation of a motion path of an object, a ghost version of the object can be shown to communicate to a user the position of the object after a motion path animation effect that includes the motion path is performed. The ghost version may also be used to show a start position when a start point is manipulated.05-16-2013
20110273457STABLE SPACES FOR RENDERING CHARACTER GARMENTS IN REAL-TIME - Techniques are disclosed for providing a learning-based clothing model that enables the simultaneous animation of multiple detailed garments in real-time. A simple conditional model learns and preserves key dynamic properties of cloth motions and folding details. Such a conditional model may be generated for each garment worn by a given character. Once generated, the conditional model may be used to determine complex body/cloth interactions in order to render the character and garment from frame-to-frame. The clothing model may be used for a variety of garments worn by male and female human characters (as well as non-human characters) while performing a varied set of motions typically used in video games (e.g., walking, running, jumping, turning, etc.).11-10-2011
20110187728OPTO-MECHANICAL CAPTURE SYSTEM FOR INDIRECTLY MEASURING THE MOVEMENT OF FLEXIBLE BODIES AND/OR OBJECTS - Opto-mechanical motion capture system for indirectly measuring the movement of bodies and objects, mainly focused on joints made of flexible materials or subject to deformation, which makes instrumentation with rigid sensors such as potentiometers difficult. This invention consists of an image acquisition device or camera and a visualization bed in which there is a series of transmission cables that convey to the visualization bed the movements generated in the flexible parts to be sensed. The camera is set up in such a way that it is possible to capture the image of the transmission cables, enabling the determination of their displacement and thus of that of the sensed objects. The main object of this invention is to enable the measurement of the movements of the flexible parts of the human body in a simple, cheap and comfortable way for the user of the device.08-04-2011
20100020085METHOD FOR AVATAR WANDERING IN A COMPUTER BASED INTERACTIVE ENVIRONMENT - A method for avatar wandering in a computer based interactive environment including for each avatar within a range of a current avatar, obtaining profiles of a user represented by the avatar, for each profile of the user represented by the avatar that has a same profile type as a profile of a user represented by the current avatar, comparing the profiles for matching data, computing a match score for the avatar based on the matching data, and moving the current avatar toward the avatar that has a greatest match score.01-28-2010
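The profile comparison and match score in the entry above can be sketched with dictionaries keyed by profile type; the data layout and field names are illustrative assumptions:

```python
def match_score(profiles_a, profiles_b):
    # Count matching data fields across profiles that share a profile type.
    score = 0
    for ptype, fields in profiles_a.items():
        other = profiles_b.get(ptype)
        if other is None:
            continue  # only profiles of the same type are compared
        score += sum(1 for k, v in fields.items() if other.get(k) == v)
    return score

def best_match(current, nearby):
    # Pick the nearby avatar with the greatest match score.
    return max(nearby, key=lambda name: match_score(current, nearby[name]))
```

The current avatar would then be moved toward the avatar returned by `best_match`.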
20090009521Remote Monitor Having Avatar Image Processing Unit - The present invention is related to a remote monitor having an avatar image processing unit for making access to at least one home appliance so as to be able to communicate therewith for representing information on operation progress and control of the home appliance with an avatar. The remote monitor includes a communication unit remotely accessible to at least one home appliance so as to be able to communicate therewith, for receiving data through a communication unit in the home appliance, which has a system microcomputer for operating the entire system and the communication unit for data communication; and a remote display unit having an avatar image unit for displaying an avatar image according to the data received through the communication unit.01-08-2009
20120306892MOBILE BALL TARGET SCREEN AND TRAJECTORY COMPUTING SYSTEM - A mobile target screen is described for ball game practicing and simulation. Two force sensors are mounted at each of the four corners of the frame which holds a target screen. Measurements from the force sensors are used to compute and display a representation of ball speed, the location of the ball on the target screen, and the direction of the ball motion. These parameters can be used to predict the shooting distance and the landing position of the ball. It also provides enough information to predict the trajectory of the ball, which can be displayed on a video screen which communicates with the sensors through a wireless transceiver.12-06-2012
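One simple way to recover the impact location from corner force readings is a force-weighted centroid. This is a sketch of the idea, not the patent's exact computation, and it collapses each corner to a single force value:

```python
def impact_point(forces, width, height):
    # Estimate the ball's impact location on a rectangular screen from the
    # forces at the four corners: (bottom-left, bottom-right, top-left,
    # top-right), with the origin at the bottom-left corner.
    f_bl, f_br, f_tl, f_tr = forces
    total = f_bl + f_br + f_tl + f_tr
    x = width * (f_br + f_tr) / total
    y = height * (f_tl + f_tr) / total
    return (x, y)
```

Equal forces at all corners place the impact at the screen center; force concentrated at one corner places it at that corner.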
20110304633 DISPLAY WITH ROBOTIC PIXELS - Techniques are disclosed for controlling robot pixels to display a visual representation of an input. The input to the system could be an image of a face, and the robot pixels deploy in a physical arrangement to display a visual representation of the face, and would change their physical arrangement over time to represent changing facial expressions. The robot pixels function as a display device for a given allocation of robot pixels. Techniques are also disclosed for distributed collision avoidance among multiple non-holonomic robots to guarantee smooth and collision-free motions. The collision avoidance technique works for multiple robots by decoupling path planning and coordination.12-15-2011
20090179901BEHAVIORAL MOTION SPACE BLENDING FOR GOAL-DIRECTED CHARACTER ANIMATION - A method for rendering frames of an animation sequence using a plurality of motion clips included in a plurality of motion spaces that define a behavioral motion space. Each motion space in the behavioral motion space depicts a character performing a different type of locomotion, including running, walking, or jogging. Each motion space is pre-processed so that all the motion clips have the same number of periodic cycles. Registration curves are created between reference clips from each motion space to synchronize the motion spaces.07-16-2009
20110304632INTERACTING WITH USER INTERFACE VIA AVATAR - Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.12-15-2011
20100073383Cloth simulation pipeline - A cloth simulation pipeline calculates normals between a cloth image and a colliding object image. The maximum value normal may be used to resolve the collision between the object and cloth images.03-25-2010
20120044251GRAPHICS RENDERING METHODS FOR SATISFYING MINIMUM FRAME RATE REQUIREMENTS - Methods and devices enable rendering of graphic images at a minimum frame rate even when processing resource limitations and rendering processing may not support the minimum frame rate presentation. While graphics are being rendered, a processor of a computing device may monitor the achieved frame rate. If the frame rate falls below a minimum threshold, the processor may note a current speed or rate of movement of the image and begin rendering less computationally complex graphic items. Rendering of less computationally complex items continues until the processor notes that the speed of rendered items is less than the noted speed. At this point, normal graphical rendering may be recommenced. The aspects may be applied to more than one type of less computationally complex item or rendering format. The various aspects may be applied to a wide variety of animations and moving graphics, as well as scrolling text, webpages, etc.02-23-2012
20120154409VERTEX-BAKED THREE-DIMENSIONAL ANIMATION AUGMENTATION - A method for controlling presentation of three dimensional (3D) animation includes rendering a 3D animation sequence including a 3D vertex-baked model which is derived from a 3D animation file including vertex data of every vertex for every 3D image frame in the 3D animation sequence. The 3D vertex-baked model includes a control surface that provides a best-fit 3D shape to vertices of the 3D vertex-baked model. The method further includes receiving a motion control input, and if the motion control input is received during an augmentation portion of the 3D animation sequence, deviating from a default posture of the control surface in accordance with the motion control input.06-21-2012
20110181607System and method for controlling animation by tagging objects within a game environment - A game developer can “tag” an item in the game environment. When an animated character walks near the “tagged” item, the animation engine can cause the character's head to turn toward the item, and mathematically computes what needs to be done in order to make the action look real and normal. The tag can also be modified to elicit an emotional response from the character. For example, a tagged enemy can cause fear, while a tagged inanimate object may cause only indifference or indifferent interest.07-28-2011
20110181606AUTOMATIC AND SEMI-AUTOMATIC GENERATION OF IMAGE FEATURES SUGGESTIVE OF MOTION FOR COMPUTER-GENERATED IMAGES AND VIDEO - In an animation processing system, images to be viewed on a display are generated by a computer based on scene geometry obtained from computer-readable storage and animation data representing changes over time of scene geometry elements. The images can also be modified to include shading that is a function of the positions of objects at times other than the current instantaneous time of the frame render, so that the motion-effect shading suggests motion of at least one of the elements to a viewer of the generated images. Motion effects provide shading that varies based on depiction parameters and/or artist inputs. For at least one pixel, a pixel color is rendered based on motion effect program output, at least some of the received animation data and motion depiction parameters, and at least some received scene geometry, such that the output contributes to features that suggest the motion.07-28-2011
20120212495User Interface with Parallax Animation - User interface animation techniques are described. In an implementation, an input having a velocity is detected that is directed to one or more objects in a user interface. A visual presentation is generated that is animated so a first object in the user interface moves in parallax with respect to a second object. The presentation is displayed so the first object appears to move at a rate that corresponds to the velocity.08-23-2012
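A minimal sketch of the parallax idea in the entry above, assuming (the abstract does not specify this) a simple layered model in which the foreground layer moves at the input velocity and each background layer at a reduced, depth-scaled rate; the `depth_factors` values are illustrative:

```python
def parallax_offsets(input_velocity, depth_factors, dt):
    """Per-layer displacement for one frame: layers with a smaller
    depth factor move more slowly than the foreground, so the first
    object moves at a rate matching the input velocity while the
    second (and any further layers) lag behind it."""
    return [input_velocity * depth * dt for depth in depth_factors]

# Foreground (factor 1.0) tracks the input; background layers trail it.
offsets = parallax_offsets(input_velocity=200.0,
                           depth_factors=[1.0, 0.5, 0.25],
                           dt=0.016)
```

Because every layer's offset is derived from the same input velocity, the relative spacing between layers stays consistent from frame to frame, which is what produces the parallax depth cue.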
20120169741ANIMATION CONTROL DEVICE, ANIMATION CONTROL METHOD, PROGRAM, AND INTEGRATED CIRCUIT - An animation control device is provided that can suppress reduction in the total quality of the animation to be displayed and perform animation as intended by an application developer.07-05-2012
20120169740IMAGING DEVICE AND COMPUTER READING AND RECORDING MEDIUM - Provided are a display device and a non-transitory computer-readable recording medium. By comparing a priority of an animation clip corresponding to a predetermined part of an avatar of a virtual world with a priority of motion data, and by determining the data corresponding to the predetermined part of the avatar, a motion of the avatar may be generated in which motion data sensed from the motion of a real-world user is associated with the animation clip.07-05-2012
20090128569Image display program and image display apparatus - An image display method for displaying an image of a character in a virtual space configures a computer to execute the steps of: a parameter setting step for setting parameters concerning movement of the character; a reference unit time setting step for setting a reference unit time of stepwise change of the parameters; a minimal unit time setting step for setting a minimal unit time as an equal division of the reference unit time; a basic parameter setting step for setting the parameters for each reference unit time and allocating the parameters to each minimal unit time; a smoothing step for setting smoothed parameters for each minimal unit time from the parameters allocated to the minimal unit times; and a display step for displaying the character according to the parameters set in the smoothing step for each minimal unit time.05-21-2009
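A rough sketch of the allocate-then-smooth scheme described above, assuming (the abstract leaves this open) that smoothing is plain linear interpolation between successive reference-unit parameter values across the minimal unit times:

```python
def allocate_and_smooth(reference_params, subdivisions):
    """Allocate each reference-unit parameter across its minimal unit
    times, then smooth by linearly interpolating between successive
    reference-unit values so the stepwise change becomes gradual."""
    smoothed = []
    for i in range(len(reference_params) - 1):
        a, b = reference_params[i], reference_params[i + 1]
        for k in range(subdivisions):
            smoothed.append(a + (b - a) * k / subdivisions)
    smoothed.append(reference_params[-1])
    return smoothed

# A movement speed set per reference unit, smoothed over 4 minimal units each.
speeds = allocate_and_smooth([0.0, 1.0, 0.5], 4)
```

The character is then displayed using one smoothed value per minimal unit time, so a parameter that jumps once per reference unit ramps instead of stepping.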
20090309882LARGE SCALE CROWD PHYSICS - Systems and methods for creating autonomous agents or objects. Agents' reactions to physical forces are modeled as springs. A signal representing a force or velocity change for one animation control is processed to produce realistic reaction effects. The signal may be filtered two or more times, each filter typically having a different time lag and/or filter width. The filtered signals are combined, with weightings, to produce an animation control signal. The animation control signal is then applied to the same or a different animation control to influence motion of the object or agent.12-17-2009
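The multi-filter blending in the entry above can be sketched as follows; this is an illustrative assumption, using first-order exponential filters as the "different time lag" filters and fixed example weights:

```python
def lowpass(signal, alpha):
    """First-order exponential filter; smaller alpha means a longer time lag."""
    out, y = [], signal[0]
    for x in signal:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

def reaction_signal(force, alphas=(0.5, 0.2), weights=(0.6, 0.4)):
    """Filter the force signal two or more times with different lags,
    then blend the filtered copies with weightings to form a softened,
    spring-like animation control signal."""
    filtered = [lowpass(force, a) for a in alphas]
    return [sum(w * f[i] for w, f in zip(weights, filtered))
            for i in range(len(force))]

# A step force produces a gradual response rather than an instant jump.
response = reaction_signal([0.0, 1.0, 1.0, 1.0])
```

Blending a fast filter with a slow one gives an initial quick reaction followed by a slower settling tail, which reads as a physically plausible recoil across a crowd of agents.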
20120229475Animation of Characters - An animation method in which a user directs the actions of characters on a virtual stage, rather than instructing every individual movement. Such a method of producing an animated video comprises providing a virtual stage; providing templates from which characters can be assembled, each character having a body and limbs, and the templates providing facial features and clothes with differing colours and shapes; providing objects that can be placed on the virtual stage; placing the objects and the characters on the virtual stage; instructing each character as to his emotional state, and as to any required movement; wherein each character continuously and automatically behaves in accordance with the specified emotional state. Instructions to a character about a desired body movement, such as stepping in one direction or another, or turning on the spot, or walking or running along a specific route, may be provided by a sectored base ring, the sectors displaying arrows that correspond to different steps; while dragging the base ring or a marker along a route across the virtual stage causes the character to follow that route, walking or running depending on how fast the marker had been moved.09-13-2012
20100328319INFORMATION PROCESSOR AND INFORMATION PROCESSING METHOD FOR PERFORMING PROCESS ADAPTED TO USER MOTION - A positional data acquisition unit of an action detector acquires positional data indicating the position of an image of a light-emitting part of a light-emitting device held by a user in an image frame at each time step, and also acquires curve data for the head contour at each time step estimated as a result of visual tracking by a tracking processor. A history storage unit stores a history of the positional data for the image of the light-emitting part and the curve data for the head contour. A determination criteria storage unit stores the criteria for determining that a predefined action is performed by referring to the time-dependent change in the relative position of the image of the light-emitting part in relation to the curve representing the head contour. An action determination unit determines whether the action is performed based on the actual data.12-30-2010
20080297519ANIMATING HAIR USING POSE CONTROLLERS - The present invention deforms hairs from a reference pose based on one or more of the following: magnet position and/or orientation; local reference space position (e.g., a character's head or scalp); and several profile curves and variables. In one embodiment, after an initial deformation is determined, it is refined in order to simulate collisions, control hair length, and reduce the likelihood of hairs penetrating the surface model. The deformed hairs can be rendered to create a frame. This procedure can be performed multiple times, using different inputs, to create different hair deformations. These different inputs can be generated based on interpolations of existing inputs. Frames created using these deformations can then be displayed in sequence to produce an animation. The invention can be used to animate any tubular or cylindrical structure protruding from a surface.12-04-2008
20080297518Variable Motion Blur - Variable motion blur is created by varying the evaluation time used to determine the poses of objects according to motion blur parameters when evaluating a blur frame. A blur parameter can be associated with one or more objects, portions of objects, or animation variables. The animation system modifies the time of the blur frame by a function including the blur parameter to determine poses of objects or portions thereof associated with the blur parameter in a blur frame. The animation system determines the values of animation variables at their modified times, rather than at the time of the blur frame, and poses objects or portions thereof accordingly. Multiple blur parameters can be used to evaluate the poses of different portions of a scene at different times for a blur frame. Portions of an object can be associated with different blur parameters, enabling motion blur to be varied within an object.12-04-2008
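The time-modification step in the variable motion blur entry above can be sketched as below; the abstract only says the frame time is modified "by a function including the blur parameter", so the additive offset and the linear animation curve here are illustrative assumptions:

```python
def blurred_pose(anim_curve, frame_time, blur_parameter):
    """Evaluate an animation variable at a time shifted by its blur
    parameter, so different parts of the scene can be sampled at
    different effective times within the same blur frame."""
    return anim_curve(frame_time + blur_parameter)

# Hypothetical linear curve: position advances 10 units per time unit.
curve = lambda t: 10.0 * t

# The arm carries a larger blur parameter than the torso, so for the
# same blur frame its pose is evaluated further from the frame time.
arm = blurred_pose(curve, frame_time=2.0, blur_parameter=0.25)
torso = blurred_pose(curve, frame_time=2.0, blur_parameter=0.0)
```

Associating distinct blur parameters with distinct portions of an object is what lets the renderer vary motion blur within a single object, as the entry describes.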
20120139925System for Estimating Location of Occluded Skeleton, Method for Estimating Location of Occluded Skeleton and Method for Reconstructing Occluded Skeleton - A system for estimating a location of an occluded skeleton, a method for estimating a location of an occluded skeleton and a method for reconstructing an occluded skeleton are provided. The method for estimating a location of an occluded skeleton comprises the following steps: Firstly, a trace of a reference central point of a body is estimated according to a plurality of continuously moving images. Next, a human movement state is estimated according to the trace and a motion information of the continuously moving images free of skeleton occlusion. Then, a possible range of the occluded skeleton for maintaining human balance is calculated according to the human movement state. Afterwards, a current motion level of the occluded skeleton is predicted according to a historic motion information of the occluded skeleton. Lastly, the location of the occluded skeleton is estimated according to the current motion level and the possible range.06-07-2012
20120092349PROGRAM EXECUTION SYSTEM, PROGRAM EXECUTION DEVICE AND RECORDING MEDIUM AND COMPUTER EXECUTABLE PROGRAM THEREFOR - A program execution system has a program execution device which has a controller operated by a user and a display on which images such as characters or players in a game are seen. In order to prevent incorrect movement of a character on the display when switching from a scene viewed from one camera viewpoint to a scene viewed from another, without additional steps by the user, the program execution system has a computer-readable and executable program stored on a recording medium providing a character motion direction step by which, if a switch from one scene to another is made during the character's motion on the screen, the direction of motion of the character in the second scene is maintained in coordination with the character's motion direction on a map in the first scene at least immediately before the switch.04-19-2012
20120092348SEMI-AUTOMATIC NAVIGATION WITH AN IMMERSIVE IMAGE - A View Track accompanying an immersive movie provides an automatic method of directing the user's region of interest (ROI) during the playback process of an immersive movie. The user is free to assert manual control to look around, but when the user releases this manual control, the direction of the ROI returns gradually to the automatic directions in the View Track. The View Track can also change the apparent direction of the audio from a mix of directional audio sources in the immersive movie, and the display of any metadata associated with a particular direction. A multiplicity of View Tracks can be created to allow a choice of different playback results. The View Track can consist of a separate Stabilization Track to stabilize the spherical image, for improving the performance of a basic Navigation Track for looking around. The recording of the View Track is part of the post production process for making and distributing an immersive movie for improving the user experience.04-19-2012
20130009965ANIMATION DISPLAY DEVICE - A converter 01-10-2013
20080252646ENHANCED MOTION BEHAVIOR FRAMEWORK - An enhanced motion behavior framework, in which an input is received from a user corresponding to an object to be animated and one or more animation parameters to be applied to the object, the one or more animation parameters are applied to the object, and an animation of the object is displayed based on the application of the one or more parameters to the object.10-16-2008
20130176316PANNING ANIMATIONS - Panning animation techniques are described. In one or more implementations, an input is recognized by a computing device as corresponding to a panning animation. A distance is calculated that is to be traveled by the panning animation in a user interface output by the computing device, the distance limited by a predefined maximum distance. The panning animation is output by the computing device to travel the calculated distance.07-11-2013
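The clamped distance calculation above can be sketched as follows, assuming (not stated in the abstract) that the natural pan distance comes from constant deceleration to rest, i.e. the kinematic relation v²/2a:

```python
def panning_travel(velocity, deceleration, max_distance):
    """Distance a decelerating panning animation would travel
    (v^2 / 2a for constant deceleration to rest), clamped to a
    predefined maximum distance."""
    natural = velocity * velocity / (2.0 * deceleration)
    return min(natural, max_distance)

# A gentle flick stops on its own; a hard flick hits the clamp.
short_pan = panning_travel(velocity=100.0, deceleration=500.0, max_distance=800.0)
long_pan = panning_travel(velocity=1000.0, deceleration=500.0, max_distance=800.0)
```

The clamp keeps a very fast input from scrolling the user far past the content they intended to reach, while slower inputs are unaffected by it.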
20130135316ANIMATION AUTHORING SYSTEM AND METHOD FOR AUTHORING ANIMATION - This invention relates to an animation authoring system and an animation authoring method, to enable beginners to produce a three-dimensional animation easily and to solve input ambiguity problem in the three-dimensional environment. The animation authoring method according to the invention comprises the steps of: (a) receiving a plane route of an object on a predetermined reference plane from a user; (b) creating a motion window formed along the plane route and having a predetermined angle to the reference plane to receive motion information of the object on the motion window from the user; and (c) implementing an animation according to the received motion information.05-30-2013
20130113808METHOD AND APPARATUS FOR CONTROLLING PLAYBACK SPEED OF ANIMATION MESSAGE IN MOBILE TERMINAL - A method and apparatus for controlling a playback speed of an animation message in a mobile terminal is provided. The method includes recognizing at least one object to be displayed that is included in the received animation message; determining the playback speed of the received animation message with respect to each object to be displayed according to the recognized features of each object; and displaying the animation message according to the determined playback speed.05-09-2013
20130093775System For Creating A Visual Animation Of Objects - A system for creating visual animation of objects which can be experienced by a passenger located within a moving vehicle is provided. The system includes: a plurality of objects being placed along a movement path of the vehicle; a plurality of sensors being assigned to the plurality of objects and being arranged such along the movement path that the vehicle actuates the sensors when moving along the movement path; and a plurality of highlighting devices being coupled to the plurality of sensors and being controlled by the sensors such that, in accordance with sensor actuations triggered by the movement of the vehicle, a) only one of the plurality of objects is highlighted by the highlighting devices to the passenger at one time, and b) the objects are highlighted to the passenger in such a sequence that the passenger visually experiences an animation of the objects.04-18-2013
20130127877Parameterizing Animation Timelines - Methods and systems for parameterizing animation timelines are disclosed. In some embodiments, a method includes displaying a representation of a timeline configured to animate a first image in a graphical user interface, where the timeline includes a data structure having one or more commands configured to operate upon a first property of the first image. The method also includes creating a parameterized timeline by replacing a reference to the first image within the timeline with a placeholder. The method includes, in response to a request to animate a second image, storing an entry in a dictionary of key and value pairs. The method further includes animating the second image by replacing the placeholder in the parameterized timeline with the reference to the second image during execution of the parameterized timeline.05-23-2013
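The placeholder substitution and dictionary re-binding in the parameterized-timeline entry above can be sketched as below; representing timeline commands as plain strings and the `$IMAGE` placeholder name are illustrative assumptions:

```python
def parameterize(timeline, image_ref, placeholder="$IMAGE"):
    """Create a parameterized timeline by replacing the concrete
    image reference in each command with a placeholder."""
    return [cmd.replace(image_ref, placeholder) for cmd in timeline]

def animate_with(parameterized, bindings):
    """Re-target the parameterized timeline using a dictionary of
    key/value pairs mapping each placeholder to a new image reference."""
    result = []
    for cmd in parameterized:
        for key, value in bindings.items():
            cmd = cmd.replace(key, value)
        result.append(cmd)
    return result

# Author the timeline once against img1, then replay it against img2.
template = parameterize(["move img1 10 20", "fade img1 0.5"], "img1")
retargeted = animate_with(template, {"$IMAGE": "img2"})
```

The point of the indirection is reuse: one authored timeline can animate any image, with the dictionary entry created only when a request to animate a second image arrives.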
20130127878PHYSICS RULES BASED ANIMATION ENGINE - At an animation authoring component, an inputted movement of an object displayed in a graphical user interface is received. Further, at a physics animation rule engine, a set of physics animation rules is applied to the inputted movement to produce a physics-generated movement of the object. In addition, at the graphical user interface, the inputted movement of the object is displayed in addition to the physics-generated movement of the object. At the animation authoring component, the physics-generated movement of the object is recorded in addition to the inputted movement of the object.05-23-2013
20130181996VISUAL CONNECTIVITY OF WIDGETS USING EVENT PROPAGATION - A method, system and computer program product receive a set of objects for connection, create a moving object within the set of objects, display visual connection cues on objects in the set of objects, adjust the visual connection cues of the moving object and a target object in the set of objects, identify event propagation precedence, and connect the moving object with the target object.07-18-2013

Patent applications in class Motion planning or control