Animation

Subclass of:

345 - Computer graphics processing and selective visual display systems

345418000 - COMPUTER GRAPHICS PROCESSING

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class / Patent application number | Description | Number of patent applications / Date published
345474000 | Motion planning or control | 99
345475000 | Temporal interpolation or processing | 8
Entries
Document | Title | Date
20130044116 | SYSTEM AND METHOD FOR CONTROLLING ANIMATION BY TAGGING OBJECTS WITHIN A GAME ENVIRONMENT - A game developer can “tag” an item in the game environment. When an animated character walks near the “tagged” item, the animation engine can cause the character's head to turn toward the item, and mathematically compute what needs to be done in order to make the action look real and normal. The tag can also be modified to elicit an emotional response from the character. For example, a tagged enemy can cause fear, while a tagged inanimate object may cause only indifference or indifferent interest. | 02-21-2013
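
Aside: a minimal Python sketch of the head-turn behavior described in 20130044116, assuming a 2D world and a yaw-only head joint; the REACTIONS table and the max_turn_deg limit are illustrative, not from the application.

# Compute a clamped head yaw toward a tagged item, plus a tag-driven reaction.
import math

def head_yaw_toward(char_pos, char_facing_deg, item_pos, max_turn_deg=60.0):
    """Yaw (degrees, relative to facing) needed to look at the item, clamped."""
    dx, dy = item_pos[0] - char_pos[0], item_pos[1] - char_pos[1]
    to_item = math.degrees(math.atan2(dy, dx))
    # Relative angle wrapped to [-180, 180), then clamped to the joint limit.
    rel = (to_item - char_facing_deg + 180.0) % 360.0 - 180.0
    return max(-max_turn_deg, min(max_turn_deg, rel))

REACTIONS = {"enemy": "fear", "inanimate": "indifference"}

print(round(head_yaw_toward((0, 0), 90.0, (10, 10)), 3))  # -45.0
print(REACTIONS["enemy"])                                  # fear
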
20130044115 | DISPLAY DEVICE, DISPLAY CONTROL METHOD, PROGRAM, AND COMPUTER READABLE RECORDING MEDIUM - To provide a display device capable of displaying a trajectory of a specific portion of a program-controlled control target device regardless of whether the control program is a simple sequential execution type or a situation adaptive type. A PC (…) | 02-21-2013
20090174717 | METHOD AND APPARATUS FOR GENERATING A STORYBOARD THEME FOR BACKGROUND IMAGE AND VIDEO PRESENTATION - A method and apparatus for using a non-active (background) state of a display-enabled device to animate images and video elements within a themed storyboard from selected sources. States described in the method include: tuning to select viewable content, population of the storyboard matrix, animation of storyboard elements using animation effects, such as lens flare and live video texturing, and evaporation of the elements. By way of example, the storyboard is populated with freeze frames of video content, one or a portion of which are then played to animate that element. The storyboard preferably cycles with additional content as the prior content evaporates, until the background mode is terminated. The method is particularly well-suited for use in television sets, although it can be integrated into any display-enabled apparatus having a computer and access to content sources that can be used for storyboard elements. | 07-09-2009
20100164960 | Character Display, Character Displaying Method, Information Recording Medium, and Program - A character display for attracting user interest by increasing the variety of on-screen display while reducing data processing by making the time variation of posture common among a plurality of characters. The character display (…) | 07-01-2010
20090195546 | IMAGE DISTRIBUTION APPARATUS, IMAGE DISTRIBUTION METHOD, AND IMAGE DISTRIBUTION PROGRAM - In order to prevent a duplicate of a still image from being generated, an MFP includes an image obtaining portion to obtain one or more still images, a moving image generating portion to generate a moving image in which the obtained still images are displayed sequentially, and a distribution portion to perform real-time streaming distribution of the moving image in response to a request from a PC connected to a network. | 08-06-2009
20090195545 | Facial Performance Synthesis Using Deformation Driven Polynomial Displacement Maps - Acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps are described. An analysis phase can be included where the relationship between motion capture markers and detailed facial geometry is inferred. A synthesis phase can be included where detailed animated facial geometry is driven by a sparse set of motion capture markers. For analysis, an actor can be recorded wearing facial markers while performing a set of training expression clips. Real-time high-resolution facial deformations are captured, including dynamic wrinkle and pore detail, using interleaved structured light 3D scanning and photometric stereo. Next, displacements are calculated between a neutral mesh driven by the motion capture markers and the high-resolution captured expressions. These geometric displacements are stored in one or more polynomial displacement maps parameterized according to the local deformations of the motion capture dots. For synthesis, the polynomial displacement maps can be driven with new motion capture data. | 08-06-2009
20090195543 | Verification of animation in a computing device - Methods and systems of verifying an animation applied in a mobile device may include a timer module that is programmed to time-slice the animation into multiple scenes at predetermined time points along a timeline of the animation. A first capture module is programmed to capture actual data of each scene at each of the time points while the animation is running. A first comparison module is programmed to compare the actual data of each scene with expected data of the corresponding scene to determine whether the actual data of each scene matches the expected data of the corresponding scene. A first output module is programmed to generate a verification failure if the actual data of any scene does not match the expected data of the corresponding scene, and generate a verification success if the actual data of each scene matches the expected data of the corresponding scene. | 08-06-2009
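
Aside: a minimal Python sketch of the time-sliced verification scheme in 20090195543, assuming scene state can be sampled as a plain dict at each time point; sample_scene and expected_scenes stand in for the capture and comparison modules.

def verify_animation(sample_scene, expected_scenes, time_points):
    """Return (ok, failures) after comparing sampled vs expected scenes."""
    failures = []
    for t in time_points:
        actual = sample_scene(t)          # capture module: actual scene data
        expected = expected_scenes[t]     # reference data for this time slice
        if actual != expected:            # comparison module
            failures.append((t, actual, expected))
    return (not failures, failures)       # verification success / failure

# Usage: a fake animation where x moves 10 units per second.
expected = {0.0: {"x": 0}, 0.5: {"x": 5}, 1.0: {"x": 10}}
ok, fails = verify_animation(lambda t: {"x": int(10 * t)}, expected, [0.0, 0.5, 1.0])
print(ok)  # True
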
20100149191 | SYSTEM FOR VIRTUALLY DRAWING ON A PHYSICAL SURFACE - The system (…) | 06-17-2010
20080259085 | Method for Animating an Image Using Speech Data - A method for animating an image is useful for animating avatars using real-time speech data. According to one aspect, the method includes identifying an upper facial part and a lower facial part of the image (step …) | 10-23-2008
20130076759 | MULTI-LAYERED SLIDE TRANSITIONS - Architecture that enhances the visual experience of a slide presentation by animating slide content as “actors” in the same background “scene”. This is provided by multi-layered transitions between slides, where a slide is first separated into “layers” (e.g., with a level of transparency). Each layer can then be transitioned independently. All layers are composited together to accomplish the end effect. The layers can comprise one or more content layers, and a background layer. The background layer can further be separated into a background graphics layer and a background fill layer. The transition phase can include a transition effect such as a fade, a wipe, a dissolve effect, and other desired effects. To preserve the continuity and uniformity of presenting the content on the same background scene, a transition effect is not applied to the background layer. | 03-28-2013
20130076758 | Page Switching Method And Device - A page switching method and device. The method includes: displaying a current message page; when detecting a touch operation, drawing a page-turning animation according to the touch operation, and playing the page-turning animation; and when the touch operation stops, displaying an adjacent message page. | 03-28-2013
20130076757 | PORTIONING DATA FRAME ANIMATION REPRESENTATIONS - Multiple portions of a set of data frames can be processed to produce portions of an animation representation. Each of the portions of the set of data frames can be processed to produce a corresponding portion of the animation representation that represents one or more changes during a portion of an animation sequence in an animation of the set of data frames. The animation representation can be sent to a rendering environment. Sending the animation representation to the rendering environment can include sending each of the portions of the animation representation in a separate batch. Each portion of the animation representation can be formatted to be rendered before receiving all portions of the animation representation at the rendering environment. | 03-28-2013
20130076755 | GENERAL REPRESENTATIONS FOR DATA FRAME ANIMATIONS - Multiple data frames can be processed to produce a general animation representation that represents the data frames. The general animation representation may be in a general language that is suitable for being translated into any of multiple different specific languages. The general animation representation can be translated into a specific animation representation that is in a specific language suitable for processing by a rendering environment. The specific animation representation can be sent to the rendering environment, where the specific animation representation can be rendered on a display device. | 03-28-2013
20130076756 | DATA FRAME ANIMATION - Data can be received from a first data source that is a first type of data source, and data can be received from a second data source that is a second type of data source. Data frames can be processed to produce an animation representation that represents the data frames. The data frames can include the data from the first data source and the data from the second data source. The animation representation can include one or more key animation frames that each defines a full graphical representation of one of the data frames. The animation representation can also include one or more delta animation frames that each defines one or more graphical updates without defining a full graphical representation of one of the data frames. The animation representation may be sent to a rendering environment for rendering. | 03-28-2013
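
Aside: a minimal Python sketch of the key/delta animation representation described in 20130076756, assuming data frames are flat dicts; the key_every cadence and tuple encoding are illustrative choices, not the application's format.

def build_representation(data_frames, key_every=10):
    rep, prev = [], None
    for i, frame in enumerate(data_frames):
        if prev is None or i % key_every == 0:
            rep.append(("key", dict(frame)))        # full graphical state
        else:
            delta = {k: v for k, v in frame.items() if prev.get(k) != v}
            rep.append(("delta", delta))            # graphical updates only
        prev = frame
    return rep

def replay(rep):
    state = {}
    for kind, payload in rep:
        if kind == "key":
            state = dict(payload)
        else:
            state.update(payload)
        yield dict(state)

frames = [{"x": 0, "y": 0}, {"x": 1, "y": 0}, {"x": 2, "y": 5}]
rep = build_representation(frames)
print(rep)                           # one key frame, then deltas
print(list(replay(rep)) == frames)   # True: replay reconstructs the frames
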
20100118033 | SYNCHRONIZING ANIMATION TO A REPETITIVE BEAT SOURCE - An animated dance is made up of a plurality of frames. The dance includes a plurality of different moves delineated by a set of synchronization points. A total number of frames for the video track is determined and a corresponding video track is generated such that the resulting video track is synchronized at the synchronization points to beats of the audio track. | 05-13-2010
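
Aside: a minimal Python sketch of fitting dance moves to a beat grid as in 20100118033, assuming a fixed frame rate and a fixed number of beats per move; rounding each move's length to whole frames is one plausible reading of synchronizing at the synchronization points, not the application's stated algorithm.

def frames_per_move(num_moves, bpm, fps=30, beats_per_move=4):
    seconds_per_beat = 60.0 / bpm
    frames_per_beat = fps * seconds_per_beat
    per_move = round(beats_per_move * frames_per_beat)  # snap move length to beats
    return per_move, per_move * num_moves               # total frames of video track

per_move, total = frames_per_move(num_moves=8, bpm=120, fps=30)
print(per_move, total)  # 60 frames per move, 480 frames total
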
20100073382 | SYSTEM AND METHOD FOR SEQUENCING MEDIA OBJECTS - A method of displaying a long animation is provided. The animation is defined in an animation file, which identifies a set of images that form the animation when sequentially displayed. A batch processor segments the set of images into sequential subsets, with each subset sized smaller than a maximum size. In this way, all of the images identified in a particular subset may be loaded into memory. Each subset of images is associated with a respective segment identifier, and an instruction is provided along with the images to order the subsets. In this way, a first subset of images provides for the loading of a second subset of images, thereby enabling the display of long animations. | 03-25-2010
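
Aside: a minimal Python sketch of the subset-chaining idea in 20100073382, assuming the maximum size is a frame count; the "next" field stands in for the instruction that lets one subset trigger loading of the following one.

def segment_images(image_names, max_per_segment):
    segments = []
    for start in range(0, len(image_names), max_per_segment):
        segments.append({
            "id": len(segments),
            "images": image_names[start:start + max_per_segment],
            "next": None,
        })
    for a, b in zip(segments, segments[1:]):
        a["next"] = b["id"]   # playing segment a pre-loads segment b
    return segments

segs = segment_images([f"frame{i:03}.png" for i in range(7)], max_per_segment=3)
print([(s["id"], len(s["images"]), s["next"]) for s in segs])
# [(0, 3, 1), (1, 3, 2), (2, 1, None)]
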
20100073381 | Methods for generating one or more composite image maps and systems thereof - A method, computer readable medium, and system for generating a composite image map includes obtaining a plurality of sprites for an application page and determining coordinates of each of the obtained plurality of sprites. A composite image map is generated based on the obtained plurality of sprites and the determined coordinates. | 03-25-2010
20100073380 | Method of operating a design generator for personalization of electronic devices - A method of generating a customized image includes forming a first design including a first pattern having a first color and a second color. The method also includes receiving input from a user using a design modification element. The method further includes forming a second design including a second pattern including a third color and a fourth color. A change from the first design to the second design is proportional to the input received from the user using the design modification element. | 03-25-2010
20100073379 | METHOD AND SYSTEM FOR RENDERING REAL-TIME SPRITES - A method and system for improving rendering performance at a client. The method includes, responsive to an initial request for a first animation sequence at a client, downloading a first 3D object from a server. The method includes rendering the first 3D object into the first animation sequence. The method includes displaying the first animation sequence to a user. The method includes caching the first animation sequence in an accessible memory. The method includes, responsive to a repeat request for the first animation sequence, retrieving the cached first animation sequence from the accessible memory. | 03-25-2010
20100156910 | SYSTEM AND METHOD FOR MESH STABILIZATION OF FACIAL MOTION CAPTURE DATA - A method and system for removing head motion from facial motion capture data. The method includes receiving a set of measured points of a target model, wherein each point is associated with coordinates in a 3D space. The method includes computing an optimal affine transformation function. The computing includes selecting an unprocessed point from the set of measured points. The computing includes selecting two nearby neighboring points of the unprocessed point. The computing includes computing an affine transformation function that minimizes an L2-norm error. The computing includes identifying the optimal affine transformation function from a set of computed affine transformation functions. The method includes displaying an aligned target model and reference model utilizing the optimal affine transformation function. The method includes outputting the optimal affine function to a computer-readable storage medium. | 06-24-2010
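
Aside: a minimal numpy sketch of one sub-step of 20100156910: fitting an affine transform that minimizes the L2 error between transformed source points and target points. The neighbor selection and the search over candidate transforms are omitted; this is only the least-squares fit.

import numpy as np

def fit_affine(src, dst):
    """src, dst: (N, 3) arrays. Returns (A, t) minimizing ||A @ src + t - dst||."""
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)    # (4, 3) solution matrix
    return M[:3].T, M[3]                               # A is 3x3, t is length-3

rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
A_true = np.diag([1.0, 2.0, 0.5])
dst = src @ A_true.T + np.array([1.0, 0.0, -2.0])
A, t = fit_affine(src, dst)
print(np.allclose(A, A_true), np.allclose(t, [1.0, 0.0, -2.0]))  # True True
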
20100039434 | Data Visualization Using Computer-Animated Figure Movement - Methods, systems, and apparatus, including medium-encoded computer program products, can provide data visualization using computer animated figure movement. A computer animated figure is associated with a data stream. A set of movements to be performed by the computer animated figure in response to one or more data characteristics of the data stream is assigned. The data stream is received and processed to determine the one or more data characteristics. The computer animated figure is animated according to the assigned set of movements in response to determining the one or more data characteristics. | 02-18-2010
20130083034 | ANIMATION ENGINE DECOUPLED FROM ANIMATION CATALOG - Embodiments provide animations with an animation engine decoupled from an animation catalog storing animation definitions. A computing device accesses at least one of the animation definitions corresponding to at least one markup language (ML) element to be animated. Final attribute values associated with the ML element are identified (e.g., provided by the caller or defined in the animation definition). The computing device animates the ML element using the accessed animation definition and the identified final attribute values. In some embodiments, the animation engine uses a single timer to animate a plurality of hypertext markup language (HTML) elements displayed by a browser. | 04-04-2013
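
Aside: a minimal Python sketch of the engine/catalog split in 20130083034, assuming definitions are plain dicts of start/end attribute values and one shared timer drives all active animations; CATALOG and its entries are invented for illustration.

import time

CATALOG = {
    "fade-in":  {"attr": "opacity", "from": 0.0,  "to": 1.0, "ms": 200},
    "slide-up": {"attr": "top",     "from": 40.0, "to": 0.0, "ms": 300},
}

def run(animations, tick_ms=16):
    """animations: list of (element_dict, definition_name) pairs."""
    jobs = [(el, CATALOG[name], time.monotonic()) for el, name in animations]
    while jobs:
        now = time.monotonic()                      # one timer for all jobs
        still_running = []
        for el, d, t0 in jobs:
            p = min(1.0, (now - t0) * 1000.0 / d["ms"])
            el[d["attr"]] = d["from"] + (d["to"] - d["from"]) * p
            if p < 1.0:
                still_running.append((el, d, t0))
        jobs = still_running
        time.sleep(tick_ms / 1000.0)

a, b = {"opacity": 0.0}, {"top": 40.0}
run([(a, "fade-in"), (b, "slide-up")])
print(a, b)  # {'opacity': 1.0} {'top': 0.0}
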
20130083036 | METHOD OF RENDERING A SET OF CORRELATED EVENTS AND COMPUTERIZED SYSTEM THEREOF - An automated rendering system for creating a screenplay or a transcript is provided that includes an audio/visual (A/V) content compositor and renderer for composing audio/visual (A/V) content made up of clips and animations, and at least one of: background music, still images, or commentary phrases. A transcript builder is provided to build a transcript. The transcript builder utilizes data in various forms including user situational inputs, predefined rules and scripts, game action text, logical determinations and intelligent assumptions to generate a transcript to produce the A/V content of the screenplay or the transcript. A method is also provided for rendering an event that includes receiving data with a request from a user to generate an audio/visual (A/V) presentation based on the event using the system. Ancillary data input is provided as a set of rules that influence or customize the outcome of the screenplay. | 04-04-2013
20130083035 | GRAPHICAL SYMBOL ANIMATION WITH EVALUATIONS FOR BUILDING AUTOMATION GRAPHICS - Automation systems, methods, and mediums. A method includes identifying a value for a data point associated with a device in a building. The value is received from a management system operably connected to the device. The method includes mapping the value for the data point to a graphical representation of the value for the data point. The method includes generating a display comprising a graphic for the building and a symbol representing the device. The method includes displaying the graphical representation of the value for the data point in association with the symbol representing the device. Additionally, the method includes modifying the graphical representation of the value based on a change in the value in response to identifying the change in the value from the management system. | 04-04-2013
20130038613 | METHOD AND APPARATUS FOR GENERATING AND PLAYING ANIMATED MESSAGE - Methods and apparatus are provided for generating an animated message. Input objects in an image of the animated message are recognized, and input information, including information about an input time and input coordinates for the input objects, is extracted. Playback information, including information about a playback order of the input objects, is set. The image is displayed in a predetermined handwriting region of the animated message. An encoding region, which is allocated in a predetermined portion of the animated message and in which the input information and the playback information are stored, is divided into blocks having a predetermined size. Display information of the encoding region is generated by mapping the input information and the playback information to the blocks in the encoding region. An animated message including the predetermined handwriting region and the encoding region is generated. The generated animated message is transmitted. | 02-14-2013
20130033499 | STATIONARY OR MOBILE TERMINAL CONTROLLED BY A POINTING OR INPUT PERIPHERAL - A stationary or mobile terminal controlled by a pointing or input peripheral device is presented. The invention pertains to the field of man-machine interfaces (MMI) applied to digital reading. There is provided a stationary or mobile terminal that is capable of reproducing, when used, the sensation of reading paper on a screen, of developing novel modes of reading, and of enabling press groups to render the publications thereof paperless while doing away with the material and technical limitations of various reading terminals. | 02-07-2013
20100045680 | PERFORMANCE DRIVEN FACIAL ANIMATION - A method of animating a digital facial model, the method including: defining a plurality of action units; calibrating each action unit of the plurality of action units via an actor's performance; capturing first facial pose data; determining a plurality of weights, each weight of the plurality of weights uniquely corresponding to the each action unit, the plurality of weights characterizing a weighted combination of the plurality of action units, the weighted combination approximating the first facial pose data; generating a weighted activation by combining the results of applying the each weight to the each action unit; applying the weighted activation to the digital facial model; and recalibrating at least one action unit of the plurality of action units using input user adjustments to the weighted activation. | 02-25-2010
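
Aside: a minimal numpy sketch of the weighted-activation step in 20100045680, assuming each action unit is a displacement field over the face mesh vertices and the weights have already been solved; calibration and weight solving are not shown.

import numpy as np

def weighted_activation(neutral, action_units, weights):
    """neutral: (V, 3); action_units: (K, V, 3) displacements; weights: (K,)."""
    return neutral + np.tensordot(weights, action_units, axes=1)

neutral = np.zeros((4, 3))
units = np.stack([np.full((4, 3), 1.0), np.full((4, 3), -2.0)])
pose = weighted_activation(neutral, units, np.array([0.5, 0.25]))
print(pose[0])  # [0. 0. 0.] == 0.5*1 + 0.25*(-2) per coordinate
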
20100110082 | Web-Based Real-Time Animation Visualization, Creation, And Distribution - The subject matter disclosed herein provides methods and apparatus, including computer program products, for generating animations in real-time. In one aspect there is provided a method. The method may include generating an animation by selecting one or more clips, the clips configured to include a first state representing an introduction, a second state representing an action, and a third state representing an exit, the first state and the third state including substantially the same frame, such that the character appears in the same position in the frame, and providing the generated animation for presentation at a user interface. Related systems, apparatus, methods, and/or articles are also described. | 05-06-2010
20120262462 | PORTABLE ELECTRONIC DEVICE FOR DISPLAYING IMAGES AND METHOD OF OPERATION THEREOF - An electronic device having a display screen, one or more processors, and memory, and a method of operation thereof for displaying images is disclosed. The method comprises displaying a first image in a default position in a display area of the display screen. The method further comprises replacing the first image with a second image in the display area of the display screen, wherein the replacing of the first image by the second image is animated in the display area by the second image moving in from the edge of the display area to the default position and the first image simultaneously moving away from the default position with translational speed slower than that of the movement of the second image. The display screen may be a touch-sensitive display screen. The animation to replace the first image by the second image is initiated in response to a swipe gesture on the touch-sensitive display screen, and the speed of the animation may be a function of the speed of the swipe gesture. | 10-18-2012
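
Aside: a minimal Python sketch of the two-speed replacement animation in 20120262462, assuming a one-dimensional display area; the outgoing_speed_ratio parameter is an illustrative stand-in for "translational speed slower than that of the second image".

def animate_swap(width, steps, outgoing_speed_ratio=0.5):
    """Yield (first_image_x, second_image_x) per frame of the transition."""
    for i in range(steps + 1):
        p = i / steps
        second_x = width * (1.0 - p)                 # slides in from the edge to 0
        first_x = -width * outgoing_speed_ratio * p  # drifts away more slowly
        yield first_x, second_x

for first_x, second_x in animate_swap(width=320, steps=4):
    print(round(first_x), round(second_x))
# the second image covers 320 px while the first moves only 160 px
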
20130027409 | SYSTEMS AND METHODS FOR RESOURCE PLANNING USING ANIMATION - A system and method of presenting resource information for an entity includes receiving input data associated with the resource information of the entity, generating, by a computer, an animated representation of the resource information along one or more determined timelines employing a plurality of graphical characters based on the input data and displaying the animated representation. The creation of one simple animated visual language may reduce the mass confusion typically associated with the relation of resources including financial and other concepts, saving time and money and better educating those seeking recommendations and advice regarding resource planning, including financial planning. | 01-31-2013
20130027408 | Systems and Methods for Webpage Adaptive Rendering - This disclosure describes systems, methods, and apparatus for rendering animated images on a webpage. In particular, animated images that are visible are rendered as animations, whereas animated images that are not visible, those that can only be seen by scrolling the webpage, are rendered as a single static image until the webpage is scrolled such that these animated images are visible. At such point they can be rendered as animations. | 01-31-2013
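
Aside: a minimal Python sketch of the visibility-gated rendering in 20130027408, assuming each animated image knows its vertical extent on the page; offscreen images get a single static frame until scrolled into view.

def render_mode(image_top, image_height, scroll_y, viewport_height):
    visible = (image_top < scroll_y + viewport_height and
               image_top + image_height > scroll_y)
    return "animate" if visible else "static-first-frame"

print(render_mode(image_top=100, image_height=50, scroll_y=0,   viewport_height=600))  # animate
print(render_mode(image_top=900, image_height=50, scroll_y=0,   viewport_height=600))  # static-first-frame
print(render_mode(image_top=900, image_height=50, scroll_y=400, viewport_height=600))  # animate
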
20130027407 | FLUID DYNAMICS FRAMEWORK FOR ANIMATED SPECIAL EFFECTS - An animated special effect is modeled using a fluid dynamics framework system. The fluid dynamics framework for animated special effects system accepts volumetric data as input. Input volumetric data may represent the initial state of an animated special effect. Input volumetric data may also represent sources, sinks, external forces, and/or other influences on the animated special effect. In addition, the system accepts input parameters related to fluid dynamics modeling. The input volumes and parameters are applied to the incompressible Navier-Stokes equations as modifications to the initial state of the animated special effect, as modifications to the forcing term of a pressure equation, or in the computations of other types of forces that influence the solution. The input volumetric data may be composited with other volumetric data using a scalar blending field. The solution of the incompressible Navier-Stokes equations models the motion of the animated special effect. | 01-31-2013
20100066746 | Widgetized avatar and a method and system of creating and using same - A widgetized avatar and a method and system of creating and using same is disclosed. The avatar includes computing code that provides for addition of the avatar as non-static content to at least two unique at least partially static web pages, and secondary computing code resident within the computing code, wherein the secondary computing code provides for association with at least one other portion of the computing code of ones selected from a plurality of physical characteristics, a plurality of personal information, and a plurality of actions. | 03-18-2010
20130069957 | Method and Device for Playing Animation and Method and System for Displaying Animation Background - Embodiments of the present invention provide a method and device for playing an animation, belonging to the field of communication technology. The method includes obtaining a first attribute value of an animation object at the current moment when an audio signal is detected, and determining a second attribute value and a first speed value corresponding to the audio signal; taking the first attribute value and second attribute value respectively as a starting point and end point, and playing the animation object according to the first speed value; and stopping, when the audio signal stops, playing the animation object if the playing of the animation object does not end. The device includes an audio starting animation playing module and an audio ending animation playing module. In embodiments, the playing of the animation is achieved through detecting the audio signal and playing the animation object in combination with the audio signal, which achieves the effect of the animation and enriches the displaying effect. | 03-21-2013
20130069955 | Hierarchical Representation of Time - A method performed by a data processing apparatus, in which the method includes selecting an object in which the object represents input defining a drawing, each drawing comprising location coordinates and temporal coordinates, in which each location coordinate is associated with a respective temporal coordinate; associating the object with a respective clock in a hierarchy of clocks, each clock in the hierarchy having a respective rate of progression that is coupled to the rate of progression of one or more parent clocks in the hierarchy; and generating an animation by drawing the location coordinates according to the rate of progression of the clock associated with the object. Other embodiments of this aspect include corresponding computing platforms and computer program products. | 03-21-2013
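
Aside: a minimal Python sketch of the clock hierarchy in 20130069955, assuming each clock simply scales its parent's rate of progression; the Clock class and its two methods are illustrative.

class Clock:
    def __init__(self, rate=1.0, parent=None):
        self.rate, self.parent = rate, parent

    def effective_rate(self):
        # A clock's rate of progression is coupled to all of its ancestors.
        r, c = 1.0, self
        while c is not None:
            r *= c.rate
            c = c.parent
        return r

    def local_time(self, wall_time):
        return wall_time * self.effective_rate()

root = Clock(rate=1.0)
scene = Clock(rate=0.5, parent=root)      # scene runs at half speed
stroke = Clock(rate=2.0, parent=scene)    # stroke runs 2x its scene
print(stroke.local_time(10.0))            # 10.0: the two rates cancel out
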
20130069956 | Transforming Time-Based Drawings - A method performed by a data processing apparatus, in which the method includes determining multiple first temporal coordinates, while receiving input defining a drawing from an input device, the drawing including multiple first object location coordinates received during a time period, in which each first temporal coordinate is based on a time when a respective one of the first object coordinates was received, receiving an input defining an animation period, applying a transformation to the first temporal coordinates to provide multiple transformed temporal coordinates respectively corresponding to the first image location coordinates, and periodically generating, based on the animation period, an animation by drawing the first object location coordinates according to the respective transformed temporal coordinates. Other embodiments of this aspect include corresponding computing platforms and computer program products. | 03-21-2013
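
Aside: a minimal Python sketch of one plausible transformation for 20130069956: rescaling recorded timestamps so a drawing replays within a chosen animation period. The application covers transformations generally; this shows only linear retiming.

def retime(temporal_coords, animation_period):
    t0, t1 = min(temporal_coords), max(temporal_coords)
    span = (t1 - t0) or 1.0
    return [(t - t0) * animation_period / span for t in temporal_coords]

recorded = [12.0, 12.4, 13.0, 14.0]            # seconds, as captured
print(retime(recorded, animation_period=1.0))  # [0.0, 0.2, 0.5, 1.0]
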
20130069954 | Method of Transforming Time-Based Drawings and Apparatus for Performing the Same - A method performed by data processing apparatus, the method including: rendering a first object on a display, the first object having first location coordinates and first temporal coordinates, in which the first location coordinates define a first drawing and each first location coordinate is associated with a respective temporal coordinate; receiving input defining a second object, the second object having second location coordinates and second temporal coordinates, in which the second location coordinates define a second drawing and each second location coordinate is associated with a respective second temporal coordinate; applying a transformation to the first location coordinate(s) responsive to receiving each second location coordinate, based on a most recently received second location coordinate and generating an animation by rendering the transformed first location coordinate(s) on the display according to the respective first temporal coordinates. | 03-21-2013
20130088497 | MULTIPOINT OFFSET SAMPLING DEFORMATION - A skin deformation system for use in computer animation is disclosed. The skin deformation system accesses the skeleton structure of a computer generated character, and accesses a user's identification of features of the skeleton structure that may affect a skin deformation. The system also accesses the user's identification of a weighting strategy. Using the identified weighting strategy and identified features of the skeleton structure, the skin deformation system determines the degree to which each feature identified by the user may influence the deformation of a skin of the computer generated character. The skin deformation system may incorporate secondary operations including bulge, slide, scale and twist into the deformation of a skin. Information relating to a deformed skin may be stored by the skin deformation system so that the information may be used to produce a visual image for a viewer. | 04-11-2013
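
Aside: a minimal numpy sketch in the spirit of 20130088497, using plain linear blending of per-feature transforms as a stand-in for the application's weighting strategies; the secondary operations (bulge, slide, scale, twist) are not shown.

import numpy as np

def deform_vertex(v, transforms, weights):
    """v: (3,); transforms: list of (R (3,3), t (3,)) per feature; weights sum to 1."""
    out = np.zeros(3)
    for (R, t), w in zip(transforms, weights):
        out += w * (R @ v + t)   # each feature pulls the vertex by its weight
    return out

identity = (np.eye(3), np.zeros(3))
shift_x = (np.eye(3), np.array([2.0, 0.0, 0.0]))
print(deform_vertex(np.array([1.0, 1.0, 0.0]), [identity, shift_x], [0.75, 0.25]))
# [1.5 1.  0. ] : the vertex follows the moving feature by a quarter
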
20100134499 | STROKE-BASED ANIMATION CREATION - A method, apparatus, and computer-readable medium are provided that allow a user to easily generate and play back animation on a computing device. A user can use a mouse, stylus, or finger to draw a stroke indicating a path and speed with which a graphical object should be moved during animation playback. The graphical object may comprise a cartoon character, drawing, or other type of image. In a sequential mode, separate tracks are provided for each graphical object, and the objects move along tracks sequentially (one at a time). In a synchronous mode, graphical objects move along tracks concurrently. Different gestures can be automatically selected for the graphical object at each point along the track, allowing motion to be simulated visually. | 06-03-2010
20100265258 | Flame Image Sequencing Apparatus and Method - An imaging apparatus is disclosed that is designed to display a set of images on a screen for a fire. The apparatus includes a display screen, a controller capable of displaying a set of images to the display screen, and a masking member to prevent a viewer from viewing a part of the display screen. | 10-21-2010
20100039433 | VISUALIZATION EMPLOYING HEAT MAPS TO CONVEY QUALITY, PROGNOSTICS, OR DIAGNOSTICS INFORMATION - A visualization system for creating, displaying and animating overview and detail heat map displays for industrial automation. The visualization system connects the heat map displays to an interface component providing manual or automatic input data from an industrial process or an archive of historical industrial process input data. The animated heat map displays provide quality, prognostic or diagnostic information. | 02-18-2010
20100103178 | SPATIALLY-AWARE PROJECTION PEN - One embodiment of the present invention sets forth a technique for providing an end user with a digital pen embedded with a spatially-aware miniature projector for use in a design environment. Paper documents are augmented to allow a user to access additional information and computational tools through projected interfaces. Virtual ink may be managed in single and multi-user environments to enhance collaboration and data management. The spatially-aware projector pen provides end-users with dynamic visual feedback and improved interaction capabilities. | 04-29-2010
20120218274 | ELECTRONIC DEVICE, OPERATION CONTROL METHOD, AND STORAGE MEDIUM STORING OPERATION CONTROL PROGRAM - According to an aspect, an electronic device includes a display unit, a contact detecting unit, a housing, and a control unit. The display unit displays an image. The contact detecting unit detects a contact. The housing has a first face in which the display unit is provided and a second face in which the contact detecting unit is provided. When a contact operation is detected by the contact detecting unit while a first image is displayed on the display unit, the control unit causes the display unit to display a second image. | 08-30-2012
20110007078 | Creating Animations - Animation creation is described, for example, to enable children to create, record and play back stories. In an embodiment, one or more children are able to create animation components such as characters and backgrounds using a multi-touch panel display together with an image capture device. For example, a graphical user interface is provided at the multi-touch panel display to enable the animation components to be edited. In an example, children narrate a story whilst manipulating animation components using the multi-touch display panel and the sound and visual display is recorded. In embodiments image analysis is carried out automatically and used to autonomously modify story components during a narration. In examples, various types of handheld view-finding frames are provided for use with the image capture device. In embodiments saved stories can be restored from memory and retold from any point with different manipulations and narration. | 01-13-2011
20130057555 | Transition Animation Methods and Systems - An exemplary method includes a transition animation system detecting a screen size of a display screen associated with a computing device executing an application, automatically generating, based on the detected screen size, a plurality of animation step values each corresponding to a different animation step included in a plurality of animation steps that are to be involved in an animation of a transition of a user interface associated with the application into the display screen, and directing the computing device to perform the plurality of animation steps in accordance with the generated animation step values. Corresponding methods and systems are also disclosed. | 03-07-2013
20090195544 | SYSTEM AND METHOD FOR BLENDED ANIMATION ENABLING AN ANIMATED CHARACTER TO AIM AT ANY ARBITRARY POINT IN A VIRTUAL SPACE - A method for blended animation by providing a set of animation sequences associated with an animated character model is disclosed. In one embodiment, a geometric representation of a blend space is generated from the set of animation sequences using locator nodes associated with each animation sequence. A subset of animation sequences is selected from the set of animation sequences by casting a ray from a reference bone to a target through the geometric representation and selecting animation sequences that are geometrically close to the intersection of the cast ray and the geometric representation. A blend weight is determined for each member animation sequence in the selected subset of animation sequences. A blended animation is generated using the selected subset of animation sequences and the blend weights, then rendered to create a final animation. | 08-06-2009
20100097384 | PROGRAM DESIGNED MASTER ANIMATION AND METHOD FOR PRODUCING THEREOF - Disclosed is a PDMA animation production method including the steps of storing animation materials constituting an animation and information separately when a PDMA is produced, the animation materials including texts, graphics, movies, and audios; partitioning frame information as desired, the frame information being construction units of the animation; separating the partitioned frame information into respective information; storing animation information together with information regarding texts, graphics, movies, and audios constituting the animation while interworking with a DB program, the animation information including the frame information; interpreting information stored in the DB program by the PDMA; and retrieving animation sources matching with the interpreted information and combining corresponding data by the PDMA to play the animation. | 04-22-2010
20090267949 | Spline technique for 2D electronic game - A technique for generating splines in two dimensions for use in electronic game play is disclosed. The technique includes generating a computer graphic of a shape to be animated that is formed by one or more splines. The shape also includes at least one joint. When the position or orientation of the joint is changed, the orientation and/or position of the splines corresponding to the joint are changed resulting in changes to the shape. | 10-29-2009
20090267948 | OBJECT BASED AVATAR TRACKING - A computer implemented method, apparatus, and computer program product for object based avatar tracking. In one embodiment, a range for an object in a virtual universe is identified. The range comprises a viewable field of the object. Avatars in the viewable field of the object are capable of viewing the object. Avatars outside the viewable field of the object are incapable of viewing the object. In response to an avatar coming within the range of the object, an object avatar rendering table is queried for a session associated with the avatar unique identifier and the object unique identifier. The object avatar rendering table comprises a unique identifier of a set of selected objects and unique identifiers for each avatar in a range of a selected object in the set of selected objects. An object initiation process associated with the object is triggered. | 10-29-2009
20120223953 | Kinematic Engine for Adaptive Locomotive Control in Computer Simulations - An adaptive locomotion control system is used within the physics processing of a computer simulation engine. The control system is applied to one or more ragdoll models which represent entities in a computer simulation. The control system applies state-detection, equation-of-motion, and applied-force functions to maintain the model's balance while standing still and while executing simple or complex movements. In one embodiment, the functions manipulate the model in a manner similar to the muscles of the modeled organism, particularly a human. In another embodiment, the functions apply spot forces to keep the model upright and to perform movements. | 09-06-2012
20120223952 | Information Processing Device Capable of Displaying A Character Representing A User, and Information Processing Method Thereof - The basic image specifying unit specifies the basic image of a character representing a user of the information processing device. The facial expression parameter generating unit converts the degree of the facial expression of the user to a numerical value. The model control unit determines an output model of the character for respective points of time. The moving image parameter generating unit generates a moving image parameter for generating animated moving image frames of the character for respective points of time. The command specifying unit specifies a command corresponding to the pattern of the facial expression of the user. The playback unit outputs an image based on the moving image parameter and the voice data received from the information processing device of the other user. The command executing unit executes a command based on the identification information of the command. | 09-06-2012
20130063446 | Scenario Based Animation Library - Various embodiments provide a library of animation descriptions based upon various common user interface scenarios. Application developers can query the animation library for animations based on a user's interaction with the user interface. The library defines usage of transformation primitives, storyboarding of the transformation primitives and associated timing functions that are used to create particular animations. These definitions can be provided to a calling application so that the application can implement an animation that utilizes the storyboarded transformation primitives. | 03-14-2013
20130063448 | Aligning Script Animations with Display Refresh - Various embodiments align callbacks to a scripting component that enable the scripting component to update animation, with a system's refresh notifications. Specifically, an application program interface (API) is provided and implemented in a manner that generates and issues a callback to the scripting component when the system receives a refresh notification. This provides the scripting component with a desirable amount of time to run before the next refresh notification. | 03-14-2013
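
Aside: a minimal Python sketch of refresh-aligned callbacks as in 20130063448, assuming a simulated 60 Hz refresh; RefreshScheduler mirrors the requestAnimationFrame-style pattern of registering a callback that runs right after each refresh notification.

import time

class RefreshScheduler:
    def __init__(self, hz=60):
        self.interval = 1.0 / hz
        self.callbacks = []

    def request_frame(self, cb):
        self.callbacks.append(cb)          # the scripting component registers here

    def run(self, frames):
        for _ in range(frames):
            time.sleep(self.interval)      # stand-in for the refresh notification
            pending, self.callbacks = self.callbacks, []
            for cb in pending:
                cb(time.monotonic())       # script updates its animation state

sched = RefreshScheduler()
def tick(ts):
    print("animating at", round(ts, 3))
    sched.request_frame(tick)              # re-register, as rAF-style loops do

sched.request_frame(tick)
sched.run(frames=3)
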
20130063444 | Aligning Script Animations with Display Refresh - Various embodiments align callbacks to a scripting component that enable the scripting component to update animation, with a system's refresh notifications. Specifically, an application program interface (API) is provided and implemented in a manner that generates and issues a callback to the scripting component when the system receives a refresh notification. This provides the scripting component with a desirable amount of time to run before the next refresh notification. | 03-14-2013
20130063443 | Tile Cache - Tile cache techniques are described. In at least some embodiments, a tile cache is maintained that stores tile content for a plurality of tiles. The tile content is ordered in the tile cache to match a visual order of tiles in a graphical user interface. When tiles are moved (e.g., panned and/or scrolled) in the graphical user interface, tile content can be retrieved from the tile cache and displayed. | 03-14-2013
20130063445 | Composition System Thread - Composition system thread techniques are described. In one or more implementations, a composition system may be configured to compose visual elements received from applications on a thread that is executed separately than a user interface thread of the applications. As such, the composition system may execute asynchronously from a user interface thread of the application. Additionally, the composition system may be configured to expose one or more application programming interfaces (APIs) that are accessible to the applications. The APIs may be used for constructing a tree of objects representing the operations that are to be performed to compose one or more bitmaps. Further, these operations may be controlled by several API visual properties to allow applications to animate content within their windows and use disparate technologies to rasterize such content. | 03-14-2013
20130063447 | INFORMATION PROCESSING DEVICE, IMAGE TRANSMISSION METHOD AND IMAGE TRANSMISSION PROGRAM - An information processing device includes a memory; and a processor coupled to the memory, wherein the processor executes a process comprising: drawing an image representing a processing result based on software into an image memory; identifying a high-frequency change area; animating an image of the high-frequency change area; adding time information to an image of a change area having a change or the image of the high-frequency change area animated, and transmitting the image to a terminal device; receiving the time information from the terminal device; determining based on a difference between the received time information and a reception time of the time information whether image drawing delay occurs; and starting an animation of the change area when the image drawing delay occurs and the animation is not being executed, or changing an image transmission interval when the image drawing delay occurs and the animation is being executed. | 03-14-2013
20090237411 | Lightweight Three-Dimensional Display - A computer-implemented imaging process method includes generating a progression of images of a three-dimensional model and saving the images at a determined location, generating mark-up code for displaying image manipulation controls and for permitting display of the progression of images in response to user interaction with the image manipulation controls, and providing the images and mark-up code for use by a third-party application. | 09-24-2009
20090051691 | IMAGE DISPLAY APPARATUS - An object of the present invention is to be able to output moving image data that enables a desired image group partially included in all intra-subject images to be played as a moving image. An image display apparatus according to the present invention includes an image display function of displaying a series of images obtained by picking up an interior of a digestive canal of a subject in time series, and includes an input unit … | 02-26-2009
20090009520 | Animation Method Using an Animation Graph - A method of animating a scene graph (M), which comprises steps for: creating (…) | 01-08-2009
20120113126 | Device, Method, and Graphical User Interface for Manipulating Soft Keyboards - A method includes, at an electronic device with a display and a touch-sensitive surface: concurrently displaying a first text entry area and an unsplit keyboard on the display; detecting a gesture on the touch-sensitive surface; and, in response to detecting the gesture on the touch-sensitive surface, replacing the unsplit keyboard with an integrated input area. The integrated input area includes a left portion with a left side of a split keyboard, a right portion with a right side of the split keyboard, and a center portion in between the left portion and the right portion. | 05-10-2012
20120113125 | CONSTRAINT SYSTEMS AND METHODS FOR MANIPULATING NON-HIERARCHICAL OBJECTS - Methods and apparatus for animating images using bidirectional constraints are described. | 05-10-2012
20120236008 | IMAGE GENERATING APPARATUS AND IMAGE GENERATING METHOD - Disclosed are an image generating apparatus and an image generating method with which the size of a storage region required to display animated images can be suppressed. In the image generating apparatus (…) | 09-20-2012
20120236007 | ANIMATION RENDERING DEVICE, ANIMATION RENDERING PROGRAM, AND ANIMATION RENDERING METHOD - An interpreter … | 09-20-2012
20130162653 | Creating Animations - Animation creation is described, for example, to enable children to create, record and play back stories. In an embodiment, one or more children are able to create animation components such as characters and backgrounds using a multi-touch panel display together with an image capture device. For example, a graphical user interface is provided at the multi-touch panel display to enable the animation components to be edited. In an example, children narrate a story whilst manipulating animation components using the multi-touch display panel and the sound and visual display is recorded. In embodiments image analysis is carried out automatically and used to autonomously modify story components during a narration. In examples, various types of handheld view-finding frames are provided for use with the image capture device. In embodiments saved stories can be restored from memory and retold from any point with different manipulations and narration. | 06-27-2013
20110279461 | SPAWNING PROJECTED AVATARS IN A VIRTUAL UNIVERSE - The present invention provides a computer implemented method and apparatus to project a projected avatar associated with an avatar in a virtual universe. A computer receives a command to project the avatar, the command having a projection point. The computer transmits a request to place a projected avatar at the projection point to a virtual universe host. The computer renders a tab associated with the projected avatar. | 11-17-2011
20100007665 | Do-It-Yourself Photo Realistic Talking Head Creation System and Method - A do-it-yourself photo realistic talking head creation system comprising: a template; handheld device comprising display and video camera having an image output signal of a subject; a computer having a mixer program for mixing the template and image output signal of the subject into a composite image, and an output signal representational of the composite image; a computer adapted to communicate the composite image signal to the display for display to the subject as a composite image; the display and the video camera adapted to allow the video camera to collect the image of the subject, the subject to view the composite image, and the subject to align the image of the subject with the template; storage means having an input for receiving the output signal of the video camera representational of the collected image of the subject and storing the image of the subject substantially aligned with the template. | 01-14-2010
20130009963 | GRAPHICAL DISPLAY OF DATA WITH ANIMATION - An animated graphic transition is displayed to represent a data difference between data sets. A plurality of data sets is provided, and a user is presented with a plurality of options that includes selection of one or more data sets. A user selection from the plurality of options is detected. Displayed are a first graphic element that represents data in one data set and a second graphic element that represents data in another data set. An animated graphic transition is displayed in conjunction with the first and second graphic elements to represent a data difference between the selected data sets. | 01-10-2013
20110298809 | ANIMATION EDITING DEVICE, ANIMATION PLAYBACK DEVICE AND ANIMATION EDITING METHOD - An animation editing device includes animation data including time line data that defines frames on the basis of a time line showing temporal display order of the frames, and space line data that defines frames on the basis of a space line for showing a relative positional relationship between a display position of each of animation parts and a reference position shown by a tag by mapping the relative positional relationship onto a one-dimensional straight line, displays the time line and the space line, and the contents of the frames based on the time line and the space line, and accepts an editing command to perform an editing process according to the inputted editing command. | 12-08-2011
20110298808 | Animated Vehicle Attendance Systems - In one embodiment, an animated vehicle attendant system may include: a communication path positioned within a vehicle; an avatar creation interface positioned within a passenger compartment of the vehicle and communicatively coupled to the communication path; a first display positioned within the passenger compartment of the vehicle and communicatively coupled to the communication path, wherein the first display includes a first processor and a first memory; a second display positioned within the passenger compartment of the vehicle and communicatively coupled to the communication path, wherein the second display includes a second processor and a second memory; and an animated avatar including one or more alterable visual characteristics. The animated avatar is stored in the first memory and/or the second memory. The first processor and/or the second processor executes machine readable instructions to: receive input from the avatar creation interface; update the one or more alterable visual characteristics based upon the input from the avatar creation interface; and present the animated avatar on the first display and/or the second display. | 12-08-2011
20110285727 | ANIMATION TRANSITION ENGINE - A method that facilitates smoothly animating content of a graphical user interface includes acts of receiving a description of a first virtual scene and receiving a description of a second virtual scene. The method also includes an act of causing an animated transition to be displayed on a display screen of a computing device between the first virtual scene and the second virtual scene at a graphical object level based at least in part upon the description of the first virtual scene and the description of the second virtual scene, wherein the animated transition at the graphical object level is an animated change of a graphical object between the first virtual scene and the second virtual scene. | 11-24-2011
20120105458 | MULTIMEDIA INTERFACE PROGRESSION BAR - A method comprising displaying a display area adapted to display a multimedia element, displaying a progression bar adapted to represent at least a portion of a duration of the multimedia element, and providing an indicator associated with the progression bar, the indicator being displayable at a timely position along the at least a portion of a duration of the multimedia element and is further adapted to timely enable an action when the multimedia element is played, the display bar includes a movable play position indicator and wherein the action is enabled on a basis of a proximity between the indicator and the play position indicator, the action encompassing timely displaying an image. | 05-03-2012
20120105455 | UTILIZING DOCUMENT STRUCTURE FOR ANIMATED PAGINATION - In general, this disclosure describes techniques for visually emphasizing information displayed on a computing device. In one example, a method that includes receiving a first portion of a document for display by the computing device, the first portion of the document including multiple elements separated by one or more delimiters. The method further includes dividing the multiple elements into a first set of one or more elements, each of which is displayable in its entirety at a time of display of the first portion of the document, and a second set of at least one element, the at least one element not displayable in its entirety at the time of display of the first portion of the document. The method further includes generating for display the first portion of the document, including visually emphasizing the first set of elements with respect to the second set of elements. | 05-03-2012
20120098837 | APPARATUS FOR AUGMENTING A HANDHELD DEVICE - Apparatus and system that enables a handheld multimedia device (e.g. an mp3 player, such as the Apple™ iPod™ Touch, or a mobile telephone, such as the Apple™ iPhone™, or any other device that includes one or more multimedia technologies such as a display screen, touch input, video, audio and networking capabilities) to be adapted, both in software and physically, so as to be used for a new or enhanced purpose. | 04-26-2012
20090278851 | METHOD AND SYSTEM FOR ANIMATING AN AVATAR IN REAL TIME USING THE VOICE OF A SPEAKER - This is a method and a system for animating on a screen (…) | 11-12-2009
20110292054 | System and Method for Low Bandwidth Image Transmission - An image transmission method (and related system) for obtaining data of a local subject and processing the data of the local subject to fit a local model of at least a region of the local subject and extract parameters of the local model to capture features of the region of the local subject. The method (and related system) may also include obtaining data of at least one remote subject and processing the data of the remote subject to fit a remote model of at least one region of the remote subject and extract parameters of the remote model to capture features of the region of the remote subject. The method (and related system) may also include transmitting the extracted parameters of the local region to a remote processor and reconstructing the local image based on the extracted parameters of the local region and the extracted parameters of the remote region. | 12-01-2011
20110292053 | PLACEMENT OF ANIMATED ELEMENTS USING VECTOR FIELDS - The placement of one animated element in a virtualized three-dimensional environment can be accomplished with reference to a second animated element and a vector field derived from the relationship thereof. If the first animated element is “inside” the second animated element after the second one was moved to a new animation frame, an existing vector field can be calculated for the region where it is “inside”. The vector field can comprise vectors that can have a direction and magnitude commensurate with the initial velocity and direction required to move the first animated element back outside of the second one. Movement of the first animated element can then be simulated in accordance with the vector field and afterwards a determination can be made whether any portion still remains inside. Such an iterative process can move and place the first animation element prior to the next move of the second animation element. | 12-01-2011
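
Aside: a minimal Python sketch of the separation idea in 20110292053 for a circular second element, assuming the vector field points from the element's center outward with magnitude equal to the penetration depth; the loop mirrors the iterative move-and-recheck process.

import math

def separation_vector(point, center, radius):
    """Vector that moves `point` just outside the circle, or None if outside."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    dist = math.hypot(dx, dy)
    if dist >= radius:
        return None                         # already outside, no field needed
    depth = radius - dist
    nx, ny = (dx / dist, dy / dist) if dist else (1.0, 0.0)
    return (nx * depth, ny * depth)

def push_outside(point, center, radius, steps=10):
    for _ in range(steps):                  # simulate, then recheck containment
        v = separation_vector(point, center, radius)
        if v is None:
            return point
        point = (point[0] + v[0], point[1] + v[1])
    return point

print(push_outside((0.5, 0.0), center=(0.0, 0.0), radius=2.0))  # (2.0, 0.0), on the boundary
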
20100033488Example-Based Motion Detail Enrichment in Real-Time - An approach to enrich skeleton-driven animations with physically-based secondary deformation in real time is described. To achieve this goal, the technique described employs a surface-based deformable model that can interactively emulate the dynamics of both low- and high-frequency volumetric effects. Given a surface mesh and a few sample sequences of its physical behavior, a set of motion parameters of the material are learned during an off-line preprocessing step. The deformable model is then applicable to any given skeleton-driven animation of the surface mesh. Additionally, the described dynamic skinning technique can be entirely implemented on GPUs and executed with great efficiency. Thus, with minimal changes to the conventional graphics pipeline, the technique can drastically enhance the visual experience of skeleton-driven animations by adding secondary deformation in real time.02-11-2010
20110216076APPARATUS AND METHOD FOR PROVIDING ANIMATION EFFECT IN PORTABLE TERMINAL - An apparatus and method for providing a highly-realistic animation effect in a portable terminal. An apparatus for providing an animation effect in a portable terminal includes an animation processing unit increasing the realism of an animation by performing a composite animation scheme that continuously processes a key frame animation, which is represented in a fixed pattern, and a physical animation that is realistically represented according to peripheral environments. The apparatus also includes a display unit displaying an animation played by the animation processing unit.09-08-2011
20110216075INFORMATION PROCESSING APPARATUS AND METHOD, AND PROGRAM - An information processing apparatus includes a detection unit configured to detect a gesture made by a user, a recognition unit configured to recognize a type of the gesture detected by the detection unit, a control unit configured to control operation of a first application and a second application, and an output unit configured to output information of the first application or the second application. If the gesture is recognized by the recognition unit while the control unit is controlling the operation of the first application in the foreground, the control unit controls the operation of the second application operating in the background of the first application on the basis of the type of the gesture recognized by the recognition unit.09-08-2011
20090189906SCRIPT CONTROL FOR GAIT ANIMATION IN A SCENE GENERATED BY A COMPUTER RENDERING ENGINE - A system for controlling a rendering engine by using specialized commands. The commands are used to generate a production, such as a television show, at an end-user's computer that executes the rendering engine. In one embodiment, the commands are sent over a network, such as the Internet, to achieve broadcasts of video programs at very high compression and efficiency. Commands for setting and moving camera viewpoints, animating characters, and defining or controlling scenes and sounds are described. At a fine level of control, math models and coordinate systems can be used to make specifications. At a coarse level of control, the command language approaches the text format traditionally used in television or movie scripts. Simple names for objects within a scene are used to identify items, directions and paths. Commands are further simplified by having the rendering engine use defaults when specifications are left out. For example, when a camera direction is not specified, the system assumes that the viewpoint is to be the current action area. The system provides a hierarchy of detail levels. Movement commands can be defaulted or specified. Synchronized speech can be specified as digital audio or as text which is used to synthesize the speech.07-30-2009
20090167768Selective frame rate display of a 3D object - Systems and methods are discussed for performing 3D animation of an object using limited hardware resources. When an object is rotated, the size of the object displayed progressively increases, thus taking up more memory, CPU, and other hardware resources. To limit the impact on resources as an object becomes larger, the electronic device may select to display more small frames of the object at a higher frame rate, and fewer large frames at a lower frame rate, thus providing a uniform 3D animation.07-02-2009
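A toy sketch of the size-versus-rate trade-off described above, assuming a fixed per-second pixel budget and linear scaling; the budget and clamp values are invented for the example.

    def frame_rate_for_size(width, height, pixel_budget=2_000_000,
                            max_fps=60, min_fps=10):
        # Choose a frame rate so width * height * fps stays within budget:
        # small renderings of the rotating object get many frames per
        # second, large renderings get fewer.
        fps = pixel_budget / (width * height)
        return max(min_fps, min(max_fps, int(fps)))

    for size in (100, 400, 800, 1600):
        print(size, frame_rate_for_size(size, size))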
20090167766ADVERTISING REVENUE SHARING - Technologies are described herein for sharing advertisement revenue. An advertiser-generated avatar is provided to a first participant. The advertiser-generated avatar may be associated with an advertisement. Further, the first participant may be associated with a current avatar. The current avatar is replaced with the advertiser-generated avatar. While the first participant is associated with the advertiser-generated avatar, a level of interaction between the first participant and other participants is monitored. An amount of compensation to provide the first participant is determined based on the level of interaction between the first participant and the other participants. The compensation is provided to the first participant.07-02-2009
20090141030SYSTEM AND METHOD FOR MULTILEVEL SIMULATION OF ANIMATION CLOTH AND COMPUTER-READABLE RECORDING MEDIUM THEREOF - A system for multilevel simulation of an animation cloth is provided. The system includes a multilevel area generation module, a curvature calculation module, a curvature comparison module, and a dynamic simulation module. The multilevel area generation module divides a plurality of grid units of the animation cloth into a plurality of level sub-areas based on a multilevel technique, wherein each of the level sub-areas is generated by dividing an upper level sub-area. The curvature calculation module calculates the curvatures of the level sub-areas according to the plane vectors of the grid units in a frame. The curvature comparison module compares the curvatures of the level sub-areas with a flatness threshold. The dynamic simulation module calculates the plane vector of each grid unit in a next frame through different methods according to the comparison result of the curvature comparison module.06-04-2009
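A rough Python/NumPy sketch of the recursive flat-versus-curved classification; the normal-spread curvature measure and the index-halving split are stand-ins for the application's grid-based level sub-areas.

    import numpy as np

    def curvature(normals):
        # Spread of the grid units' plane normals within a sub-area;
        # 0 means the sub-area is perfectly flat.
        mean = normals.mean(axis=0)
        mean /= np.linalg.norm(mean)
        return float(np.mean(1.0 - normals @ mean))

    def classify(normals, indices, flatness=0.01, min_units=4):
        # Flat sub-areas can be advanced with one cheap planar update;
        # curved ones are split again and eventually simulated per unit.
        if len(indices) <= min_units or curvature(normals[indices]) < flatness:
            return [("coarse", list(indices))]
        mid = len(indices) // 2
        return (classify(normals, indices[:mid], flatness, min_units)
                + classify(normals, indices[mid:], flatness, min_units))

    rng = np.random.default_rng(0)
    n = rng.normal(size=(64, 3))
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    print(len(classify(n, np.arange(64))))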
20090147010GENERATION OF VIDEO - An apparatus and a method are provided for generating video data derived from the execution of a computer program. In a first mode, the apparatus is operable to (a) execute a computer program comprising one or more components executed in a sequence of execution frames, each execution frame having a given state; and (b) record video data comprising a sequence of video data frames corresponding to the sequence of execution frames. In a second mode, the apparatus is operable to (c) process video data which have been recorded during the previous execution of the program, to allow a visualization of the execution of that program; and (d) allow a user, at any frame of the sequence of video data frames, to change the mode to the first mode and to obtain from the video data the state of the corresponding execution frame of the program.06-11-2009
20080266299METHOD FOR PREDICTIVELY SPLITTING PROCEDURALLY GENERATED PARTICLE DATA INTO SCREEN-SPACE BOXES - A method for use in rendering includes receiving an input particle system, an instancing program, and a number indicating a maximum number of particles to be stored in memory, providing an input particle count representative of at least a portion of the input particle system to at least one operator for the instancing program, running the at least one operator in a prediction mode to generate an output particle count, comparing the output particle count to the number indicating a maximum number of particles to be stored in memory, and spatially splitting a bounding box representative of the input particle count in response to the output particle count being greater than the number indicating a maximum number of particles to be stored in memory.10-30-2008
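One plausible reading of the predict-and-split loop as code, sketched in Python; the assumption that each spatial half receives half of the input particles, and the x-only split, are simplifications for illustration.

    def split_boxes(box, count, predict, max_particles):
        # Run the instancing operator in prediction mode (counts only);
        # split the screen-space box whenever the predicted output count
        # exceeds the memory budget.
        if predict(count) <= max_particles:
            return [(box, predict(count))]
        x0, y0, x1, y1 = box
        xm = (x0 + x1) / 2.0
        return (split_boxes((x0, y0, xm, y1), count // 2, predict, max_particles)
                + split_boxes((xm, y0, x1, y1), count - count // 2,
                              predict, max_particles))

    # e.g. an instancer that emits ten output particles per input particle
    print(split_boxes((0, 0, 1, 1), 1000, lambda n: n * 10, max_particles=4000))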
20100085363Photo Realistic Talking Head Creation, Content Creation, and Distribution System and Method - A system and method for creating, distributing, and viewing photo-realistic talking head based multimedia content over a network, comprising a server and a variety of communication devices, including cell phones and other portable wireless devices, and a software suite, that enables users to communicate with each other through creation, use, and sharing of multimedia content, including photo-realistic talking head animations combined with text, audio, photo, and video content. Content is uploaded to at least one remote server, and accessed via a broad range of devices, such as cell phones, desktop computers, laptop computers, personal digital assistants, and cellular smartphones. Shows comprising the content may be viewed with a media player in various environments, such as internet social networking sites and chat rooms via a web browser application, or applications integrated into the operating systems of the digital devices, and distributed via the internet, cellular wireless networks, and other suitable networks.04-08-2010
20120293517SYSTEM AND METHOD FOR VIDEO CHOREOGRAPHY - An electronic entertainment system for creating a video sequence by executing video game camera behavior based upon a video game sound file includes a memory configured to store an action event/camera behavior (AE/CB) database, game software such as an action generator module, and one or more sound files. In addition, the system includes a sound processing unit coupled to the memory for processing a selected sound file, and a processor coupled to the memory and the sound processing unit. The processor randomly selects an AE pointer and a CB pointer from the AE/CB database. Upon selection of the CB pointer and the AE pointer, the action generator executes camera behavior corresponding to the selected CB pointer to view an action event corresponding to the selected AE pointer.11-22-2012
20130187929VISUAL REPRESENTATION EXPRESSION BASED ON PLAYER EXPRESSION - Using facial recognition and gesture/body posture recognition techniques, a system can naturally convey the emotions and attitudes of a user via the user's visual representation. Techniques may comprise customizing a visual representation of a user based on detectable characteristics, deducing a user's temperament from the detectable characteristics, and applying attributes indicative of the temperament to the visual representation in real time. Techniques may also comprise processing changes to the user's characteristics in the physical space and updating the visual representation in real time. For example, the system may track a user's facial expressions and body movements to identify a temperament and then apply attributes indicative of that temperament to the visual representation. Thus, a visual representation of a user, such as an avatar or fanciful character, can reflect the user's expressions and moods in real time.07-25-2013
20100079466ASYNCHRONOUS STREAMING OF DATA FOR VALIDATION - The present invention relates to computer capture of object motion. More specifically, embodiments of the present invention relate to capturing of facial movement or performance of an actor. Embodiments of the present invention provide a head-mounted camera system that allows the movements of an actor's face to be captured separately from, but simultaneously with, the movements of the actor's body. In some embodiments of the present invention, a method of motion capture of an actor's performance is provided. A self-contained system is provided for recording the data, which is free of tethers or other hard-wiring, is remotely operated by a motion-capture team, without any intervention by the actor wearing the device. Embodiments of the present invention also provide a method of validating that usable data is being acquired and recorded by the remote system.04-01-2010
20090213123Method of using skeletal animation data to ascertain risk in a surveillance system - The present invention discloses a method of surveillance comprising the steps of matching skeletal animation data representative of recorded motion to a pre-defined animation. The pre-defined animation is associated with a risk value. An end-user is also provided with at least the recorded motion as well as a risk value. The method may be carried out in real time and the skeletal animation data may be three-dimensional.08-27-2009
20100123723SYSTEM AND METHOD FOR DEPENDENCY GRAPH EVALUATION FOR ANIMATION - Aspects include systems, devices, and methods for evaluating a source dependency graph and animation curve with a game system. The dependency graph may be evaluated at an interactive rate during execution of a game. The animation curve may describe change in a state of a control element over time. Subnetworks of the dependency graph may be identified and evaluated using a plurality of processors.05-20-2010
20100123722SYSTEMS AND METHODS INVOLVING GRAPHICALLY DISPLAYING CONTROL SYSTEMS - A method for displaying a control system comprising: receiving a function block diagram file including a function block having an associated logic function, receiving an animation instruction associated with the function block, receiving system data from a system controller, receiving a first graphic associated with the logic function from a function block library, processing the first graphic and the system data according to the animation instruction to render an updated first graphic reflecting the system data, and displaying the function block and the rendered updated first graphic associated with the logic function.05-20-2010
20080303828Web-based animation - Approaches providing web-based animations using tools and techniques that take into account the limited capabilities and resources available in the web environment are disclosed. In some embodiments, such web-based animations are implemented in JavaScript.12-11-2008
20090284533Method of rendering an image and a method of animating a graphics character - A computer implemented method of generating behavior of a graphics character within an environment including a selected graphics character and one or more graphics elements, the method comprising: generating an image of the environment from a perspective of the selected graphics character; processing the image using an artificial intelligence engine with one or more layers to determine an activation value for the graphics character wherein at least one of the layers is a fuzzy processing layer, and generating the behavior of the graphics character based on the activation value.11-19-2009
20090262119OPTIMIZATION OF TIME-CRITICAL SOFTWARE COMPONENTS FOR REAL-TIME INTERACTIVE APPLICATIONS - A method for optimizing the performance of a time-critical computation component for a real-time interactive application includes the use of an algorithm having a precise logical thread and at least one fast estimation logical thread. The computation errors generated by the fast estimation thread are imperceptible by humans and are frame specific such that the errors are corrected within a graphical frame's time by the data from the precise logical thread.10-22-2009
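A single-threaded Python sketch of the idea: a cheap estimator drives the current frame while the precise computation corrects the state before the next frame, so each frame's error stays frame-specific instead of accumulating. The toy decay dynamics and substep counts are invented, and real implementations would run the two computations on separate threads.

    def precise_step(x, dt, substeps=1000):
        # Expensive reference computation (many substeps).
        for _ in range(substeps):
            x += (dt / substeps) * (-x)   # toy dynamics: exponential decay
        return x

    def fast_step(x, dt):
        # One-substep estimate: cheap and slightly wrong.
        return x + dt * (-x)

    x_true = 1.0
    for frame in range(5):
        x_shown = fast_step(x_true, 0.1)    # displayed this frame
        x_true = precise_step(x_true, 0.1)  # precise result corrects the state
        print(frame, round(x_shown, 5), round(x_true, 5))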
20090262118METHOD, SYSTEM AND STORAGE DEVICE FOR CREATING, MANIPULATING AND TRANSFORMING ANIMATION - An animation method, system, and storage device which takes animators' submissions of characters and animations and breaks the animations into segments where discontinuities will be minimized; allows users to assemble the segments into new animations; allows users to apply modifiers to the characters; provides a semantic restraint system for virtual objects; and provides automatic character animation retargeting.10-22-2009
20090262117Displaying traffic flow data representing traffic conditions - An article of manufacture for displaying traffic flow data representing traffic conditions on a road system includes creating a graphical map of the road system which includes one or more segments. The status of each segment on the graphical map is determined such that the status of each segment corresponds to the traffic flow data associated with that segment. An animated traffic flow map of the road system is created by combining the graphical map and the status of each segment.10-22-2009
20090262116MULTI-LAYERED SLIDE TRANSITIONS - Architecture that enhances the visual experience of a slide presentation by animating slide content as “actors” in the same background “scene”. This is provided by multi-layered transitions between slides, where a slide is first separated into “layers” (e.g., with a level of transparency). Each layer can then be transitioned independently. All layers are composited together to accomplish the end effect. The layers can comprise one or more content layers, and a background layer. The background layer can further be separated into a background graphics layer and a background fill layer. The transition phase can include a transition effect such as a fade, a wipe, a dissolve effect, and other desired effects. To provide continuity and uniformity of presentation of the content on the same background scene, a transition effect is not applied to the background layer.10-22-2009
20090295806Dynamic Scene Descriptor Method and Apparatus - A method for rendering a frame of animation includes retrieving scene descriptor data that specifies at least one object, wherein the object is associated with a first database query, wherein the first database query is associated with a first rendering option, receiving a selection of the first rendering option or a second rendering option, querying a database with the first database query and receiving a first representation of the object from a database when the selection is of the first rendering option, loading the first representation of the object into computer memory when the selection is of the first rendering option, and rendering the object for the frame of animation using the first representation of the object when the selection is of the first rendering option, wherein the first representation of the object is not loaded into computer memory when the selection is of the second rendering option.12-03-2009
20100201692User Interface for Controlling Animation of an Object - A user can control the animation of an object via an interface that includes a control area and a user-manipulable control element. In one embodiment, the control area includes an ellipse, and the user-manipulable control element includes an arrow. In another embodiment, the control area includes an ellipse, and the user-manipulable control element includes two points on the circumference of the ellipse. In yet another embodiment, the control area includes a first rectangle, and the user-manipulable control element includes a second rectangle. In yet another embodiment, the user-manipulable control element includes two triangular regions, and the control area includes an area separating the two regions.08-12-2010
20100201691SHADER-BASED FINITE STATE MACHINE FRAME DETECTION - Embodiments for shader-based finite state machine frame detection for implementing alternative graphical processing on an animation scenario are disclosed. In accordance with one embodiment, the embodiment includes assigning an identifier to each shader used to render animation scenarios. The embodiment also includes defining a finite state machine for a key frame in each of the animation scenarios, with each finite state machine representing a plurality of shaders that renders the key frame in each animation scenario. The embodiment further includes deriving a shader ID sequence for each finite state machine based on the identifier assigned to each shader. The embodiment additionally includes comparing an input shader ID sequence of a new frame of a new animation scenario to each of the derived shader ID sequences. Finally, the embodiment includes executing alternative graphics processing on the new animation scenario when the input shader ID sequence matches one of the derived shader ID sequences.08-12-2010
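A compact Python sketch of the sequence matching; the shader names and assigned IDs are hypothetical.

    # IDs assigned to each shader used by the known animation scenarios.
    SHADER_IDS = {"skin": 0, "hair": 1, "cloth": 2, "glow": 3}

    def id_sequence(shaders):
        # Derive the shader ID sequence for a frame's shader chain.
        return tuple(SHADER_IDS[s] for s in shaders)

    # One derived sequence per key-frame finite state machine.
    KNOWN = {id_sequence(("skin", "hair", "cloth")),
             id_sequence(("skin", "glow"))}

    def needs_alternative_processing(frame_shaders):
        # Alternative graphics processing triggers on a sequence match.
        return id_sequence(frame_shaders) in KNOWN

    print(needs_alternative_processing(("skin", "hair", "cloth")))  # True
    print(needs_alternative_processing(("hair", "skin")))           # False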
20110205233Constraint-Based Ordering for Temporal Coherence of Stroke Based Animation - A renderer allows for a flexible and temporally coherent ordering of strokes in the context of stroke-based animation. The relative order of the strokes is specified by the artist or inferred from geometric properties of the scene, such as occlusion, for each frame of a sequence, as a set of stroke pair-wise constraints. Using the received constraints, the strokes are partially ordered for each of the frames. Based on these partial orderings, for each frame, a permutation of the strokes is selected amongst the ones consistent with the frame's partial order, so as to globally improve the perceived temporal coherence of the animation. The sequence of frames can then, for instance, be rendered by ordering the strokes according to the selected set of permutations for the sequence of frames.08-25-2011
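A greedy Python sketch of one way to realize this: Kahn's topological sort over the pair-wise constraints, breaking ties by the previous frame's order so the permutation stays close to it. The application selects permutations to globally improve perceived coherence; this per-frame greedy tie-break is a simplification.

    import heapq

    def order_strokes(strokes, constraints, prev_order):
        # constraints: (a, b) pairs meaning stroke a is drawn before b.
        rank = {s: i for i, s in enumerate(prev_order)}
        succ = {s: [] for s in strokes}
        indeg = {s: 0 for s in strokes}
        for a, b in constraints:
            succ[a].append(b)
            indeg[b] += 1
        ready = [(rank[s], s) for s in strokes if indeg[s] == 0]
        heapq.heapify(ready)
        out = []
        while ready:
            _, s = heapq.heappop(ready)  # prefer last frame's earliest stroke
            out.append(s)
            for t in succ[s]:
                indeg[t] -= 1
                if indeg[t] == 0:
                    heapq.heappush(ready, (rank[t], t))
        return out

    print(order_strokes(["a", "b", "c"], [("c", "a")],
                        prev_order=["a", "b", "c"]))   # ['b', 'c', 'a']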
20100060647Animating Speech Of An Avatar Representing A Participant In A Mobile Communication - Animating speech of an avatar representing a participant in a mobile communication including selecting one or more images; selecting a generic animation template; fitting the one or more images with the generic animation template; texture wrapping the one or more images over the generic animation template; and displaying the one or more images texture wrapped over the generic animation template. Receiving an audio speech signal; identifying a series of phonemes; and for each phoneme: identifying a new mouth position for the mouth of the generic animation template; altering the mouth position to the new mouth position; texture wrapping a portion of the one or more images corresponding to the altered mouth position; displaying the texture wrapped portion of the one or more images corresponding to the altered mouth position of the mouth of the generic animation template; and playing the portion of the audio speech signal represented by the phoneme.03-11-2010
20100060646ARBITRARY FRACTIONAL PIXEL MOVEMENT - A technique is provided for displaying pixels of an image at arbitrary subpixel positions. In accordance with aspects of this technique, interpolated intensity values for the pixels of the image are derived based on the arbitrary subpixel location and an intensity distribution or profile. Reference to the intensity distribution provides appropriate multipliers for the source image. Based on these multipliers, the image may be rendered at respective physical pixel locations such that the pixel intensities are summed with each rendering, resulting in a destination image having suitable interpolated pixel intensities for the arbitrary subpixel position.03-11-2010
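A NumPy sketch using a bilinear (tent) profile as the intensity distribution; the abstract allows other distributions, so these weights are one illustrative choice.

    import numpy as np

    def render_at_subpixel(image, dx, dy):
        # Four multipliers from a tent intensity profile; each shifted
        # copy of the source is summed into the destination, producing
        # interpolated intensities for the fractional (dx, dy) offset.
        fx, fy = dx % 1.0, dy % 1.0
        weights = {(0, 0): (1 - fx) * (1 - fy), (1, 0): fx * (1 - fy),
                   (0, 1): (1 - fx) * fy, (1, 1): fx * fy}
        h, w = image.shape
        out = np.zeros((h + 1, w + 1))
        for (ox, oy), wt in weights.items():
            out[oy:oy + h, ox:ox + w] += wt * image
        return out

    img = np.zeros((3, 3))
    img[1, 1] = 1.0                      # a single lit source pixel
    print(render_at_subpixel(img, 0.25, 0.5))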
20100141661CONTENT GENERATION SYSTEM, CONTENT GENERATION DEVICE, AND CONTENT GENERATION PROGRAM - A content generation system includes a host terminal and an encode terminal. The host terminal has: a lecture material display unit for displaying a lecture material on a desk top; and a desk top image transmission unit for transmitting a desk top image. The encode terminal has: a lecturer imaging data generation unit which generates lecturer imaging data by capturing a lecture performed by the lecturer; an animation data generation unit which generates animation data from the image on the desk top received from the host terminal in synchronization with the lecturer imaging data; and a content data transmission unit which transmits content data containing the lecturer imaging data and the animation data.06-10-2010
20100141663SYSTEM AND METHODS FOR DYNAMICALLY INJECTING EXPRESSION INFORMATION INTO AN ANIMATED FACIAL MESH - A system and method for modifying facial animations to include expression and microexpression information is disclosed. Particularly, a system and method for applying actor-generated expression data to a facial animation, either in realtime or in storage is disclosed. Present embodiments may also be incorporated into a larger training program, designed to train users to recognize various expressions and microexpressions.06-10-2010
20110267357ANIMATING A VIRTUAL OBJECT WITHIN A VIRTUAL WORLD - A method of animating a virtual object within a virtual world, the method comprising applying a weighted combination of task-space inverse-dynamics control and joint-space inverse-dynamics control to the virtual object.11-03-2011
20110267356ANIMATING A VIRTUAL OBJECT WITHIN A VIRTUAL WORLD - A method of animating a virtual object within a virtual world, wherein the virtual object comprises a plurality of object parts, the method comprising: at an animation update step: specifying a target frame in the virtual world; and applying control to a first object part, wherein the control is arranged such that the application of the control in isolation to the first object part would cause a movement of the first object part in the virtual world that (a) reduces a difference between a control frame and the target frame, the control frame being a frame at a specified position and orientation in the virtual world relative to the first object part and (b) has a substantially non-zero component along at most one or more degrees of freedom identified for the first object part.11-03-2011
20080303830Automatic feature mapping in inheritance based avatar generation - The generation of characters within computer animations is currently a labor intensive and expensive activity for a wide range of businesses. Whereas prior art approaches have sought to reduce this loading by providing reference avatars, these do not fundamentally overcome the intensive steps in generating these reference avatars, and they provide limited variations. According to the invention a user is provided with a simple and intuitive mechanism to affect the weightings applied in establishing the physical characteristics of an avatar generated using an inheritance based avatar generator. The inheritance based generator allows, for example, the user to select a first generation of four grandparents, affect the weightings in generating the second generation parents, and affect the weightings in generating the third generation off-spring avatar from these parents. Accordingly the invention provides animators with a means of rapidly generating and refining the off-spring avatar to provide the character for their animated audio-visual content.12-11-2008
20080303829Sex selection in inheritance based avatar generation - The generation of characters within computer animations is currently a labor intensive and expensive activity for a wide range of businesses. Whereas prior art approaches have sought to reduce this loading by providing reference avatars, these do not fundamentally overcome the intensive steps in generating these reference avatars, and they provide limited variations. According to the invention a user is provided with a simple and intuitive mechanism to affect the weightings applied in establishing the physical characteristics of an avatar generated using an inheritance based avatar generator. The inheritance based generator allows, for example, the user to select a first generation of four grandparents, affect the weightings in generating the second generation parents, and affect the weightings in generating the third generation off-spring avatar from these parents. Accordingly the invention provides animators with a means of rapidly generating and refining the off-spring avatar to provide the character for their animated audio-visual content.12-11-2008
20080303827Methods and Systems for Animating Displayed Representations of Data Items - Methods and systems for animating visual components representing data items. One embodiment comprises a method for producing an application using declarative language code to specify animation behavior for data item representations. A programming application may be used to create the declarative language code using a display design area for placing and adjusting objects such as data item containers and/or an editor for entering and editing code. One embodiment comprises a method that allows an application, such as a rich Internet application, to create representations of displayed objects and virtually displayed objects to facilitate animation. One embodiment involves facilitating animation using initial and changed layouts, such layouts including representations of a limited number of data items both inside and outside the content display area. In certain embodiments, a computer-readable medium (such as, for example random access memory or a computer disk) comprises code for carrying out these and other methods.12-11-2008
20080303826Methods and Systems for Animating Displayed Representations of Data Items - Methods and systems for animating visual components representing data items. One embodiment comprises a method for producing an application using declarative language code to specify animation behavior for data item representations. A programming application may be used to create the declarative language code using a display design area for placing and adjusting objects such as data item containers and/or an editor for entering and editing code. One embodiment comprises a method that allows an application, such as a rich Internet application, to create representations of displayed objects and virtually displayed objects to facilitate animation. One embodiment involves facilitating animation using initial and changed layouts, such layouts including representations of a limited number of data items both inside and outside the content display area. In certain embodiments, a computer-readable medium (such as, for example random access memory or a computer disk) comprises code for carrying out these and other methods.12-11-2008
20080309670Recasting A Legacy Web Page As A Motion Picture With Audio - Computer-implemented methods, systems, and computer program products are provided for recasting a legacy web page as a motion picture with audio. Embodiments include retrieving a legacy web page; identifying audio objects in the legacy web page for audio rendering; identifying video objects in the legacy web page for motion picture rendering; associating one or more of the video objects for motion picture rendering with one or more of the audio objects for audio rendering; determining in dependence upon the selected audio objects and video objects a duration for the motion picture; selecting audio events for rendering the audio objects identified for audio rendering; selecting motion picture video events for rendering the video objects identified for motion picture rendering; assigning the selected audio events and the selected video events to playback times for the motion picture; rendering, with the selected audio events at their assigned playback times, the audio content of each of the audio objects identified for audio rendering; rendering, with the selected motion picture video events at their assigned playback times, the video content of the video objects identified for motion picture rendering; and recording in a multimedia file the rendered audio content and motion picture video content.12-18-2008
20080204457Rig Baking - Model components can be used to pose character models to create a variety of realistic and artistic effects. An embodiment of the invention analyzes the behavior of a model component to determine a statistical representation of the model component that closely approximates the output of the model component. As the statistical representation of model components execute faster than the original model components, the model components used to pose a character model can be replaced at animation time by equivalent statistical representations of model components to improve animation performance. The statistical representation of the model component is derived from an analysis of the character model manipulated through a set of representative training poses. The statistical representation of the model component is comprised of a weighted combination of posed frame positions added to a set of posing errors controlled by nonlinear combinations of the animation variables.08-28-2008
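A small Python/NumPy sketch of baking a slow model component into a statistical fit over training poses; the toy component, the quadratic feature basis, and the plain least-squares fit are assumptions standing in for the weighted combinations described above.

    import numpy as np

    rng = np.random.default_rng(1)

    def slow_component(avars):
        # Stand-in for an expensive rig component driven by two avars.
        return np.array([np.sin(avars[0]) + avars[1], avars[0] * avars[1]])

    # Representative training poses and the component's outputs for them.
    train = rng.uniform(-1, 1, size=(200, 2))
    target = np.array([slow_component(a) for a in train])

    def features(a):
        # Nonlinear combinations of the animation variables.
        return np.array([1.0, a[0], a[1], a[0] * a[1], a[0] ** 2, a[1] ** 2])

    X = np.array([features(a) for a in train])
    W, *_ = np.linalg.lstsq(X, target, rcond=None)   # "bake" the weights

    def baked_component(avars):
        # Fast statistical approximation used at animation time.
        return features(avars) @ W

    pose = np.array([0.3, -0.5])
    print(slow_component(pose), baked_component(pose))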
20090315896ANIMATION PLATFORM - An animation platform for managing the interpolation of values of one or more animation variables from one or more applications. The animation platform uses animation transitions to interpolate the values of the animation variables. When conflicts arise, the animation platform implements application-supplied logic to determine an execution priority of the conflicting animation transitions.12-24-2009
20080303831Transfer of motion between animated characters - Motion may be transferred between portions of two characters if those portions have a minimum topological similarity. The elements of the topology that are similar are referred to as basic elements. To transfer motion between the source and target characters, the motion associated with the basic elements of the source character is determined. This motion is retargetted to the basic elements of the target character. The retargetted motion is then attached to the basic elements of the target character. As a result, the animation of the basic elements in the topology of the target character effectively animates the target character with motion that is similar to that of the source character.12-11-2008
20080284783WAVE ZONES RENDERING TECHNIQUE - Rendering a deforming object in animation including: defining a deforming object surface angle; identifying a normal vector discontinuity point using the deforming object surface angle; defining the front part and back part of the deforming object with reference to the normal vector discontinuity point; dividing the front part of the deforming object into zones based on the deforming object surface angle; dividing the back part of the deforming object into zones based on the deforming object surface angle; and rendering each zone.11-20-2008
20080273037LOOPING MOTION SPACE REGISTRATION FOR REAL-TIME CHARACTER ANIMATION - A method for generating a looping motion space for real-time character animation may include determining a plurality of motion clips to include in the looping motion space and determining a number of motion cycles performed by a character object depicted in each of the plurality of motion clips. A plurality of looping motion clips may be synthesized from the motion clips, where each of the looping motion clips depicts the character object performing an equal number of motion cycles. Additionally, a starting frame of each of the plurality of looping motion clips may be synchronized so that the motion cycles in each of the plurality of looping motion clips are in phase with one another. By rendering an animation sequence using multiple passes through the looping motion space, an animation of the character object performing the motion cycles may be extended for arbitrary length of time.11-06-2008
20110007080SYSTEM AND METHOD FOR CONFORMING AN ANIMATED CAMERA TO AN EDITORIAL CUT - A method for conforming an animated camera to an editorial cut within a software application executing on a computer system. The method includes providing a shot that includes three-dimensional animation captured by a virtual camera associated with a pre-defined camera style; receiving an editorial action that has been performed to the shot; and updating a camera move associated with the virtual camera based on the camera style and the editorial action.01-13-2011
20110007079BRINGING A VISUAL REPRESENTATION TO LIFE VIA LEARNED INPUT FROM THE USER - Data captured with respect to a human may be analyzed and applied to a visual representation of a user such that the visual representation begins to reflect the behavioral characteristics of the user. For example, a system may have a capture device that captures data about the user in the physical space. The system may identify the user's characteristics, tendencies, voice patterns, behaviors, gestures, etc. Over time, the system may learn a user's tendencies and intelligently apply animations to the user's avatar such that the avatar behaves and responds in accordance with the identified behaviors of the user. The animations applied to the avatar may be animations selected from a library of pre-packaged animations, or the animations may be entered and recorded by the user into the avatar's avatar library.01-13-2011
20110007077ANIMATED MESSAGING - A method performed by one or more devices includes receiving a user selection of a picture that contains an object of a character to be animated for an animated message and receiving one or more designations of areas within the picture to correspond to one or more human facial features for the character associated with the object. The method further includes receiving a textual message; receiving one or more user selections of one or more animation codes that identify animations to be performed by the one or more human facial features designated within the picture, and receiving an encoding of the textual message and the one or more animation codes. The method further includes generating the animated message based on the picture, the one or more designations of the one or more human facial features, and the one or more animation codes, and sending the animated message to a recipient.01-13-2011
20090147009VIDEO CREATING DEVICE AND VIDEO CREATING METHOD - A video creating device for creating a video full of originality from a text. A video viewer (06-11-2009
20090179899METHOD AND APPARATUS FOR EFFICIENT OFFSET CURVE DEFORMATION FROM SKELETAL ANIMATION - A method for use in animation includes establishing an influence primitive, associating the influence primitive with a model having a plurality of model points, and for each of the plurality of model points on the model, determining an offset primitive that passes through the model point. Another method includes deforming the model, and determining a deformed position of each of the plurality of model points by using a separate offset primitive for each model point. A computer readable storage medium stores a computer program adapted to cause a processor based system to execute one or more of the above steps.07-16-2009
20090179898CREATION OF MOTION BLUR IN IMAGE PROCESSING - Motion blur is created in images by utilizing a motion vector. Vertices are developed with each vertex including a motion vector. The motion vector is indicative of how far vertices have moved since a previous frame in a sequence of images. The vertices are converted to an image and motion blur is added to the image as a function of the motion vector for each vertex.07-16-2009
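A minimal NumPy sketch that smears each vertex backwards along its per-vertex motion vector over the shutter interval; the sample count and the averaging are illustrative choices, and the abstract's pipeline converts the vertices to an image before blurring.

    import numpy as np

    def smear(vertices, motion_vectors, samples=8):
        # positions[i] holds every vertex displaced backwards by a
        # fraction t_i of its motion since the previous frame.
        ts = np.linspace(0.0, 1.0, samples)
        return np.array([vertices - t * motion_vectors for t in ts])

    verts = np.array([[0.0, 0.0], [1.0, 0.0]])
    motion = np.array([[0.4, 0.0], [0.0, 0.2]])  # movement since last frame
    print(smear(verts, motion).mean(axis=0))     # averaged (blurred) positions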
20130120399METHOD, APPARATUS, COMPUTER PROGRAM AND USER INTERFACE - A method, apparatus, computer program and user interface wherein the method comprises displaying a still image on a display; detecting user selection of a portion of the still image; and in response to the detection of the user selection, replacing the selected portion of the image with a moving image and maintaining the rest of the still image, which has not been selected, as a still image.05-16-2013
20130120400ANIMATION CREATION AND MANAGEMENT IN PRESENTATION APPLICATION PROGRAMS - An animation timeline is analyzed to determine one or more discrete states. Each discrete state includes one or more animation effects. The discrete states represent scenes of a slide in a slide presentation. The concept of scenes allows users to view a timeline of scenes, open a scene, and directly manipulate objects in the scene to author animations. The animations can include motion path animation effects, which can be directly manipulated utilizing a motion path tweening method. To aid in direct manipulation of a motion path of an object, a ghost version of the object can be shown to communicate to a user the position of the object after a motion path animation effect that includes the motion path is performed. The ghost version may also be used to show a start position when a start point is manipulated.05-16-2013
20130120402Discarding Idle Graphical Display Components from Memory and Processing - Memory storage and processing for idle computer-generated graphical display components are discarded for conserving memory capacity, processing resources and power consumption. If a computer-generated display frame goes idle for a prescribed duration, for example, 30 seconds, wherein no user action or processor action is performed on the idle display frame, stored data representing the idle display frame is discarded from memory and processing for the idle display component is ceased, thus conserving memory space, processing resources and power consumption (e.g., battery power). If the discarded display frame becomes active again, its discarded resources may be recreated. Alternatively, an idle display component may be passed to a separate application and may be reclaimed by a requiring application when the idle display component becomes active again.05-16-2013
20130120403ANIMATION CREATION AND MANAGEMENT IN PRESENTATION APPLICATION PROGRAMS - An animation timeline is analyzed to determine one or more discrete states. Each discrete state includes one or more animation effects. The discrete states represent scenes of a slide in a slide presentation. The concept of scenes allows users to view a timeline of scenes, open a scene, and directly manipulate objects in the scene to author animations. The animations can include motion path animation effects, which can be directly manipulated utilizing a motion path tweening method. To aid in direct manipulation of a motion path of an object, a ghost version of the object can be shown to communicate to a user the position of the object after a motion path animation effect that includes the motion path is performed. The ghost version may also be used to show a start position when a start point is manipulated.05-16-2013
20110216074REORIENTING PROPERTIES IN HAIR DYNAMICS - Techniques are disclosed for orienting (or reorienting) properties of computer-generated models, such as those associated with dynamic models or simulation models. Properties (e.g., material or physical properties) that influence the behavior of a dynamic or simulation model (e.g., a complex curve model representing a curly hair) may be oriented or re-oriented as desired using readily available reference frames. These reference frames may be obtained using a proxy model that corresponds to the dynamic or simulation model; in some embodiments this is less computationally expensive than determining the reference frames directly from the dynamic or simulation model. In some embodiments, the proxy model may include a smoothed version of the dynamic or simulation model. In other embodiments, the proxy model may include a filtered or simplified version of the dynamic or simulation model.09-08-2011
20090128567MULTI-INSTANCE, MULTI-USER ANIMATION WITH COORDINATED CHAT - Two or more participants provide inputs from a remote location to a central server, which aggregates the inputs to animate participating avatars in a space visible to the remote participants. In parallel, the server collects and distributes text chat data from and to each participant, such as in a chat window, to provide chat capability in parallel to a multi-participant animation. Avatars in the animation may be provided with animation sequences, based on defined character strings or other data detected in the text chat data. Text data provided by each user is used to select animation sequences for an avatar operated by the same user.05-21-2009
20090184967SCRIPT CONTROL FOR LIP ANIMATION IN A SCENE GENERATED BY A COMPUTER RENDERING ENGINE - A system for controlling a rendering engine by using specialized commands. The commands are used to generate a production, such as a television show, at an end-user's computer that executes the rendering engine. In one embodiment, the commands are sent over a network, such as the Internet, to achieve broadcasts of video programs at very high compression and efficiency. Commands for setting and moving camera viewpoints, animating characters, and defining or controlling scenes and sounds are described. At a fine level of control, math models and coordinate systems can be used to make specifications. At a coarse level of control, the command language approaches the text format traditionally used in television or movie scripts. Simple names for objects within a scene are used to identify items, directions and paths. Commands are further simplified by having the rendering engine use defaults when specifications are left out. For example, when a camera direction is not specified, the system assumes that the viewpoint is to be the current action area. The system provides a hierarchy of detail levels. Movement commands can be defaulted or specified. Synchronized speech can be specified as digital audio or as text which is used to synthesize the speech.07-23-2009
20090141031Method for Providing an Animation From a Prerecorded Series of Still Pictures - The invention relates to a method for providing an animation from prerecorded still pictures where the relative positions of the pictures are known. The method is based on prerecorded still pictures and location data, associated with each still picture, that indicates the projection of the subsequent still picture into the current still picture. The method comprises the repeated steps of providing a current still picture, providing the location data associated with the still picture, generating an animation based on the current still picture and the location data, and presenting the animation on a display. The invention provides the experience of driving a virtual car through the photographed roads, either by an auto pilot or manually. The user may change speed, drive, pan, shift lanes, turn at crossings, or make U-turns anywhere. Also, the invention provides a means to experience real time, interactive video-like animation from widely separated still pictures, as an alternative to video-streaming over a communication line. This service is called Virtual Car Travels.06-04-2009
20090002377APPARATUS AND METHOD FOR SYNCHRONIZING AND SHARING VIRTUAL CHARACTER - An apparatus and method for synchronizing and sharing a virtual character are provided. The method includes generating a virtual character, synchronizing content in a predetermined form with the generated virtual character, converting the virtual character into an extensible markup language (XML)-based file, and storing the XML-based file.01-01-2009
20090002376Gradient Domain Editing of Animated Meshes - Gradient domain editing of animated meshes is described. Exemplary systems edit deforming mesh sequences by applying Laplacian mesh editing techniques in the spacetime domain. A user selects relevant frames or handles to edit and the edits are propagated to the entire sequence. For example, if the mesh depicts an animated figure, then user-modifications to position of limbs, head, torso, etc., in one frame are propagated to the entire sequence. In advanced editing modes, a user can reposition footprints over new terrain and the system automatically conforms the walking figure to the new footprints. A user-sketched curve can automatically provide a new motion path. Movements of one animated figure can be transferred to a different figure. Caricature and cartoon special effects are available. The user can also select spacetime morphing to smoothly change the shape and motion of one animated figure into another over a short interval.01-01-2009
20120069028REAL-TIME ANIMATIONS OF EMOTICONS USING FACIAL RECOGNITION DURING A VIDEO CHAT - Embodiments are directed towards displaying an animated video emoticon by augmenting features identified in a video stream. Augmenting features identified in the video stream may include modifying, in whole or in part, some aspects of the identified features but not other aspects. For example, a user may select an animated video emoticon indicating surprise. Surprise may be conveyed by detecting the location of the user's eyes in the video stream, enlarging a size aspect of the eyes so as to appear ‘wide-eyed’, but leaving other aspects such as color and shape unchanged. Then, the location and/or orientation of the eyes in the video stream are tracked, and the augmentation is applied to the eyes at each tracked location and/or orientation. In another embodiment, identified features may be removed from the video stream and replaced with images, graphics, video, and the like.03-22-2012
20090251469METHOD FOR DETECTING COLLISIONS AMONG LARGE NUMBERS OF PARTICLES - A method for detecting object collisions in a simulation, which includes identifying a plurality of objects moving along a path within a simulation area, and defining a grid comprising defined regions which individually define a region within which any of the plurality of objects could potentially occupy. For each of the objects, the method further includes identifying which of the defined regions that each of the plurality of object occupies for at least a portion of a time step, and for each of the objects, determining an associated potential collision set by identifying objects of the plurality of objects which occupy common regions of the defined regions during any portion of the time step. In addition, for each of the objects, the method further includes determining an actual collision set comprising objects with which a given object will collide during the time step based upon location parameters of objects included in the potential collision set.10-08-2009
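A Python sketch of the two phases for circles in 2D; the grid cell size and circular shapes are assumptions, and a fuller implementation would register each object's swept region over the whole time step rather than its static footprint.

    from collections import defaultdict
    from itertools import combinations

    def potential_collision_sets(objects, cell):
        # objects: (id, x, y, radius). Register each object in every grid
        # region it could occupy, then pair up objects sharing a region.
        grid = defaultdict(list)
        for oid, x, y, r in objects:
            for gx in range(int((x - r) // cell), int((x + r) // cell) + 1):
                for gy in range(int((y - r) // cell), int((y + r) // cell) + 1):
                    grid[(gx, gy)].append(oid)
        pairs = set()
        for members in grid.values():
            pairs.update(combinations(sorted(members), 2))
        return pairs

    def actual_collision_set(objects, pairs):
        # Keep only pairs whose circles really overlap.
        byid = {o[0]: o for o in objects}
        return {(a, b) for a, b in pairs
                if (byid[a][1] - byid[b][1]) ** 2 +
                   (byid[a][2] - byid[b][2]) ** 2
                   <= (byid[a][3] + byid[b][3]) ** 2}

    objs = [(1, 0.0, 0.0, 0.6), (2, 1.0, 0.0, 0.6), (3, 5.0, 5.0, 0.5)]
    print(actual_collision_set(objs, potential_collision_sets(objs, cell=2.0)))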
20090251470System and method for compressing a picture - A system for compressing a picture includes: an information extraction unit for extracting information needed for encoding during a picture scene composition and animation process using a modeling object; and a rendering unit for generating an uncompressed picture sequence by rendering the object where the picture scene composition and animation process is performed. Further, the system for compressing a picture includes an encoding unit for generating a compressed bit stream by encoding the picture sequence from the rendering unit based on the information extracted by the information extraction unit.10-08-2009
20090251468ANIMATING OF AN INPUT-IMAGE TO CREATE PERSONAL WORLDS - The present invention discloses a system and a method for creating a personal animated world of a user by automatically animating an input-image such as a drawing of an animal inputted by the user.10-08-2009
20090051692Electronic presentation system - The invention provides a digital, animation presentation system for dramatically presenting various works. The digital and animated presentation in accordance with the invention is not limited by the conventions of paper books or electronic books that mimic paper-based books, and provides for the dramatic presentation of animation and animated text that includes text moving forward or backwards across the reader's display as well as appearing to move forward or away from the reader. The invention is applicable to a variety of works, including various fiction and non-fiction stories, educational materials, as well as tutorials and instruction manuals. In accordance with the invention, a reader can control his or her viewing of the digital animation and text so that he or she can view a story in its natural forward progression, pause and/or stop and re-read a section, return to an earlier section and/or skip ahead to a later section. The invention also provides for dramatic presentation and animation of the text as well as animation and sound effects that correlate to the text.02-26-2009
20110227930FARMINIZER SOFTWARE - Method for the transfer of project data of a multidimensional animation from a first computer to a second computer, which are connected via a network that has at least one rendering device (render farm), the required information of which is determined by a set of digital rules, comprising the following steps: 09-22-2011
20090201298System and method for creating computer animation with graphical user interface featuring storyboards - Systems, methods, and computer readable media for customizing a computer animation. A custom animation platform prepares a storyboard including at least one customizable storyboard item and one or more replacement storyboard items configured to replace the customizable storyboard item. Then, the custom animation platform sends the storyboard and the replacement storyboard items to an interactive device via a network to thereby cause a user of the device to select one of the replacement storyboard items. The custom animation platform receives user data including the user's selection from the device and generates a computer animation based on the user data.08-13-2009
20110141120APPLICATION PROGRAMMING INTERFACES FOR SYNCHRONIZATION - The application programming interface operates in an environment with user interface software interacting with multiple software applications or processes in order to synchronize animations associated with multiple views or windows of a display of a device. The method for synchronizing the animations includes setting attributes of views independently with each view being associated with a process. The method further includes transferring a synchronization call to synchronize animations for the multiple views of the display. In one embodiment, the synchronization call includes the identification and the number of processes that are requesting animation. The method further includes transferring a synchronization confirmation message when a synchronization flag is enabled. The method further includes updating the attributes of the views from a first state to a second state independently. The method further includes transferring a start animation call to draw the requested animations when both processes have updated attributes.06-16-2011
20090231347Method and Apparatus for Providing Natural Facial Animation - Natural inter-viseme animation of a 3D head model driven by speech recognition is calculated by applying limitations to the velocity and/or acceleration of a normalized parameter vector, each element of which may be mapped to animation node outputs of a 3D model based on mesh blending and weighted by a mix of key frames.09-17-2009
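A Python/NumPy sketch of limiting the velocity and acceleration of a normalized parameter vector as it moves toward recognizer-supplied viseme targets; the clamping scheme and constants are invented for the example.

    import numpy as np

    def limited_step(weights, target, velocity, max_vel, max_acc, dt):
        # Accelerate toward the target, clamping acceleration then speed,
        # so transitions between visemes stay smooth and natural.
        desired_vel = (target - weights) / dt
        dv = desired_vel - velocity
        dv_norm = np.linalg.norm(dv)
        if dv_norm > 0.0:
            velocity = velocity + dv / dv_norm * min(dv_norm, max_acc * dt)
        speed = np.linalg.norm(velocity)
        if speed > max_vel:
            velocity = velocity * (max_vel / speed)
        return weights + velocity * dt, velocity

    weights = np.array([1.0, 0.0])   # currently showing viseme 0
    target = np.array([0.0, 1.0])    # recognizer now calls for viseme 1
    vel = np.zeros(2)
    for frame in range(6):
        weights, vel = limited_step(weights, target, vel,
                                    max_vel=4.0, max_acc=40.0, dt=1 / 30)
        print(frame, np.round(weights, 3))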
20090231346Diagnostic System for Visual Presentation, Animation and Sonification for Networks - A diagnostic system for visual representation, animation and sonification for networks that requires far less knowledge to use and reduces analysis time even for experts, since it makes pattern analysis much more practical.09-17-2009
20110227931METHOD AND APPARATUS FOR CHANGING LIP SHAPE AND OBTAINING LIP ANIMATION IN VOICE-DRIVEN ANIMATION - The present invention discloses a method and apparatus for changing lip shape and obtaining a lip animation in a voice-driven animation, and relates to computer technologies. The method for changing lip shape includes: obtaining audio signals and obtaining motion extent proportion of lip shape according to characteristics of the audio signals; obtaining an original lip shape model inputted by a user and generating a motion extent value of the lip shape according to the original lip shape model and the obtained motion extent proportion of the lip shape; generating a lip shape grid model set according to the obtained motion extent value of the lip shape and a preconfigured lip pronunciation model library. The apparatus for changing lip shape in a voice-driven animation includes an obtaining module, a first generating module and a second generating module. The solutions provided by the present invention have a simple algorithm and low cost.09-22-2011
20110227929STATELESS ANIMATION, SUCH AS BOUNCE EASING - An animation system is described herein that uses a transfer function on the progress of an animation that realistically simulates a bounce behavior. The transfer function maps normalized time and allows a user to specify both a number of bounces and a bounciness factor. Given a normalized time input, the animation system maps the time input onto a unit space where a single unit is the duration of the first bounce. In this coordinate space, the system can find the corresponding bounce and compute the start unit and end unit of this bounce. The system projects the start and end units back onto a normalized time scale and fits these points to a quadratic curve. The quadratic curve can be directly evaluated at the normalized time input to produce a particular output.09-22-2011
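A Python sketch of such a transfer function; the geometric shrinking of bounce widths and heights is an assumption, and the platform described above fits a quadratic to the projected start and end units rather than evaluating this closed form.

    def bounce_height(t, bounces=3, bounciness=2.0):
        # Map normalized time t onto a unit space where one unit is the
        # duration of the first bounce; later bounces are shorter.
        widths = [bounciness ** -i for i in range(bounces)]
        u = min(max(t, 0.0), 1.0) * sum(widths)
        start = 0.0
        for i, w in enumerate(widths):   # find the bounce containing u
            if u <= start + w:
                break
            start += w
        x = (u - start) / w              # 0..1 within this bounce
        peak = bounciness ** (-2 * i)    # arc height decays per bounce
        return peak * 4.0 * x * (1.0 - x)  # quadratic arc, 0 at both ends

    # An ease-out progress curve could then be 1 - bounce_height(t).
    for t in (0.0, 0.2, 0.45, 0.7, 0.9, 1.0):
        print(t, round(bounce_height(t), 3))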
20090201297ELECTRONIC DEVICE WITH ANIMATED CHARACTER AND METHOD - An electronic device may display an animated character on a display and, when presence of a user is detected, the character may appear to react to the user. The character may be a representation of a person, an animal or other object. Ascertaining when the user is looking at the display may be accomplished by analyzing a video data stream generated by an imaging device, such as a camera used for video telephony.08-13-2009
20090219292SYSTEMS AND METHODS FOR SPECIFYING ARBITRARY ANIMATION CONTROLS FOR MODEL OBJECTS - Systems and methods for defining or specifying an arbitrary set of one or more animation control elements or variables (i.e., “avars”), and for associating the set with a model object or part of a model object. Once a set of avars (“avarset”) is associated with an object model, a user is able to select that model or part of the model, and the avarset associated with that part of the model is made available to, or enabled for, any animation tool that affords avar editing capabilities or allows manipulation of the model using animation control elements. This enables users to create and save sets of avars to share between characters, or other objects, and shots. In certain embodiments, the user can associate multiple avarsets with a model part and can designate one of those sets as “primary” so that when that model part is selected, the designated primary avarset is broadcast to the available editing tools. Additionally, the user can override the primary designation set and select one of the other sets of avars, or the user can cycle through the various associated avarsets.09-03-2009
20090315898PARAMETER CODING PROCESS FOR AVATAR ANIMATION, AND THE DECODING PROCESS, SIGNAL, AND DEVICES THEREOF - The invention relates to a method for coding animation parameters for a character (A) with which are associated morphological values, said animation parameters comprising a translation parameter associated with at least one part of said character (A), characterized in that to code an intrinsic translation of said part of said character (A) by a translation vector, said translation parameter contains a value which is dependent on said vector and on one of said morphological values.12-24-2009
20110227932Method and Apparatus for Generating Video Animation - The examples of the present invention provide a method and apparatus for generating a video animation, and the method and apparatus relate to the animation field. The method includes: receiving a command sent by a user, determining an action corresponding to the command according to the command, and determining the total number of frames corresponding to the action and a motion coefficient of each frame; calculating an offset of each control point in each frame according to the motion coefficient of each frame, and generating a video animation according to the offset of each control point in each frame and the total number of frames. An apparatus for generating a video animation is also provided.09-22-2011
20120105457TIME-DEPENDENT CLIENT INACTIVITY INDICIA IN A MULTI-USER ANIMATION ENVIRONMENT - A method for managing a multi-user animation platform is disclosed. A three-dimensional space within a computer memory is modeled. An avatar of a client is located within the three-dimensional space, the avatar being graphically represented by a three-dimensional figure within the three-dimensional space. The avatar is responsive to client input commands, and the three-dimensional figure includes a graphical representation of client activity. The client input commands are monitored to determine client activity. The graphical representation of client activity is then altered according to an inactivity scheme when client input commands are not detected. Following a predetermined period of client inactivity, the inactivity scheme varies non-repetitively with time.05-03-2012
20120105456Interactive, multi-environment application for rich social profiles and generalized personal expression - A system and method provide users with the ability to create fully extensible, visually-dominated tokens and associated tackboards; the ability to create fully extensible, visually-dominated collections of tokens; the ability to create an adaptable, interactive, animated, visual collage that supports visualization of the collection; and social capabilities to communicate, share, explore, and interact with other users.05-03-2012
20090315894BROWSER-INDEPENDENT ANIMATION ENGINES - Tools and techniques are described for browser-independent animation engines. These animation engines may include browser-independent animation objects that represent entities that may be animated within a browser. These animation objects may define animation attributes, with the animation attributes being associated with attribute values that describe aspects of the entity. The animation attributes may also be associated with animation evaluators that define how the attribute value changes over time. These animation engines may also include a browser-specific layer for interpreting the attribute values into instructions specific to the browser.12-24-2009
20090315897ANIMATION PLATFORM - An animation platform for managing the interpolation of values of one or more animation variables from one or more applications. The animation platform uses animation transitions to interpolate the values of the animation variables. The animation platform uses a continuity parameter to smoothly switch from one animation transition to the next.12-24-2009
20090315895PARAMETRIC FONT ANIMATION - Font animation technique embodiments are presented which animate alpha-numeric characters of a message or document. In one general embodiment this is accomplished by the sender transmitting parametric information and animation instructions pertaining to the display of characters found in the message or document to a recipient. The parametric information identifies where to split the characters and where to rotate the resulting sections. The sections of each character affected are then translated and/or rotated and/or scaled as dictated by the animation instructions to create an animation over time. Additionally, if a gap in a stroke of an animated character exists between the sections of the character, a connecting section is displayed to close the stroke gap, making the character appear contiguous.12-24-2009
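To make the per-section transform concrete, here is a hypothetical Python sketch that translates, rotates, and scales one character section about a pivot over normalized time; the instruction fields are invented for illustration:

```python
import math

def animate_section(points, pivot, t, instr):
    """Transform one character section at normalized time t in [0, 1]."""
    dx, dy = instr["translate"]                 # target translation
    angle = instr["rotate"] * t                 # interpolate rotation
    scale = 1.0 + (instr["scale"] - 1.0) * t    # interpolate scale
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    px, py = pivot
    out = []
    for x, y in points:
        # Scale and rotate about the section's pivot, then translate.
        sx, sy = (x - px) * scale, (y - py) * scale
        rx, ry = sx * cos_a - sy * sin_a, sx * sin_a + sy * cos_a
        out.append((px + rx + dx * t, py + ry + dy * t))
    return out
```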
20120194523Animation of Audio Ink - In a pen-based computing system, a microphone on the smart pen device records audio to produce audio data and a gesture capture system on the smart pen device records writing gestures to produce writing gesture data. Both the audio data and the writing gesture data include a time component. The audio data and writing gesture data are combined or synchronized according to their time components to create audio ink data. The audio ink data can be uploaded to a computer system attached to the smart pen device and displayed to a user through a user interface. The user makes a selection in the user interface to play the audio ink data, and the audio ink data is played back by animating the captured writing gestures and playing the recorded audio in synchronization.08-02-2012
20100182325APPARATUS AND METHOD FOR EFFICIENT ANIMATION OF BELIEVABLE SPEAKING 3D CHARACTERS IN REAL TIME - An apparatus for animating a moving and speaking character with enhanced believability in real time, comprising a plurality of behavior generators, each for defining a respective aspect of facial behavior, a unifying scripter, associated with the behavior generators, the scripter operable to combine the behaviors into a unified animation script, and a renderer, associated with the unifying scripter, the renderer operable to render the character in accordance with the script, thereby to enhance believability of the character.07-22-2010
20100259545SYSTEM AND METHOD FOR SIMPLIFYING THE CREATION AND STORAGE OF COMPLEX ANIMATION OBJECTS AND MOVIES - An animation generation system for online recording and editing of elaborated animation objects and movies, comprising: a plurality of elaborated animated objects with hinges for controlling each object's limb movement; a collection of associated actions with parameters which can be programmed by authorized animation developers, who may be registered web site members that communicate through messages comprising the created animation movies and objects using the Internet infrastructure; a collection of associated generic-feature actions and a collection of associated complex-move actions comprising ready-made small actions; a database of elaborated animated objects and movies containing the accumulated collection of animation movies and objects created; and a user interface module for presenting objects' features, for allowing a user to choose animation objects or related actions, for inserting actions by dragging and dropping, for playing an edited scene, and for recognizing the action pattern of a developer's input, identifying the animation object and suggesting a variety of possible actions.10-14-2010
20090079744ANIMATING OBJECTS USING A DECLARATIVE ANIMATION SCHEME - Technologies are described herein for animating objects through the use of animation schemes. An animation scheme is defined using a declarative language that includes instructions defining the animations and/or visual effects to be applied to one or more objects and how the animations or visual effects should be applied. The animation scheme may include rules which, when evaluated, define how the objects are to be animated. An animation scheme engine is also provided for evaluating an animation scheme along with other factors to apply the appropriate animation to each of the objects. The animation scheme engine retrieves an animation scheme and data regarding the objects. The animation scheme engine then evaluates the animation scheme along with the data regarding the objects to identify the animation to be applied to each object. The identified animations and visual effects are then applied to the objects.03-26-2009
20090079743DISPLAYING ANIMATION OF GRAPHIC OBJECT IN ENVIRONMENTS LACKING 3D RENDERING CAPABILITY - Three dimensional (3D) animations of an avatar graphic object are displayed in an environment that lacks high quality real-time 3D animation rendering capability. Before the animation is displayed in the environment at runtime, corresponding 3D and 2D reference models are created for the avatar. The 2D reference model is provided in a plurality of different views or reference angles. A 3D animation rendering program is used to produce 3D motion data for each animation. The 3D motion data define a position and rotation of parts of the 3D reference model. Image files are prepared for art assets drawn on associated parts of the 2D reference model in all views. At runtime in the environment, the position, rotation, and layer of each avatar part in 3D space is mapped to 2D space for each successive frame of an animation, with selected art assets applied to the associated parts of the avatar.03-26-2009
20100188409INFORMATION PROCESSING APPARATUS, ANIMATION METHOD, AND PROGRAM - An information processing apparatus is provided which includes an input information recording unit for recording, when a movement stroke for an object is input, information on moving speed and movement stroke of an input tool used for inputting the movement stroke, and an object behaviour control unit for moving the object, based on the information on moving speed and movement stroke recorded by the input information recording unit, in such a way that the movement stroke of the input tool and a moving speed at each point in the movement stroke are replicated.07-29-2010
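The recording/replication idea reduces to storing a timestamp with every sampled point and interpolating against those timestamps on playback, so both the stroke and the speed at each point along it are reproduced. A minimal sketch with hypothetical names:

```python
import bisect

class StrokeRecorder:
    """Record (time, position) samples of an input tool; replay them later."""

    def __init__(self):
        self.samples = []  # (t, (x, y)) tuples, t strictly increasing

    def record(self, t, pos):
        self.samples.append((t, pos))

    def position_at(self, t):
        """Interpolate the stroke at playback time t (needs >= 1 sample)."""
        times = [s[0] for s in self.samples]
        i = bisect.bisect_right(times, t)
        if i == 0:
            return self.samples[0][1]
        if i == len(self.samples):
            return self.samples[-1][1]
        (t0, (x0, y0)), (t1, (x1, y1)) = self.samples[i - 1], self.samples[i]
        a = (t - t0) / (t1 - t0)
        return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
```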
20100182326RIGGING FOR AN ANIMATED CHARACTER MOVING ALONG A PATH - In computer enabled key frame animation, a method and associated system for rigging a character so as to provide a large range of motion with great fluidity. The rigging uses a character body that moves along a path or freely as needed. The nodes in the body and path are not physically connected but are linked for performing a particular task. This task-driven behavior of the nodes, which may allow them to re-organize themselves into different configurations in order to perform a common duty, implies a variable geometry for the entire dynamic structure. In some regard the nodes can be said to be intelligent.07-22-2010
20100182327Method and System for Processing Picture - Embodiments of the present invention provide a method for processing pictures, including: decomposing a dynamic picture frame into multiple static picture frames; bonding each of the static picture frames with a static original picture to generate multiple static pictures; and forming a dynamic picture with the multiple static pictures. Embodiments of the present invention further provide a system for processing pictures, including a decomposing unit, a bonding unit and a composing unit. The decomposing unit is configured to decompose a dynamic picture frame into multiple static picture frames; the bonding unit is configured to bond each of the static picture frames with a static original picture to generate multiple static pictures; and the composing unit is configured to form a dynamic picture with the multiple static pictures. By processing pictures with the technical solution provided by embodiments of the present invention, pictures may possess a sense of action and good expressive force, and may better display the personality of the user.07-22-2010
20100182324DISPLAY APPARATUS AND DISPLAY METHOD FOR PERFORMING ANIMATION OPERATIONS - A display apparatus and a displaying method of the same are provided. The display apparatus includes: a display unit; a detector which detects a user's motion; a signal processor; and a controller which controls the signal processor to display on the display unit an animation operation related to a still image if the still image is being displayed on the display unit and the user's motion is detected by the detector.07-22-2010
20100259546MODELIZATION OF OBJECTS IN IMAGES - A system includes an aligner to align an initial position of an at least partially kinematically parameterized model with an object in an image, and a modelizer to adjust parameters of the model to match the model to contours of the object, given the initial alignment. An animation system includes a modelizer to hierarchically match a hierarchically rigid model to an object in an image, and a cutter to cut said object from said image and to associate it with said model. A method for animation includes hierarchically matching a hierarchically rigid model to an object in an image, and cutting said object from said image to associate it with said model.10-14-2010
20100164959RENDERING A VIRTUAL INPUT DEVICE UPON DETECTION OF A FINGER MOVEMENT ACROSS A TOUCH-SENSITIVE DISPLAY - A method comprises a processor detecting a person's finger moving across an unrendered portion of a touch-sensitive display. As a result of detecting the finger moving, the method further comprises the processor causing data to be rendered as a virtual keyboard image on the display.07-01-2010
20100188410GRAPHIC ELEMENT WITH MULTIPLE VISUALIZATIONS IN A PROCESS ENVIRONMENT - Smart graphic elements are provided for use as portions or components of one or more graphic displays, which may be executed in a process plant to display information to users about the process plant environment, such as the current state of devices within the process plant. Each of the graphic elements is an executable object that includes a property or a variable that may be bound to an associated process entity, like a field device, and that includes multiple visualizations, each of which may be used to graphically depict the associated process entity on a user interface when the graphic element is executed as part of the graphic display. Any of the graphic element visualizations may be used in any particular graphic display and the same graphic display may use different ones of the visualizations at different times. The different visualizations associated with a graphic element make the graphic element more versatile, as they allow the same graphic element to be used in different displays using different graphical styles or norms. These visualizations also enable the same graphic element to be used in displays designed for different types of display devices, such as display devices having large display screens, standard computer screens and very small display screens, such as PDA and telephone display screens.07-29-2010
20100238181Method And System For Animating Graphical User Interface Elements Via A Manufacturing/Process Control Portal Server - A method and system are disclosed for rendering animated graphics on a browser client based upon a stream of runtime data from a manufacturing/process control system. The graphics animation is based upon an animated graphic display object specification and runtime data from a portal server affecting an appearance trait of the animated graphic display object. The client browser receives an animated graphics description from the portal server specifying an animation behavior for an identified graphical display object. The client creates a data exchange connection between an animated display object, corresponding to the animated graphics description, and a source of runtime data from the portal server affecting display of the animated display object. Thereafter, the client applies runtime data received from the source of runtime data to the animated display object to render an animated graphic display object.09-23-2010
20100238180APPARATUS AND METHOD FOR CREATING ANIMATION FROM WEB TEXT - An apparatus and method for creating animation from a web text are provided. The apparatus includes a script formatter for generating a domain format script from the web text using a domain format that corresponds to a type of the web text, an adaptation engine for generating animation contents using the generated domain format script, and a graphics engine for reproducing the generated animation contents in the form of an animation.09-23-2010
20100238179Presentation of Personalized Weather Information by an Animated Presenter - A computer-implemented personalized weather presentation method. The method includes generating personalized weather information (09-23-2010
20110175919SIGNAGE DISPLAY SYSTEM AND PROCESS - The invention is a novel display system, for signage and the like. An apparatus and process is provided for displaying a static source image in a manner that it is perceived as an animated sequence of images when viewed by an observer in relative motion to the apparatus. The source image is sliced or fractured to provide a plurality of image fractions of predetermined dimension. The fractions are redistributed in a predetermined sequence to provide an output image, which is placed in a preferably illuminated display apparatus provided with a mask. An observer in relative motion to the display apparatus, sequentially views a predetermined selection of image fractions through the mask, which are perceived by the observer as a changing sequence of images. Applying the concepts of persistence of vision, the observer perceives the reconstructed imagery as live action animation, a traveling singular image or a series of static images, or changing image sequences, from a plurality of lines of sight.07-21-2011
20120139923WRAPPER FOR PORTING A MEDIA FRAMEWORK AND COMPONENTS TO OPERATE WITH ANOTHER MEDIA FRAMEWORK - A system comprises a media framework component graph, a first media framework, a second media framework, and a media framework translator. The media framework component graph comprises one or more components. The one or more components are coupled with the first media framework. The first media framework is designed to run the media framework component graph. The media framework translator enables the first media framework and the media framework component graph to both function as a component for the second media framework.06-07-2012
20090289944IMAGE PROCESSING APPARATUS, IMAGE OUTPUTTING METHOD, AND IMAGE OUTPUTTING PROGRAM EMBODIED ON COMPUTER READABLE MEDIUM - In order to enable a still image to be checked while preventing leakage of confidential information contained in the still image, an MFP includes: an image acquiring portion to acquire a still image; an encoding portion to generate encoded data by encoding the acquired still image using an encoding key stored in advance; a decoding portion to decode the encoded data using the encoding key or a decoding key corresponding to the encoding key; and a transmitting portion to externally output the decoded still image in an electronically non-recordable form.11-26-2009
20130187927Method and System for Automated Production of Audiovisual Animations - The present invention relates to a computer-implemented method for the automated production of an audiovisual animation, in particular a tutorial video, wherein the method comprises the following steps: 07-25-2013
20130127873System and Method for Robust Physically-Plausible Character Animation - An interactive application may include a quasi-physical simulator configured to determine the configuration of animated characters as they move within the application and are acted on by external forces. The simulator may work together with a parameterized animation module that synthesizes and provides reference poses for the animation from example motion clips that it has segmented and parameterized. The simulator may receive input defining a trajectory for an animated character and input representing one or more external forces acting on the character, and may perform a quasi-physical simulation to determine a pose for the character in the current animation frame in reaction to the external forces. The simulator may enforce a goal constraint that the animated character follows the trajectory, e.g., by adding a non-physical force to the simulation, the magnitude of which may be dependent on a torque objective that attempts to minimize the use of such non-physical forces.05-23-2013
20130127876GRAPHIC DISPLAY APPARATUS - A graphic display apparatus within an automotive vehicle wherein the display apparatus includes at least two display units operable to display graphics and/or video, a wire connector connecting the at least two display units together, and a control system connected to the wire connector wherein the control system is operable to play video or graphics on the at least two display units. A method is provided to allow the system to be universal for both audio and navigation systems, wherein each system calls for a predetermined delay of the animation. The display units are in communication with one another, providing for a coordinated or synchronized display of graphics. If, by way of example, a firework explodes on the main display screen, the remnants of that single firework will be exploded onto the secondary display screens.05-23-2013
20090322761APPLICATIONS FOR MOBILE COMPUTING DEVICES - A sequence of images is displayed in response to user input, such as an answer to a question, a touch and drag operation, a tap operation or shaking of a mobile device. The images may be displayed in an order determined by a direction implied by the user input, and may be accompanied by music. The display of the sequence of images may continue for a time determined by the shaking of the device prior to commencement of the display of the sequence of images. The sequence of images may depict a common constituent in successively different poses or states.12-31-2009
20090322760Dynamic animation scheduling - Dynamic animation scheduling techniques are described in which application callbacks are employed to permit dynamic scheduling of animations. An application may create a storyboard that defines an animation as transitions applied to a set of variables. The storyboard may be communicated to an animation component configured to schedule the storyboard. The animation component may then communicate one or more callbacks at various times to the application that describe a state of the variables. Based on the callbacks, the application may specify changes, additions, deletions, and/or other modifications to dynamically modify the storyboard. To draw the animation, the application may communicate a get variable values command to the animation component. The animation component performs calculations to update the variable values based on the storyboard and communicates the results to the application. The application may then cause output of the animation defined by the storyboard.12-31-2009
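A toy version of this callback-driven flow might look like the following Python sketch; the class and method names are invented, and a real animation component would be considerably richer:

```python
class Storyboard:
    """Toy storyboard: linear transitions applied to named variables."""

    def __init__(self):
        self.transitions = {}  # variable name -> (start, duration, v0, v1)
        self.callbacks = []    # functions called with (storyboard, time)

    def add_transition(self, var, start, duration, v0, v1):
        self.transitions[var] = (start, duration, v0, v1)  # duration > 0

    def on_update(self, cb):
        self.callbacks.append(cb)

    def get_variable_values(self, t):
        # Fire callbacks first so the application can dynamically modify
        # the storyboard (add, change, delete transitions) before drawing.
        for cb in self.callbacks:
            cb(self, t)
        values = {}
        for var, (start, dur, v0, v1) in self.transitions.items():
            a = min(max((t - start) / dur, 0.0), 1.0)
            values[var] = v0 + a * (v1 - v0)
        return values

sb = Storyboard()
sb.add_transition("x", start=0.0, duration=2.0, v0=0.0, v1=100.0)
print(sb.get_variable_values(1.0))  # {'x': 50.0}
```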
20090153567Systems and methods for generating personalized computer animation using game play data - Systems, methods, and computer storage media for generating a computer animation of a game. A custom animation platform receives game play data of the game and determines at least one scene based on the game play data. Then, one or more frames in the scene are set up, where at least one of the frames includes at least one non-game pre-production element of the game. Subsequently, the frames are rendered and the rendered frames are combined to generate a computer animation.06-18-2009
20090167767GROWING AND CARING FOR A VIRTUAL CHARACTER ON A MOBILE DEVICE - A computer implemented method, data processing system, and computer program product for enabling a user to create and care for a virtual character on a user communication device. The data processing system includes a server connected via a communication network to a user communication device. In response to a user action, a software application with a virtual character in it is downloaded into the user communication device. The virtual character interacts with the user through the software application in accordance with predefined characteristics. The user is enabled to perform virtual actions relating to the virtual character, responsive to the virtual character behavior, by sending data (e.g., by SMS or MMS) using the user communication device over the communication network to the server. The virtual character is responsive to data sent to it from the server in response to the user's virtual actions, in accordance with the virtual character's predefined characteristics.07-02-2009
20090167769METHOD, DEVICE AND SYSTEM FOR MANAGING STRUCTURE DATA IN A GRAPHIC SCENE - A method is provided for restoring graphic animation content, including the following steps, in a receiver terminal: transmitting a request for retrieving the content; and obtaining at least one graphic scene of the content describing at least the spatio-temporal arrangement between the graphic objects of the content. The content further includes at least one function for managing structured data allowing interaction with a database of structured data. The method further includes: querying the database, based on at least one command present in the graphic scene and associated with the function(s) for managing structured data; obtaining structured data derived from the database; integrating the structured data in the graphic scene; and restoring the graphic scene.07-02-2009
20090147008Arrangements for controlling activities of an avatar - Systems are disclosed herein that allow a participant to be associated with an avatar and receive a transmission from the participant in response to a participant activated transmission. The transmission can include a participant selectable and time delayed mood and/or activity command, which can be associated with a user configurable command-to-avatar-activity conversion table. The associated avatar activity table can provide control signals to the VU system controlling the participant's avatar for extended time periods, where the activity commands allow the avatar to exhibit a mood and to conduct an activity. The preconfigured time controlled activity commands allow the user to control their avatar without being actively engaged in a session with a virtual universe client or logged on, and the control configuration can be set up such that a single mood/activity control signal can initiate moods and activities that occur over an extended period of time.06-11-2009
20100302254ANIMATION SYSTEM AND METHODS FOR GENERATING ANIMATION BASED ON TEXT-BASED DATA AND USER INFORMATION - Animation devices and a method that may output text-based data as an animation are provided. The device may be a terminal, such as a mobile phone, a computer, and the like. The animation device may extract one or more emotions corresponding to a result obtained by analyzing text-based data. The emotion may be based on user relationship information managed by a user of the device. The device may select an action corresponding to the emotion from a reference database, and combine the text-based data with the emotion and action to generate an animation script. The device may generate a graphic in which a character is moved based on the action information, the emotion information, and the text-based data.12-02-2010
20100302255METHOD AND SYSTEM FOR GENERATING A CONTEXTUAL SEGMENTATION CHALLENGE FOR AN AUTOMATED AGENT - Provided is a system and method for generating a contextual segmentation challenge that poses an identification challenge. The method includes obtaining at least one ad element and obtaining a test element. The ad element and the test element are then combined to provide a composite image. At least one noise characteristic is then applied to the composite image. The composite image is then animated as a plurality of views as a contextual segmentation challenge. A system for performing the method is also provided.12-02-2010
20100302253REAL TIME RETARGETING OF SKELETAL DATA TO GAME AVATAR - Techniques for generating an avatar model during the runtime of an application are herein disclosed. The avatar model can be generated from an image captured by a capture device. End-effectors can be positioned and inverse kinematics can be used to determine positions of other nodes in the avatar model.12-02-2010
20100302252MULTIPLE PERSONALITY ARTICULATION FOR ANIMATED CHARACTERS - A method for a computer system includes determining a model for a first personality of a component of an object, wherein the model for the first personality of the component is associated with a component name and a first personality indicia, determining a model for a second personality of the component of the object, wherein the model for the second personality of the component is associated with the component name and the second personality indicia, determining a multiple personality model of the object, wherein the model of the object includes the model for the first personality of the component, the model of the second personality of the component, the first personality indicia, and the second personality indicia, and storing the multiple personality model of the object in a single file.12-02-2010
20100302256System and Method for Video Choreography - An electronic entertainment system for creating a video sequence by executing video game camera behavior based upon a video game sound file includes a memory configured to store an action event/camera behavior (AE/CB) database, game software such as an action generator module, and one or more sound files. In addition, the system includes a sound processing unit coupled to the memory for processing a selected sound file, and a processor coupled to the memory and the sound processing unit. The processor randomly selects an AE pointer and a CB pointer from the AE/CB database. Upon selection of the CB pointer and the AE pointer, the action generator executes camera behavior corresponding to the selected CB pointer to view an action event corresponding to the selected AE pointer.12-02-2010
20090066701IMAGE BROWSING METHOD AND IMAGE BROWSING APPARATUS THEREOF - An image browsing method includes: detecting a movement corresponding to a user input to generate a detecting variation; checking if the detecting variation is greater than a predetermined threshold value; when the detecting variation is greater than the predetermined threshold value, displaying an animation indicative of completely turning a page for showing a target image instead of a current image in order to allow browsing of the target image; and when the detecting variation is not greater than the predetermined threshold value, displaying the current image for browsing of the current image.03-12-2009
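The core of the method is a single threshold test on the detected variation; a minimal sketch, where the ui object and its methods are hypothetical stand-ins for the display logic:

```python
def handle_drag(delta, threshold, current_image, target_image, ui):
    """Turn the page only when the detected variation clears the threshold."""
    if abs(delta) > threshold:
        # Show the full page-turn animation and land on the target image.
        ui.play_page_turn(current_image, target_image)
        return target_image
    # Variation too small: stay on (and redisplay) the current image.
    ui.show(current_image)
    return current_image
```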
20090066703CONSTRAINT SCHEMES FOR COMPUTER SIMULATION OF CLOTH AND OTHER MATERIALS - Constraint schemes for use in the computer simulation and animation of cloth, clothing and other materials helps to prevent clothing from excessive stretching, bunching up in unwanted areas, or “passing through” rigid objects during collisions. Several types of constraint systems are employed, including the use of skinned vertices as constraints and axial constraints. In these schemes cloth simulated vertices are generated for the material using a cloth simulation technique, and skinned vertices are generated for the material using a skin simulation technique. One or more of the cloth simulated vertices are compared to the corresponding skinned vertices. The cloth simulated vertices are modified if they deviate from the corresponding skinned vertices by more than a certain amount. Vertical constraints are also employed, which involve generating a first set of vertices for the material using a cloth simulation technique, comparing a vertical component of each of the first set of vertices to a lower limit for each of the first set of vertices, and for each vertical component that falls below the lower limit, modifying the vertical component to be equal to the lower limit.03-12-2009
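Both constraint types reduce to clamping: a cloth vertex is pulled back toward its skinned counterpart when it strays too far, and vertical components are clamped to a lower limit. A NumPy sketch under those assumptions, with invented names:

```python
import numpy as np

def apply_constraints(cloth_v, skinned_v, max_dev, floor_y):
    """Clamp cloth vertices toward skinned vertices and above a floor."""
    cloth_v = np.asarray(cloth_v, dtype=float).copy()
    skinned_v = np.asarray(skinned_v, dtype=float)

    # Skinned-vertex constraint: any cloth vertex deviating from its
    # skinned counterpart by more than max_dev is pulled back onto the
    # sphere of radius max_dev around the skinned vertex.
    delta = cloth_v - skinned_v
    dist = np.linalg.norm(delta, axis=1, keepdims=True)
    too_far = (dist > max_dev).ravel()
    scale = max_dev / np.maximum(dist, 1e-9)
    cloth_v[too_far] = skinned_v[too_far] + delta[too_far] * scale[too_far]

    # Vertical constraint: y components below the lower limit are clamped.
    cloth_v[:, 1] = np.maximum(cloth_v[:, 1], floor_y)
    return cloth_v
```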
20090066702Development Tool for Animated Graphics Application - A presentation engine collects information concerning the rendering of the frames of an animated graphics application, such as the time taken for rendering the frame and the amount of memory used. This information quantifies the amount of certain computing resources being utilized on a per-frame basis, enabling the authors of the animated graphics application, particularly the designers of the animated graphics, to identify frames that are problematic, especially on resource-limited devices. The generation of information does not depend on the animated graphics application being instrumented to generate the metrics. The method is adaptable to any resource-limited device to which the presentation engine is ported or adapted to run. When executing on a resource-limited device, the information is sent to a workstation for analysis. An analysis tool, which may be a stand-alone program or part of an authoring tool or other program, displays the collected metrics graphically in relation to the frame.03-12-2009
20110018881VARIABLE FRAME RATE RENDERING AND PROJECTION - In rendering a computer-generated animation sequence, pieces of animation corresponding to shots of the computer-generated animation sequence are obtained. Measurements of action in the shots are obtained. Frame rates for the shots, which can differ from shot to shot, are determined based on the obtained measurements of action. The shots are rendered at the determined frame rates, and the rendered shots are stored with frame rate information indicating the frame rates used in rendering them.01-27-2011
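A minimal sketch of the per-shot frame-rate decision, assuming each shot's action measurement has been normalized to [0, 1] and picking between two hypothetical rates:

```python
def choose_frame_rates(shots, low=24.0, high=60.0, cutoff=0.5):
    """Map each shot's action measurement in [0, 1] to a frame rate."""
    return {shot: (high if action >= cutoff else low)
            for shot, action in shots.items()}

# A chase shot renders at 60 fps, a quiet dialogue shot at 24 fps; the
# chosen rate is stored alongside each rendered shot for projection.
rates = choose_frame_rates({"chase": 0.9, "dialogue": 0.1})
```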
20110109634PORTABLE ELECTRONIC DEVICE AND METHOD OF INFORMATION RENDERING ON PORTABLE ELECTRONIC DEVICE - A portable electronic device-implemented method includes rendering information on a display of the portable electronic device, detecting receipt of an initiating input, and rendering a band, including at least one field, along an edge of the display.05-12-2011
20110109635Animations - At least certain embodiments of the present disclosure include a method for animating a display region, windows, or views displayed on a display of a device. The method includes starting at least two animations. The method further includes determining the progress of each animation. The method further includes completing each animation based on a single timer.05-12-2011
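One reading of the single-timer idea: each active animation records its own start time and duration, and one clock callback advances all of them and completes those whose time is up. A sketch with invented names:

```python
class AnimationGroup:
    """Drive several animations from one timer; complete each on schedule."""

    def __init__(self):
        self.active = []  # dicts: {"start", "duration", "apply", "done"}

    def start(self, now, duration, apply_fn, done_fn=None):
        self.active.append({"start": now, "duration": duration,
                            "apply": apply_fn, "done": done_fn})

    def tick(self, now):
        """Single timer callback; returns True while any animation runs."""
        still_running = []
        for anim in self.active:
            progress = (now - anim["start"]) / anim["duration"]
            if progress >= 1.0:
                anim["apply"](1.0)  # snap to the final state and complete
                if anim["done"]:
                    anim["done"]()
            else:
                anim["apply"](max(progress, 0.0))
                still_running.append(anim)
        self.active = still_running
        return bool(self.active)
```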
20110115798METHODS AND SYSTEMS FOR CREATING SPEECH-ENABLED AVATARS - Methods and systems for creating speech-enabled avatars are provided. In accordance with some embodiments, methods for creating speech-enabled avatars are provided, the method comprising: receiving a single image that includes a face with distinct facial geometry; comparing points on the distinct facial geometry with corresponding points on a prototype facial surface, wherein the prototype facial surface is modeled by a Hidden Markov Model that has facial motion parameters; deforming the prototype facial surface based at least in part on the comparison; in response to receiving a text input or an audio input, calculating the facial motion parameters based on a phone set corresponding to the received input; generating a plurality of facial animations based on the calculated facial motion parameters and the Hidden Markov Model; and generating an avatar from the single image that includes the deformed facial surface, the plurality of facial animations, and the audio input or an audio waveform corresponding to the text input.05-19-2011
20110115799METHOD AND SYSTEM FOR ASSEMBLING ANIMATED MEDIA BASED ON KEYWORD AND STRING INPUT - One aspect of the invention is a method for automatically assembling an animation. According to this embodiment, the method includes accepting at least one input keyword relating to a subject for the animation and accessing a set of templates. In this embodiment, each template generates a different type of output, and each template includes components for display time, screen location, and animation parameters. The method also includes retrieving data from a plurality of websites or data collections using an electronic search based on the at least one input keyword and the templates, determining which retrieved data to assemble into the set of templates, coordinating assembly of data-populated templates to form the animation, and returning the animation for playback by a user.05-19-2011
20090051690Motion line switching in a virtual environment - A computing system enhances the human-like realism of computer opponents in racing-type games and other motion-related games. The computing system observes multiple prescribed motion lines and computes switching probabilities attributed to switching of simulated motion of a racer from one prescribed motion line to another. A sampling module samples at random over the switching probabilities to select one of the switching probabilities. At least one control signal is generated to switch simulated motion of the entity in a virtual reality environment from the first prescribed motion line to one of the other prescribed motion lines, in accordance with the selected one of the switching probabilities.02-26-2009
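Sampling at random over the switching probabilities amounts to a cumulative-probability draw; a hypothetical sketch in which any probability mass not assigned to a switch means the racer stays on its current line:

```python
import random

def pick_motion_line(current_line, switch_probs, rng=random):
    """Sample the next prescribed motion line from switching probabilities.

    Probability mass not assigned to any switch means "stay on the
    current line".
    """
    r = rng.random()
    cumulative = 0.0
    for line, p in switch_probs.items():
        cumulative += p
        if r < cumulative:
            return line    # control signal: switch to this motion line
    return current_line    # no switch this step

# A racer with a 20% chance to dive inside and 10% to drift outside.
next_line = pick_motion_line("racing", {"inside": 0.2, "outside": 0.1})
```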
20090033666SCENARIO GENERATION DEVICE, SCENARIO GENERATION METHOD, AND SCENARIO GENERATION PROGRAM - There is provided a scenario generation device capable of automatically generating a scenario for generating an animation of rich expression desired by a user, even from a text created by a user who has no special knowledge about the creation of animation. In the device, a scenario generation unit (02-05-2009
20100220100SIGNAGE DISPLAY SYSTEM AND PROCESS - The invention is a novel display system, for signage and the like. An apparatus and process is provided for displaying a static source image in a manner that it is perceived as an animated sequence of images when viewed by an observer in relative motion to the apparatus. The source image is sliced or fractured to provide a plurality of image fractions of predetermined dimension. The fractions are redistributed in a predetermined sequence to provide an output image, which is placed in a preferably illuminated display apparatus provided with a mask. An observer in relative motion to the display apparatus, sequentially views a predetermined selection of image fractions through the mask, which are perceived by the observer as a changing sequence of images. Applying the concepts of persistence of vision, the observer perceives the reconstructed imagery as live action animation, a traveling singular image or a series of static images, or changing image sequences, from a plurality of lines of sight.09-02-2010
20090219293CELLULAR TELEPHONE SET AND CHARACTER DISPLAY PRESENTATION METHOD TO BE USED IN THE SAME - A cellular telephone set can increase the number of display patterns of animation display without occupying a large storage region in the memory and without performing a setting operation every time. The character presentation means determines the character to be displayed on each event screen upon depression of the call release button after a call is placed, upon depression of the call release button after a call is received, upon occurrence of at least one of an unanswered call and newly received mail, and upon a change between the open and closed states of the first and second casings, depending upon the calling history, time of calling, call arrival history, time of call arrival, and the timing at which the detecting means detects the change between the open and closed states of the first and second casings.09-03-2009
20110084970SYSTEM AND METHOD FOR PREVENTING PINCHES AND TANGLES IN ANIMATED CLOTH - Systems and methods are disclosed for altering character body animations to improve subsequent cloth animations. In particular, based on a character body animation, an extra level of processing is performed, prior to the actual cloth simulation. The extra level of processing removes potential areas of pinching or tangling in input character body simulation data, ensuring that the output of the cloth simulation will have reduced pinches and tangles.04-14-2011
20090213124Method of Displaying Product and Service Performance Data - An entertaining and informative method of displaying competitive product performance data is disclosed. The various embodiments include a method for displaying product performance data by use of animated contests between animated representatives of competing products. The contest results are relative to selected product test results. The relationship between the test results and the contest results is a mathematical approximation. Thus, a gross disparity in the displayed animated contest is indicative of a gross disparity in the performance of the products on the test. Likewise, a closely fought contest in the displayed animated contest is indicative of close performance of the products on the test.08-27-2009
20100013837Method And System For Controlling Character Animation - Embodiments of the present invention provide a method for controlling character animation, in which the character animation includes at least two bones and skins corresponding to the bones, the method includes: (a) dividing the character animation into at least two parts, and setting an identification number for each part; (b) establishing a mapping table comprising a corresponding relationship between the identification number and skin data of each part; (c) picking skin data of an operation focus location in the character animation; (d) querying the mapping table according to the skin data, obtaining a corresponding identification number, and controlling the part in the character animation corresponding to the identification number. Embodiments of the present invention also provide a system for controlling character animation. Different parts of the character animation may be picked respectively by dividing the character animation into multiple parts.01-21-2010
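Steps (b) through (d) amount to a reverse-lookup table from skin data to part identification numbers; a minimal sketch with invented names:

```python
def build_mapping(parts):
    """Build the reverse table: skin vertex index -> part identifier."""
    return {vertex: part_id
            for part_id, vertices in parts.items()
            for vertex in vertices}

def pick_part(mapping, picked_vertex):
    """Query the table with picked skin data to find the part to control."""
    return mapping.get(picked_vertex)

# Vertex 42 belongs to the head, so picking it selects the head part.
table = build_mapping({"head": [40, 41, 42], "left_arm": [7, 8, 9]})
assert pick_part(table, 42) == "head"
```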
20100013836Method and apparatus for producing animation - Provided are a method and apparatus for interactively producing an animation. User-level contents for producing an animation may be queried for and received from a user, and a video script representing the animation may be created by using the user-level contents based on regulation information and animation direction knowledge. An image of the animation may then be output by playing the video script.01-21-2010
20090315893USER AVATAR AVAILABLE ACROSS COMPUTING APPLICATIONS AND DEVICES - An avatar along with its accessories, emotes, and animations may be system provided and omnipresent. In this manner, the avatar and its accessories, emotes, and animations may be available across multiple environments provided or exposed by multiple avatar computing applications, such as computer games, chats, forums, communities, or instant messaging services. An avatar system may change the avatar and its accessories, emotes, and animations, e.g. pursuant to a request from the user, instructions from an avatar computing application, or updates provided by software associated with a computing device. The avatar and its accessories, emotes, and animations may be changed by a system or computing application associated with a computing device outside of a computer game or computing environment in which the avatar may be rendered or used by the user.12-24-2009
20100066745Face Image Display, Face Image Display Method, and Face Image Display Program - The present invention provides a facial image display apparatus that can display moving images concentrated on the face when images of people's faces are displayed. A facial image display apparatus is provided wherein a facial area detecting unit (03-18-2010
20110175920METHOD FOR HANDLING AND TRANSFERRING DATA IN AN INTERACTIVE INPUT SYSTEM, AND INTERACTIVE INPUT SYSTEM EXECUTING THE METHOD - A method in a computing device of transferring data to another computing device includes establishing wireless communication with the other computing device, designating data for transfer to the other computing device; and in the event that the computing device assumes a predetermined orientation, automatically initiating wireless transfer of the data to the other computing device. A system implementing the method is provided. A method of handling a graphic object in an interactive input system having a first display device includes defining a graphic object placement region for the first display device that comprises at least a visible display region of the first display device and an invisible auxiliary region between the visible display region and an outside edge of the first display device; and in the event that the graphic object enters the invisible auxiliary region, automatically moving the graphic object through the invisible auxiliary region until at least a portion of the graphic object enters a visible display region of a second display device of a second interactive input system. A system implementing the method, and other related systems and methods, are provided.07-21-2011
20100128042SYSTEM AND METHOD FOR CREATING AND DISPLAYING AN ANIMATED FLOW OF TEXT AND OTHER MEDIA FROM AN INPUT OF CONVENTIONAL TEXT - A system and method for generating and displaying text on a screen as an animated flow from a digital input of conventional text. The Invention divides text into short-scan lines of coherent semantic value that progressively animate from invisible to visible and back to invisible. Multiple line displays are frequent. The effect is aesthetically engaging, perceptually focusing, and cognitively immersive. The reader watches the text like watching a movie. The Invention may exist in whole or in part as a standalone application on a specific screen device. The Invention includes a manual authoring tool that allows the insertion of non-text media such as sound, image, and advertisements.05-27-2010
20090096796Animating Speech Of An Avatar Representing A Participant In A Mobile Communication - Animating speech of an avatar representing a participant in a mobile communication including selecting one or more images; selecting a generic animation template; fitting the one or more images with the generic animation template; texture wrapping the one or more images over the generic animation template; and displaying the one or more images texture wrapped over the generic animation template. Receiving an audio speech signal; identifying a series of phonemes; and for each phoneme: identifying a new mouth position for the mouth of the generic animation template; altering the mouth position to the new mouth position; texture wrapping a portion of the one or more images corresponding to the altered mouth position; displaying the texture wrapped portion of the one or more images corresponding to the altered mouth position of the mouth of the generic animation template; and playing the portion of the audio speech signal represented by the phoneme.04-16-2009
20090153566METHODS AND APPARATUS FOR ESTIMATING AND CONTROLLING BEHAVIOR OF ANIMATRONICS UNITS - A method for determining behavior of an animatronics unit includes receiving animation data comprising artistically determined motions for at least a portion of an animated character, determining a plurality of control signals to be applied to at least the portion of the animatronics unit in response to the animation data, estimating the behavior of at least the portion of the animatronics unit in response to the plurality of control signals by driving a software simulation of at least the portion of the animatronics unit with the plurality of control signals, and outputting a representation of the behavior of at least the portion of the animatronics unit to a user.06-18-2009
20090027400Animation of Audio Ink - In a pen-based computing system, a microphone on the smart pen device records audio to produce audio data and a gesture capture system on the smart pen device records writing gestures to produce writing gesture data. Both the audio data and the writing gesture data include a time component. The audio data and writing gesture data are combined or synchronized according to their time components to create audio ink data. The audio ink data can be uploaded to a computer system attached to the smart pen device and displayed to a user through a user interface. The user makes a selection in the user interface to play the audio ink data, and the audio ink data is played back by animating the captured writing gestures and playing the recorded audio in synchronization.01-29-2009
20110148885APPARATUS AND METHOD FOR EDITING ANIMATION DATA OF VIRTUAL OBJECT UTILIZING REAL MODEL - Disclosed are an apparatus and a method for editing animation data of a virtual object using a real model. The animation data editing apparatus according to the embodiment of the present invention allows motion information acquired by measuring a real model to be used by computer graphics software for animation or modeling, so that a computer graphic model corresponding to the real model can be produced into an animation after the measured motion information is adjusted and modified by a designer.06-23-2011
20100118037OBJECT-AWARE TRANSITIONS - Techniques for accomplishing slide transitions in a presentation are disclosed. In accordance with these techniques, each object on a slide is individually manipulable during slide transitions. In certain embodiments, the presence of an object on both the outgoing and incoming slides may be taken into account during slide transition. Likewise, in certain embodiments, derivative objects, such as shadows or reflections, may be handled as distinct objects in generating a transition between slides.05-13-2010
20100118036APPARATUS AND METHOD OF AUTHORING ANIMATION THROUGH STORYBOARD - Described herein is an animation authoring apparatus and method thereof for authoring an animation. The apparatus includes a storyboard editor that provides a storyboard editing display that a user may interact with to edit a storyboard, and to store the edited storyboard. The apparatus further includes a parser to parse syntax of the edited storyboard, and a rendering engine to convert the edited storyboard into a graphic animation based on the parsed syntax of the edited storyboard.05-13-2010
20100118035MOVING IMAGE GENERATION METHOD, MOVING IMAGE GENERATION PROGRAM, AND MOVING IMAGE GENERATION DEVICE - A moving image generation method includes: a content designation step of designating a plurality of contents used for a moving image; a content collecting step of collecting each designated content; a content image generation step of generating content images based on the collected contents; a display mode setting step of setting a display mode of each generated content image; and a moving image generation step of generating a moving image where each content image alters with respect to time in accordance with the display mode which has been set.05-13-2010
20100118034APPARATUS AND METHOD OF AUTHORING ANIMATION THROUGH STORYBOARD - An animation authoring apparatus and method of authoring an animation including a storyboard editor to provide a storyboard editing screen, to interact with a user to edit a storyboard, and to store the edited storyboard, a parser to parse syntax of the edited storyboard, and a rendering engine to convert the edited storyboard into a graphic animation based on the parsed syntax of the edited storyboard.05-13-2010
20110175922METHOD FOR DEFINING ANIMATION PARAMETERS FOR AN ANIMATION DEFINITION INTERFACE - A system and a computer-readable medium are provided for controlling a computing device to define a set of computer animation parameters for an object to be animated electronically. An electronic reference model of the object to be animated is obtained. The reference model is altered to form a modified model corresponding to a first animation parameter. Physical differences between the electronic reference model and the modified model are determined and a representation of the physical differences are stored as the first animation parameter. Altering of the reference model and determining of the physical differences are repeated. The stored parameters are provided to a rendering device for generation of the animation in accordance with the stored parameters. Determining physical differences between the electronic reference model and the modified model and storing a representation of the physical differences as the first animation parameter include comparing vertex positions of the reference model.07-21-2011
20090322762Animated performance tool - A performance tool comprises a program that configures metrics into animated scenarios and at least one display that displays the animated scenarios. The animated scenarios illustrate measurable inputted data from multiple sets of data that are juxtaposed with one another.12-31-2009
20110175921PERFORMANCE DRIVEN FACIAL ANIMATION - A method of animating a digital facial model, the method including: defining a plurality of action units; calibrating each action unit of the plurality of action units via an actor's performance; capturing first facial pose data; determining a plurality of weights, each weight of the plurality of weights uniquely corresponding to the each action unit, the plurality of weights characterizing a weighted combination of the plurality of action units, the weighted combination approximating the first facial pose data; generating a weighted activation by combining the results of applying the each weight to the each action unit; applying the weighted activation to the digital facial model; and recalibrating at least one action unit of the plurality of action units using input user adjustments to the weighted activation.07-21-2011
20110175918CHARACTER ANIMATION CONTROL INTERFACE USING MOTION CAPTURE - A processor-readable medium stores code representing instructions to cause a processor to define a virtual feature. The virtual feature can be associated with at least one engaging condition. The code further represents instructions to cause the processor to receive an end-effector coordinate associated with an actor and calculate an actor intention based at least in part on a comparison between the at least one engaging condition and the end-effector coordinate.07-21-2011
20120200574TRAINING FOR SUBSTITUTING TOUCH GESTURES FOR GUI OR HARDWARE KEYS TO CONTROL AUDIO VIDEO PLAY - A user can toggle between GUI input and touch screen input with the GUI hidden using touch gestures correlated to respective hidden GUI elements and, thus, to respective commands for a TV and/or disk player sending AV data thereto. When in the GUI input mode, an animated hand can be presented on the display moving through the touch gesture corresponding to a selected GUI element to train the user on which touch gestures correspond to which GUI elements (and, thus, to respective commands for a TV and/or disk player sending AV data thereto.)08-09-2012
20110261060DRAWING METHOD AND COMPUTER PROGRAM - The present invention provides an easy method for creating animations from drawings. A user utilizes a user interface of an electronic media to draw a first line, then to go back in the recording's timeline, and then to draw a second line, such that a playback of the recording shows at least some portion of the first and second lines being drawn simultaneously. This allows a user to easily create animations from drawings for the purpose of visualization, art, entertainment, or encoding of synchronized motion. The invention allows for various ways in which the computer can receive a user's drawing events, in which drawing events are associated with timelines to create a recording, in which drawing events are displayed, and in which the playback of drawing events is saved.10-27-2011
20110164044Preparation method for the virtual reality of high fidelity sports and fitness equipment and interactive system and method based on the virtual reality - This invention is a preparation method for the virtual reality of high fidelity sports and fitness equipment and an interactive system and method based on the virtual reality. Image content shot from a real scene and the control parameters of the sports and fitness equipment corresponding to that scene are captured synchronously. The control parameters let the sports and fitness equipment adjust, in real time and automatically, its forward and backward lean, left and right lean, and swinging angle along with changes in the real scene, or automatically adjust its loading; or follow the user's exercise speed to adjust the playing speed of the virtual reality; or follow environmental parameters to adjust the loading; or follow the staring direction of the user's eyes, the direction of the user's face, or the swinging angle of the equipment to change the visible portion of the real-scene image. The method can prepare diversified virtual reality digital content and can reach a high level of virtuality for different exercise characteristics; the user is not limited by site or weather, and the field of application becomes wider.07-07-2011
20110164042Device, Method, and Graphical User Interface for Providing Digital Content Products - A multifunction device having a touch-sensitive surface displays graphical objects that represent digital content products, each graphic object having a front side image and a back side image. An initial display shows front side images of objects representing digital content products. A user input selects a graphical object, resulting in an animation that simultaneously flips the graphical object over and enlarges it. At the end of the animation, the back side is displayed, and is larger than the initial front side image. A second user input on a front side image of a second graphical object results in a second animation that simultaneously flips the first graphical object over and reduces its size, and also flips the second graphical object over and enlarges it. The front side image of the first graphical object and the back side image of the second graphical object are thereby concurrently displayed.07-07-2011
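The simultaneous flip-and-enlarge is two interpolations driven by one animation clock: a rotation from 0 to 180 degrees and a scale from the front-side size up to the back-side size, with the visible side switching at the halfway point. A minimal sketch (the scale values are illustrative):

```python
def flip_and_enlarge(t, start_scale=1.0, end_scale=2.0):
    """Animation state at progress t in [0, 1]."""
    angle = 180.0 * t                                  # flip about the vertical axis
    scale = start_scale + (end_scale - start_scale) * t
    side = "front" if angle < 90.0 else "back"         # back side shows past halfway
    return angle, scale, side

for step in range(5):
    print(flip_and_enlarge(step / 4))
```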
20110164043METHOD OF REMOTELY CONTROLLING A PRESENTATION TO FREEZE AN IMAGE USING A PORTABLE ELECTRONIC DEVICE - A system and method are set forth for remotely controlling a presentation from a portable electronic device so as to freeze a slide on a remote projector to permit searching for a desired slide on the portable electronic device and then continuing the presentation when searching is complete. In one embodiment, a switch is provided in a communication layer of a presentation application such that when the switch is turned off, communication is suspended between the portable electronic device and the projector, thereby permitting browsing on the portable electronic device without interrupting the presentation. When the switch is turned on the current slide information is transmitted from the portable electronic device to the projector.07-07-2011
20130120401Animation of Computer-Generated Display Components of User Interfaces and Content Items - Animation of computer-generated display components of user interfaces and content items is provided. An animation application or engine creates images of individual display components (e.g., bitmap images) and places those images on animation layers. Animation behaviors may be specified for the layers to indicate how the layers and associated display component images animate or behave when their properties change (e.g., a movement of an object contained on a layer), as well as, to change properties on layers in order to trigger animations (e.g., an animation that causes an object to rotate). In order to achieve high animation frame rates, the animation application may utilize three processing threads, including a user interface thread, a compositor thread and a rendering thread. Display behavior may be optimized and controlled by utilizing a declarative markup language, such as the Extensible Markup Language, for defining display behavior functionality and properties.05-16-2013
20110080412DEVICE FOR DISPLAYING CUTTING SIMULATION, METHOD FOR DISPLAYING CUTTING SIMULATION, AND PROGRAM FOR DISPLAYING CUTTING SIMULATION - In order to reduce the amount of computation required for ray tracing and facilitate simulating of changes in workpiece shape even on an inexpensive, low-performance computer, a device for displaying a cutting simulation includes: a rendered workpiece image update section for updating by ray tracing a portion of a rendered workpiece image buffer and a rendered workpiece depth buffer, the portion being associated with a rendering region corresponding to a change in the shape of the workpiece; a rendered tool image creation section for rendering a tool image by ray tracing for the current tool rendering region; and an image transfer section for transferring a partial image of the previous tool rendering region and the current workpiece rendering region to be updated from the rendered workpiece image buffer to a display frame buffer as well as transferring the current tool rendering image to the display frame buffer.04-07-2011
20110080410SYSTEM AND METHOD FOR MAKING EMOTION BASED DIGITAL STORYBOARD - A system and a method for generating a digital storyboard in which characters with various emotions are produced. The digital storyboard generating system includes an emotion-expressing character producing unit to produce an emotion-based emotion-expressing character, and a storyboard generating unit to generate storyboard data using the emotion-expressing character. Optionally, cartoon-rendering is performed on the storyboard data to generate an image, where the image is output to the user.04-07-2011
20100110081SOFTWARE-AIDED CREATION OF ANIMATED STORIES - Software assistance allows a child or other author to generate a story. The author may generate their own content and add that author-generated content to the story. For instance, the author could draw their own background, background items, and/or characters. These drawn items could even be added to a library so that they could be reused in other stories. The author can define their own animations associated with characters and background items, rather than selecting predefined animations. The story timeline may also keep track of events that are caused by the author interacting with the story in particular ways and that represent significant story changes. The author may then jump to these navigation points to delete the event, thereby removing the effects of the story change.05-06-2010
20100283788VISUALIZATION SYSTEM FOR A DOWNHOLE TOOL - Apparatus for visualizing a downhole tool in a subsurface environment. The apparatus comprises: an input for receiving data on at least one of the downhole tool and the subsurface environment, a physical model processing said input for generating a representation of the downhole tool moving through said subsurface environment, and an output for displaying said downhole tool movement in real-time.11-11-2010
20100283787CREATION AND RENDERING OF HIERARCHICAL DIGITAL MULTIMEDIA DATA - The present invention relates to a method for the creation of large hierarchical computer graphics datasets. The method comprises combination (11-11-2010
20080218523SYSTEM AND METHOD FOR NAVIGATION OF DISPLAY DATA - Navigating display data (e.g., large documents) on an electronic display is described in which a first set of visual indicators is layered over the portion of the data displayed on the electronic display. The user selects a particular navigation task, which selection signal is received by the navigation application. The navigation application determines a section of interest based on the particular navigation task selected and layers a second set of visual indicators over the portion of the display data defined by all of the sections other than the section of interest. The navigation application then animates movement of the display data and both sets of visual indicators on the electronic display according to the particular navigation task selected.09-11-2008
20100194761CONVERTING CHILDREN'S DRAWINGS INTO ANIMATED MOVIES - The present invention comprises a business method and music- and text-derived speech-animation software for producing simple, effective animations of digital media content that educate and entertain children and viewers through the presentation of speaking digital characters. The invention makes the creation of digital talking characters both easy and effective. The completed animation is then provided to the children who made the drawings and optionally posted on a website accessible through the Internet or used for the creation of online greeting cards and story books.08-05-2010
20100194762Standard Gestures - Systems, methods and computer readable media are disclosed for grouping complementary sets of standard gestures into gesture libraries. The gestures may be complementary in that they are frequently used together in a context or in that their parameters are interrelated. Where a parameter of a gesture is set with a first value, all other parameters of the gesture and of other gestures in the gesture package that depend on the first value may be set with their own value which is determined using the first value.08-05-2010
20110273455Systems and Methods of Rendering a Textual Animation - Systems and methods of rendering a textual animation are provided. The methods include receiving an audio sample of an audio signal that is being rendered by a media rendering source. The methods also include receiving one or more descriptors for the audio signal based on at least one of a semantic vector, an audio vector, and an emotion vector. Based on the one or more descriptors, a client device may render the textual transcriptions of vocal elements of the audio signal in an animated manner. The client device may further render the textual transcriptions of the vocal elements of the audio signal to be substantially in synchrony to the audio signal being rendered by the media rendering source. In addition, the client device may further receive an identification of a song corresponding to the audio sample, and may render lyrics of the song in an animated manner.11-10-2011
20110187727APPARATUS AND METHOD FOR DISPLAYING A LOCK SCREEN OF A TERMINAL EQUIPPED WITH A TOUCH SCREEN - An apparatus and method for displaying a lock screen including a character object having a motion effect in a terminal equipped with a touch screen. The method includes locking the touch screen and displaying the lock screen including the character object having the motion effect on a preset background image. Upon generation of a touch input, the method determines whether the touch input is for unlocking the touch screen and, if so, unlocks the touch screen and controls the character object to perform a preset action indicating the unlocking of the touch screen.08-04-2011
20110187725COMMUNICATION CONTROL DEVICE, COMMUNICATION CONTROL METHOD, AND PROGRAM - There is provided a communication control device including: a data storage unit storing feature data representing features of appearances of one or more communication devices; an environment map building unit for building an environment map representing positions of communication devices present in a real space based on an input image obtained by imaging the real space and the feature data stored in the data storage unit; a detecting unit for detecting a user input toward a first communication device designating any data provided in the first communication device and a direction; a selecting unit for selecting a second communication device serving as a transmission destination of the designated data from the environment map based on the direction designated by the user input; and a communication control unit for transmitting the data provided in the first communication device from the first communication device to the second communication device.08-04-2011
20110187726IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - There is provided an image processing device including: a data storage unit having feature data stored therein, the feature data indicating a feature of appearance of one or more physical objects; an environment map building unit for building an environment map based on an input image obtained by imaging a real space using an imaging device and the feature data stored in the data storage unit, the environment map representing a position of a physical object present in the real space; an information generating unit for generating animation data for displaying a status of communication via a communication interface on a screen, using the environment map built by the environment map building unit; and an image superimposing unit for generating an output image by superimposing an animation according to the animation data generated by the information generating unit on the input image.08-04-2011
20110187724MOBILE TERMINAL AND INFORMATION DISPLAY METHOD - A mobile terminal includes a display unit to display information processed by the mobile terminal, the display unit comprising a touch panel; and a control unit to control the display to display first information in a first direction if a first drag is detected and to display second information in a second direction different from the first direction while the first information is displayed if a second drag is detected. An information display method for a mobile terminal includes displaying first information in a first direction in response to a first drag; and displaying second information in a second direction different from the first direction in response to a second drag while the first information is displayed.08-04-2011
20110187723TRANSITIONING BETWEEN TOP-DOWN MAPS AND LOCAL NAVIGATION OF RECONSTRUCTED 3-D SCENES - Technologies are described herein for transitioning between a top-down map of a reconstructed structure within a 3-D scene and an associated local-navigation display. An application transitions between the top-down map and the local-navigation display by animating a view in a display window over a period of time while interpolating camera parameters from values representing a starting camera view to values representing an ending camera view. In one embodiment, the starting camera view is the top-down map view and the ending camera view is the camera view associated with a target photograph. In another embodiment, the starting camera view is the camera view associated with a currently-viewed photograph in the local-navigation display and the ending camera view is the top-down map.08-04-2011
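At its core the transition is a timed interpolation of camera parameters between the two views. A minimal sketch, assuming a camera described by position, look-at target, and field of view (these parameter names are illustrative, not from the abstract):

```python
def lerp(a, b, t):
    """Linear interpolation; works on scalars and on point tuples."""
    if isinstance(a, tuple):
        return tuple(lerp(x, y, t) for x, y in zip(a, b))
    return a + (b - a) * t

def interpolate_camera(start, end, t):
    """Blend camera parameters for animation progress 0 <= t <= 1."""
    return {key: lerp(start[key], end[key], t) for key in start}

top_down = {"position": (0.0, 100.0, 0.0), "target": (0.0, 0.0, 0.0), "fov": 45.0}
photo_view = {"position": (5.0, 1.7, 8.0), "target": (5.0, 1.7, 0.0), "fov": 60.0}
for step in range(5):
    print(interpolate_camera(top_down, photo_view, step / 4))
```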
20090174716Synchronized Visual and Audio Apparatus and Method - A method and apparatus for synchronizing sound with an illuminated animated image is provided. First and second image frames are defined on a planar surface using a plurality of light transmitting media. A plurality of light sources are positioned adjacent to the plurality of light transmitting media such that the first image frame and the second image frame are illuminated independently by selectively activating each light source in accordance with a pre-programmed illumination sequence. A speaker plays a first sound when the first image frame is illuminated and a second sound when the second image frame is illuminated. A driving device, coupled to the light sources and the speaker, is used to synchronize the illumination of the image frames with the sounds.07-09-2009
20090179900Methods and Apparatus for Export of Animation Data to Non-Native Articulation Schemes - A method for exporting animation data from a native animation environment to a non-native animation environment includes determining first object poses in response to a first object model in the native environment and animation variables, determining a second object model including a geometric object model, determining second object poses in response to the second object model and animation variables, determining surface errors between the first object poses and the second object poses, determining corrective object offsets in response to the surface errors, determining actuation values associated with the corrective object offsets in response to the surface errors, determining a third object model compatible with the non-native animation environment in response to the second object poses, the corrective offsets, and the actuation values, and storing the third object model in a memory.07-16-2009
20110080411SIMULATED RESOLUTION OF STOPWATCH - There is described a mobile device comprising a display screen for displaying an image of a clock having a resolution of at least a first digit representing a tenth of a second and a second digit representing a hundredth of a second; and a processor having an internal clock, the processor adapted to update at least the first digit of the image of the clock on the display screen with true elapsed time, and to update the second digit with a non-true number.04-07-2011
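In other words, only the tenths digit tracks the processor's internal clock; the hundredths digit is decorative. A toy sketch of that split (how the non-true digit is chosen is an assumption; the abstract only says it is not true time):

```python
import random
import time

def stopwatch_digits(start):
    """Return (tenths, fake_hundredths) for the elapsed time."""
    elapsed = time.monotonic() - start
    tenths = int(elapsed * 10) % 10     # true elapsed tenths of a second
    hundredths = random.randint(0, 9)   # simulated: not tied to true time
    return tenths, hundredths

start = time.monotonic()
time.sleep(0.25)
print(stopwatch_digits(start))
```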
20120147012COORDINATION OF ANIMATIONS ACROSS MULTIPLE APPLICATIONS OR PROCESSES - Animation coordination systems and methods are provided that manage animation context transitions between and/or among multiple applications. A global coordinator can obtain initial information, such as initial graphical representations and object types, initial positions, etc., from initiator applications and final information, such as final graphical representations and object types, final positions, etc., from destination applications. The global coordinator creates an animation context transition between initiator applications and destination applications based upon the initial information and the final information.06-14-2012
20120098836METHOD AND APPARATUS FOR TURNING PAGES IN E-BOOK READER - A method turns pages in an electronic book (e-book) reader. The method includes displaying an e-book as left/right pages and, if a page turn signal is generated, turning a left page or a right page while displaying an action of turning a left page to the right or a right page to the left as if turning pages of a paper book.04-26-2012
20120306891Device and Method for Dynamically Rendering an Animation - A device includes one or more processors, and memory storing programs. The programs include a respective application and an application service module. The application service module includes instructions for, in response to a triggering event from the respective application, initializing an animation object with one or more respective initialization values corresponding to the triggering event. The animation object includes an instance of a predefined animation software class. At each of a series of successive times, the device updates the animation object so as to produce a respective animation value in accordance with a predefined animation function based on a primary function of an initial velocity and a deceleration rate and one or more secondary functions. The device updates a state of one or more user interface objects in accordance with the respective animation value, and renders on a display a user interface in accordance with the updated state.12-06-2012
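The "primary function of an initial velocity and a deceleration rate" reads like standard kinematics: the value advances as v0*t - d*t^2/2 until the velocity reaches zero. A sketch of an animation object built on that reading (class and parameter names are hypothetical):

```python
class DecelerationAnimation:
    """Animation value driven by an initial velocity and a deceleration rate."""

    def __init__(self, initial_value, velocity, deceleration):
        self.x0 = initial_value
        self.v0 = velocity
        self.d = deceleration
        self.duration = velocity / deceleration  # time until velocity reaches 0

    def value(self, t):
        t = min(t, self.duration)                # hold the final value afterwards
        return self.x0 + self.v0 * t - 0.5 * self.d * t * t

anim = DecelerationAnimation(initial_value=0.0, velocity=1000.0, deceleration=2000.0)
for step in range(6):
    t = step * 0.1
    print(f"t={t:.1f}s value={anim.value(t):.1f}")
```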
20120306889METHOD AND APPARATUS FOR OBJECT-BASED TRANSITION EFFECTS FOR A USER INTERFACE - A method and apparatus can provide object-based transition effects for a user interface. The method can include displaying at least one first element corresponding to a first activity on a screen of a user device. The method can include receiving a baton transition request and generating first activity baton information. The method can include displaying a first baton image corresponding to the first activity baton information and generating second activity baton information that provides visual transition information for a transition from the first activity to the second activity. The method can include transitioning the first baton image corresponding to the first activity baton information to a second image corresponding to the second activity baton information, displaying the second image corresponding to the second activity baton information, and displaying at least one second element corresponding to the second activity on the screen.12-06-2012
20120306890Device and Method for Dynamically Rendering an Animation - An electronic device includes a display, one or more processors, and memory storing programs for execution by the one or more processors. The programs include one or more applications and an application service module. The application service module includes instructions for, in response to receiving a triggering event from a respective application of the one or more applications, initializing an animation object with one or more respective initialization values corresponding to the triggering event. The animation object comprises an instance of a predefined animation software class. At each of a series of successive times, the device updates the animation object so as to produce a respective animation value in accordance with a predefined animation function, and renders on the display a user interface including one or more user interface objects in accordance with the respective animation value from the animation object.12-06-2012
20110304631METHODS AND APPARATUSES FOR PROVIDING A HARDWARE ACCELERATED WEB ENGINE - Methods of expressing animation in a data stream are disclosed. In one embodiment, a method of expressing animation in a data stream includes defining animation states in the data stream with each state having at least one property such that properties are animated as a group. The animation states that are defined in the data stream may be expressed as an extension of a styling sheet language. The data stream may include web content and the defined animation states.12-15-2011
20110304630REAL-TIME TERRAIN ANIMATION SELECTION - In-game characters select the proper animation to use depending on the state of the terrain on which they are currently moving. In this specific case the character chooses an animation depending on the angle of the ground on which it is walking. The method involves real-time determination of the ground angle, which is then used to choose the most desirable animation from a closed set of pre-created animations. The animation set consists of animations rendered with the character moving on flat terrain, as well as animations rendered of the character moving uphill and downhill (separately) at pre-determined angles. In this game, an animation set consisted of the following animations: 0 degrees; 15, 30, and 45 degrees uphill; and 15, 30, and 45 degrees downhill. Drawing of the animation is offset to give the best appearance relative to the ground angle.12-15-2011
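Selecting from the closed set then reduces to snapping the measured ground angle to the nearest pre-rendered angle. A minimal sketch, using signed angles with uphill positive (a convention assumed here):

```python
# Pre-created animation set: signed ground angle in degrees -> animation.
ANIMATIONS = {
    0: "walk_flat",
    15: "walk_uphill_15", 30: "walk_uphill_30", 45: "walk_uphill_45",
    -15: "walk_downhill_15", -30: "walk_downhill_30", -45: "walk_downhill_45",
}

def select_animation(ground_angle):
    """Pick the pre-rendered animation closest to the measured angle."""
    nearest = min(ANIMATIONS, key=lambda a: abs(a - ground_angle))
    return ANIMATIONS[nearest]

print(select_animation(5.0))    # -> walk_flat
print(select_animation(-40.0))  # -> walk_downhill_45
```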
20110304629REAL-TIME ANIMATION OF FACIAL EXPRESSIONS - Animation of a character, such as a video game avatar, to reflect facial expressions of an individual in real-time is described herein. An image sensor is configured to generate a video stream, wherein frames of the video stream include a face of an individual. Facial recognition software is utilized to extract data from the video stream that is indicative of facial expressions of the individual. A three-dimensional rig is driven based at least in part upon the data that is indicative of facial expressions of the individual, and an avatar is animated to reflect the facial expressions of the user in real-time based at least in part upon the three-dimensional rig.12-15-2011
20090184968Incentive Method For The Spirometry Test With Universal Control System Regardless Of Any Chosen Stimulating Image - An incentive method for the spirometry test wherein two separate images are used instead of only one, the first image being controlled by the incentive control mechanism related to the respiration of the patient, and the second image, initially covered by the first image, representing the incentive to the spirometry and being completely independent of both the first image and the respiration. The first image is universal in the sense that it can be used from time to time with a virtually infinite number of second incentive images. The first image is modified as the spirometry test proceeds, gradually unveiling the second image, which is meaningful and can be interpreted only when completely unveiled; the incentive effect is thus the motivation to blow until the curtain completely unveils the image.07-23-2009
20120147013ANIMATION CONTROL APPARATUS, ANIMATION CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM - An animation control apparatus has: an interpolation component information creating unit (06-14-2012
20100123724Portable Touch Screen Device, Method, and Graphical User Interface for Using Emoji Characters - In some embodiments, a computer-implemented method performed at a portable electronic device with a touch screen display includes simultaneously displaying a character input area operable to display text character input and emoji character input selected by a user, a keyboard display area, and a plurality of emoji category icons. In response to detecting a gesture on a respective emoji category icon, the method also includes simultaneously displaying: a first subset of emoji character keys for the respective emoji category in the keyboard display area and a plurality of subset-sequence-indicia icons for the respective emoji category. The method also includes detecting a gesture in the keyboard display area and, in response: replacing display of the first subset of emoji character keys with display of a second subset of emoji character keys for the respective emoji category, and updating the information provided by the subset-sequence-indicia icons.05-20-2010
20100141662COMMUNICATION NETWORK AND DEVICES FOR TEXT TO SPEECH AND TEXT TO FACIAL ANIMATION CONVERSION - A communication system comprises a sending device, a receiving device and a network which connects the sending device to the receiving device. The sending device comprises at least one user-operable input for entering a sequence of textual characters as a message and transmission means for sending the message across the network. The receiving device comprises a memory which stores a plurality of head images, each one being associated with a different sending device and comprising an image of a head viewed from the front, receiver means for receiving the message comprising the sequence of textual characters, text-to-speech converting means for converting the text characters of the message into an audio message corresponding to the sequence of text characters, and animating means for generating an animated partial 3D image of a head from the head image stored in the memory which is associated with the sender of the message. The animating means animates at least one facial feature of the head, the animation corresponding to the movements made by the head when reading the message. A display displays the animated partial 3D head, and a loudspeaker outputs the audio message in synchronisation with the displayed head.06-10-2010
20110316860IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - [Problems] To carry out an appropriate motion expression that reduces the image-processing load for a character's motion while bringing a predetermined site of the character appropriately into contact with a contact-allowed object.12-29-2011
20110316859APPARATUS AND METHOD FOR DISPLAYING IMAGES - Apparatus and method for displaying images are provided. The apparatus is configured to cause a display of an image; detect one or more inputs by one or more input objects; determine coordinates of the one or more inputs with respect to the image; determine one or more properties of the input; and cause production of an animation with the image, the animation relating to the determined coordinates and being configured on the basis of the one or more detected properties of the one or more inputs.12-29-2011
20110316858Apparatuses and Methods for Real Time Widget Interactions - An electronic interaction apparatus is provided with a touch screen and a processing unit. The processing unit executes a first widget and a second widget, wherein the first widget generates an animation on the touch screen and modifies the animation in response to an operating status change of the second widget.12-29-2011
20120044250Systems, Methods, and Machine-Readable Storage Media for Presenting Animations Overlying Multimedia Files - Provided are systems, methods, and machine-readable storage media for presenting animations overlying multimedia files in accordance with the present disclosure. Embodiments are described for linking an animation to a multimedia file and presenting the animation overlying a concurrent playback of the multimedia file (e.g., its content). Embodiments are described for including additional elements to the presentation of the animation outside of the playback of the animation, including residual elements that relate to the content of the animation and/or allow a user to receive further information about the content of the animation. Embodiments are described for linking an animation to more than one multimedia file.02-23-2012
20110157188INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - An apparatus comprises: a storage unit adapted to store information that associates information representing the key frame with information representing the display form of the object of the key frame; an assignment unit adapted to assign, to the time-points corresponding to the key frames, indicators with which a difference between the display forms of the objects is identifiable; and a control unit adapted to, when a new key frame is set at a time-point of interest on the time axis where no key frame is preset, and the same indicator as one of the indicators is assigned to the time-point of interest, cause the storage unit to store information representing the newly set key frame in association with information representing the display form of the object of each preset key frame, which is information representing the display form of the object at the time-point to which the same indicator is assigned.06-30-2011
20120001924Embedding Animation in Electronic Mail, Text Messages and Websites - Provided are techniques for providing animation in electronic communications. An image is generated by capturing multiple photographs from a camera or video camera. The first photograph is called the “naked photo.” Using a graphics program, photos subsequent to the naked photo are edited to cut an element common to the subsequent photos. The cut images are pasted into the naked photo as layers. The modified naked photo, including the layers, is stored as a web-enabled graphics file, which is then transmitted in conjunction with electronic communication. When the electronic communication is received, the naked photo is displayed and each of the layers is displayed and removed in the order that each was taken with a short delay between photos. In this manner, a movie is generated with much smaller files than is currently possible.01-05-2012
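The playback described amounts to showing the base photo and then flashing each layer in capture order with a short delay, giving the effect of a movie from a single layered file. A bare-bones sketch of that loop (names and the drawing callback are hypothetical):

```python
import time

def play_layered_photo(naked_photo, layers, delay=0.2, draw=print):
    """Show the base photo, then display and remove each layer in order."""
    draw(f"show {naked_photo}")
    for layer in layers:
        draw(f"overlay {layer}")
        time.sleep(delay)          # short delay between photos
        draw(f"remove {layer}")

play_layered_photo("base.png", ["wave1.png", "wave2.png", "wave3.png"], delay=0.05)
```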
20120001923SOUND-ENHANCED EBOOK WITH SOUND EVENTS TRIGGERED BY READER PROGRESS - A sound-enhanced ebook is disclosed, the sound being presented to a reader of the ebook in accordance with the reader's progress through the ebook. The sound-enhanced ebook includes text information, and a plurality of sound events, each sound event being played in response to a reader's progress through particular text information associated with the sound event. Also disclosed is an ebook presenter for presenting text and coordinated sound events of a sound-enhanced ebook to a reader, the sound events being presented as the reader progresses through particular text of the ebook. The ebook presenter includes a text presentation module, a reader progress module, and a sound event presentation module, each sound event being associated with particular text information of the ebook, and each sound event being presentable in response to the reader's progress through the text information of the ebook as estimated by the reader progress module.01-05-2012
20120007870METHOD OF CHANGING PROCESSOR-GENERATED INDIVIDUAL CHARACTERIZATIONS PRESENTED ON MULTIPLE INTERACTING PROCESSOR-CONTROLLED OBJECTS - Processor-controlled objects, such as inter-communicating processor-controlled blocks, are adapted to present changeable individual characterizations to a user. A user manipulating the objects can cause, over time, a designated object to inherit characterizations and properties from other interacting objects to permit scalability in a set of such objects. The communication of individual characterization between interacting objects allows generation of sensory responses (in a response generator of a specific object or otherwise in a response generator associated with at least one other similar object) based on proximity, relative position and the individual characterization presented on and by those interacting objects at the time of interaction. In this way, a set of objects has vastly extended interactive capabilities since each object is capable of dynamically taking on different characterizations arising from a meaningful combination of properties from different conjoined objects.01-12-2012
20120007869Distributed physics based training system and methods - A distributed simulation system is composed of simulator stations, linked over a network, that each render real-time video imagery for their users from scene data stored in their data storage. The simulation stations are each connected with a physics farm that manages the virtual objects in the shared virtual environment based on their physical attribute data using physics engines, including an engine at each simulation station. The physics engines of the physics farm are assigned virtual objects so as to reduce the effects of latency, to ensure fair-fight requirements of the system, and, where the simulation is of a vehicle, to accurately model the ownship of the user at the station. A synchronizer system is also provided that allows for action of simulated entities relying on localized closed-loop controls to cause the entities to meet specific goal points at specified system time points.01-12-2012
20080297515METHOD AND APPARATUS FOR DETERMINING THE APPEARANCE OF A CHARACTER DISPLAY BY AN ELECTRONIC DEVICE - A method and an electronic device are provided for selecting apparel for a character that is generated by an electronic device. The method and electronic device determine a changed context of the character, select an updated set of apparel for the character based on the changed context of the character, change the apparel of the character according to the updated set of apparel, and present the character having the updated set of apparel on a display.12-04-2008
20110096078SYSTEMS AND METHODS FOR PORTABLE ANIMATION RIGS - One embodiment of the present invention sets forth a technique for transporting both behavior and related geometric information for an animation asset between different animation environments. A common virtual machine specification with a specific instruction set architecture is defined for executing behavioral traits of the animation asset. Each target animation environment implements the instruction set architecture. Because each virtual machine runtime engine implements an identical instruction set architecture, animation behavior can be identically reproduced on any platform implementing the virtual machine runtime engine. Embodiments of the present invention beneficially enable reuse of animation assets without compatibility restrictions related to platform or application differences.04-28-2011
20110096077CONTROLLING ANIMATION FRAME RATE OF APPLICATIONS - Many computer applications incorporate and support animation (e.g., interactive user interfaces). Unfortunately, it may be challenging for computer applications and rendering systems to render animation frames at a smooth and consistent rate while conserving system resources. Accordingly, a technique for controlling animation rendering frame rate of an application is disclosed herein. An animation rendering update interval of an animation timer may be adjusted based upon a rendering system state (e.g., a rate of compositing visual layouts from animation frames) of a rendering system and/or an application state (e.g., a rate at which an application renders frames) of an application. Adjusting the animation rendering update interval allows the animation timer to adjust the frequency of performing rendering callback notifications (work requests to an application to render animation frames) to an application based upon rendering system performance and application performance.04-28-2011
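The adjustment is a feedback loop: when the application or the compositor falls behind, the timer lengthens the interval between rendering callbacks; when both keep up, it eases back toward the target rate. A simplified sketch of one such policy (the specific thresholds and the 0.9 easing factor are assumptions):

```python
class AnimationTimer:
    """Adjusts the rendering-callback interval from observed frame times."""

    def __init__(self, target_interval=1 / 60):
        self.target = target_interval
        self.interval = target_interval

    def update(self, app_frame_time, compositor_frame_time):
        busiest = max(app_frame_time, compositor_frame_time)
        if busiest > self.interval:
            # Falling behind: back off to what the system can sustain.
            self.interval = busiest
        else:
            # Headroom available: ease back toward the target rate.
            self.interval = max(self.target, 0.9 * self.interval)
        return self.interval

timer = AnimationTimer()
print(timer.update(app_frame_time=0.025, compositor_frame_time=0.012))  # backs off
print(timer.update(app_frame_time=0.010, compositor_frame_time=0.008))  # recovers
```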
20110096076APPLICATION PROGRAM INTERFACE FOR ANIMATION - Many computer applications incorporate and support animation. Application performance may be enhanced by delegating animation management to an application program interface (animation API) for animation. Accordingly, an animation API for managing animation is disclosed herein. The animation API may be configured to sequentially interpolate values of animation variables defining animation movement of animation objects. The animation API may interpolate the values of the animation variables using animation transitions within animation storyboards. The animation API may be configured to determine durations of animation transitions based upon animation characteristics parameters (e.g., starting position, desired ending position, starting velocity of an animation variable). Durations and start times of animation transitions may be determined based upon key frames. The animation API may be configured to resolve scheduling conflicts among one or more animation transitions. Also, the animation API may be configured to facilitate smooth animation while switching between animation transitions for an animation variable.04-28-2011
20090153565METHODS AND APPARATUS FOR DESIGNING ANIMATRONICS UNITS FROM ARTICULATED COMPUTER GENERATED CHARACTERS - A method for specifying a design for an animatronics unit includes receiving motion data comprising artistically determined motions, determining a design for construction of at least a portion of the animatronics unit in response to the motion data, and outputting the design for construction of the animatronics unit.06-18-2009
20120013621System and Method for Facilitating the Creation of Animated Presentations - The system and method of creating animated presentations of the present invention focuses largely on the ability for web users with little training to easily create and share animated presentations with other users on the web in addition to allowing experienced artists to share and gain recognition for their works. The system according to the present invention further makes use of manipulable puppets that permit adjustment at several joints in order to facilitate the illusion of movement. The user can very simply adjust the puppet in each frame to their liking and then the system combines the frames into an animated presentation. The user is further able to use other tools available in the animation creator to, for example, adjust the background of the animation, edit the facial expression of the puppet, add text, and/or other shapes to the animation in order to create a unique animated presentation.01-19-2012
20120013620Animating Speech Of An Avatar Representing A Participant In A Mobile Communications With Background Media - Animating speech of an avatar representing a participant in a mobile communication, including preparing the avatar for display by: selecting images to represent the participant, selecting a generic animation template having a mouth, fitting the images with the generic animation template, and texture wrapping the one or more images representing the participant over the generic animation template; selecting background media; displaying images texture wrapped over the generic animation template with the background media; and animating the images by: receiving an audio speech signal, identifying a series of phonemes, and for each phoneme: identifying a next mouth position, altering the mouth position, texture wrapping a portion of the images corresponding to the altered mouth position, displaying the texture wrapped portion, and playing, synchronously with the displayed texture wrapped portion, the portion of the audio speech signal represented by the phoneme.01-19-2012
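The per-phoneme loop comes down to looking up a mouth position (a viseme) for each phoneme and holding it for the phoneme's duration, in step with the audio. A bare-bones sketch (the phoneme-to-mouth table is purely illustrative):

```python
# Illustrative phoneme -> mouth-position table.
VISEMES = {
    "AA": "open_wide", "IY": "smile_narrow", "UW": "rounded",
    "M": "closed", "F": "lip_bite", "SIL": "rest",
}

def animate_speech(phonemes):
    """Yield (mouth_position, duration) pairs for each timed phoneme."""
    for phoneme, duration in phonemes:
        yield VISEMES.get(phoneme, "rest"), duration

# "Mama", roughly: M-AA-M-AA, then silence.
sequence = [("M", 0.08), ("AA", 0.15), ("M", 0.08), ("AA", 0.20), ("SIL", 0.1)]
for mouth, duration in animate_speech(sequence):
    print(f"hold '{mouth}' for {duration:.2f}s")
```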
20120056889ALTERNATE SOURCE FOR CONTROLLING AN ANIMATION - Techniques and tools described herein provide effective ways to program a property of a target object to vary depending on a source. For example, for a key frame animation for a property of a target UI element, an alternate time source is set to a property of a source UI element. When the target UI element is rendered at runtime, the animation changes the target value depending on the value of the property of the source UI element. Features of UI elements and animations can be specified in markup language. The alternate time source can be specified through a call to a programming interface. Animations for multiple target UI elements can have the same source, in which case different parameters for the respective animations can be used to adjust source values in different ways.03-08-2012
20090135187SYSTEM AND METHOD FOR DYNAMICALLY GENERATING RESPONSE MOTIONS OF VIRTUAL CHARACTERS IN REAL TIME AND COMPUTER-READABLE RECORDING MEDIUM THEREOF - A system and a method for dynamically generating response motions of a virtual character in real time and a computer-readable recording medium thereof are provided. The system includes a balance state module, response graph module, and a tracking control module. The balance state module calculates a balance state of the virtual character according to the balance-related information of a character model of the virtual character. The response graph module is coupled to the balance state module for providing a response motion according to the balance state. The tracking control module is coupled to the response graph module for providing a driving information according to the response motion and a body information of the character model. The driving information is used for driving the character model to converge toward the response motion.05-28-2009
20090135188METHOD AND SYSTEM OF LIVE DETECTION BASED ON PHYSIOLOGICAL MOTION ON HUMAN FACE - A method and a system of live detection based on a physiological motion on a human face are provided. The method has the following steps: in step a, a motion area and at least one motion direction within the visual angle of a system camera are detected and a detected facial region is found. In step b, whether a valid facial motion exists in the detected facial region is determined. If no valid facial motion exists, the object is considered a photo of a human face; otherwise, the method proceeds to step c to determine whether the facial motion is a physiological motion. If not, the object is considered a photo of a human face; otherwise, it is considered a real human face. Real human faces and photos of human faces can thus be distinguished by the present invention so as to increase the reliability of the face recognition system.05-28-2009
20090135189Character animation system and method - A character animation system includes a data generating unit for generating a character skin mesh and an internal reference mesh, a character bone value, and a character solid-body value, a skin distortion representing unit for representing skin distortion using the generated character skin mesh and the internal reference mesh when an external shock is applied to a character, and a solid-body simulation engine for applying the generated character bone value and the character solid-body value to a real-time physical simulation library and representing character solid-body simulation. The system further includes a skin distortion and solid-body simulation processing unit for processing to return to a key frame to be newly applied after the skin distortion and the solid-body simulation are represented.05-28-2009
20120154407APPARATUS AND METHOD FOR PROVIDING SIMULATION RESULT AS IMAGE - Provided are an apparatus and method for providing a simulation result as an image. The method includes performing a simulation of a predetermined system and generating a result log of the simulation, and converting the result log of the simulation into an image on the basis of a database storing a state and operation of a model of the system as image data. Accordingly, a simulation result of a system can be provided without detailed information about the system, an additional application, or a separate storage.06-21-2012
20120026173Transitioning Between Different Views of a Diagram of a System - Presenting different views of a system based on input from a user. A first view of a first portion of the system may be displayed. For example, the first portion may be a device of the system. User input specifying a first gesture may be received. In response to the first gesture, a second view of the first portion of the system may be displayed. For example, the first view may represent a first level of abstraction of the portion of the system and the second view may represent a second level of abstraction of the portion of the system. A second gesture may be used to view a view of a different portion of the system. Additionally, when changing from a first view to a second view, the first view may “morph” into the second view.02-02-2012
20120026172COLLISION FREE CONSTRUCTION OF ANIMATED FEATHERS - To generate a skin-attached element on a skin surface of an animated character, a region of the skin surface within a predetermined distance from a skin-attached element root position is deformed to form a lofted skin according to one of a plurality of constraint surfaces, where each of the plurality of constraint surfaces does not intersect with each other. A sublamina mesh surface constrained to the lofted skin is created. A two-dimensional version of the skin-attached element is projected onto the sublamina mesh surface. The lofted skin is reverted back to a state of the skin surface prior to the deformation of the region of the skin surface.02-02-2012
20120026174Method and Apparatus for Character Animation - The present invention provides various means for the animation of character expression in coordination with an audio sound track. The animator selects or creates characters and expressive characteristics from a menu, and then enters the characteristics, including lip and mouth morphology, in coordination with a running sound track.02-02-2012
20120154408INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus includes a display panel, a frame, a touch sensor, and a controller. The display panel includes a display surface of a predetermined display area. The frame includes a frame surface that surrounds the display panel and determines the display area. The touch sensor is configured to detect touches to the display surface and the frame surface. The controller is configured to execute predetermined processing when a touch to a first area on the display surface is detected, and to execute the predetermined processing when a touch to a second area on the frame surface is detected, the second area being adjacent to the first area.06-21-2012
20110018880TIGHT INBETWEENING - A tool for inbetweening is provided, wherein inbetween frames are at least partly computer generated by analyzing elements of key frames to identify strokes, determining corresponding stroke pairs, computing a continuous stroke motion for each stroke pair along a carrier defined by the endpoints of the two strokes, and, for mutual endpoints, adjusting the continuous stroke motions of the meeting strokes such that they coincide at the mutual endpoint, which therefore follows a single path, and deforming the stroke as it is moved by the stroke motion, wherein the deformation is a weighted combination of deformations, each reconstructed using shape descriptors that are interpolated from the shape descriptors of the corresponding samples on the key frames, wherein the shape descriptors are computed from neighboring sample points in the cyclic order of samples along the stroke.01-27-2011
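A drastically simplified sketch of the stroke-motion idea appears below: it only interpolates corresponding sample points of a stroke pair, omitting the carrier and shape-descriptor machinery that the entry actually claims:

```python
def inbetween_stroke(stroke_a, stroke_b, t):
    """Interpolate corresponding sample points of a stroke pair.

    Greatly simplified: the tool described above moves strokes along
    carriers and reconstructs shape from interpolated shape descriptors.
    """
    return [
        (ax + (bx - ax) * t, ay + (by - ay) * t)
        for (ax, ay), (bx, by) in zip(stroke_a, stroke_b)
    ]

key1 = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
key2 = [(0.0, 2.0), (1.0, 3.0), (2.0, 4.0)]
print(inbetween_stroke(key1, key2, 0.5))  # the halfway inbetween frame
```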
20110090231ON-LINE ANIMATION METHOD AND ARRANGEMENT - The arrangement has a first computer arranged to be in data communication with a second computer. The arrangement has a device for receiving from the second computer an editable version of animation data sufficient for rendering visually simplified animation in the second computer. The editable version of animation data has at least one reference to additional data for the purpose of forming animation in the second computer, and a renderable or rendered version of animation data is formed in the first computer by combining the editable version of animation data with the referenced additional data.04-21-2011
20120236006MUSCULO-SKELETAL SHAPE SKINNING - A method for use in animation includes establishing a model having a plurality of bones with muscles attached to the bones, binding skin to the muscles when the model is in a first pose with each vertex of the skin being attached at a first attachment point on a muscle, deforming the model into a second pose, and selecting a second attachment point for each vertex of the skin in the second pose. A storage medium stores a computer program for causing a processor based system to execute these steps, and a system for use in animation includes a processing system configured to execute these steps.09-20-2012
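Re-selecting an attachment point in the second pose can be pictured as a closest-point query against the deformed muscle surface; distance-based rebinding is an assumption here, since the abstract does not spell out the selection rule:

```python
import math

def closest_attachment(vertex, muscle_points):
    """Return the muscle sample point nearest to a skin vertex."""
    return min(muscle_points, key=lambda p: math.dist(vertex, p))

# Muscle sampled at a few surface points after deforming to the second pose.
muscle = [(0.0, 0.0, 0.0), (1.0, 0.2, 0.0), (2.0, 0.5, 0.0)]
skin_vertex = (1.1, 0.6, 0.1)
print(closest_attachment(skin_vertex, muscle))  # -> (1.0, 0.2, 0.0)
```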
20120236005AUTOMATICALLY GENERATING AUDIOVISUAL WORKS - In one embodiment, a method comprises inferentially selecting one or more design animation modules based upon analysis of information obtained from digital visual media items and digital audio media items; and automatically creating an audiovisual work using the selected design animation modules. Audiovisual works can be automatically created based upon inferred and implicit metadata including music genre, image captions, song structure, image focal points, as well as user-supplied data such as text tags, emphasis flags, groupings, and preferred video style.09-20-2012
20100245365IMAGE GENERATION SYSTEM, IMAGE GENERATION METHOD, AND COMPUTER PROGRAM PRODUCT - An image generation system includes an operation information acquisition section that acquires operation information based on sensor information from a controller that includes a sensor, the operation information acquisition section acquiring rotation angle information about the controller around a given coordinate axis as the operation information, a hit calculation section that performs a hit calculation process, the hit calculation process setting at least one of a moving state and an action state of a hit target that has been hit by a hit object based on the rotation angle information that has been acquired by the operation information acquisition section, and an image generation section that generates an image based on the operation information.09-30-2010
20120127181AV DEVICE - Conventionally, the on-screen display (OSD) of a channel sign is shown in a uniform pattern regardless of the switching operation performed. A user who watches the TV screen rather than the remote controller may therefore find it difficult to grasp intuitively, from the information on the screen alone, which switching operation caused the display. To solve this problem, an AV device is provided which animates the channel sign from bottom to top when, for example, the up key of a broadcast reception channel is operated, and from right to left when an external input (input channel) switching operation is performed, so that the user can intuitively grasp the content of the operation.05-24-2012
20120299934Method and Apparatus for Creating a Computer Simulation of an Actor - A method for creating a computer simulation of an actor having a first foot, a second foot and a body including the steps of planting the first foot as a support foot along a space time-varying path. There is the step of stopping time regarding placement of the first foot. There is the step of changing posture of the first foot while the first foot is planted. There is the step of moving time into the future for the second foot as a lifted foot and changing posture for the lifted foot. An apparatus for creating a computer simulation of an actor having a first foot, a second foot and a body. A software program for creating a computer simulation of an actor having a first foot, a second foot and a body that performs the steps of planting the first foot as a support foot along a space time-varying path.11-29-2012
20100207951METHOD AND DEVICE FOR MONITORING OPERATION OF A SOLAR THERMAL SYSTEM - A novel method for monitoring the operation of a solar thermal system such as the healthy home system or the like. The present device includes a hardware housing with a processor device coupled to a bus and one or more memory devices. The processor device can be coupled to one or more input devices wherein the one or more input devices are coupled to at least the solar array. The input devices can be coupled to the electric panel, the space heater, the water heater, as well as other components of the healthy home. The method includes a variety of steps such as establishing connection to associated hardware in the healthy home system, running diagnostic checks to determine system health, validating acquired data, and displaying the data through text display and graphical illustrations. The method also includes updating the system information according to a schedule scheme such as a polling scheme, interrupt scheme, or others. These and possibly other steps can provide an easy and cost effective means of monitoring a healthy home's system operation.08-19-2010
20100207949ANIMATION EVENTS - A method of detecting an occurrence of an event of an event type during an animation, in which the animation comprises, for each of a plurality of object parts of an object, data defining the respective movement of that object part at each of a sequence of time-points for the animation, the method comprising: indicating the event type, wherein the event type specifies: one or more of the object parts; and a sequence of two or more event phases that occur during an event of that event type such that, for each event phase, the respective movements of the one or more specified object parts during that event phase are each constrained according to a constraint type associated with that event phase; and detecting an occurrence of an event of the event type by detecting a section of the animation during which the respective movements defined by the animation for the specified one or more object parts are constrained in accordance with the sequence of two or more event phases.08-19-2010
20100207950DEFINING SIMPLE AND COMPLEX ANIMATIONS - A unified user interface (“UI”) is provided that includes functionality for defining both simple and complex animations for an object. The unified UI includes a UI for defining a single animation for an object and a UI for defining a more complex animation. The UI for defining a single animation for an object includes a style gallery and an effects options gallery. The UI for defining two or more animations for a single object includes a style gallery for selecting two or more animation classes to be applied to an object, one or more user interface controls for specifying the timing and order of the two or more animations, and an on-object user interface (“OOUI”) displayed adjacent to each object for providing a visual indication of the two or more animations and for providing an indication when an animation includes two or more build steps.08-19-2010
20110181605SYSTEM AND METHOD OF CUSTOMIZING ANIMATED ENTITIES FOR USE IN A MULTIMEDIA COMMUNICATION APPLICATION - In an embodiment, a method is provided for creating a personal animated entity for delivering a multi-media message from a sender to a recipient. An image file from the sender may be received by a server. The image file may include an image of an entity. The sender may be requested to provide input with respect to facial features of the image of the entity in preparation for animating the image of the entity. After the sender provides the input with respect to the facial features of the image of the entity, the image of the entity may be presented as a personal animated entity to the sender to preview. Upon approval of the preview from the sender, the image of the entity may be presented as a sender-selectable personal animated entity for delivering the multi-media message to the recipient.07-28-2011
20110181604METHOD AND APPARATUS FOR CREATING ANIMATION MESSAGE - A method for creating an animation message includes generating input information containing information regarding input time and input coordinates according to input order of drawing information input through a touch screen; dividing an image containing the drawing information and background information into a plurality of blocks; creating an animation message by mapping the input information to the plurality of blocks so that the drawing information can be sequentially reproduced according to the input order; allocating a parity bit per pre-set block range of the animation message in order to detect an error occurring in the animation message; and transmitting the created animation message.07-28-2011
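The parity step is conventional error detection: one bit per fixed block of message data, set so the block's total bit count has even parity and re-checked on receipt. A minimal sketch (the block contents are illustrative):

```python
def add_parity(blocks):
    """Tag each block of message bytes with an even-parity bit."""
    return [(block, sum(bin(b).count("1") for b in block) % 2)
            for block in blocks]

def check_parity(tagged_blocks):
    """Return indices of blocks whose parity no longer matches."""
    return [i for i, (block, parity) in enumerate(tagged_blocks)
            if sum(bin(b).count("1") for b in block) % 2 != parity]

message = [b"stroke01", b"stroke02"]
tagged = add_parity(message)
tagged[1] = (b"strokeXX", tagged[1][1])  # simulate corruption in transit
print(check_parity(tagged))              # -> [1]
```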
20110181603ELECTRONIC READER DEVICE AND GRAPHICAL USER INTERFACE CONTROL METHOD THEREOF - An electronic reader device with a physical control disposed on a surface of the device housing. The physical control is operable to initiate a first function. A display disposed on the surface of the housing is operable to show a virtual control that initiates a second function. A sensor detects a drag operation moving the virtual control to a position on a border of the display adjacent to the physical control. A processor associates the second function with the physical control in response to the drag operation and performs the second function upon activation of the physical control.07-28-2011
20110181602USER INTERFACE FOR AN APPLICATION - A user interface is provided for interacting with slides and objects provided on slides. In certain embodiments, the user interface includes features that are displayed attached to or proximate to selected slides or objects. In embodiments, aspects of the user interface may be used to preview, review, add, or modify transitions associated with animation from one slide to the next (or previous) and builds associated with animation of objects on slides.07-28-2011
20110181601CAPTURING VIEWS AND MOVEMENTS OF ACTORS PERFORMING WITHIN GENERATED SCENES - Generating scenes for a virtual environment of a visual entertainment program, comprising: capturing views and movements of an actor performing within the generated scenes, comprising: tracking movements of a headset camera and a plurality of motion capture markers worn by the actor within a physical volume of space; translating the movements of the headset camera into head movements of a virtual character operating within the virtual environment; translating the movements of the plurality of motion capture markers into body movements of the virtual character; generating first person point-of-view shots using the head and body movements of the virtual character; and providing the generated first person point-of-view shots to the headset camera worn by the actor.07-28-2011
20120133658DISPLAY CONTROL APPARATUS AND METHOD FOR CONTROLLING THE SAME - A display control apparatus includes a moving image display unit configured to control a display apparatus to display a moving image thereon, a reading unit configured to read animation information including a plurality of frame images, a detection unit configured to detect touch on the display apparatus, and a display control unit configured to control the display apparatus to display a first frame image of the animation information thereon if touch on a specific position of the display apparatus is detected by the detection unit when the moving image is being displayed on the display apparatus, and to start a transition display of frame images of the animation information if the touch on the display apparatus becomes undetected by the detection unit.05-31-2012
20120313951TECHNIQUES FOR SYNCHRONIZING HARDWARE ACCELERATED GRAPHICS RENDERING AND SURFACE COMPOSITION - A method, a non-transitory computer readable medium having instructions recorded therein for performing the method, and a processing device for rendering an animation for a screen. The method includes rendering a frame of animation of a screen, attaching a Move Surfaces at BufferSwap (MSBS) command to at least one surface to be aligned with the frame of animation, swapping the buffer of the frame of animation, updating at least one of a size and a location of the at least one surface having an attached MSBS command, and composing a scene including the contents of the at least one surface of which the at least one of the size and the location has been updated.12-13-2012
20100053172MESH TRANSFER USING UV-SPACE - Mesh data and other proximity information from the mesh of one model can be transferred to the mesh of another model, even with different topology and geometry. A correspondence can be created for transferring or sharing information between points of a source mesh and points of a destination mesh. Information can be “pushed through” the correspondence to share or otherwise transfer data from one mesh to its designated location at another mesh. Correspondences can be created based on parameterization information, such as UV sets, one or more maps, harmonic parameterization, or the like. A collection of “feature curves” may be inferred or user-placed to partition the source and destination meshes into a collection of “feature regions” resulting in partitions or “feature curve networks” for constructing correspondences between all points of one mesh and all points of another mesh.03-04-2010
20120249557SYSTEM FOR PARTICLE EDITING - A computer animation system including polygon mesh editing tools configured to edit a particle simulation by first converting a particle cache of the simulation into a polygon mesh, editing the polygon mesh, and then converting the edited polygon mesh back into an edited particle cache.10-04-2012
20090058863Image animation with transitional images - A technique is provided for animating an image or a portion of an image. In accordance with this technique, intermediary or transitional images, referred to as offset images, are displayed as part of an animation step to lessen abrupt changes in pixel values. In one embodiment, the offset images are generated using a weighted average of proximate pixels. In such an embodiment, the weight factor may take into account the distance of the offset from the proximate pixels such that closer pixels are more heavily weighted. Based on the direction of movement for the animation, the offset images are ordered and displayed as part of the animation steps of an animation sequence.03-05-2009
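The weighted-average construction described here amounts to sub-pixel interpolation along the direction of motion. A minimal sketch in Python/NumPy, assuming a 1-D pixel row and a rightward fractional offset; the wrap-around at the border is a simplification:

```python
# Minimal sketch of an "offset image": each output pixel is a distance-
# weighted average of its two nearest source pixels along the movement
# direction, with the closer pixel weighted more heavily (as the abstract
# describes). A 1-D row and wrap-around border are simplifying assumptions.
import numpy as np

def offset_image(row: np.ndarray, offset: float) -> np.ndarray:
    """Fractionally shift a 1-D pixel row rightward by `offset` in [0, 1)."""
    w_near = 1.0 - offset          # weight of the closer source pixel
    w_far = offset                 # weight of the farther source pixel
    shifted = np.roll(row, 1)      # neighbor one pixel to the left
    return w_near * row + w_far * shifted

row = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
for step in (0.25, 0.5, 0.75):     # transitional images of one animation step
    print(step, offset_image(row, step).round(2))
```

Displaying these transitional rows in order softens the abrupt one-pixel jump that a direct move would produce.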
20090058862AUTOMATIC AVATAR TRANSFORMATION FOR A VIRTUAL UNIVERSE - An approach that automatically transforms an avatar characteristic of an avatar that is online in a virtual universe is described. In one embodiment, there is an avatar locator component configured to locate an avatar that is online in the virtual universe. An avatar characteristics transforming component is configured to automatically transform the avatar characteristic associated with the located avatar according to predetermined transformation criteria.03-05-2009
20120075312Avatars in Social Interactive Television - Virtual environments are presented on displays along with multimedia programs to permit viewers to participate in a social interactive television environment. The virtual environments include avatars that are created and maintained in part using continually updated animation data that may be captured from cameras that monitor viewing areas in a plurality of sites. User input from the viewers may be processed in determining which viewers are presented in instances of the virtual environment. Continually updating the animation data results in avatars accurately depicting a viewer's facial expressions and other characteristics. Presence data may be collected and used to determine when to capture background images from a viewing area that may later be subtracted during the capture of animation data. Speech recognition technology may be employed to provide callouts within a virtual environment.03-29-2012
20120075311IMAGE FORMING APPARATUS FOR DISPLAYING INFORMATION ON SCREEN - An image forming apparatus includes an operation panel serving as a display apparatus and an input apparatus for accepting a request for performing processing and a control unit for controlling display on the operation panel. The control unit performs determination processing for determining whether a time period required for the processing requested to be performed is predictable or not, provides animation display by continuously displaying two or more windows relating to the processing when it is determined that the time period required for the processing is predictable, and provides pop-up display by displaying one window relating to the processing when it is determined that the time period required for the processing is not predictable.03-29-2012
20100026688GRAPHICAL WIND GAUGE - A wind gauge display apparatus comprising a control device and a reconfigurable display for displaying a first visual representation of a wind gauge if a wind angle is within a first range and displaying a second visual representation of the wind gauge if the wind angle is within a second range. The angles displayed on the reconfigurable display may be determined by input from a user. On the reconfigurable display, a location of a visual indicator of wind speed may be different in the first visual representation of the wind gauge than in the second visual representation of the wind gauge. The wind gauge display apparatus may also comprise a sensor for determining wind angle and wind speed.02-04-2010
20120249556Methods, systems, and computer readable media for fast geometric sound propagation using visibility computations - Methods, systems, and computer program products for simulating sound propagation can be operable to define a sound source position within a modeled scene having a given geometry and construct a visibility tree for modeling sound propagation paths within the scene. Using from-region visibility techniques to model sound diffraction and from-point visibility techniques to model specular sound reflections within the scene, the size of the visibility tree can be reduced. Using the visibility tree, an impulse response can be generated for the scene, and the impulse response can be used to simulate sound propagation in the scene.10-04-2012
20090066700FACIAL ANIMATION USING MOTION CAPTURE DATA - Methods and apparatus for facial animation using motion capture data are described herein. A mathematic solution based on minimizing a metric reduces the number of motion capture markers needed to accurately translate motion capture data to facial animation. A set of motion capture markers and their placement on an actor are defined and a set of virtual shapes having virtual markers are defined. The movement of the virtual markers are modeled based on an anatomical model. An initial facial capture is correlated to a corresponding virtual reference shape. For each subsequent facial capture, a delta vector is computed and a matrix solution determined based on the delta marker, initial positions, and set of virtual shapes. The solution can minimize a metric such as mean squared distance. The solution can be manually modified or edited using a user interface or console.03-12-2009
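The synthesis step described above reduces to a linear least-squares solve: find the virtual-shape weights whose combined marker deltas best match the observed delta vector. A minimal sketch with synthetic data; the shape basis, marker count, and least-squares formulation are illustrative, not taken from the patent:

```python
# Minimal sketch of the matrix solution: solve for virtual-shape weights that
# minimize the mean squared distance between the observed marker delta vector
# and a weighted sum of per-shape marker deltas. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_markers, n_shapes = 12, 4
B = rng.normal(size=(n_markers * 3, n_shapes))   # per-shape marker deltas
true_w = np.array([0.5, 0.0, 0.3, 0.2])
delta = B @ true_w                               # observed delta vector

w, *_ = np.linalg.lstsq(B, delta, rcond=None)    # minimizes ||B w - delta||^2
print(w.round(3))  # recovers the blend weights driving the facial animation
```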
20090309881COPYING OF ANIMATION EFFECTS FROM A SOURCE OBJECT TO AT LEAST ONE TARGET OBJECT - A method and a processing device may be provided for copying animation effects of a source object to one or more target objects of a presentation. The source object and the target objects may be included in presentation templates, or presentation slides of presentation files. The one or more target objects may be included in the same presentation slide as the source object, a different presentation slide from the source object, the same presentation file as the source object, a different presentation file from the source object, the same presentation template as the source object, or a different presentation template from the source object. Animation effects that are supported by a target object may be copied from the source object to the target object. When copying one or more animation effects from the source object to multiple target objects, timing of the animation effects may be serial or concurrent.12-17-2009
20120256928Methods and Systems for Representing Complex Animation Using Scripting Capabilities of Rendering Applications - A computerized device implements an animation coding engine to analyze timeline data defining an animation sequence and generate a code package. The code package can represent the animation sequence using markup code that defines a rendered appearance of a plurality of frames and a structured data object also comprised in the code package and defining a parameter used by a scripting language in transitioning between frames. The markup code can also comprise a reference to a visual asset included within a frame. The code package further comprises a cascading style sheet defining an animation primitive as a style to be applied to the asset to reproduce one or more portions of the animation sequence without transitioning between frames.10-11-2012
20120327091Gestural Messages in Social Phonebook - A method and a system are offered to enable communicating a status of a user.12-27-2012
20120188253SIGNAGE DISPLAY SYSTEM AND PROCESS - Display apparatus and process is provided for displaying a static source image in a manner that it is perceived as an animated sequence of images when viewed by an observer in relative motion to the apparatus. The source image is sliced or fractured to provide a plurality of image fractions of predetermined dimension. The fractions are redistributed in a predetermined sequence to provide an output image, which is placed in a preferably illuminated display apparatus provided with a mask. An observer in relative motion to the display apparatus, sequentially views a predetermined selection of image fractions through the mask, which are perceived by the observer as a changing sequence of images. Applying the concepts of persistence of vision, the observer perceives the reconstructed imagery as live action animation, a traveling singular image or series of static images, or changing image sequences, from a plurality of lines of sight.07-26-2012
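The slice-and-redistribute step can be illustrated as packing strips of several source frames into one static output image, so that a mask in relative motion reveals one frame at a time. A minimal sketch, assuming vertical strips and a simple interleaving sequence; the patent leaves the fraction dimensions and redistribution ordering as design parameters:

```python
# Minimal sketch of slicing source frames into image fractions and
# redistributing them into one static output image (barrier-grid style).
# Vertical strips and round-robin interleaving are illustrative choices.
import numpy as np

def interleave_frames(frames: np.ndarray, strip_w: int) -> np.ndarray:
    """Pack F source frames (F, H, W) into one output image by taking strip k
    from frame k mod F, so a moving mask reveals one frame at a time."""
    f, h, w = frames.shape
    out = np.empty((h, w), frames.dtype)
    for k in range(w // strip_w):
        src = k % f
        out[:, k * strip_w:(k + 1) * strip_w] = frames[src, :, k * strip_w:(k + 1) * strip_w]
    return out

frames = np.stack([np.full((4, 8), i) for i in range(4)])  # 4 toy frames
print(interleave_frames(frames, strip_w=2))
```

An observer moving past the masked output sees the strips of each source frame in sequence, which persistence of vision fuses into apparent animation.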
20120188255Framework for Graphics Animation and Compositing Operations - A framework for performing graphics animation and compositing operations has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, text, media, or any other type of object for a user interface of an application. The application commits changes to the state of the layers of the layer tree. The application does not need to include explicit code for animating the changes to the layers. Instead, an animation is determined for animating the change in state. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer for display on the processing device. Those portions of the render tree that have changed relative to prior versions can be tracked to improve resource management.07-26-2012
20120188257LOOPING MOTION SPACE REGISTRATION FOR REAL-TIME CHARACTER ANIMATION - A method for generating a looping motion space for real-time character animation may include determining a plurality of motion clips to include in the looping motion space and determining a number of motion cycles performed by a character object depicted in each of the plurality of motion clips. A plurality of looping motion clips may be synthesized from the motion clips, where each of the looping motion clips depicts the character object performing an equal number of motion cycles. Additionally, a starting frame of each of the plurality of looping motion clips may be synchronized so that the motion cycles in each of the plurality of looping motion clips are in phase with one another. By rendering an animation sequence using multiple passes through the looping motion space, an animation of the character object performing the motion cycles may be extended for arbitrary length of time.07-26-2012
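The synchronization step here is a rotation of each looping clip so that all motion cycles start in phase. A minimal sketch, assuming the cycle-start frame of each clip has already been detected by some earlier analysis:

```python
# Minimal sketch of phase-synchronizing looping motion clips: rotate each
# clip so frame 0 is its cycle-start frame, putting all clips in phase.
# Cycle detection is assumed already done (start indices are given).

def synchronize(clips: list[list[str]], start_index: list[int]) -> list[list[str]]:
    """Rotate each looping clip so that it begins at its cycle-start frame."""
    return [clip[s:] + clip[:s] for clip, s in zip(clips, start_index)]

walk = ["contact_L", "pass_L", "contact_R", "pass_R"]
run = ["pass_R", "contact_L", "pass_L", "contact_R"]
aligned = synchronize([walk, run], start_index=[0, 1])
print(aligned[0][0], aligned[1][0])  # both clips now begin at contact_L
```

With every clip in phase, repeated passes through the motion space can blend between clips at any frame without the character's feet slipping out of step.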
20120188256VIRTUAL WORLD PROCESSING DEVICE AND METHOD - A method and apparatus for processing a virtual world. A data structure of a virtual object of a virtual world may be defined, and a virtual world object of the virtual world may be controlled, and accordingly an object in a real world may be reflected to the virtual world. Additionally, the virtual world object may migrate between virtual worlds, using the defined data structure.07-26-2012
20120188254Distinguishing requests for presentation time from requests for data time - Techniques are provided for managing Presentation Time in a digital rendering system for presentation of temporally-ordered data when the digital rendering system includes a Variable Rate Presentation capability. In one embodiment, Presentation Time is converted to Data Time, and Data Time is reported instead of Presentation Time when only one time can be reported. In another embodiment, a predetermined one of Presentation Time and Data Time is returned in response to a request for a Current Time.07-26-2012
20120320066Modifying an Animation Having a Constraint - A computer-implemented method for handling a modification of an animation having a constraint includes detecting a user modification of an animation that involves at least first and second objects, the first object constrained to the second object during a constrained period and non-constrained to the second object during a non-constrained period. The method includes, based on the user modification, selecting one of at least first and second compensation adjustments for the animation based on a compensation policy; and adjusting the animation according to the selected compensation adjustment.12-20-2012
20100315426SYSTEMS AND METHODS FOR INTEGRATING GRAPHIC ANIMATION TECHNOLOGIES IN FANTASY SPORTS CONTEST APPLICATIONS - Systems and methods for integrating graphic animation technologies with fantasy sports contest applications are provided. This invention enables a fantasy sports contest application to depict plays in various sporting events using graphic animation. The fantasy sports contest application may combine graphical representation of real-life elements such as, for example, player facial features, with default elements such as, for example, a generic player body, to create realistic graphic video. The fantasy sports contest application may provide links to animated videos for depicting plays on contest screens in which information associated with the plays may be displayed. The fantasy sports contest application may play the animated video for a user in response to the user selecting such a link. In some embodiments of the present invention, the fantasy sports contest application may also customize animated video based on user-supplied setup information. For example, the fantasy sports contest application may provide play information and other related data to allow a user to generate animated videos using the user's own graphics processing equipment and graphics animation program.12-16-2010
20120229474FLYING EFFECTS CHOREOGRAPHY SYSTEM - A flying effects choreography system provides visualizations of flying effects within a virtual environment. The system allows choreographers to define a sequence of waypoints that identify a path of motion. A physics engine of the system may then calculate position data for a performer or other element attached to a free-swinging pendulum cable, as the performer and pendulum cable move along the path of motion. In this manner, the position data describes the motion of the performer, including the pendulum effect or swing of the performer on the pendulum cable. The position data may be used to generate one or more visualizations that show the performer's motion, including the pendulum effect. The choreographer may review the visualizations and make modifications any number of times, until a desired flying effect is produced, without having to physically implement the flying effect in the real world.09-13-2012
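The pendulum effect the physics engine computes can be approximated by integrating a point-mass pendulum along the cable. A minimal sketch using semi-implicit Euler integration; the actual solver, cable model, and parameters are assumptions, not details from the abstract:

```python
# Minimal sketch of the position data a physics engine might produce for a
# performer on a free-swinging pendulum cable. A point-mass pendulum and
# semi-implicit Euler integration are illustrative assumptions.
import math

def pendulum_positions(length=5.0, theta0=0.5, dt=1 / 60, steps=240, g=9.81):
    theta, omega = theta0, 0.0
    positions = []
    for _ in range(steps):
        omega += -(g / length) * math.sin(theta) * dt  # angular acceleration
        theta += omega * dt
        positions.append((length * math.sin(theta), -length * math.cos(theta)))
    return positions

print(pendulum_positions()[:3])  # (x, y) of the performer, frame by frame
```

The resulting per-frame positions are what the visualization would play back for the choreographer to review before any rigging is built.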
20120229473Dynamic Animation in a Mobile Device - Method and system for monitoring occurrence of an event using dynamic animation are disclosed. The method includes identifying an event to be dynamically animated, defining a set of trigger conditions of the event to be monitored, monitoring the event according to the set of trigger conditions, computing a display unit in accordance with a comparison of a status of the event to a corresponding trigger condition of the event, creating a dynamic animation for display using the display unit, and displaying the dynamic animation on a display.09-13-2012
20100328318IMAGE DISPLAY DEVICE - An image display device is constructed by a display memory, a sprite attribute table, a sprite rendering processor and an animation execution engine. The display memory stores image data to be displayed on a display. The sprite attribute table stores attribute data representing a display attribute of a sprite which is a component of the image data. The sprite rendering processor executes a drawing process for reflecting image data of the sprite to the image data stored in the display memory according to the attribute data stored in the sprite attribute table. The animation execution engine reads an animation execution program including both attribute data to be transferred and a table write command of the attribute data from an external memory, and executes the animation execution program to transfer the attribute data to the sprite attribute table according to the table write command.12-30-2010
20080297517Transitioning Between Two High Resolution Images in a Slideshow - A method of transitioning between two high resolution images in a slideshow includes replacing a first image with a lower resolution copy of that first image and fading out the lower resolution copy of the first image to reveal a second image. A system for transitioning between two high resolution images in a slideshow includes a video chip having a first video buffer for containing a first image, a second video buffer for containing a second image, and a graphic buffer for containing a lower resolution copy of the first image. The chip is configured to replace the first image with the lower resolution copy of the first image and fade out the lower resolution copy of the first image to reveal the second image.12-04-2008
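The transition itself reduces to an alpha fade of the low-resolution copy over the second image. A minimal sketch assuming per-frame linear blending; a real implementation would blend in the video chip the abstract describes:

```python
# Minimal sketch of the slideshow transition: the first image has already
# been replaced by its low-resolution copy, which is then faded out to
# reveal the second image. Frame count and linear ramp are illustrative.
import numpy as np

def fade_out(lowres_first: np.ndarray, second: np.ndarray, frames: int = 30):
    for i in range(frames + 1):
        alpha = 1.0 - i / frames          # opacity of the fading low-res copy
        yield alpha * lowres_first + (1.0 - alpha) * second

a = np.zeros((2, 2))                      # stand-in for the low-res copy
b = np.ones((2, 2))                       # stand-in for the second image
last = list(fade_out(a, b))[-1]
print(last)                               # the second image is fully revealed
```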
20080297516Generating a Surface Representation of an Item - Among other disclosed subject matter, a computer-implemented method for generating a surface representation of an item includes identifying, for a point on an item in an animation process, at least first and second transformation points corresponding to respective first and second transformations of the point. Each of the first and second transformations represents an influence on a location of the point of respective first and second joints associated with the item. The method includes determining an axis for a cylindrical coordinate system using the first and second transformations. The method includes performing an interpolation of the first and second transformation points in the cylindrical coordinate system to obtain an interpolated point. The method includes recording the interpolated point in a surface representation of the item in the animation process.12-04-2008
20130169647DISPLAYING PARTIAL LOGOS - A processor serves instructions that set a position of a banner having a shadow line and a position of an image of a partial logo having a line crossing at least part of the partial logo, wherein the position of the image of the partial logo is set based on the position of the banner, a dimension of the banner, and a position of the line crossing the partial logo. An image of the banner having the shadow line is retrieved and served. An image of the partial logo is retrieved and served. The rendered banner and partial logo display the partial logo such that the line crossing the partial logo is aligned with a shadow line on the banner.07-04-2013
20130169648CUMULATIVE MOVEMENT ANIMATIONS - Cumulative movement animation techniques are described. In one or more implementations, output of a first animation is initiated that involves a display of movement in a user interface of a computing device. An input is received by the computing device during the output of the first animation, the input configured to cause a second display of movement in the user interface. Responsive to the receipt of the input, a remaining portion of the movement of the first animation is output along with the movement of the second animation by the computing device.07-04-2013
20130169649MOVEMENT ENDPOINT EXPOSURE - Movement endpoint exposure techniques are described. In one or more implementations, an input is received by a computing device to cause output of an animation involving movement in a user interface. Responsive to the receipt of the input, an endpoint is exposed to software of the computing device that is associated with the user interface, such as applications and controls. The endpoint references a particular location in the user interface at which the animation is calculated to end for the input.07-04-2013
20110037767VIDEO IN E-MAIL - To allow a video clip to be rendered within an e-mail, the video stream is converted into an animated image object (e.g. a GIF (Graphics Interchange Format) object) and stored on a server system. An HTML image element/tag is created that references the animated image object at the server, for embedding in a conventional HTML-encoded e-mail message. When the receiving e-mail application processes the HTML encoding, the processing of the HTML image element causes the referenced animated image object to be downloaded and displayed, thereby automatically presenting a recreation of the video stream. To facilitate efficient transmission to the receiving device, the size of the animated image object is preferably optimized before transmission, the optimization including general optimization techniques, as well as optimizations based on the particular characteristics associated with the receiving device and/or the communications link to the receiving device.02-17-2011
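The embedding side of this scheme is ordinary HTML e-mail construction: the message carries only an image element whose source points at the server-hosted animated object. A minimal sketch; the GIF URL and addresses are hypothetical:

```python
# Minimal sketch of embedding a server-hosted animated GIF in an HTML e-mail.
# The video-to-GIF conversion is assumed already done on the server; the URL
# and addresses below are hypothetical placeholders.
from email.mime.text import MIMEText

gif_url = "https://media.example.com/clips/abc123.gif"  # hypothetical
html = f'<p>Watch:</p><img src="{gif_url}" alt="video clip">'

msg = MIMEText(html, "html")
msg["Subject"] = "Video in e-mail"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
print(msg.as_string())  # the client downloads and plays the GIF on render
```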
20090207175Animation Using Animation Effect and Trigger Element - Among other disclosed subject matter, a computer-implemented method for animating an image element includes determining that a trigger event defined by a trigger element occurs. The method includes, in response to the trigger event, applying an animation effect to a group that comprises at least one image element. A first association between the animation effect and the group is configured for another animation effect to selectively be associated with the group, and a second association between the trigger element and the animation effect is configured for another trigger element to selectively be associated with the animation effect.08-20-2009
20120139924DYNAMIC ADAPTION OF ANIMATION TIMEFRAMES WITHIN A COMPLEX GRAPHICAL USER INTERFACE - The dynamic adaption of animation timeframes includes selecting animations to be displayed on a graphical user interface (GUI) and aligning the selected animations in a queue. An overall duration of time needed to display the selected animations in the queue is determined based on timeframes associated with the selected animations in the queue. The overall duration of time is compared with a predefined time value. If the overall duration of time is greater than the predefined time, a timeframe associated with at least one of the selected animations in the queue is reduced until the overall duration of time is less than or equal to the predefined time value. Each of the selected animations in the queue are sequentially displayed on the GUI for an amount of time that is based on the timeframes associated with the selected animations in the queue.06-07-2012
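The adaptation rule above compares the queue's total duration to a budget and shrinks timeframes until the total fits. A minimal sketch using a proportional scale-down, which is one possible reduction policy; the abstract only requires reducing at least one timeframe:

```python
# Minimal sketch of the timeframe-adaptation rule: if the queued animations
# exceed the predefined time budget, shrink their timeframes so the overall
# duration fits. Proportional scaling is an illustrative policy choice.

def adapt_timeframes(timeframes_ms: list[float], budget_ms: float) -> list[float]:
    total = sum(timeframes_ms)
    if total <= budget_ms:
        return timeframes_ms              # already within the predefined time
    scale = budget_ms / total
    return [t * scale for t in timeframes_ms]

print(adapt_timeframes([400, 300, 500], budget_ms=900))  # scaled to sum to 900
```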
20120327090Methods and Apparatuses for Facilitating Skeletal Animation - Methods and apparatuses for facilitating skeletal animation are provided. A method may include determining a holistic motion path for a skeletal animation. The method may further include determining, independently of the determination of the holistic motion path, a limb animation for the skeletal animation based at least in part upon a plurality of skeletal key frames. The method may additionally include generating the skeletal animation by correlating the holistic motion path with the limb animation. Corresponding apparatuses are also provided.12-27-2012
20120327089Fully Automatic Dynamic Articulated Model Calibration - A depth sensor obtains images of articulated portions of a user's body such as the hand. A predefined model of the articulated body portions is provided. The model is matched to corresponding depth pixels which are obtained from the depth sensor, to provide an initial match. The initial match is then refined using distance constraints, collision constraints, angle constraints and a pixel comparison using a rasterized model. Distance constraints include constraints on distances between the articulated portions of the hand. Collision constraints can be enforced when the model meets specified conditions, such as when at least two adjacent finger segments of the model are determined to be in a specified relative position, e.g., parallel. The rasterized model includes depth pixels of the model which are compared to identify overlapping pixels. Dimensions of the articulated portions of the model are individually adjusted.12-27-2012
20120327088Editable Character Action User Interfaces - A system includes a computing device that includes a memory configured to store instructions. The computing device also includes a processor configured to execute the instructions to perform a method that includes defining at least one of a location in a virtual scene and a time represented in a timeline as being associated with a performance of an animated character. The method also includes aggregating data that represents actions of the animated character for at least one of the defined location and the defined time. The method also includes presenting a user interface that includes a representation of the aggregated actions. The representation is editable to adjust at least one action included in the aggregation.12-27-2012
20130021347SYSTEMS AND METHODS FOR FINANCIAL PLANNING USING ANIMATION - CiFiCo (Cinematic Financial Concepts) simplifies financial concepts by taking information and “cinematizing” it into fun, simple, engaging, moving visual representations (aka “movies”) accompanied by sound and touch control. Movies contain various assets, incomes, and insurance, as well as intergenerational timelines. CiFiCo can demonstrate the impact of asset accumulation, distribution, taxes, insurance, investments, intergenerational transfers, and other concepts. The tool allows individuals to gain a unique perspective on how the financial decisions they make (past, present and future) can impact their overall financial picture (movie). CiFiCo can illustrate and factor for contributions and distributions, as well as risks or attacks that may draw against one's financial stability (e.g., death, disabilities, long term care costs, lawsuits, natural disasters, market volatility, etc.). The application can illustrate a single financial concept, compare several financial strategies, or portray a fully integrated, multi-generational, financial plan.01-24-2013
20120092347ELECTRONIC DEVICE AND METHOD FOR DISPLAYING WEATHER INFORMATION THEREON - An electronic device and method display weather information using location images processed with image effects. A location of the electronic device is detected, and the electronic device then receives weather information for that location from a server. Upon receiving the weather information, the electronic device reads the image effects for the images from a storage unit, and reads the images from the server according to the location information. The images processed using the image effects are then displayed on a display unit of the electronic device.04-19-2012
20120092346GROUPING ITEMS IN A FOLDER - User interface changes and file system operations related to grouping items in a destination folder are disclosed. A user can group multiple items displayed on a user interface into a destination folder using an input command. An animation can be presented in the user interface illustrating the creation of the destination folder and the movement of each selected item into the newly created folder. The movement of each selected item can be along a respective path starting from an initial location on the user interface and terminating at the destination folder, and initiation of the movement of each selected item can be asynchronous with respect to the other selected items. Implementations showing the animations in various types of user interfaces are also disclosed.04-19-2012
20120287137Management of Presentation Time in a Digital Media Presentation System with Variable Rate Presentation Capability - Techniques are provided for managing Presentation Time in a digital rendering system for presentation of temporally-ordered data when the digital rendering system includes a Variable Rate Presentation capability. In one embodiment, Presentation Time is converted to Data Time, and Data Time is reported instead of Presentation Time when only one time can be reported. In another embodiment, a predetermined one of Presentation Time and Data Time is returned in response to a request for a Current Time.11-15-2012
20130009964METHODS AND APPARATUS TO PERFORM ANIMATION SMOOTHING - Methods and apparatus to perform animation smoothing are disclosed. An example method includes determining an estimated drawing time associated with each of a plurality of frames of an animation, calculating a metric based on the estimated drawing time associated with each of the plurality of frames, and updating an assumed frame time based on the metric.01-10-2013
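The smoothing loop described above estimates per-frame drawing time, reduces the estimates to a metric, and feeds that metric back into the assumed frame time. A minimal sketch assuming a moving-average metric and a direct-update policy; neither choice is fixed by the abstract:

```python
# Minimal sketch of animation smoothing: record each frame's estimated
# drawing time, compute a metric over recent frames, and update the assumed
# frame time from the metric. The moving average and direct update are
# illustrative assumptions.
from collections import deque

class FrameTimeEstimator:
    def __init__(self, assumed_ms: float = 16.7, window: int = 30):
        self.assumed_ms = assumed_ms
        self.samples = deque(maxlen=window)

    def record(self, drawing_ms: float) -> float:
        self.samples.append(drawing_ms)
        metric = sum(self.samples) / len(self.samples)  # moving-average metric
        self.assumed_ms = metric   # update policy: track the metric directly
        return self.assumed_ms

est = FrameTimeEstimator()
for t in (15.0, 22.0, 19.0):
    print(est.record(t))          # assumed frame time adapts to measurements
```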
20130021348SYSTEMS AND METHODS FOR ANIMATION RECOMMENDATIONS - Systems and methods for generating recommendations for animations to apply to animate 3D characters in accordance with embodiments of the invention are disclosed. One embodiment includes an animation server and a database containing metadata describing a plurality of animations and the compatibility of ordered pairs of the described animations. In addition, the animation server is configured to receive requests for animation recommendations identifying a first animation, generate a recommendation of at least one animation described in the database based upon the first animation, receive a selection of an animation described in the database, and concatenate at least the first animation and the selected animation.01-24-2013
20130147810APPARATUS RESPONSIVE TO AT LEAST ZOOM-IN USER INPUT, A METHOD AND A COMPUTER PROGRAM - A method, apparatus, computer program and user interface wherein the method comprises displaying a still image on a display; detecting user selection of a portion of the still image; and in response to the detection of the user selection, replacing the selected portion of the image with a moving image and maintaining the rest of the still image, which has not been selected, as a still image.06-13-2013
20120249555VISUAL CONNECTIVITY OF WIDGETS USING EVENT PROPAGATION - A method, system and computer program product receive a set of objects for connection, create a moving object within the set of objects, display visual connection cues on objects in the set of objects, adjust the visual connection cues of the moving object and a target object in the set of objects, identify event propagation precedence, and connect the moving object with the target object.10-04-2012
20130113807User Interface for Controlling Animation of an Object - A user can control the animation of an object via an interface that includes a control area and a user-manipulable control element. In one embodiment, the control area includes an ellipse, and the user-manipulable control element includes an arrow. In another embodiment, the control area includes an ellipse, and the user-manipulable control element includes two points on the circumference of the ellipse. In yet another embodiment, the control area includes a first rectangle, and the user-manipulable control element includes a second rectangle. In yet another embodiment, the user-manipulable control element includes two triangular regions, and the control area includes an area separating the two regions.05-09-2013
20130093774CLOUD-BASED ANIMATION TOOL - A cloud-based animation tool may improve the graphics capabilities of low-cost devices. A web service may allow a user to submit a text string from the device for animation by the cloud-based tool. The string may be parsed by a natural language processor into components such as nouns and verbs. The parsed words may be cross-referenced to content through a reference database, including instructions for verbs and images for nouns. An animation may be created from the images corresponding to the nouns and instructions corresponding to the verbs. The animation may be rendered for display and may be transmitted to the user through the web service. The cloud-based animation tool may improve access to educational material for students accessing content through low-cost devices made available through the one-computer-per-child program.04-18-2013
20130100142INTERFACING WITH A SPATIAL VIRTUAL COMMUNICATION ENVIRONMENT - A spatial layout of zones of a virtual area in a network communication environment is displayed. A user can have a respective presence in each of one or more of the zones. Navigation controls and interaction controls are presented. The navigation controls enable the user to specify where to establish a presence in the virtual area. The interaction controls enable the user to manage interactions with one or more other communicants in the network communication environment. A respective presence of the user is established in each of one or more of the zones in response to input received via the navigation controls. Respective graphical representations of the communicants are depicted in each of the zones where the communicants respectively have presence.04-25-2013
20130120398INPUT DEVICE AND METHOD FOR AN ELECTRONIC APPARATUS - The present specification teaches an input device and method for an electronic apparatus. The input device can be based on one or more force sensitive input devices, such as force sensitive resistors. The electronic apparatus includes an output device such as a display. A processor is configured to receive input from the input device and to control the display or other output device. In certain implementations, the display is controlled to generate a first graphical object that is associated with an instruction. The processor is configured to generate a second graphical object in response to an input received from the force sensitive input device that corresponds with the instruction.05-16-2013
20130127874Physical Simulation Tools For Two-Dimensional (2D) Drawing Environments - Methods and apparatus for simulating various physical effects on 2D objects in two-dimensional (2D) drawing environments. A set of 2D physical simulation tools may be provided for editing and enhancing 2D art based on 2D physical simulations. Each 2D physical simulation tool may be associated with a particular physical simulator that may be applied to 2D objects in an image using simple and intuitive gestures applied with the respective tool. In addition, predefined materials may be specified for a 2D object to which a 2D physical simulation tool may be applied. The 2D physical simulation tools may be used to simulate physical effects in static 2D images and to generate 2D animations of the physical effects. Computing technologies may be leveraged so that the physical simulations may be executed in real-time or near-real-time as the tools are applied, thus providing immediate feedback and realistic visual effects.05-23-2013
20130127875Value Templates in Animation Timelines - Methods and systems for animation timelines using value templates are disclosed. In some embodiments, a method includes generating a data structure corresponding to a graphical representation of a timeline and creating an animation of an element along the timeline, where the animation modifies a property of the element according to a function, and where the function uses a combination of a string with a numerical value to render the animation. The method also includes adding a command corresponding to the animation into the data structure, where the command is configured to return the numerical value, and where the data structure includes a value template that produces the combination of the string with the numerical value. The method further includes passing the produced combination of the string with the numerical value to the function and executing the function to animate the element.05-23-2013
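The value-template mechanism separates the numeric output of a timeline command from the string it is wrapped in before being handed to the animating function. A minimal sketch; the "{:.0f}px" template and linear interpolation are illustrative stand-ins:

```python
# Minimal sketch of a value template: the timeline command returns a bare
# number, and the template combines it with a string so the rendering
# function receives a value like "120px". Template and easing are assumed.

def make_animation(template: str, start: float, end: float, frames: int):
    def command(frame: int) -> float:            # returns the numerical value
        return start + (end - start) * frame / (frames - 1)

    def render(frame: int) -> str:               # value template applied
        return template.format(command(frame))

    return render

render = make_animation("{:.0f}px", start=0, end=120, frames=4)
print([render(f) for f in range(4)])  # ['0px', '40px', '80px', '120px']
```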
20130135315METHOD, SYSTEM AND SOFTWARE PROGRAM FOR SHOOTING AND EDITING A FILM COMPRISING AT LEAST ONE IMAGE OF A 3D COMPUTER-GENERATED ANIMATION - Method for shooting and editing a film comprising at least one image of a 3D computer-generated animation created by cinematographic software according to a mathematical model of elements that are part of the animation and according to a definition of situations and actions occurring for said elements as a function of time, the method comprising: computing, by the cinematographic software, alternative suggested viewpoints for an image of the 3D computer-generated animation corresponding to a particular time point according to said definition; and displaying together, on a display interface, images corresponding to said computed alternative suggested viewpoints of the 3D computer-generated animation at that particular time point.05-30-2013
20130141439METHOD AND SYSTEM FOR GENERATING ANIMATED ART EFFECTS ON STATIC IMAGES - A method and system for generating animated art effects while viewing static images, where the appearance of effects depends upon the content of an image and parameters of accompanying sound, is provided. The method of generating animated art effects on static images, based on the static image and accompanying sound feature analysis, includes storing an original static image; detecting areas of interest on the original static image and computing features of the areas of interest; creating visual objects of art effects according to the features detected in the areas of interest; detecting features of an accompanying sound; modifying parameters of visual objects in accordance with the features of the accompanying sound; and generating a frame of an animation including the original static image with superimposed visual objects of art effects.06-06-2013
20130141440OPERATION SEQUENCE DISPLAY METHOD AND OPERATION SEQUENCE DISPLAY SYSTEM - Disclosed are an operation sequence display method and an operation sequence display system in which operation scenes for attaching or removing one or more components are displayed by switching between the scenes. In at least one operation scene, the target components to be attached or removed are displayed in a different manner from other components by changing gray scales of a single color; marking displays that emphasize the operation portions of the target components or the moving directions of the target components on the screen are blinked at a constant interval; after the marking displays are blinked, the operations on the operation portions or the movements of the target components are displayed by animation; and the displays of these operations and movements are performed at a constant rhythm.06-06-2013
20110273456SYSTEM AND METHOD FOR PARTIAL SIMULATION AND DYNAMIC CONTROL OF SIZES OF ANIMATED OBJECTS - Systems and methods are provided for altering a portion of a simulation without deleteriously altering adjoining portions, and in so doing increasing the pace at which simulations may be made by decreasing the overall number and size of simulations required. In other implementations, the systems and method provide convenient ways to dynamically control the size of animated objects, such as hair or cloth, using animated rest poses.11-10-2011
20130147811SELECTION OF ANIMATION DATA FOR A DATA-DRIVEN MODEL - A set of animation data for an element in an animation is statistically sampled to obtain a common context. The common context is a subset of a plurality of frames of the set of animation data. Further, output of a data-driven model for the animation, which utilizes at least a subset of the common context, is compared with output of a computational model for the animation. The computational model has a first set of logic. The data-driven model has a second set of logic that has less logic than the first set of logic. In addition, an error between the computational model and the data-driven model is computed.06-13-2013
20130147812HEATING, VENTILATION AND AIR CONDITIONING SYSTEM USER INTERFACE HAVING PROPORTIONAL ANIMATION GRAPHICS AND METHOD OF OPERATION THEREOF - A user interface for use with an HVAC system, a method of providing service reminders on a single screen of a user interface of an HVAC system and an HVAC system incorporating the user interface or the method. In one embodiment, the user interface includes: (1) a display configured to provide information to a user, (2) a touchpad configured to accept input from the user and (3) a processor and memory coupled to the display and the touchpad and configured to drive the display, the display further configured to display proportional animation graphics corresponding to attributes of the HVAC system.06-13-2013
20130100140HUMAN BODY AND FACIAL ANIMATION SYSTEMS WITH 3D CAMERA AND METHOD THEREOF - An animation system integrating face and body tracking for puppet and avatar animation by using a 3D camera is provided. The 3D camera human body and facial animation system includes a 3D camera having an image sensor and a depth sensor with the same fixed focal length and image resolution, equal FOV, and aligned image centers. The system software of the animation system provides on-line tracking and off-line learning functions. An algorithm of object detection for the on-line tracking function includes detecting and assessing a distance of an object; depending upon the distance of the object, the object can be identified as a face, body, or face/hand so as to perform face tracking, body detection, or ‘face and hand gesture’ detection procedures. The animation system can also have a zoom lens, which includes an image sensor with an adjustable focal length f′ and a depth sensor with a fixed focal length f.04-25-2013
20100309209System and method for database driven action capture - There is provided a system and method for database driven action capture. By utilizing low cost, lightweight MEMS devices such as accelerometers, a user friendly, wearable, and cost effective system for motion capture is provided, which relies on a motion database of previously recorded motions to reconstruct the actions of a user. By relying on the motion database, calculation errors such as integration drift are avoided and the need for complex and expensive positional compensation hardware is avoided. The accelerometers may be implemented in an E-textile embodiment using inexpensive off-the-shelf components. In some embodiments, compression techniques may be used to accelerate linear best match searching against the motion database. Adjacent selected motions may also be blended together for improved reconstruction results and visual rendering quality. Various perceivable effects may be triggered in response to the reconstructed motion, such as animating a 3D avatar, playing sounds, or operating a motor.12-09-2010
20100309208Remote Control Electronic Display System - A remotely controlled electronic display sign which operates with a plasma display and which provides for humidity and heat control and the like allowing the sign to be used in various environments. The sign is essentially self-contained and includes those components necessary for enabling a display of desired material from a remote control source or one located at the sign. A controller in or associated with the sign is accessible either electrically, or through satellite transmission or other wireless transmission from the remote source which allows the display of the sign to be changed at will. Thus, an operator at a remote source may, with the aid of a pre-prepared graphic design, transmit that design to the controller at or associated with the sign for display of that graphic information and potentially with sound.12-09-2010
20120274645ALIGNING ANIMATION STATE UPDATE AND FRAME COMPOSITION - An event, such as a vertical blank interrupt or signal, received from a display adapter in a system is identified. Activation of a timer-driven animation routine that updates a state of an animation and activation of a paint controller module that identifies updates to the state of the animation and composes a frame that includes the updates to the state of the animation are aligned, both being activated based on the identified event in the system.11-01-2012
20120274644Framework for Graphics Animation and Compositing Operations - A graphics animation and compositing operations framework has a layer tree for interfacing with the application and a render tree for interfacing with a render engine. Layers in the layer tree can be content, windows, views, video, images, text, media, or other types of objects for an application's user interface. The application commits state changes to the layers of the layer tree. The application does not need to include explicit code for animating the changes to the layers. Instead, an animation is determined for animating the change in state by the framework which can define a set of predetermined animations based on motion, visibility, and transition. The determined animation is explicitly applied to the affected layers in the render tree. A render engine renders from the render tree into a frame buffer. Portions of the render tree changing relative to prior versions can be tracked to improve resource management.11-01-2012
20130155071Document Collaboration Effects - Various features and processes related to document collaboration are disclosed. In some implementations, animations are presented when updating a local document display to reflect changes made to the document at a remote device. In some implementations, a user can selectively highlight changes made by collaborators in a document. In some implementations, a user can select an identifier associated with another user to display a portion of a document that includes the other user's cursor location. In some implementations, text in document chat sessions can be automatically converted into hyperlinks which, when selected, cause a document editor to perform an operation.06-20-2013
20130155072ELECTRONIC DEVICE AND METHOD FOR MANAGING FILES USING THE ELECTRONIC DEVICE - A method for managing files using an electronic device determines a target storage path in the electronic device in response to detecting a selection operation of the target storage path on a touch panel of the electronic device. A copy operation on a target file displayed on the touch panel is detected, while the selection operation is being implemented. The target file is stored at the target storage path, and an animated cartoon is output on the touch panel to represent a process of storing the target file to the target storage path.06-20-2013
20120281001METHOD FOR CONSTRUCTING BODIES THAT ROTATE IN THE SAME DIRECTION AND ARE IN CONTACT WITH ONE ANOTHER AND COMPUTER SYSTEM FOR CARRYING OUT SAID METHOD - The invention relates to a method for constructing bodies which, while rotating codirectionally about axes arranged in parallel, constantly touch one another at at least one point.11-08-2012
20110292055SYSTEMS AND METHODS FOR ANIMATING NON-HUMANOID CHARACTERS WITH HUMAN MOTION DATA - Systems, methods and products for animating non-humanoid characters with human motion are described. One aspect includes selecting key poses included in initial motion data at a computing system; obtaining non-humanoid character key poses which provide a one to one correspondence to selected key poses in said initial motion data; and statically mapping poses of said initial motion data to non-humanoid character poses using a model built based on said one to one correspondence from said key poses of said initial motion data to said non-humanoid character key poses. Other embodiments are described.12-01-2011
20110310104DIGITAL COMIC BOOK FRAME TRANSITION METHOD - A method is provided in which, during a first period of time, first data relating to a first frame of a digital comic book are displayed via an electronic display device. The displayed first data includes a first content element. During a second period of time, second data relating to a second frame of the digital comic book is displayed via the electronic display device. The displayed second data also includes the first content element. A frame transition effect is displayed during a third period of time intermediate the first period of time and the second period of time. During the third period of time, an animation sequence is displayed depicting translation of the first content element between a location of the first content element in the displayed first data and a location of the first content element in the displayed second data. In particular, the animation sequence is displayed superimposed on the frame transition effect.12-22-2011
20130187928SIGNAGE DISPLAY SYSTEM AND PROCESS - Display apparatus and process is provided for displaying a static source image in a manner that it is perceived as an animated sequence of images when viewed by an observer in relative motion to the apparatus. The source image is sliced or fractured to provide a plurality of image fractions of predetermined dimension. The fractions are redistributed in a predetermined sequence to provide an output image, which is placed in a preferably illuminated display apparatus provided with a mask. An observer in relative motion to the display apparatus, sequentially views a predetermined selection of image fractions through the mask, which are perceived by the observer as a changing sequence of images. Applying the concepts of persistence of vision, the observer perceives the reconstructed imagery as live action animation, a traveling singular image or series of static images, or changing image sequences, from a plurality of lines of sight.07-25-2013
20130187930METHOD AND SYSTEM FOR INTERACTIVE SIMULATION OF MATERIALS AND MODELS - A method and system for drawing, displaying, editing, animating, simulating, and interacting with one or more virtual polygonal, spline, volumetric models, three-dimensional visual models, or robotic models. The method and system provide flexible simulation, the ability to combine rigid and flexible simulation on plural portions of a model, rendering of haptic forces and force-feedback to a user.07-25-2013
20120019540Sliding Motion To Change Computer Keys - The subject matter of this specification can be implemented in, among other things, a computer-implemented touch screen user interface method that includes displaying a plurality of keys of a virtual keyboard on a touch screen computer interface, wherein the keys each include initial labels and a first key has multi-modal input capability that include a first mode in which the key is tapped and a second mode in which the key is slid across. The method further includes identifying an occurrence of sliding motion in a first direction by a user on the touch screen and over the first key. The method further includes determining modified key labels for at least some of the plurality of keys. The method further includes displaying the plurality of keys with the modified labels in response to identifying the occurrence of sliding motion on the touch screen and over the first key.01-26-2012
20130194280SYSTEM AND METHOD FOR PROVIDING AN AVATAR SERVICE IN A MOBILE ENVIRONMENT - Provided is an avatar service system and method for providing an avatar in a service provided in a mobile environment. The avatar service system may include a request receiving unit to receive a request for the avatar to perform an action, an image data selecting unit to select image data and metadata for body layers forming a body of the avatar in response to the request and, based on the selected body image data, to further select image data for a plurality of item layers disposed on the body of the avatar, and an avatar action processing unit to generate action data for applying the action of the avatar based on the selected image data and metadata.08-01-2013
20130194279OPTIMIZING GRAPH EVALUATION - A system for performing graphics processing is disclosed. A dependency graph comprising interconnected nodes is accessed. Each node has output attributes and the dependency graph receives input attributes. A first list is accessed, which includes a dirty status for each dirty output attribute of the dependency graph. A second list is accessed, which associates one of the input attributes with output attributes that are affected by the one input attribute. A third list is accessed, which associates one of the output attributes with output attributes that affect the one output attribute. An evaluation request for a requested output attribute is received. A set of output attributes are selected for evaluation based on being specified in the first list as dirty and being specified in the third list as associated with the requested output attribute. The set of output attributes are evaluated.08-01-2013
20130194278PORTABLE VIRTUAL CHARACTERS - Described herein are methods, systems, apparatuses and products for portable virtual characters. One aspect provides a method including: providing a virtual character on a first device, the virtual character having a plurality of attributes allowing for a plurality of versions of the virtual character; providing a device table listing at least one device having at least one attribute; transferring to a second device information to permit an instantiation of the virtual character on the second device, the instantiation of the virtual character on the second device including at least one attribute matching at least one attribute of the second device as determined from the device table; and receiving virtual character information from the second device related to the at least one attribute of the second device to permit updating of the virtual character on the first device. Other embodiments are disclosed.08-01-2013
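A sketch of the device-table matching, with a made-up table shape; real attribute vocabularies would come from the service, so everything named here is an assumption.

    # Hypothetical device table: device -> attributes it supports.
    DEVICE_TABLE = {
        "phone": {"full_body", "voice", "animation"},
        "watch": {"face_only", "animation"},
    }

    def instantiation_for(character_attrs, device):
        """Transfer only the character attributes the target device
        supports, per the device table."""
        supported = DEVICE_TABLE.get(device, set())
        return {a: v for a, v in character_attrs.items() if a in supported}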
20090244071Synthetic image automatic generation system and method thereof - Provided are a computer system and a computerized method to automatically generate synthetic images that simulate human activities in a particular environment. The program instructions are input in the form of natural language. Particular columns are provided in the user interface to allow the user to select desired instruction elements from sets of limited candidates. The instruction elements form the program instructions. The system analyzes the program instructions to obtain the standard predetermined time evaluation codes of the instructions. Parameters not included in the input program instructions are generated automatically. Synthetic images are generated using the input program instructions and the obtained parameters.10-01-2009
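One way to picture the instruction flow, with invented candidate sets, time codes, and defaults; none of these values reflect the patent's actual vocabularies.

    ACTORS   = {"worker", "visitor"}             # assumed candidate set
    ACTIONS  = {"walk": 1.2, "pick": 0.8}        # assumed standard time codes (s)
    DEFAULTS = {"speed": 1.0, "hand": "right"}   # assumed parameter defaults

    def build_instruction(actor, action, **given):
        if actor not in ACTORS or action not in ACTIONS:
            raise ValueError("not among the limited candidates")
        params = {**DEFAULTS, **given}  # auto-generate parameters not given
        return {"actor": actor, "action": action,
                "time_code": ACTIONS[action], **params}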
20130201194METHOD AND APPARATUS FOR PLAYING AN ANIMATION IN A MOBILE TERMINAL - A method and apparatus are provided for playing an animation in a mobile terminal. The method includes displaying content; determining an object of an animation from the content; determining whether an interaction event occurs while displaying the content; and playing an animation of the determined object, when the interaction event occurs.08-08-2013
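A compact sketch of that event path, with invented content and hit-testing details; the field names are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class Element:
        name: str
        bounds: tuple        # (x0, y0, x1, y1) on screen
        animated: bool = False

    def on_interaction(content, x, y):
        """Find the animatable object under the touch and play it."""
        for el in content:
            x0, y0, x1, y1 = el.bounds
            if x0 <= x <= x1 and y0 <= y <= y1 and el.animated:
                print(f"playing animation of {el.name}")  # stand-in for playback
                return el
        return None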
20130100141SYSTEM AND METHOD OF PRODUCING AN ANIMATED PERFORMANCE UTILIZING MULTIPLE CAMERAS - A real-time method for producing an animated performance is disclosed. The method involves receiving animation data used to animate a computer generated character. The animation data may comprise motion capture data, puppetry data, or a combination thereof. A computer generated animated character is rendered in real time as the animation data is received. A body movement of the computer generated character may be based on the motion capture data, while head and facial movements are based on the puppetry data. A first view of the computer generated animated character is created from a first reference point. A second view of the computer generated animated character is created from a second reference point that is distinct from the first reference point. One or more of the first and second views of the computer generated animated character are displayed in real time as the animation data is received.04-25-2013
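The split between the two data sources and the two viewpoints might look like the following; the Character API and the stream stand-ins are assumptions, not a real engine.

    from itertools import count

    class Character:
        def apply(self, body, face):
            self.pose = (body, face)         # body from mocap, face from puppetry
        def render_from(self, camera):
            return f"{camera}: {self.pose}"  # stand-in for an actual render

    def mocap():                             # stand-in motion capture stream
        return (f"body@{t}" for t in count())

    def puppetry():                          # stand-in puppetry stream
        return (f"face@{t}" for t in count())

    def render_frame(character, body_stream, face_stream, cam_a, cam_b):
        """Render two distinct views of the same frame in lockstep with
        the incoming animation data."""
        character.apply(next(body_stream), next(face_stream))
        return character.render_from(cam_a), character.render_from(cam_b)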
20120299933Collection Rearrangement Animation - Collection rearrangement animation techniques are described herein, which can be employed to represent changes made by a rearrangement in a manner that reduces or eliminates visual confusion. A collection of items arranged at initial positions can be displayed. Various interactions can initiate a rearrangement of the collection of items, such as sorting the items, adding or removing an item, or repositioning an item. An animation of the rearrangement is depicted that omits at least a portion of the spatial travel along pathways from the initial positions to destination positions in the rearranged collection. In one approach, items can be animated to disappear from the initial positions and reappear at destination positions. This can occur by applying visual transitions that are bound to the dimensional footprints of the items in the collection. Additionally or alternatively, intermediate and overlapping positions can be omitted by the animation.11-29-2012
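A minimal sketch of the disappear/reappear transition, assuming attribute-bearing items and a fade callback bound to each item's footprint; the fade could be any in-place transition such as a cross-fade or scale-in.

    def rearrange(items, new_order, fade):
        """fade(item, phase) runs a visual transition in place, so no item
        traverses the screen between its old and new position."""
        for item in items:
            fade(item, "out")                     # vanish at initial position
        items[:] = [items[i] for i in new_order]  # apply the rearrangement
        for slot, item in enumerate(items):
            item.slot = slot                      # destination, no travel path
            fade(item, "in")                      # reappear at destination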
20130207981APPARATUS AND METHODS FOR CURSOR ANIMATION - Methods and systems are provided for animating a cursor image. In an exemplary embodiment, image data for the cursor image maintained by a first memory is provided for display on a display device, and that image data is written to a second memory while being provided from the first memory for display. Prior to writing new image data for a portion of the cursor image to the first memory, the image data maintained by the second memory is provided for display on the display device and the new image data is written to the first memory while the image data maintained by the second memory is being provided to the display.08-15-2013
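A minimal sketch of this flip-before-write scheme, treating the two memories as byte buffers; the class and method names are illustrative.

    class CursorBuffers:
        def __init__(self, image):
            self.mem = [bytearray(image), bytearray(image)]
            self.displayed = 0                 # index currently scanned out

        def write_portion(self, offset, data):
            """Update part of the cursor image without ever displaying a
            half-written buffer."""
            other = 1 - self.displayed
            self.mem[other][:] = self.mem[self.displayed]  # copy while displaying
            self.displayed = other                         # flip display first...
            stale = 1 - self.displayed
            self.mem[stale][offset:offset + len(data)] = data  # ...then write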
20130207982Method and System for Rendering an Application View on a Portable Communication Device - Disclosed are a method and a system for rendering an application view on a portable communication device. The method includes a step 08-15-2013
