Patent application number | Description | Published |
20100188548 | Systems for Capturing Images Through A Display - Embodiments of the present invention are directed to a visual-collaborative system enabling geographically distributed groups to engage in face-to-face, interactive collaborative video conferences. In one aspect, a visual-collaborative system comprises a display screen, a camera system, and a projector. The display screen has a first surface and a second surface, and the camera system is positioned to capture images of objects through the display screen. The projector is positioned to project images onto a projection surface of the display screen, wherein the projected images can be observed by viewing the second surface. The system includes a first filter disposed between the camera and the first surface, where the first filter passes light received by the camera but substantially blocks the light that is produced by the projector. | 07-29-2010 |
20100277576 | Systems for Capturing Images Through a Display - The present invention describes a visual-collaborative system comprising: a display screen having a first surface and a second surface; a first projector positioned to project images onto a projection surface of the display screen, wherein the projected images can be observed by viewing the second surface; and a first camera system positioned to capture images of objects through the display screen, the first camera system including a first filter disposed between a first camera and the first surface, wherein the first filter passes the light received by the camera but substantially blocks the light produced by the first projector, wherein the first filter is a GMR (Guided Mode Resonance) filter. | 11-04-2010 |
20100309284 | Systems and methods for dynamically displaying participant activity during video conferencing - Various aspects of the present invention are directed to systems and methods for highlighting participant activities in video conferencing. In one aspect, a method of generating a dynamic visual representation of participants taking part in a video conference comprises rendering an audio-visual representation of the one or more participants at each site taking part in the video conference using a computing device. The method includes receiving a saliency signal using the computing device, the saliency signal identifying the degree of current and/or recent activity of the one or more participants at each site. Based on the saliency signal associated with each site, the method applies image processing to elicit visual popout of active participants associated with each site, while maintaining the fixed scale and borders of the visual representation of the one or more participants at each site. | 12-09-2010 |
20100332437 | System For Generating A Media Playlist - A system for generating a media playlist comprising a media management module operable to select a first media item from a plurality of media items stored in a media database for playback; and using raw user input data representing a measure of the popularity of the first media item, generate preference data representing a refined user preference for the first media item; wherein the preference data is used to determine a second media item from the plurality of media items for playback. | 12-30-2010 |
20100332567 | Media Playlist Generation - Systems and computer readable mediums storing computer executable programs for generating a media playlist are disclosed. A first media item is selected from a plurality of media items for playback. A first determination is made regarding the number of times each of the plurality of media items was accessed for playback following an access for playback of the first media item. Each of the plurality of media items is weighted such that the probability of stochastically selecting each of the plurality of media items as a second media item for playback following a playback of the first media item is based on the first determination. The second media item is stochastically selected from the weighted plurality of media items for playback following the first media item. | 12-30-2010 |
20110085028 | METHODS AND SYSTEMS FOR OBJECT SEGMENTATION IN DIGITAL IMAGES - Various embodiments of the present invention are directed to object segmentation of digital video streams and digital images. In one aspect, a method for segmenting an object in a digitally-encoded image includes designing a non-linear local function that generates function values associated with identifying the object of interest and a combination function that combines the function values, the object of interest encoded in digital image data. The method includes forming orthogonal projections of the digital image data based on the function values and the combination function. In addition, the orthogonal projections can be used to determine boundaries segmenting the object in the digital image data. The method also includes extracting an image of the object that lies within the boundaries. | 04-14-2011 |
20110087998 | Thumbnail Based Image Quality Inspection - An input image ( | 04-14-2011 |
20110096137 | Audiovisual Feedback To Users Of Video Conferencing Applications - The present invention provides a method of providing feedback to a participant in a video conference, comprising the steps of: establishing a video conferencing session between multiple participants, wherein each participant in the video conferencing session is associated with a video capture device and an audio capture device; and establishing presentation requirements for each participant, wherein the presentation requirements are associated with the video conferencing session and the video capture and audio capture devices associated with each participant, wherein responsive to a failure to meet the presentation requirements, feedback is sent to at least the local participant who has failed to meet the presentation requirements. | 04-28-2011 |
20110096140 | Analysis Of Video Composition Of Participants In A Video Conference - A method of determining whether a video frame meets the design composition requirements associated with a video conference, said method comprising steps performed by a processor of: providing design composition requirements for the video frame, wherein the design composition requirements are available at runtime; analyzing captured video content from a video conference, to determine whether a participant of interest is present in a video frame of the video content; and analyzing the video frame to determine if it meets the design composition requirements for the video conference. | 04-28-2011 |
20110234913 | CONTROLLING ARTIFACTS IN VIDEO DATA - Controlling artifacts in video data. Image data of collocated pixels of a plurality of frames of the video data is sampled ( | 09-29-2011 |
20120062449 | REDUCING VIDEO CROSS-TALK IN A VISUAL-COLLABORATIVE SYSTEM - A visual-collaborative system including a display screen configured to display images and a camera configured to capture images. The system also includes a video cross-talk reducer configured to estimate video cross-talk that is to be displayed on the display screen and captured by the camera, and to reduce the estimated video cross-talk in images captured by the camera. The estimation of the video cross-talk and reduction of the video cross-talk is signal based. | 03-15-2012 |
20120062690 | DETERMINING A SYNCHRONIZATION RELATIONSHIP - A synchronization relationship determiner comprising an input visual information signal receiver configured to receive an input visual information signal, and a capture signal receiver configured to receive a capture signal generated by a capture device. The synchronization relationship determiner is configured to determine a synchronization relationship between the input visual information signal and the capture signal. The synchronization relationship determination is signal based. | 03-15-2012 |
20120062799 | ESTIMATING VIDEO CROSS-TALK - A video cross-talk estimator comprising a visual input signal receiver configured to receive a visual input signal, a capture signal receiver configured to receive a capture signal, and a signal based video-cross talk determiner configured to estimate video cross-talk based on at least two frames of the visual input signal. The estimation of the video cross-talk is signal based. | 03-15-2012 |
20120098757 | SYSTEM AND METHOD UTILIZING BOUNDARY SENSORS FOR TOUCH DETECTION - Sensors are arranged continuously adjacent to one another along the boundary region of a touch surface. Furthermore, a touch associated with the interior area of the touch surface is detected via at least a plurality of sensors along the boundary region. | 04-26-2012 |
20120098806 | SYSTEM AND METHOD OF MODIFYING LIGHTING IN A DISPLAY SYSTEM - The present invention describes a display system. The display system includes a display, the display including a display screen capable of operating in a transparent mode; a lighting characteristic assessment component for determining the lighting characteristics of the content on the display screen and the lighting characteristics behind the display screen; and an adaptive lighting control component for controlling at least one lighting source and the lighting characteristics of the content on the display screen, wherein, based on a comparison of the lighting characteristics of the content on the display screen and the lighting characteristics behind the display screen, at least one of the lighting characteristics of the content on the display screen or of the at least one lighting source is modified. | 04-26-2012 |
20120162630 | POSITION ESTIMATION SYSTEM - A position estimation system comprising a plurality of ‘shaped for depth sensing’ lenses comprising a lens profile directly based on distance estimation propagation of errors; a plurality of light sensing devices associated with the plurality of ‘shaped for depth sensing’ lenses; and a position estimator for estimating a position of at least a first object with respect to a second object based on the plurality of ‘shaped for depth sensing’ lenses and the plurality of light sensing devices. | 06-28-2012 |
20120194693 | VIEW ANGLE FEEDBACK DEVICE AND METHOD - The present invention provides a portable device that includes at least one view angle sensor for collecting sensor information about the view angle of the portable device. It also includes a view angle determination component for determining (1) the view angle of the portable device engaged in a videoconference session and (2) whether the view angle is within a predefined range for the videoconference session. The portable device also includes a feedback component, wherein responsive to the determination that the view angle is out of range, providing user feedback. | 08-02-2012 |
20120224019 | SYSTEM AND METHOD FOR MODIFYING IMAGES - The present invention describes a method of modifying an image in a video that includes the step of capturing a visible light image of an area, where the visible light image is captured at a first frame rate. The method in addition includes the step of capturing a corresponding infrared light image of an area, the infrared light image being captured when the area is illuminated with an infrared light source, the infrared light image captured at substantially the same frame rate as the visible light image. Based on the infrared light image, at least a subset of the human perceptible characteristics of the captured visible light image are modified. | 09-06-2012 |
20120262537 | METHODS AND SYSTEMS FOR ESTABLISHING VIDEO CONFERENCES USING PORTABLE ELECTRONIC DEVICES - Methods and systems for using portable electronic devices in video conferences are disclosed. In one aspect, a method receives each remote participant's audio data stream and at least one video stream over a network and arranges the video streams in a data structure that describes the location of each video stream's associated viewing area within a virtual meeting space. The method blends audio streams into a combined audio of the remote participants. The method presents at least one viewing area on the portable device display to be viewed by the local participant, and changes the at least one viewing area to be presented on the portable device display based on cues provided by the operator. | 10-18-2012 |
20120274732 | SYSTEMS AND METHODS FOR REDUCING VIDEO CROSSTALK - Methods and systems that reduce video crosstalk in video streams sent between participants in a video conference are disclosed. In one aspect, a method for reducing video crosstalk in a video stream sent from a local site to a remote site includes projecting a video stream of the remote site onto a screen at the local site. Each image in the video stream is dimmed according to a dimming factor of a dimming sequence. Crosstalk images of the local site are captured through the screen. Each crosstalk image is a blending of the image of the local site captured through the screen with a dimmed image of the remote site projected onto the screen. Images of the local site with reduced crosstalk are computed based on the dimming sequence. A video stream composed of the images of the local site with reduced crosstalk is sent to the remote site. | 11-01-2012 |
20120274736 | METHODS AND SYSTEMS FOR COMMUNICATING FOCUS OF ATTENTION IN A VIDEO CONFERENCE - Methods and systems for communicating each participant's focus of attention in a video conference are described. In one aspect, a method for communicating where each participant's attention is focused in a video conference includes receiving each remote participant's video and audio streams and focus of attention data, based on the remote participant's head location. The at least one remote participant's video streams are presented in separate viewing areas of the local participant's display. The viewing areas presenting the remote participants are modified to indicate to the local participant each remote participant's focus of attention, based on the focus of attention data. | 11-01-2012 |
20120320144 | VIDEO MODIFICATION SYSTEM AND METHOD - A system for coordinating image characteristics in a plurality (n) of video streams, the system includes a human factor value determination component, a human factor value comparator component, and a human factor modification component. The human factor value determination component determines the value of at least one human perceptible factor for at least a subset of the plurality of video streams. The human factor value comparator component compares the value of the at least one human perceptible factor for each of the at least a subset of n video streams. The human factor modification component modifies the value of the human perceptible factor for the at least a subset of the n video streams to minimize the differences in the values of the human perceptible factors between the n independently captured video streams. | 12-20-2012 |
20130188094 | Combining multiple video streams - Methods, computer-readable media, and systems are provided for combining multiple video streams. One method for combining the multiple video streams includes extracting a sequence of media frames ( | 07-25-2013 |
20130286237 | SPATIALLY MODULATED IMAGE INFORMATION RECONSTRUCTION - A system and method include a color filter array configured to spatially modulate captured image information and a processor configured to reconstruct the image information. | 10-31-2013 |
20130286245 | SYSTEM AND METHOD FOR MINIMIZING FLICKER - A method of reducing flicker in video is described. The method includes the steps of: determining an initial target frame color channel statistic value (R | 10-31-2013 |
20140192170 | Model-Based Stereoscopic and Multiview Cross-Talk Reduction - A method for reducing cross-talk in a 3D display is disclosed. The cross-talk in the 3D display is characterized with a plurality of test signals to generate a forward transformation model. Input image signals are applied to the forward transformation model to generate modeled signals. The modeled signals are applied to a visual model to generate a visual measure. The input signals are modified based on the visual measure. | 07-10-2014 |
20140267835 | REDUCING CROSSTALK - A method for reducing video crosstalk in a display-camera system includes capturing a first image of a local site while projecting an image of a remote site with a first intensity gain; capturing a second image of the local site while projecting the image with a second gain that is different from the first gain; capturing a first mixed image of the local site that includes the first image combined with the projected image having first gain and a second mixed image of the local site that includes the second image combined with the projected image having second gain; performing crosstalk reduction on the mixed images to create a reconstructed image of the local site, wherein performing crosstalk reduction of the mixed images includes determining whether a pixel value variation between the mixed images is affected by motion in the first and the second image of the local site. | 09-18-2014 |
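The stochastic playlist generation described in 20100332567 weights each candidate item by how often it historically followed the current item, then samples the next item from that weighted distribution. A minimal sketch of that idea (function names, the `history` representation, and the additive smoothing are illustrative assumptions, not details from the application):

```python
import random
from collections import Counter

def next_track(history, current, catalog, smoothing=1.0):
    """Stochastically pick the next item, weighted by how often each
    candidate followed `current` in past playback (transition counts)."""
    # Count how many times each item was played immediately after `current`.
    follows = Counter(
        nxt for prev, nxt in zip(history, history[1:]) if prev == current
    )
    candidates = [item for item in catalog if item != current]
    # Additive smoothing keeps never-observed successors selectable.
    weights = [follows[item] + smoothing for item in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

history = ["a", "b", "a", "c", "a", "b"]
track = next_track(history, "a", ["a", "b", "c", "d"])
```

Here "b" has followed "a" twice and "c" once, so they receive proportionally higher selection probability than "d", which has never followed "a" but remains selectable through the smoothing term.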
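The two-gain capture scheme in 20140267835 admits a simple linear reconstruction: if each mixed capture is the local image plus the projected remote image scaled by a known gain, two captures at different gains let the two terms be separated per pixel. The sketch below shows that algebra under the assumption of a static scene between captures; the names and the motion-fallback heuristic are illustrative, not taken from the application:

```python
import numpy as np

def reduce_crosstalk(m1, m2, g1, g2, motion_thresh=0.05):
    """Reconstruct the local-site image from two mixed captures,
    m1 = local + g1 * projected and m2 = local + g2 * projected,
    taken at two different projector intensity gains g1 != g2."""
    # Solve the 2x2 linear system per pixel.
    projected = (m1 - m2) / (g1 - g2)
    local = m1 - g1 * projected
    # A strongly negative inferred projected term means the pixel variation
    # between captures is not explained by the gain change (likely motion):
    # fall back to the raw capture there.
    motion = projected < -motion_thresh
    local = np.where(motion, m1, local)
    return np.clip(local, 0.0, 1.0)

local_img = np.array([[0.2, 0.5]])
proj_img = np.array([[0.8, 0.1]])
m1 = local_img + 1.0 * proj_img  # capture at gain 1.0
m2 = local_img + 0.5 * proj_img  # capture at gain 0.5
recovered = reduce_crosstalk(m1, m2, 1.0, 0.5)
```

The same separation underlies the dimming-sequence formulation of 20120274732, where the gains come from a known per-frame dimming factor rather than two deliberate capture settings.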