| Patent application number | Description | Published |
| --- | --- | --- |
20120293606 | TECHNIQUES AND SYSTEM FOR AUTOMATIC VIDEO CONFERENCE CAMERA FEED SELECTION BASED ON ROOM EVENTS - Techniques for automatically selecting a video camera feed based on room events in a video teleconference are described. An embodiment may receive video information from multiple cameras in a conference room. An event of interest may be detected from the video information. Events of interest may be detected, for example, by detecting faces, detecting an eye gaze or head direction, and detecting motion. When an event of interest is detected, the video camera having the optimal view of the event may be selected, and the feed from the selected video camera may be transmitted to remote participants. Other embodiments are described and claimed. | 11-22-2012 |
20120306992 | TECHNIQUES TO PROVIDE FIXED VIDEO CONFERENCE FEEDS OF REMOTE ATTENDEES WITH ATTENDEE INFORMATION - Techniques are described to provide a fixed video feed display from a remote participant to a conference room, where the display further includes remote participant information. In one embodiment, for example, a method may include receiving a connection from a remote participant, retrieving metadata related to the remote participant, and displaying a video feed from the remote participant along with the metadata in a dedicated position in a conference room. The metadata may provide information about the remote participant, as well as points of interest that may aid in conversation with the remote participant. The remote feed remains in the dedicated position throughout the conference, creating the effect of the remote participant being in the room. Other embodiments are described and claimed. | 12-06-2012 |
20120314015 | TECHNIQUES FOR MULTIPLE VIDEO SOURCE STITCHING IN A CONFERENCE ROOM - Techniques to stitch together multiple video streams are described. In an embodiment, a technique may include receiving a plurality of video streams from a plurality of video sources in a room. The video streams may be analyzed for feature points, such as furniture, light fixtures, window frames and so forth. The video streams may be processed to make the video qualities of the video streams, such as scale, color, brightness and so forth, more consistent with each other. Using the feature points, the processed video streams may be stitched together to generate a unified stream. The unified stream may be output to a display in the room and/or to remote viewers. Other embodiments are described and claimed. | 12-13-2012 |
20120327179 | AUTOMATIC VIDEO FRAMING - A dynamically adjustable framed view of occupants in a room is captured through an automatic framing system. The system employs a camera system, including a pan/tilt/zoom (PTZ) camera and one or more depth cameras, to automatically locate occupants in a room and adjust the PTZ camera's pan, tilt, and zoom settings to focus in on the occupants and center them in the main video frame. The depth cameras may distinguish between occupants and inanimate objects and adaptively determine the location of the occupants in the room. The PTZ camera may be calibrated with the depth cameras in order to use the location information determined by the depth cameras to automatically center the occupants in the main video frame for a framed view. Additionally, the system may track position changes in the room and may dynamically adjust and update the framed view when changes occur. | 12-27-2012 |
20140118403 | AUTO-ADJUSTING CONTENT SIZE RENDERED ON A DISPLAY - Various technologies described herein pertain to managing visual content rendering on a display. Audience presence and position information, which specifies respective distances from the display of a set of audience members detected within proximity of the display, can be obtained. Further, a threshold distance from the display can be determined as a function of the respective distances from the display of the set of audience members detected within proximity of the display. Moreover, responsive to the threshold distance from the display, a size of the visual content rendered on the display can be controlled. | 05-01-2014 |
20140168352 | VIDEO AND AUDIO TAGGING FOR ACTIVE SPEAKER DETECTION - A videoconferencing system is described that is configured to select an active speaker while avoiding erroneously selecting a microphone or camera that is picking up audio or video from a connected remote signal. A determination is made whether an audio signal is above a threshold level. If so, then a determination is made as to whether a tag is present in that audio signal. If so, that signal is ignored. If not, a camera is directed toward the sound source identified by the audio signal. A determination is made whether a tag is present in the video signal from that camera. If so, the camera is redirected. If not, local tag(s) are inserted into the audio signal and/or the video signal. The tagged signal(s) are transmitted. Thus, the system will ignore sound or video that has an embedded tag from another videoconferencing system. | 06-19-2014 |
20150057999 | Preserving Privacy of a Conversation from Surrounding Environment - Various embodiments provide an ability to analyze an audio input signal and generate a counter audio signal based, at least in part, on the audio input signal. In some cases, combining the audio input signal with the counter audio signal renders the audio input signal incoherent and/or unintelligible to accidental listeners and/or listeners toward whom the audio input signal is not directed. Alternately or additionally, the counter signal can mask the audio input signal to the accidental listeners. | 02-26-2015 |
20150067536 | Gesture-based Content Sharing Between Devices - Various embodiments provide an ability to join a virtual conference session using a single input-gesture and/or action. Upon joining the virtual conference, some embodiments enable a computing device to share content within the virtual conference session responsive to receiving a single input-gesture and/or action. Alternately or additionally, the computing device can acquire content being shared within the virtual conference session responsive to receiving a single input-gesture and/or action. In some cases, content can be exchanged between multiple computing devices connected to the virtual conference session. | 03-05-2015 |
20150067552 | Manipulation of Content on a Surface - Various embodiments enable expeditious manipulation of content on a surface so as to make the content quickly visually available to one or more attendees or participants. In at least some embodiments, content can be automatically manipulated to automatically present the content in a surface location that provides an unobstructed view of the content. Alternately or additionally, content can be manually selected to become “floating” in a manner which moves the content to a surface location that provides an unobstructed view of the content. | 03-05-2015 |
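The automatic framing described in application 20120327179 centers occupants located by depth cameras and derives pan/tilt/zoom settings for the calibrated PTZ camera. A minimal sketch of that centering step, assuming occupant positions arrive as (x, y) angles in degrees within the PTZ camera's frame and using an illustrative zoom formula (the `fov_deg` and `margin` parameters are assumptions, not the patent's method):

```python
def frame_occupants(positions, fov_deg=60.0, margin=1.2):
    """Return (pan, tilt, zoom) that centers the occupants with a margin.

    positions: list of (x, y) angular offsets in degrees, one per occupant,
    as reported by the depth cameras after calibration to the PTZ frame.
    """
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    pan = (min(xs) + max(xs)) / 2.0    # aim at the horizontal center
    tilt = (min(ys) + max(ys)) / 2.0   # aim at the vertical center
    # Angular extent of the group; avoid division by zero for one occupant.
    spread = max(max(xs) - min(xs), max(ys) - min(ys), 1e-6)
    zoom = fov_deg / (spread * margin)  # tighter zoom for compact groups
    return pan, tilt, zoom
```

Re-running this whenever the depth cameras report a position change would give the dynamic re-framing the abstract describes.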
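Application 20140118403 derives a threshold distance from the detected audience distances and sizes content from it. The abstract does not name the function used; the sketch below assumes the farthest viewer sets the threshold and that content scales linearly against a hypothetical `reference_m` reading distance:

```python
def threshold_distance(distances_m):
    """Assumed threshold function: the farthest detected audience member."""
    return max(distances_m)

def content_scale(distances_m, reference_m=2.0):
    """Scale factor for rendered content so the farthest viewer can read it.

    distances_m: distances (meters) of audience members from the display.
    Returns 1.0 (no scaling) when nobody is detected or everyone is close.
    """
    if not distances_m:
        return 1.0
    return max(1.0, threshold_distance(distances_m) / reference_m)
```

Swapping `max` for a percentile or mean would be an equally valid reading of "a function of the respective distances."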
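The decision flow in application 20140168352 (threshold check, audio tag check, camera pointing, video tag check, local tagging) can be sketched as a small function. All names here (`AUDIO_THRESHOLD`, `has_tag`, the dict-based signals) are illustrative assumptions rather than the patent's implementation:

```python
AUDIO_THRESHOLD = 0.2  # assumed normalized loudness threshold

def has_tag(signal):
    """True if an embedded videoconferencing tag is present in the signal."""
    return signal.get("tag") is not None

def select_active_speaker(audio_signal, get_camera_video, insert_local_tags):
    # Ignore audio below the threshold level.
    if audio_signal["level"] < AUDIO_THRESHOLD:
        return None
    # A remote tag means the audio is loudspeaker playback: ignore it.
    if has_tag(audio_signal):
        return None
    # Direct a camera toward the identified sound source.
    video_signal = get_camera_video(audio_signal["direction"])
    # A remote tag in the video means the camera sees a display: redirect.
    if has_tag(video_signal):
        return None
    # Genuine local speaker: insert local tags and hand back for transmit.
    return insert_local_tags(audio_signal, video_signal)
```

A receiving system running the same logic would in turn ignore these tagged signals, which is what prevents the feedback selection the abstract warns about.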
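One plausible reading of the counter audio signal in application 20150057999 is a phase-inverted copy of the input, which cancels the original when the two are combined; the patent abstract leaves the generation method open, so this is only a sketch of that one interpretation:

```python
def counter_signal(samples):
    """Phase-inverted counterpart of the input samples.

    Summed with the original, the result is (ideally) silence at the
    accidental listener's position, masking the conversation.
    """
    return [-s for s in samples]
```

Real acoustic masking would also have to account for propagation delay and room response; none of that is modeled here.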