Patent application number | Description | Published |
20090094572 | Artifact sharing from a development environment - An identification of a multi-component development artifact to be shared is obtained in a development environment. A remote receiver with whom to share components of the artifact is designated. Components of the artifact are shared with the remote receiver by automatically locating the components and sending the located components in a package with associated type descriptions. After the package is received, a check for conflicts is made, and acceptable components are merged into the local development environment. | 04-09-2009 |
20110267419 | ACCELERATED INSTANT REPLAY FOR CO-PRESENT AND DISTRIBUTED MEETINGS - Techniques for recording and replay of a live conference while still attending the live conference are described. A conferencing system includes a user interface generator, a live conference processing module, and a replay processing module. The user interface generator is configured to generate a user interface that includes a replay control panel and one or more output panels. The live conference processing module is configured to extract information included in received conferencing data that is associated with one or more conferencing modalities, and to display the information in the one or more output panels in a live manner (e.g., as a live conference). The replay processing module is configured to enable information associated with the one or more conferencing modalities corresponding to a time of the conference session prior to live to be presented at a desired rate, possibly different from the real-time rate, if a replay mode is selected in the replay control panel. | 11-03-2011 |
20140325393 | ACCELERATED INSTANT REPLAY FOR CO-PRESENT AND DISTRIBUTED MEETINGS - Techniques for recording and replay of a live conference while still attending the live conference are described. A conferencing system includes a user interface generator, a live conference processing module, and a replay processing module. The user interface generator is configured to generate a user interface that includes a replay control panel and one or more output panels. The live conference processing module is configured to extract information included in received conferencing data that is associated with one or more conferencing modalities, and to display the information in the one or more output panels in a live manner (e.g., as a live conference). The replay processing module is configured to enable information associated with the one or more conferencing modalities corresponding to a time of the conference session prior to live to be presented at a desired rate, possibly different from the real-time rate. | 10-30-2014 |
20140351871 | LIVE MEDIA PROCESSING AND STREAMING SERVICE - A live media processing and streaming service provides a content provider with media processing and distribution capabilities for live events. The service provides capabilities for capturing a live event, configuring programs from the live event, formatting the programs into a mezzanine format suitable for streaming, storage of the presentation manifest and fragments corresponding to a program into a cloud storage, and distribution of the presentation manifest and fragments to media consumers in real time. | 11-27-2014 |
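The accelerated-replay abstracts above (20110267419 and 20140325393) describe presenting conference information from a point prior to live at a rate different from real time, so a participant can catch up while the live session continues. A minimal sketch of that time-compression idea, assuming a buffer of timestamped events (the `Event` type, function name, and rate model are illustrative inventions, not taken from the filings):

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float      # capture time, in seconds from session start
    payload: str  # e.g. a caption, slide change, or audio-chunk id

def accelerated_replay(buffer, start_t, live_t, rate):
    """Map buffered events in [start_t, live_t] onto a compressed
    presentation timeline: an event captured at t is presented at
    start_t + (t - start_t) / rate, so the replay catches up to the
    live point after (live_t - start_t) / rate seconds."""
    schedule = []
    for ev in buffer:
        if ev.t < start_t or ev.t > live_t:
            continue  # outside the missed span; skip
        schedule.append((start_t + (ev.t - start_t) / rate, ev))
    return schedule
```

With `rate` = 2, six seconds of missed conference compress into three seconds of replay, after which the viewer rejoins the live stream at the real-time rate.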
Patent application number | Description | Published |
20080209327 | Persistent spatial collaboration - Persistent, spatial collaboration on the web supports a free-form, user-intuitive approach to a variety of projects and activities. Users can place differing object types at any time, anywhere on a web page, and/or the system can automatically, and with no user effort, affect object placement based on one or more metadata characteristics. A user can, in real time, see changes made by another user to a web page and, if desired, react accordingly, enabling true collaboration even if the various users are at remote locations. The flexibility of the methodology and system provides a platform for users to engage in projects and activities in a manner and environment suited to the users' mindsets, creativity, and natural proclivities. | 08-28-2008 |
20080244418 | DISTRIBUTED MULTI-PARTY SOFTWARE CONSTRUCTION FOR A COLLABORATIVE WORK ENVIRONMENT - The disclosed architecture extends the traditional integrated design environment (IDE) designed for solo development work with features and capabilities that support collaborative distributed work (e.g., distributed pair programming). The architecture provides integrated communication channels that enable the participants to engage in collaborative work. The graphical user interface capabilities are also extended with distributed functionality specific to multi-party (e.g., pair) programming, including, but not limited to, manual and/or automatic role control and turn-taking, multiple cursors (destructive and non-destructive), remote highlighting, a decaying edit trail, easy access to the history of edits, a language-independent event model, and view convergence and divergence. The system uses the collaboration and communication patterns and information to identify problems, extract metrics, make recommendations, etc. | 10-02-2008 |
20090172779 | MANAGEMENT OF SPLIT AUDIO/VIDEO STREAMS - Described herein is a method that includes receiving multiple requests for access to an exposed media object, wherein the exposed media object represents a live media stream that is being generated by a media source. The method also includes receiving data associated with each entity that provided a request, and determining, for each entity, whether the entities that provided the request are authorized to access the media stream based at least in part upon the received data and splitting the media stream into multiple media streams, wherein a number of media streams corresponds to a number of authorized entities. The method also includes automatically applying at least one policy to at least one of the split media streams based at least in part upon the received data. | 07-02-2009 |
20100085416 | Multi-Device Capture and Spatial Browsing of Conferences - Multi-device capture and spatial browsing of conferences is described. In one implementation, a system detects cameras and microphones, such as the webcams on participants' notebook computers, in a conference room, group meeting, or table game, and enlists an ad-hoc array of available devices to capture each participant and the spatial relationships between participants. A video stream composited from the array is browsable by a user to navigate a 3-dimensional representation of the meeting. Each participant may be represented by a video pane, a foreground object, or a 3-D geometric model of the participant's face or body displayed in spatial relation to the other participants in a 3-dimensional arrangement analogous to the spatial arrangement of the meeting. The system may automatically re-orient the 3-dimensional representation as needed to best show the currently interesting event such as current speaker or may extend navigation controls to a user for manually viewing selected participants or nuanced interactions between participants. | 04-08-2010 |
20100198579 | UNIVERSAL TRANSLATOR - The claimed subject matter provides a system and/or a method that facilitates communication within a telepresence session. A telepresence session can be initiated within a communication framework that includes two or more virtually represented users that communicate therein. The telepresence session can include at least one virtually represented user that communicates in a first language, wherein the communication is at least one of a portion of audio, a portion of video, a portion of graphics, a gesture, or a portion of text. An interpreter component can evaluate the communication to translate an identified first language into a second language within the telepresence session, and the translation is automatically provided to at least one virtually represented user within the telepresence session. | 08-05-2010 |
20100302462 | VIRTUAL MEDIA INPUT DEVICE - A virtual media device is described for processing one or more input signals from one or more physical media input devices, to thereby generate an output signal for use by a consuming application module. The consuming application module interacts with the virtual media device as if it were a physical media input device. The virtual media device thereby frees the application module and its user from the burden of having to take specific account of the physical media input devices that are connected to a computing environment. The virtual media device can be coupled to one or more microphone devices, one or more video input devices, or a combination of audio and video input devices, etc. The virtual media device can apply any number of processing modules to generate the output signal, each performing a different respective operation. | 12-02-2010 |
20100318399 | Adaptive Meeting Management - A template and/or knowledge associated with a synchronous meeting are obtained by a computing device. The computing device then adaptively manages the synchronous meeting based at least in part on the template and/or knowledge. | 12-16-2010 |
20110295392 | DETECTING REACTIONS AND PROVIDING FEEDBACK TO AN INTERACTION - Reaction information of participants to an interaction may be sensed and analyzed to determine one or more reactions or dispositions of the participants. Feedback may be provided based on the determined reactions. The participants may be given an opportunity to opt in to having their reaction information collected, and may be provided complete control over how their reaction information is shared or used. | 12-01-2011 |
20120281059 | Immersive Remote Conferencing - The subject disclosure is directed towards an immersive conference, in which participants in separate locations are brought together into a common virtual environment (scene), such that they appear to each other to be in a common space, with geometry, appearance, and real-time natural interaction (e.g., gestures) preserved. In one aspect, depth data and video data are processed to place remote participants in the common scene from the first person point of view of a local participant. Sound data may be spatially controlled, and parallax computed to provide a realistic experience. The scene may be augmented with various data, videos and other effects/animations. | 11-08-2012 |
20140009562 | MULTI-DEVICE CAPTURE AND SPATIAL BROWSING OF CONFERENCES - Multi-device capture and spatial browsing of conferences is described. In one implementation, a system detects cameras and microphones, such as the webcams on participants' notebook computers, in a conference room, group meeting, or table game, and enlists an ad-hoc array of available devices to capture each participant and the spatial relationships between participants. A video stream composited from the array is browsable by a user to navigate a 3-dimensional representation of the meeting. Each participant may be represented by a video pane, a foreground object, or a 3-D geometric model of the participant's face or body displayed in spatial relation to the other participants in a 3-dimensional arrangement analogous to the spatial arrangement of the meeting. The system may automatically re-orient the 3-dimensional representation as needed to best show a currently interesting event. | 01-09-2014 |
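Several entries in the table above share a per-participant stream-handling pattern; the split audio/video abstract (20090172779), for example, authorizes each requesting entity against received data, splits the live stream into one stream per authorized entity, and applies a policy to each split. A rough sketch of that control flow, where the `authorize` callback, the role-to-policy map, and all names are hypothetical stand-ins, not the patent's actual interfaces:

```python
def split_stream(stream_id, requests, authorize, policies):
    """For each requesting entity, check authorization using the data
    it supplied; authorized entities each get their own split of the
    source stream, with a policy chosen from their request data."""
    splits = {}
    for entity, data in requests.items():
        if not authorize(entity, data):
            continue  # unauthorized entities get no split
        policy = policies.get(data.get("role"), "default")
        splits[entity] = {"source": stream_id, "policy": policy}
    return splits
```

The number of resulting splits equals the number of authorized entities, matching the abstract's one-stream-per-authorized-entity framing.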
Patent application number | Description | Published |
20100228825 | SMART MEETING ROOM - The claimed subject matter provides a system and/or a method that facilitates enhancing the employment of a telepresence session. An automatic telepresence engine can evaluate data associated with at least one of an attendee, a schedule for an attendee, or a portion of an electronic communication for an attendee. The automatic telepresence engine can identify at least one of the following for a telepresence session based upon the evaluated data: a participant to include in the telepresence session, a portion of data related to a presentation within the telepresence session, a portion of data related to a meeting topic within the telepresence session, or a device utilized by an attendee to communicate within the telepresence session. The automatic telepresence engine can initiate the telepresence session within a communication framework that includes two or more virtually represented users that communicate therein. | 09-09-2010 |
20100245536 | AMBULATORY PRESENCE FEATURES - The claimed subject matter provides a system and/or a method that facilitates managing one or more devices utilized for communicating data within a telepresence session. A telepresence session can be initiated within a communication framework that includes two or more virtually represented users that communicate therein. A device can be utilized by at least one virtually represented user that enables communication within the telepresence session, wherein the device includes at least one of an input to transmit a portion of a communication to the telepresence session or an output to receive a portion of a communication from the telepresence session. A detection component can adjust at least one of the input or the output related to the device based upon the identification of a cue, wherein the cue is at least one of a detected movement, a detected event, or an ambient variation. | 09-30-2010 |
20100306647 | FORCE-FEEDBACK WITHIN TELEPRESENCE - The claimed subject matter provides a system and/or a method that facilitates replicating a telepresence session with a real world physical meeting. A telepresence session can be initiated within a communication framework that includes two or more virtually represented users that communicate therein. A trigger component can monitor the telepresence session in real time to identify a participant interaction with an object, wherein the object is at least one of a real world physical object or a virtually represented object within the telepresence session. A feedback component can implement a force feedback to at least one participant within the telepresence session based upon the identified participant interaction with the object, wherein the force feedback is employed via a device associated with at least one participant. | 12-02-2010 |
20100306670 | GESTURE-BASED DOCUMENT SHARING MANIPULATION - The claimed subject matter provides a system and/or a method that facilitates interacting with data associated with a telepresence session. A telepresence session can be initiated within a communication framework that includes two or more virtually represented users that communicate therein. A portion of data can be virtually represented within the telepresence session, with which at least one virtually represented user interacts. A detect component can monitor motions related to at least one virtually represented user to identify a gesture, wherein the gesture involves a virtual interaction with the portion of data within the telepresence session. An interaction component can implement a manipulation to the portion of data virtually represented within the telepresence session based upon the identified gesture. | 12-02-2010 |
20110096135 | AUTOMATIC LABELING OF A VIDEO SESSION - Described is labeling a video session with metadata representing a recognized person or object, such as to identify a person corresponding to a recognized face when that face is being shown during the video session. The identification may be made by overlaying text on the video session, e.g., the person's name and/or other related information. Facial recognition and/or other (e.g., voice) recognition may be used to identify a person. The facial recognition process may be made more efficient by using known narrowing information, such as calendar information that indicates who the invitees are to a meeting that is being shown in the video session. | 04-28-2011 |
20110170739 | Automated Acquisition of Facial Images - Described is a technology by which medical patient facial images are acquired and maintained for associating with a patient's records and/or other items. A video camera may provide video frames, such as captured when a patient is being admitted to a hospital. Face detection may be employed to clip the facial part from the frame. Multiple images of a patient's face may be displayed on a user interface to allow selection of a representative image. Also described is obtaining the patient images by processing electronic documents (e.g., patient records) to look for a face pictured therein. | 07-14-2011 |
20120306995 | Ambulatory Presence Features - A system facilitates managing one or more devices utilized for communicating data within a telepresence session. A telepresence session can be initiated within a communication framework that includes a first user and one or more second users. In response to determining a temporary absence of the first user from the telepresence session, a recordation of the telepresence session is initialized to enable a playback of a portion or a summary of the telepresence session that the first user has missed. | 12-06-2012 |
20140173594 | Scalable Services Deployment - Embodiments provide an abstraction on top of virtual machine allocation APIs to expose scalable services. The services are higher level components that expose a particular set of functionalities. A deployment manager handles matching and managing virtual machine allocations in order to meet the customer demands for the managed services. A deployment service exposes a “service” as a unit of resource allocation in a distributed computing environment or cloud computing service. Client components interact with the deployment service to request new service instances to meet customer demand. | 06-19-2014 |
20150124046 | Ambulatory Presence Features - A system facilitates managing one or more devices utilized for communicating data within a telepresence session. A telepresence session can be initiated within a communication framework that includes a first user and one or more second users. In response to determining a temporary absence of the first user from the telepresence session, a recordation of the telepresence session is initialized to enable a playback of a portion or a summary of the telepresence session that the first user has missed. | 05-07-2015 |
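The ambulatory presence entries above (20120306995 and 20150124046) both hinge on the same bookkeeping: detect a first user's temporary absence, record from that point, and on return offer playback of the missed span. A toy model of that state machine in Python, with all class and method names invented for illustration rather than drawn from the filings:

```python
class TelepresenceSession:
    """Tracks per-user absence so a missed span can be replayed."""

    def __init__(self):
        self.clock = 0
        self.absent_since = {}  # user -> session time the absence began
        self.missed = {}        # user -> (start, end) span to replay

    def tick(self, dt=1):
        # advance session time; in a real system this would be wall clock
        self.clock += dt

    def user_left(self, user):
        # absence detected: mark where recording of the missed span begins
        self.absent_since[user] = self.clock

    def user_returned(self, user):
        # close out the missed span so playback (or a summary) can start
        start = self.absent_since.pop(user, None)
        if start is not None:
            self.missed[user] = (start, self.clock)
        return self.missed.get(user)
```

The returned span is what a playback component would feed to a recording of the session, either verbatim or as the summary the abstracts mention.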