Patent application number | Description | Published |
--- | --- | --- |
20110228112 | USING ACCELEROMETER INFORMATION FOR DETERMINING ORIENTATION OF PICTURES AND VIDEO IMAGES - A computing device, such as a mobile device, can capture pictures or video images using a digital camera and obtain associated orientation information using an accelerometer. The orientation information can be used to adjust one or more of the captured pictures or video images to compensate for rotation in one or more planes of rotation. The orientation information can be saved along with the captured pictures or video images. The orientation information can also be transmitted or streamed along with the captured pictures or video images. Image matching operations can be performed using pictures or video images that have been adjusted using orientation information. | 09-22-2011 |
20120078910 | Using an ID Domain to Improve Searching - Methods which use an ID domain to improve searching are described. An embodiment describes an index phase in which an image of a document is converted into the ID domain. This is achieved by dividing the text in the image into elements and mapping each element to an identifier. Similar elements are mapped to the same identifier. Each element in the text is then replaced by the appropriate identifier to create a version of the document in the ID domain. This version may be indexed and searched. Another embodiment describes a query phase in which a query is converted into the ID domain and then used to search an index of identifiers which has been created from collections of documents which have been converted into the ID domain. The conversion of the query may use mappings which were created during the index phase or alternatively may use pre-existing mappings. | 03-29-2012 |
20120288186 | SYNTHESIZING TRAINING SAMPLES FOR OBJECT RECOGNITION - An enhanced training sample set containing new synthesized training images that are artificially generated from an original training sample set is provided to satisfactorily increase the accuracy of an object recognition system. The original sample set is artificially augmented by introducing one or more variations to the original images with little to no human input. There are a large number of possible variations that can be introduced to the original images, such as varying the image's position, orientation, and/or appearance and varying an object's context, scale, and/or rotation. Because there are computational constraints on the number of training samples that can be processed by object recognition systems, one or more variations that will lead to a satisfactory increase in the accuracy of the object recognition performance are identified and introduced to the original images. | 11-15-2012 |
20120327172 | MODIFYING VIDEO REGIONS USING MOBILE DEVICE INPUT - Apparatus and methods are disclosed for modifying video based on user input and/or face detection data received with a mobile device to generate foreground regions (e.g., to separate a user image from background in the video). According to one disclosed embodiment, a method comprises receiving user input and/or face regions generated with a mobile device, and producing an initial representation for segmenting input video into a plurality of portions based on the user input, where the initial representation includes probabilities for one or more regions of the input video being designated as foreground regions or background regions. Based on the initial representation, the input video is segmented by designating one or more of the regions of the input video as foreground regions or background regions. | 12-27-2012 |
20130015946 | USING FACIAL DATA FOR DEVICE AUTHENTICATION OR SUBJECT IDENTIFICATION (Inventors: James Kai Yu Lau, Bellevue, WA, US; Ayman Kaheel, Bellevue, WA, US; Motaz El-Saban, Cairo, EG; Mohamed Shawky, Cairo, EG; Monica Gonzalez, Seattle, WA, US; Ahmed El Baz, Bellevue, WA, US; Tamer Deif, Cairo, EG; Alaa Abdel-Hakim Aly, Assiut, EG) - Exemplary methods, apparatus, and systems are disclosed for authenticating a user to a computing device. In one exemplary embodiment, an indication of a request by a user to unlock a mobile device in a locked state is received. One or more images of the face of the user are captured. Facial components of the user are extracted from the one or more captured images. A determination is made as to whether the user is an authorized user or a non-authorized user based at least in part on a comparison of the facial components of the user extracted from the one or more captured images to facial components of the authorized user from one or more authentication images of the authorized user stored on the mobile device. If the user is determined to be the authorized user, the mobile device is unlocked; otherwise, the mobile device is maintained in its locked state. | 01-17-2013 |
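The orientation handling described in application 20110228112 above can be sketched minimally: the accelerometer's gravity vector gives the device roll, which is snapped to the nearest quarter turn to correct a captured picture or video frame. The axis convention (gravity along +y when the device is upright) and the function names are illustrative assumptions, not details taken from the filing.

```python
import math

def roll_degrees(ax: float, ay: float) -> float:
    """Roll angle of the device, in degrees, from accelerometer x/y readings.

    Assumes gravity reads as +y when the device is held upright (an
    illustrative convention; real devices vary)."""
    return math.degrees(math.atan2(ax, ay))

def nearest_quarter_turn(ax: float, ay: float) -> int:
    """Snap the measured roll to the nearest 90-degree rotation (0/90/180/270),
    i.e., the amount by which a captured image would be rotated to compensate."""
    return round(roll_degrees(ax, ay) / 90.0) % 4 * 90
```

A captured picture would then be rotated by `nearest_quarter_turn(ax, ay)` degrees, and that value could also be saved or streamed alongside the media as the abstract describes.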
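The index phase of application 20120078910 above can be illustrated with a toy sketch: text elements are mapped to identifiers so that similar elements share one, the document is rewritten in the ID domain, and a later query reuses the same mappings. Here "similarity" is stood in for by case-insensitive text equality, since the real system groups visually similar elements from a document image; all names below are hypothetical.

```python
def build_id_mapping(elements):
    """Index phase: assign an identifier to each distinct element, with
    similar elements (here: case-insensitive matches) sharing one identifier.
    Returns the mapping and the document rewritten as a list of identifiers."""
    mapping = {}
    ids = []
    for el in elements:
        key = el.lower()  # stand-in for a visual-similarity clustering step
        if key not in mapping:
            mapping[key] = len(mapping)
        ids.append(mapping[key])
    return mapping, ids

def query_to_ids(query_elements, mapping):
    """Query phase: convert a query into the ID domain using the mappings
    created during indexing (unseen elements map to None)."""
    return [mapping.get(el.lower()) for el in query_elements]
```

Searching then reduces to matching identifier sequences against the identifier index, rather than matching raw text or image elements.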
Patent application number | Description | Published |
--- | --- | --- |
20100214419 | Video Sharing - Video sharing is described. In an embodiment, mobile video capture devices such as mobile telephones capture video streams of the same event. A video sharing system obtains contextual information about the video streams and uses it to form a video output from the streams, which may be shared by other entities. For example, the formed video provides an enhanced viewing experience as compared with an individual one of the input video streams. In embodiments the contextual information may be obtained from content analysis of the video streams, from stored context information and from control information such as device characteristics. In some embodiments the video streams of a live event are received and the output video is formed in real time. In examples, feedback is provided to video capture devices to suggest adjustments to the zoom, viewing position or other characteristics, or such adjustments are made automatically. | 08-26-2010 |
20100296571 | Composite Video Generation - Composite video generation is described. In an embodiment, mobile video capture devices, such as mobile telephones, capture video streams of a common event. A network node receives the video streams and time-synchronizes them. Frames from each of the video streams are then stitched together to form a composite frame, and these are added to a composite video sequence. In embodiments, the composite video sequence is encoded and streamed to a user terminal over a communications network. In embodiments, the common event is a live event and the composite video sequence is generated in real-time. In some embodiments, the stitching of the video streams is performed by geometrically aligning the frames. In some embodiments, three or more mobile video capture devices provide video streams. | 11-25-2010 |
20110295851 | REAL-TIME ANNOTATION AND ENRICHMENT OF CAPTURED VIDEO - An annotation suggestion platform is described herein. The annotation suggestion platform may comprise a client and a server, where the client captures a media object and sends the captured object to the server, and the server provides a list of suggested annotations for a user to associate with the captured media object. The user may then select which of the suggested metadata is to be associated or stored with the captured media. In this way, a user may more easily associate metadata with a media object, facilitating the media object's search and retrieval. The server may also provide web page links related to the captured media object. A user interface for the annotation suggestion platform is also described herein, as are optimizations including indexing and tag propagation. | 12-01-2011 |
20150082173 | Real-Time Annotation and Enrichment of Captured Video - An annotation suggestion platform is described herein. The annotation suggestion platform may comprise a client and a server, where the client captures a media object and sends the captured object to the server, and the server provides a list of suggested annotations for a user to associate with the captured media object. The user may then select which of the suggested metadata is to be associated or stored with the captured media. In this way, a user may more easily associate metadata with a media object, facilitating the media object's search and retrieval. The server may also provide web page links related to the captured media object. A user interface for the annotation suggestion platform is also described herein, as are optimizations including indexing and tag propagation. | 03-19-2015 |
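The time-synchronization step shared by applications 20100214419 and 20100296571 above can be sketched as bucketing frames from several capture devices into common time slots before the frames in each slot are stitched into a composite. The slot width and the requirement that at least two devices contribute to a slot are illustrative assumptions, not details from the filings.

```python
from collections import defaultdict

def group_frames(streams, tolerance=0.05):
    """Bucket frames from multiple streams into composite groups.

    Each stream is a list of (timestamp_seconds, frame) pairs; frames whose
    timestamps round to the same slot are candidates for stitching into one
    composite frame. `tolerance` is the slot width in seconds (an
    illustrative choice, roughly one slot per frame at 20 fps)."""
    buckets = defaultdict(list)
    for stream_id, stream in enumerate(streams):
        for ts, frame in stream:
            slot = round(ts / tolerance)
            buckets[slot].append((stream_id, frame))
    # keep only slots where at least two devices contributed a frame,
    # since a composite needs more than one view to stitch
    return {slot: frames for slot, frames in sorted(buckets.items())
            if len({sid for sid, _ in frames}) >= 2}
```

Each surviving group would then be geometrically aligned and stitched into a composite frame, and the composite sequence encoded and streamed as the abstracts describe.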