Patent application number | Description | Published |
20080278481 | PHOTO GENERATED 3-D NAVIGABLE STOREFRONT - Presented are techniques for creating a photo-generated navigable storefront. Such techniques include receiving images and processing the images through an image matching algorithm. Such images may include, for example, photos taken with a camera. Additionally, the images are tagged with identifier tags in order to associate related or nearby images together. Furthermore, product/service information may be associated with an image such that a selection of a particular image causes the product/service information to be displayed. | 11-13-2008 |
20090152341 | TRADE CARD SERVICES - The claimed subject matter provides a system and/or a method that facilitates servicing a portion of a trade card via a web service. A web service can provide a portion of data to enhance a trade card, wherein the portion of data is at least one of a portion of trade card document-specific data, an intelligent gadget, or a feed driven component. A build component can leverage the web service to utilize the portion of data with the trade card. | 06-18-2009 |
20090157503 | PYRAMIDAL VOLUMES OF ADVERTISING SPACE - The claimed subject matter relates to an architecture that can facilitate advertising models in connection with pyramidal volumes of advertising space. In particular, a pixel at one plane of view of an image can be associated with four pixels at a lower plane of view and so on. Advertising rights with respect to the pixel can be offered for sale, which can include all, a subset, or a different set of advertising rights with respect to other pixels in the pyramidal volume. The architecture can construct the data for the image dynamically based upon contextual input and the advertising rights as well as image format can be constructed based upon notions of zoning. | 06-18-2009 |
20090172570 | MULTISCALED TRADE CARDS - The claimed subject matter provides a system and/or a method that facilitates interacting with a trade card that includes pyramidal volumes of data. A trade card with data can represent a computer displayable multiscale image with at least two substantially parallel planes of view in which a first plane and a second plane are alternatively displayable based upon a level of zoom and which are related by a pyramidal volume, wherein the image includes a pixel at a vertex of the pyramidal volume. An environment can host the trade card to enable access to a portion of the displayable multiscale image. | 07-02-2009 |
20090251407 | DEVICE INTERACTION WITH COMBINATION OF RINGS - The claimed subject matter provides a system and/or a method that facilitates interacting with a device and/or data associated with the device. A computing device can display a portion of data. A ring component can interact with the portion of data to control the device by detecting at least one of a movement, a gesture, an inductance, or a resistance related to a user wearing the ring component on at least one digit on at least one hand. | 10-08-2009 |
20090254820 | CLIENT-SIDE COMPOSING/WEIGHTING OF ADS - The claimed subject matter provides a system and/or a method that facilitates displaying relevant advertisements to a user. A display engine can browse a portion of image data during a browsing session. An evaluator can identify a context related to two or more concurrent and on-going browsing sessions. An ad selector can locate an ad from a data store based on the identified context and seamlessly incorporate and display the ad into at least one of the browsing sessions. | 10-08-2009 |
20090254867 | ZOOM FOR ANNOTATABLE MARGINS - The claimed subject matter provides a system and/or a method that facilitates interacting with a portion of data that includes pyramidal volumes of data. A portion of image data can represent a computer displayable multiscale image with at least two substantially parallel planes of view in which a first plane and a second plane are alternatively displayable based upon a level of zoom and which are related by a pyramidal volume, wherein the multiscale image includes a pixel at a vertex of the pyramidal volume. An edit component can receive and incorporate an annotation to the multiscale image corresponding to at least one of the two substantially parallel planes of view. A display engine can display the annotation on the multiscale image based upon navigation to the parallel plane of view corresponding to such annotation. | 10-08-2009 |
20090274391 | INTERMEDIATE POINT BETWEEN IMAGES TO INSERT/OVERLAY ADS - The claimed subject matter provides a system and/or a method that facilitates simulating a portion of 2-dimensional (2D) data for implementation within a 3-dimensional (3D) virtual environment. A 3D virtual environment can enable a 3D exploration of a 3D image constructed from a collection of two or more 2D images, wherein the 3D image is constructed by combining the two or more 2D images based upon a respective image perspective. An analyzer can evaluate the collection of two or more 2D images to identify a portion of the 3D image that is unrepresented by the combined two or more 2D images. A synthetic view generator can create a simulated synthetic view for the portion of the 3D image that is unrepresented, wherein the simulated synthetic view replicates a 2D image with a respective image perspective for the unrepresented portion of the 3D image. | 11-05-2009 |
20090276445 | DYNAMIC MULTI-SCALE SCHEMA - The claimed subject matter provides a system and/or a method that facilitates organizing and presenting data within a database. A data store can store a portion of data accessible to a user. A real time monitor component can dynamically track an amount of access for the portion of data within the data store. A display engine can render a multi-scaled view of the portion of data, wherein the multi-scaled view is based on the amount of access in which a size representation of the data is correlated with the amount of access. | 11-05-2009 |
20090279784 | PROCEDURAL AUTHORING - The claimed subject matter provides a system and/or a method that facilitates generating a model from a 3-dimensional (3D) object assembled from 2-dimensional (2D) content. A content aggregator can construct a 3D object from a collection of two or more 2D images each depicting a real entity in a physical real world, wherein the 3D object is constructed by combining the two or more 2D images based upon a respective image perspective. A 3D virtual environment can allow exploration of the 3D object. A model component can extrapolate a true 3D geometric model from the 3D object, wherein the true 3D geometric model is generated to include scaling in proportion to a size within the physical real world. | 11-12-2009 |
20090289937 | MULTI-SCALE NAVIGATIONAL VISUALIZATION - The claimed subject matter provides a system and/or a method that facilitates providing navigational assistance. An immersive view can include image data that can represent a computer displayable multi-scale image with at least two substantially parallel planes of view in which a first plane and a second plane are alternatively displayable based upon a level of zoom and which are related by a pyramidal volume, wherein the multi-scale image includes a pixel at a vertex of the pyramidal volume. A navigation component can provide navigational assistance via the immersive view based upon navigational input. A display engine can display the immersive view. | 11-26-2009 |
20090295791 | THREE-DIMENSIONAL ENVIRONMENT CREATED FROM VIDEO - The claimed subject matter provides a system and/or a method that facilitates constructing a three-dimensional (3D) virtual environment from two-dimensional (2D) content. A 3D virtual environment can enable a 3D exploration of a 3D image constructed from a collection of two or more 2D images, wherein the 3D image is constructed by combining the two or more 2D images based upon a respective image perspective. The two or more 2D images can be provided by a video portion. An aggregator can reduce the number of frames in the video portion, construct a 3D image based upon key point features in the reduced number of frames, and align the key point features geometrically in three dimensions. | 12-03-2009 |
20090303253 | PERSONALIZED SCALING OF INFORMATION - The claimed subject matter provides a system and/or a method that facilitates rendering of a portion of viewable data. A web page, a user interface or other displayable information can be personalized such that disparate portions of the displayable information are rendered at varying scales, resolutions, sizes, etc. A personalizer can generate personalization data related to a user. The personalization data can include a display property associated with a portion of viewable data. In addition, a display engine is provided that displays the portion of viewable data based upon the personalization data and display property. | 12-10-2009 |
20090307618 | ANNOTATE AT MULTIPLE LEVELS - The claimed subject matter provides a system and/or a method that facilitates interacting with a portion of data that includes pyramidal volumes of data. A portion of image data can represent a computer displayable multi-scale image with at least two substantially parallel planes of view in which a first plane and a second plane are alternatively displayable based upon a level of zoom and which are related by a pyramidal volume, wherein the multi-scale image includes a pixel at a vertex of the pyramidal volume. An annotation component can determine a set of annotations associated with at least one of the two substantially parallel planes of view. A display engine can display at least a subset of the set of annotations on the multi-scale image based upon navigation to the parallel plane of view associated with the set of annotations. | 12-10-2009 |
20090310851 | 3D CONTENT AGGREGATION BUILT INTO DEVICES - The claimed subject matter provides a system and/or a method that facilitates capturing a portion of 2-dimensional (2D) data for implementation within a 3-dimensional (3D) virtual environment. A device can capture one or more 2D images, wherein each 2D image is representative of a corporeal object from a perspective dictated by an orientation of the device. The device can comprise a content aggregator that can construct a 3D image from two or more 2D images collected by the device, in which the construction is based at least in part upon aligning each corresponding perspective associated with each 2D image. | 12-17-2009 |
20090319357 | COLLECTION REPRESENTS COMBINED INTENT - The claimed subject matter provides a system and/or a method that facilitates communicating intent-related data to a user. A display engine can enable exploration of a portion of image data during a browsing session. An intent component can receive a portion of data related to the browsing session, wherein the portion of data is at least one of a collection of browsing history or a portion of data displayed during a browsing session. The intent component can further evaluate the portion of data to ascertain a combined intent of a user. A selective ad component can infer an incompleteness of the combined intent to trigger a pre-qualification for an offer related to at least one of an item or service that fulfills the incompleteness. | 12-24-2009 |
20090319940 | NETWORK OF TRUST AS MARRIED TO MULTI-SCALE - The claimed subject matter provides a system and/or a method that facilitates visually representing data relationships within a network. A network includes a graphical representation of a user in which the network is a node structure with relationships between two or more users. An organization component can analyze one of a degree of separation between two or more users represented within the network or an expertise level of a user represented within the network, wherein the expertise level corresponds to a topic. The organization component can scale the portion of the graphical representation based upon the analysis. | 12-24-2009 |
20130091197 | MOBILE DEVICE AS A LOCAL SERVER - Architecture that embeds a server (a local server) inside a mobile device operating system (OS) close to the data (but under the OS services) such that the server has access to native capabilities, and offers an Internet-like frontend with which a browser or application can communicate. The local server appears as a web server, and small programs can be pushed into the local server from the browser or a remote server such that the local server can be made to perform work more effectively. Local and remote events can be triggered such as launching a browser (or other application(s)), initiating remote server calls, triggering battery save mode, locking the phone, etc. The local server can run a script execution environment such as node.js, an event driven I/O model where callbacks are invoked to handle emergent conditions (e.g., explicit requests, state changes, etc.). | 04-11-2013 |
20130097440 | EVENT SERVICE FOR LOCAL CLIENT APPLICATIONS THROUGH LOCAL SERVER - In server/client architectures, the server application and client applications are often developed in different languages and execute in different environments specialized for the different contexts of each application (e.g., low-level, performant, platform-specialized, and stateless instructions on the server, and high-level, flexible, platform-agnostic, and stateful languages on the client) and are often executed on different devices. Convergence of these environments (e.g., server-side JavaScript using Node.js) enables the provision of a server that services client applications executing on the same device. The local server may monitor local events occurring on the device, and may execute one or more server scripts associated with particular local events on behalf of local clients subscribing to the local event (e.g., via a subscription model). These techniques may enable development of local event services in the same language and environment as client applications, and the use of server-side code in the provision of local event service. | 04-18-2013 |
20130263127 | PERSISTENT AND RESILIENT WORKER PROCESSES - In the field of computing, many scenarios involve the execution of an application within a virtual environment (e.g., web applications executing within a web browser). In order to perform background processing, such applications may invoke worker processes within the virtual environment; however, this configuration couples the life cycle of worker processes to the life cycle of the application and/or virtual environment. Presented herein are techniques for executing worker processes outside of the virtual environment and independently of the life cycle of the application, such that background computation may persist after the application and/or virtual environment are terminated and even after a computing environment restart, and for notifying the application upon the worker process achieving an execution event (e.g., detecting device events even while the application is not executing). Such techniques may heighten the resiliency and persistence of worker processes and expand the capabilities of applications executing within virtual environments. | 10-03-2013 |
20140172570 | MOBILE AND AUGMENTED-REALITY ADVERTISEMENTS USING DEVICE IMAGING - Mobile advertisements often involve advertisements related to the user's detected location. However, additional relevant advertisement opportunities may be identified by also identifying an image captured by the camera of the mobile device (e.g., the user may take a photo of a product under consideration, or may gaze at the product while wearing a gaze-tracking device). Advertisements relating to the product and the user's location may then be presented for a related product sold by the same store, or a lower-priced offer for the same product from a nearby competing store. Advertisements may be presented via augmented reality (e.g., integrating the advertisement with the image of the environment presented to the user), and/or compared with the cost of interrupting an inferred activity of the user. Additionally, image evaluation may be applied when the user is near an advertisement opportunity in order to conserve the resources of the mobile device. | 06-19-2014 |
20140173592 | INVERSION-OF-CONTROL COMPONENT SERVICE MODELS FOR VIRTUAL ENVIRONMENTS - In the field of computing, many scenarios involve the execution of an application within a virtual environment of a device (e.g., web applications executing within a web browser). Interactions between applications and device components are often enabled through hardware abstractions or component application programming interfaces (API), but such interactions may provide more limited and/or inconsistent access to component capabilities for virtually executing applications than for native applications. Instead, the device may provide hardware interaction as a service to the virtual environment utilizing a callback model, wherein applications within the virtual environment initiate component requests specifying a callback, and the device initiates the component requests with the components and invokes associated callbacks upon completion of a component request. This model may enable the applications to interact with the full capability set of the components, and may reduce blocked execution of the application within the virtual environment in furtherance of application performance. | 06-19-2014 |
20140184585 | REDUNDANT PIXEL MITIGATION - Among other things, one or more techniques and/or systems are provided for mitigating redundant pixel texture contribution for texturing a geometry. That is, the geometry may represent a multidimensional surface of a scene, such as a city. The geometry may be textured using one or more texture images (e.g., an image comprising color values and/or depth values) depicting the scene from various view directions (e.g., a top-down view, an oblique view, etc.). Because more than one texture image may contribute to texturing a pixel of the geometry (e.g., due to overlapping views of the scene), redundant pixel texture contribution may arise. Accordingly, a redundant textured pixel within a texture image may be knocked out (e.g., in-painted) from the texture image to generate a modified texture image that may be relatively efficient to store and/or stream to a client due to enhanced compression of the modified texture image. | 07-03-2014 |
20140184596 | IMAGE BASED RENDERING - Among other things, one or more techniques and/or systems are provided for generating geometry using one or more depth images and/or for texturing geometry using one or more texture imagery. That is, geometry (e.g., a three-dimensional representation of a city) may be generated based upon depth information within a depth image. The geometry may be textured by assigning color values to pixels within the geometry based upon texture imagery (e.g., a video and/or an image comprising depth values and/or color values). For example, a 3D point associated with a pixel of the geometry may be projected to a location within texture imagery. If the depth of the pixel corresponds to a depth of the location, then texture information (e.g., a color value) from the texture imagery may be assigned to the pixel. In this way, the textured geometry may be used to generate a rendered image. | 07-03-2014 |
20140184631 | VIEW DIRECTION DETERMINATION - Among other things, one or more techniques and/or systems are provided for defining a view direction for a texture image used to texture a geometry. That is, a geometry may represent a multi-dimensional surface of a scene, such as a city. The geometry may be textured using one or more texture images depicting the scene from various view directions. Because more than one texture image may contribute to texturing portions of the geometry, a view direction for a texture image may be selectively defined based upon a coverage metric associated with an amount of non-textured geometry pixels that are textured by the texture image along the view direction. In an example, a texture image may be defined according to a customized configuration, such as a spherical configuration, a cylindrical configuration, etc. In this way, redundant texturing of the geometry may be mitigated based upon the selectively identified view direction(s). | 07-03-2014 |
20140254921 | PROCEDURAL AUTHORING - The claimed subject matter provides a system and/or a method that facilitates generating a model from a 3-dimensional (3D) object assembled from 2-dimensional (2D) content. A content aggregator can construct a 3D object from a collection of two or more 2D images each depicting a real entity in a physical real world, wherein the 3D object is constructed by combining the two or more 2D images based upon a respective image perspective. A 3D virtual environment can allow exploration of the 3D object. A model component can extrapolate a true 3D geometric model from the 3D object, wherein the true 3D geometric model is generated to include scaling in proportion to a size within the physical real world. | 09-11-2014 |
20140267343 | TRANSLATED VIEW NAVIGATION FOR VISUALIZATIONS - Among other things, one or more techniques and/or systems are provided for defining transition zones for navigating a visualization. The visualization may be constructed from geometry of a scene and one or more texture images depicting the scene from various viewpoints. A transition zone may correspond to portions of the visualization that do not have a one-to-one correspondence with a single texture image, but are generated from textured geometry (e.g., a projection of texture imagery onto the geometry). Because a translated view may have visual error (e.g., a portion of the translated view is not correctly represented by the textured geometry), one or more transition zones, specifying translated view experiences (e.g., unrestricted view navigation, restricted view navigation, etc.), may be defined. For example, a snapback force may be applied when a current view corresponds to a transition zone having a relatively higher error. | 09-18-2014 |
20140267587 | PANORAMA PACKET - One or more techniques and/or systems are provided for generating a panorama packet and/or for utilizing a panorama packet. That is, a panorama packet may be generated and/or consumed to provide an interactive panorama view experience of a scene depicted by one or more input images within the panorama packet (e.g., a user may explore the scene through multi-dimensional navigation of a panorama generated from the panorama packet). The panorama packet may comprise a set of input images that may depict the scene from various viewpoints. The panorama packet may comprise a camera pose manifold that may define one or more perspectives of the scene that may be used to generate a current view of the scene. The panorama packet may comprise a coarse geometry corresponding to a multi-dimensional representation of a surface of the scene. An interactive panorama of the scene may be generated based upon the panorama packet. | 09-18-2014 |
20140267588 | IMAGE CAPTURE AND ORDERING - One or more techniques and/or systems are provided for ordering images for panorama stitching and/or for providing a focal point indicator for image capture. For example, one or more images, which may be stitched together to create a panorama of a scene, may be stored within an image stack according to one or more ordering preferences, such as where manually captured images are stored within a first/higher priority region of the image stack as compared to automatically captured images. One or more images within the image stack may be stitched according to a stitching order to create the panorama, such as using images in the first region for a foreground of the panorama. Also, a current position of a camera may be tracked and compared with a focal point of a scene to generate a focal point indicator to assist with capturing a new/current image of the scene. | 09-18-2014 |
20140267600 | SYNTH PACKET FOR INTERACTIVE VIEW NAVIGATION OF A SCENE - One or more techniques and/or systems are provided for generating a synth packet and/or for providing an interactive view experience of a scene utilizing the synth packet. In particular, the synth packet comprises a set of input images depicting a scene from various viewpoints, a local graph comprising navigational relationships between input images, a coarse geometry comprising a multi-dimensional representation of a surface of the scene, and/or a camera pose manifold specifying view perspectives of the scene. An interactive view experience of the scene may be provided using the synth packet, such that a user may seamlessly navigate the scene in multi-dimensional space based upon navigational relationship information specified within the local graph. | 09-18-2014 |
20140379521 | ACTIVITY-BASED PERSONAL PROFILE INFERENCE - The aggregation of facts from various sources about an individual may produce an individual profile that may inform personalized services. However, a compilation of facts may be supplemented by monitoring activities of the individual and formulating inferences regarding the individual's individual details, and the confidence of such inferences. Accordingly, a device may compare the detected activities with a behavioral rule set indicating correlations between activities and inferred individual details (e.g., frequently spent weekday evenings and morning departures from a residence imply that the residence is the individual's home; frequent bicycling to work, chosen over other available modes of transportation, implies that the individual is a bicycling enthusiast) to add inferred individual details to the individual profile. Continued monitoring may enable updating based on changes to the individual details. Multiple profiles may be synchronized while respecting the individual's privacy, obtaining the individual's consent to share information, and automatically resolving information conflicts. | 12-25-2014 |
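Several of the abstracts above (e.g., 20090157503, 20090172570, 20090254867, 20090307618) rest on the same "pyramidal volume" relationship: a pixel at one plane of view is associated with four pixels at the next, deeper plane, and the level of zoom selects which plane is displayed. The sketch below illustrates that quadtree-style relationship only; every name in it is an illustrative assumption, not an identifier from the patents:

```python
import math

# Illustrative sketch of the "pyramidal volume" relationship recurring in
# the abstracts above: a pixel at one plane of view is associated with four
# pixels at the next, deeper plane, and the zoom level picks the plane.

def children(x: int, y: int) -> list[tuple[int, int]]:
    # The four pixels at the next (deeper) plane of view associated with
    # pixel (x, y) -- the base of the pyramidal volume whose vertex is (x, y).
    return [(2 * x + dx, 2 * y + dy) for dy in (0, 1) for dx in (0, 1)]

def parent(x: int, y: int) -> tuple[int, int]:
    # The single pixel at the shallower plane sitting at the vertex of the
    # pyramidal volume that contains (x, y).
    return (x // 2, y // 2)

def plane_for_zoom(zoom: float, num_planes: int) -> int:
    # Choose which of the alternatively displayable planes to show for a
    # given zoom factor: deeper planes for higher zoom, clamped to the
    # deepest plane available.
    level = int(math.log2(max(zoom, 1.0)))
    return min(level, num_planes - 1)
```

For example, `children(3, 5)` yields `[(6, 10), (7, 10), (6, 11), (7, 11)]`, and each of those four pixels maps back to `(3, 5)` via `parent`, which is the one-to-four association between adjacent planes of view that the abstracts describe.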