Patent application number | Description | Published |
20090044090 | Referring to cells using header cell values - Referring to cells using header cell values is disclosed. In some embodiments, a header cell value of a header cell is allowed to be used to refer to one or more other cells that are associated with the header cell. The header cell may be included in a header row or column included in a table. A header row cell value may be employed to refer to one or more other cells in a corresponding column, and a header column cell value may be employed to refer to one or more other cells in a corresponding row. | 02-12-2009 |
20090044091 | Reference adding behavior in formula editing mode - Reference adding behavior in a formula editing mode is disclosed. In some embodiments, in response to receiving an indication of a selection of a cell, a reference to the selected cell is inserted into a formula being entered into a host cell if the host cell is not a header cell, and a reference to a row or column with which the selected cell is associated is inserted into a formula being entered into a host cell if the host cell is a header cell. | 02-12-2009 |
20090044093 | Cutting and copying discontiguous selections of cells - Cutting and copying discontiguous selections of cells is disclosed. In some embodiments, in response to receiving an indication of a selection of a set of cells that does not include only a contiguous grid of selected cells and receiving an indication of a selection of a paste destination in which the set of cells is desired to be pasted, the set of cells is pasted in the paste destination in a manner that preserves a respective relative position of each cell in the set. In some embodiments, if a paste destination is not large enough to accommodate a paste operation, the paste destination is automatically expanded so that it is large enough to accommodate the paste operation. | 02-12-2009 |
20090044095 | Automatically populating and/or generating tables using data extracted from files - Automatically populating and/or generating tables using data extracted from files is disclosed. In some embodiments, in response to receiving an indication that at least a portion of a data object is desired to be included in a table, a set of one or more data values associated with the data object is selected for inclusion in the table and automatically included as an entry corresponding to the data object in the table. In various embodiments, the table may comprise an existing table and/or a newly generated table. | 02-12-2009 |
20090144651 | INTERACTIVE FRAMES FOR IMAGES AND VIDEOS DISPLAYED IN A PRESENTATION APPLICATION - A presentation application for framing objects, such as images and videos, is provided. Using the presentation application, the user may select a frame from a plurality of available frames. The presentation application may mask portions of the displayed object that would lie outside of the selected frame before displaying the selected frame. The presentation application may provide an interface that allows the user to adjust the size of the frame and the object. The presentation application may automatically adjust the size of the frame when the size of the object is changed, and vice versa. | 06-04-2009 |
20110074694 | Device and Method for Jitter Reduction on Touch-Sensitive Surfaces and Displays - Methods for reducing jitter on a device with a touch-sensitive surface and a display are disclosed. In one embodiment, an object on the display moves in accordance with detected movements of a user's finger on the touch-sensitive surface, though movement may be delayed until subsequent movement events are detected when detected movement is less than a predefined distance threshold. In response to a movement less than the predefined distance threshold, or detecting lift off of the user's finger, the object is not moved from the current location so as to prevent jitter from affecting the final position of the object. A log is kept of the touch inputs by the user's finger so as to move the object appropriately when object movement is delayed. These methods permit an object to be placed on the display with single pixel accuracy. | 03-31-2011 |
20110074695 | Device, Method, and Graphical User Interface Using Mid-Drag Gestures - A method for modifying user interface behavior on a device with a touch-sensitive surface and a display includes: displaying a user interface; detecting a first portion of a single finger gesture on the touch-sensitive surface, wherein the single finger gesture has a finger contact with a first size; performing a first responsive behavior within the user interface in accordance with the first portion of the first gesture; detecting an increase in size of the single finger contact on the touch-sensitive surface; in response to detecting the increase in size of the single finger contact, performing a second responsive behavior within the user interface; detecting a second portion of the single finger gesture on the touch-sensitive surface; and, performing a third responsive behavior within the user interface in accordance with the second portion of the single finger gesture, wherein the third responsive behavior is different from the first responsive behavior. | 03-31-2011 |
20110074696 | Device, Method, and Graphical User Interface Using Mid-Drag Gestures - A method for modifying user interface behavior on a device with a touch-sensitive surface and a display includes: displaying a user interface; while simultaneously detecting a first and a second point of contact on the touch-sensitive surface, wherein the first and second points of contact define a perimeter of a circle: detecting a first portion of a first gesture made with at least one of the points of contact on the touch-sensitive surface; performing a first responsive behavior in accordance with the first gesture; detecting a second gesture which deviates from the perimeter of the circle; performing a second responsive behavior in response to the second gesture; detecting a second portion of the first gesture; and, performing a third responsive behavior in accordance with the second portion of the first gesture, wherein the third responsive behavior is different from the first responsive behavior. | 03-31-2011 |
20110074697 | Device, Method, and Graphical User Interface for Manipulation of User Interface Objects with Activation Regions - A computing device with a display simultaneously displays a plurality of user-repositionable user interface objects with one or more activation regions. The device receives a first input from the user. Based at least in part on the first input, the device determines a first plurality of candidate actions for manipulating a user interface object. The device performs a first candidate action of the first plurality of candidate actions as determined in accordance with a first ordering. After performing the first candidate action, the device undoes the first candidate action, receives a third input that is a repetition of the first input, and determines a second plurality of candidate actions. The second plurality of candidate actions is ordered in accordance with a second ordering such that the second candidate action in the second plurality of candidate actions has a higher position than the first candidate action in the second ordering. The device performs the second candidate action. | 03-31-2011 |
20110074698 | Device, Method, and Graphical User Interface for Manipulation of User Interface Objects with Activation Regions - A computing device with a touch screen display simultaneously displays on the touch screen display a plurality of user interface objects displayed at a first magnification level in a display area. The device detects a first contact on a first handle activation region for a first handle of a user interface object. In response to continuing to detect the first contact for a predefined amount of time, the device zooms the display area to a second magnification level. While the display area is at the second magnification level, the device: detects a movement of the first contact across the touch screen display; moves the first handle in accordance with the detected movement of the first contact; and detects liftoff of the first contact. In response to detecting liftoff of the first contact, the device zooms the display area to the first magnification level. | 03-31-2011 |
20110074699 | Device, Method, and Graphical User Interface for Scrolling a Multi-Section Document - A method for scrolling a multi-section document is disclosed, including displaying on a display an electronic document that includes a plurality of document sections separated by respective logical structure boundaries; detecting a gesture on a touch-sensitive surface, the gesture having an initial velocity that exceeds a predefined speed threshold such that the gesture will scroll the electronic document by more than one document section; initiating scrolling of the electronic document on the display at the initial velocity in accordance with an initial scrolling speed versus scrolling distance function; while scrolling the electronic document, adjusting the scrolling speed versus scrolling distance function so that when the scrolling speed becomes zero, a first logical structure boundary in the electronic document is displayed at a predefined location on the display; and, scrolling the electronic document in accordance with the adjusted scrolling speed versus scrolling distance function. | 03-31-2011 |
20110074710 | Device, Method, and Graphical User Interface for Manipulating User Interface Objects - A method is performed at a multifunction device with a display and a touch-sensitive surface. The method includes: displaying a first user interface for an application at a first magnification level. The first user interface includes a first plurality of user interface objects. The application has a range of magnification levels, including a predefined magnification level for requesting a second user interface with a multi-finger pinch gesture. The method also includes: detecting a first multi-finger pinch gesture on the touch-sensitive surface; and, in response: when the first magnification level is the predefined magnification level, displaying the second user interface simultaneously with the first user interface, wherein the second user interface includes a second plurality of user interface objects that are distinct from the first plurality of user interface objects in the first user interface; and when the first magnification level is greater than the predefined magnification level, zooming out the first user interface in accordance with the first multi-finger pinch gesture. | 03-31-2011 |
20110074828 | Device, Method, and Graphical User Interface for Touch-Based Gestural Input on an Electronic Canvas - Methods for touch-based gestural command input on a device with a touch-sensitive surface and a display are disclosed. In one embodiment, a method includes displaying an electronic canvas including an object at a first magnification level; simultaneously detecting a first and a second contact on the touch-sensitive surface, wherein at least one of the first contact and the second contact on the touch-sensitive surface is at a location that corresponds to a location on the display that is away from the object; detecting a gesture made with the first and second contacts; when a velocity of the gesture is less than a predefined gesture velocity threshold, scaling the electronic canvas in accordance with the gesture; and, when the velocity of the gesture is greater than the predefined gesture velocity threshold, transitioning the electronic canvas from the first magnification level to a second, predefined magnification level in response to the gesture. | 03-31-2011 |
20110074830 | Device, Method, and Graphical User Interface Using Mid-Drag Gestures - A method for modifying user interface behavior on a device with a touch-sensitive surface and a display includes displaying a user interface, and while detecting a contact on the touch-sensitive surface: detecting a first movement of the contact corresponding to a first portion of a first gesture; performing a first responsive behavior in accordance with the first portion of the first gesture; detecting a second movement of the contact corresponding to a second gesture; performing a second responsive behavior in response to the second gesture, wherein the second responsive behavior is different from the first responsive behavior; detecting a third movement of the contact, wherein the third movement corresponds to a second portion of the first gesture; and performing a third responsive behavior in accordance with the second portion of the first gesture. The third responsive behavior is different from the first responsive behavior. | 03-31-2011 |
20110078560 | Device, Method, and Graphical User Interface for Displaying Emphasis Animations for an Electronic Document in a Presentation Mode - A computing device with a display displays a first portion of an electronic document in a presentation mode of an electronic document authoring application. The first portion of the electronic document includes predefined activation regions for a plurality of presentation emphasis objects. While displaying the first portion of the electronic document the device detects a first input by a user on a respective predefined activation region for a first presentation emphasis object in the plurality of presentation emphasis objects. In response to detecting the first input on the respective predefined activation region for the first presentation emphasis object, the device: selects a first emphasis animation for the first presentation emphasis object based on the first input; displays the first emphasis animation; and displays the first presentation emphasis object. | 03-31-2011 |
20110078597 | Device, Method, and Graphical User Interface for Manipulation of User Interface Objects with Activation Regions - A computing device with a display simultaneously displays a plurality of user interface objects, a currently selected user interface object, and a plurality of resizing handles for the currently selected user interface object. Each respective resizing handle has a corresponding handle activation region with a default position relative to the respective resizing handle, a default size, and a default shape. The device detects a first input on a first handle activation region for a first resizing handle in the plurality of resizing handles. In response to detecting the first input, the device: resizes the currently selected user interface object, and for at least one resizing handle in the plurality of resizing handles, modifies a corresponding handle activation region by changing the position of the handle activation region relative to the resizing handle from the default position to a modified position. | 03-31-2011 |
20110145759 | Device, Method, and Graphical User Interface for Resizing User Interface Content - Heuristics for resizing displayed objects within an electronic document are disclosed. The heuristics include resizing displayed objects to predefined ratios, resizing displayed objects by predefined increments, relating resizing of displayed objects to a global reference grid, and resizing a plurality of displayed objects aligned to an axis. | 06-16-2011 |
20110181527 | Device, Method, and Graphical User Interface for Resizing Objects - A method for resizing a currently selected user interface object includes simultaneously displaying on a touch-sensitive display the currently selected user interface object having a center, and a plurality of resizing handles for the currently selected user interface object. The method also includes detecting a first contact on a first resizing handle in the plurality of resizing handles, and detecting movement of the first contact across the touch-sensitive display. The method further includes, in response to detecting movement of the first contact, when a second contact is detected on the touch-sensitive display while detecting movement of the first contact, resizing the currently selected user interface object about the center of the currently selected user interface object. | 07-28-2011 |
20110181528 | Device, Method, and Graphical User Interface for Resizing Objects - A method for resizing a currently selected user interface object includes simultaneously displaying on a touch-sensitive display the currently selected user interface object having a center, and a plurality of resizing handles for the currently selected user interface object. The method also includes detecting a first contact on a first resizing handle in the plurality of resizing handles, and detecting movement of the first contact across the touch-sensitive display. The method further includes, in response to detecting movement of the first contact, when a second contact is detected on the touch-sensitive display while detecting movement of the first contact, resizing the currently selected user interface object about the center of the currently selected user interface object. | 07-28-2011 |
20110181529 | Device, Method, and Graphical User Interface for Selecting and Moving Objects - A method performed at a computing device with a touch-sensitive display includes: displaying a plurality of user interface objects on the display, including a currently selected first user interface object; detecting a first contact on the first user interface object; detecting movement of the first contact across the display; moving the first user interface object in accordance with the movement of the first contact; while detecting movement of the first contact across the display: detecting a first finger gesture on a second user interface object; and, in response: selecting the second user interface object; moving the second user interface object in accordance with movement of the first contact subsequent to detecting the first finger gesture; and continuing to move the first user interface object in accordance with the movement of the first contact. | 07-28-2011 |
20110185317 | Device, Method, and Graphical User Interface for Resizing User Interface Content - Aspect ratio locking alignment guides for gestures are disclosed. In one embodiment, a gesture is detected to resize a user interface element, and in response, a first alignment guide is visibly displayed, wherein the first alignment guide includes positions representing different sizes the user interface element can be resized to while maintaining the initial aspect ratio of the user interface element. While the user interface element is resized in accordance with the user gesture, and while the first alignment guide is visibly displayed: when the user gesture is substantially aligned with the first alignment guide, visible display of the first alignment guide is maintained; and when the user gesture substantially deviates from the first alignment guide, visible display of the first alignment guide is terminated. | 07-28-2011 |
20110185321 | Device, Method, and Graphical User Interface for Precise Positioning of Objects - A method includes, at a computing device with a touch-sensitive display: displaying a user interface object on the touch-sensitive display; detecting a contact on the user interface object; while continuing to detect the contact on the user interface object: detecting an M-finger gesture, distinct from the contact, in a first direction on the touch-sensitive display, where M is an integer; and, in response to detecting the M-finger gesture, translating the user interface object a predefined number of pixels in a direction in accordance with the first direction. | 07-28-2011 |
20110202823 | PASTING A SET OF CELLS - Pasting a set of cells is disclosed. In some embodiments, a selection of an option to paste a set of cells in a paste destination is received; and in response to determining that the paste destination is not large enough to accommodate a paste operation associated with the selected option, the paste destination is automatically expanded so that the paste destination is large enough to accommodate the paste operation. | 08-18-2011 |
20120026100 | Device, Method, and Graphical User Interface for Aligning and Distributing Objects - At a multifunction device with a display and a touch-sensitive surface, a plurality of objects are displayed on the display. The device detects a first contact on the touch-sensitive surface. While detecting the first contact, the device detects a first gesture that includes movement of a second contact and a third contact on the touch-sensitive surface. In response to detecting the first gesture, the device determines a contact axis based on a location of the second contact relative to a location of the third contact on the touch-sensitive surface. The device determines an object-alignment axis based on the contact axis, and repositions one or more of the objects so as to align at least a subset of the objects on the display along the object-alignment axis. | 02-02-2012 |
20120030568 | Device, Method, and Graphical User Interface for Copying User Interface Objects Between Content Regions - An electronic device displays a user interface object in a first content region on a touch-sensitive display. The device detects a first finger input on the user interface object. While detecting the first finger input, the device detects a second finger input on the touch-sensitive display. When the first finger input is an M-finger contact, wherein M is an integer, in response to detecting the second finger input, the device selects a second content region and displays a copy of the user interface object in the second content region. After detecting the second finger input, the device detects termination of the first finger input while the copy of the user interface object is displayed in the second content region. In response to detecting termination of the first finger input, the device maintains display of the copy of the user interface object in the second content region. | 02-02-2012 |
20120030569 | Device, Method, and Graphical User Interface for Reordering the Front-to-Back Positions of Objects - At a multifunction device with a display and a touch-sensitive surface, a plurality of objects are displayed on the display. The plurality of objects have a first layer order. A first contact is detected at a location on the touch-sensitive surface that corresponds to a location of a respective object of the plurality of objects. While detecting the first contact, a gesture that includes a second contact is detected on the touch-sensitive surface. In response to detecting the gesture, the plurality of objects are reordered in accordance with the gesture to create a second layer order that is different from the first layer order. In some embodiments, the position of the respective object within the first order is different from the position of the respective object within the second order. | 02-02-2012 |
20120188174 | Device, Method, and Graphical User Interface for Navigating and Annotating an Electronic Document - A device, configured to operate in a first operational mode at some times and in a second operational mode at other times, detects a first gesture having a first gesture type; in response to detecting the first gesture: in accordance with a determination that the device is in the first operational mode, performs an operation having a first operation type; and, in accordance with a determination that the device is in the second operational mode, performs an operation having a second operation type; detects a second gesture having a second gesture type; and in response to detecting the second gesture: in accordance with a determination that the device is in the first operational mode, performs an operation having the second operation type; and in accordance with a determination that the device is in the second operational mode, performs an operation having the first operation type. | 07-26-2012 |
20120192057 | Device, Method, and Graphical User Interface for Navigating through an Electronic Document - An electronic device with a display and a touch-sensitive surface stores a document having primary content, supplementary content, and user-generated content. The device displays a representation of the document in a segmented user interface on the display. Primary content of the document is displayed in a first segment of the segmented user interface and supplementary content of the document is concurrently displayed in a second segment of the segmented user interface distinct from the first segment. The device receives a request to view user-generated content of the document. In response to the request, the device maintains display of the previously displayed primary content, ceases to display at least a portion of the previously displayed supplementary content, and displays user-generated content of the document in a third segment of the segmented user interface distinct from the first segment and the second segment. | 07-26-2012 |
20120192065 | Device, Method, and Graphical User Interface for Navigating and Annotating an Electronic Document - A device, configured to operate in a first operational mode at some times and in a second operational mode at other times, detects a first gesture having a first gesture type; in response to detecting the first gesture: in accordance with a determination that the device is in the first operational mode, performs an operation having a first operation type; and, in accordance with a determination that the device is in the second operational mode, performs an operation having a second operation type; detects a second gesture having a second gesture type; and in response to detecting the second gesture: in accordance with a determination that the device is in the first operational mode, performs an operation having the second operation type; and in accordance with a determination that the device is in the second operational mode, performs an operation having the first operation type. | 07-26-2012 |
20120192068 | Device, Method, and Graphical User Interface for Navigating through an Electronic Document - An electronic device with a display and a touch-sensitive surface stores a document having primary content, supplementary content, and user-generated content. The device displays a representation of the document in a segmented user interface on the display. Primary content of the document is displayed in a first segment of the segmented user interface and supplementary content of the document is concurrently displayed in a second segment of the segmented user interface distinct from the first segment. The device receives a request to view user-generated content of the document. In response to the request, the device maintains display of the previously displayed primary content, ceases to display at least a portion of the previously displayed supplementary content, and displays user-generated content of the document in a third segment of the segmented user interface distinct from the first segment and the second segment. | 07-26-2012 |
20120192093 | Device, Method, and Graphical User Interface for Navigating and Annotating an Electronic Document - A device, configured to operate in a first operational mode at some times and in a second operational mode at other times, detects a first gesture having a first gesture type; in response to detecting the first gesture: in accordance with a determination that the device is in the first operational mode, performs an operation having a first operation type; and, in accordance with a determination that the device is in the second operational mode, performs an operation having a second operation type; detects a second gesture having a second gesture type; and in response to detecting the second gesture: in accordance with a determination that the device is in the first operational mode, performs an operation having the second operation type; and in accordance with a determination that the device is in the second operational mode, performs an operation having the first operation type. | 07-26-2012 |
20120192101 | Device, Method, and Graphical User Interface for Navigating through an Electronic Document - An electronic device with a display and a touch-sensitive surface stores a document having primary content, supplementary content, and user-generated content. The device displays a representation of the document in a segmented user interface on the display. Primary content of the document is displayed in a first segment of the segmented user interface and supplementary content of the document is concurrently displayed in a second segment of the segmented user interface distinct from the first segment. The device receives a request to view user-generated content of the document. In response to the request, the device maintains display of the previously displayed primary content, ceases to display at least a portion of the previously displayed supplementary content, and displays user-generated content of the document in a third segment of the segmented user interface distinct from the first segment and the second segment. | 07-26-2012 |
20120192102 | Device, Method, and Graphical User Interface for Navigating through an Electronic Document - An electronic device with a display and a touch-sensitive surface stores a document having primary content, supplementary content, and user-generated content. The device displays a representation of the document in a segmented user interface on the display. Primary content of the document is displayed in a first segment of the segmented user interface and supplementary content of the document is concurrently displayed in a second segment of the segmented user interface distinct from the first segment. The device receives a request to view user-generated content of the document. In response to the request, the device maintains display of the previously displayed primary content, ceases to display at least a portion of the previously displayed supplementary content, and displays user-generated content of the document in a third segment of the segmented user interface distinct from the first segment and the second segment. | 07-26-2012 |
20120192118 | Device, Method, and Graphical User Interface for Navigating through an Electronic Document - An electronic device with a display and a touch-sensitive surface stores a document having primary content, supplementary content, and user-generated content. The device displays a representation of the document in a segmented user interface on the display. Primary content of the document is displayed in a first segment of the segmented user interface and supplementary content of the document is concurrently displayed in a second segment of the segmented user interface distinct from the first segment. The device receives a request to view user-generated content of the document. In response to the request, the device maintains display of the previously displayed primary content, ceases to display at least a portion of the previously displayed supplementary content, and displays user-generated content of the document in a third segment of the segmented user interface distinct from the first segment and the second segment. | 07-26-2012 |
20120235925 | Device, Method, and Graphical User Interface for Establishing an Impromptu Network - An electronic device with a touch-sensitive surface and a device motion sensor detects a predefined gesture on the touch-sensitive surface. The predefined gesture has one or more gesture components. The device detects a predefined movement of the electronic device with the device motion sensor. The predefined movement has one or more movement components. In response to detecting the predefined gesture and the predefined movement, the device, in accordance with a determination that the one or more gesture components and the one or more movement components satisfy predefined concurrency criteria, performs a predefined operation that is associated with concurrent detection of the predefined gesture and the predefined movement, and in accordance with a determination that the one or more gesture components and the one or more movement components do not satisfy the predefined concurrency criteria, foregoes performing the predefined operation. | 09-20-2012 |
20120240025 | Device, Method, and Graphical User Interface for Automatically Generating Supplemental Content - An electronic device with a display and a touch-sensitive surface displays a portion of a document in a primary user interface for the document. The portion of the document includes a respective author-specified term. The respective author-specified term is associated with corresponding additional information supplied by an author of the document, and the corresponding additional information is not concurrently displayed with the author-specified term in the portion of the document. The device also receives a request to annotate the respective author-specified term in the portion of the document; and in response to the request to annotate the respective author-specified term: annotates the respective author-specified term in the primary user interface; and generates instructions for displaying, in a supplemental user interface for the document distinct from the primary user interface, the respective author-specified term and at least a portion of the corresponding additional information for the respective author-specified term. | 09-20-2012 |
20120240037 | Device, Method, and Graphical User Interface for Displaying Additional Snippet Content - An electronic device concurrently displays snippets including a first snippet and a second snippet. The first snippet includes first displayed snippet content corresponding to a first portion of content from a document associated with the first snippet. The second snippet includes second displayed snippet content corresponding to a second portion of content from a document associated with the second snippet. The device detects a gesture associated with the first snippet, which includes detecting a first contact and a second contact and detecting movement of the first contact relative to the second contact. In response, the device modifies the first snippet to include an additional portion of content from the document associated with the first snippet that was not included in the first displayed snippet content and maintains display of the second snippet without adding any additional content from the document associated with the second snippet. | 09-20-2012 |
20120240042 | Device, Method, and Graphical User Interface for Establishing an Impromptu Network - An electronic device with a touch-sensitive surface and a device motion sensor detects a predefined gesture on the touch-sensitive surface. The predefined gesture has one or more gesture components. The device detects a predefined movement of the electronic device with the device motion sensor. The predefined movement has one or more movement components. In response to detecting the predefined gesture and the predefined movement, the device, in accordance with a determination that the one or more gesture components and the one or more movement components satisfy predefined concurrency criteria, performs a predefined operation that is associated with concurrent detection of the predefined gesture and the predefined movement, and in accordance with a determination that the one or more gesture components and the one or more movement components do not satisfy the predefined concurrency criteria, foregoes performing the predefined operation. | 09-20-2012 |
20120240074 | Device, Method, and Graphical User Interface for Navigating Between Document Sections - An electronic device with a display and a touch-sensitive surface displays a page of a first multi-page section of a document and a navigation bar configured to navigate through only pages in the first multi-page section of the document. The device detects a predefined gesture at a location on the touch-sensitive surface that corresponds to a predefined portion of the navigation bar. In response to detecting the predefined gesture, the device displays a navigation user interface that enables selection of a page of the document that is outside of the first multi-page section. The device receives an input in the navigation user interface that indicates selection of a page of a second multi-page section of the document outside of the first multi-page section. In response to receiving the input, the device displays the selected page of the second multi-page section of the document. | 09-20-2012 |
20130047115 | CREATING AND VIEWING DIGITAL NOTE CARDS - Systems, techniques, and methods are presented for creating digital note cards and presenting a graphical user interface for interacting with digital note cards. For example, content from an electronic book can be displayed in a graphical user interface. Input can be received in the graphical user interface that highlights a portion of the content and creates a note, the note including user-generated content. A digital note card can be created where one side of the digital note card includes the highlighted text, and the other side of the digital note card includes the note. The digital note card can be displayed in the graphical user interface. | 02-21-2013 |
20130073932 | Interactive Content for Digital Books - This disclosure describes systems, methods, and computer program products for presenting interactive content for digital books. In some implementations, a graphical user interface (GUI) is presented that allows a user to view and interact with content embedded in a digital book. The interactive content can include, but is not limited to, text, image galleries, multimedia presentations, video, HTML, animated and static diagrams, charts, tables, visual dictionaries, review questions, three-dimensional (3D) animation, and any other known media content. For example, various touch gestures can be used by the user to move through images and multimedia presentations, play video, answer review questions, manipulate 3D objects, and interact with HTML. | 03-21-2013 |
20130125003 | ACTION REPRESENTATION DURING SLIDE GENERATION - Techniques for displaying object animations on a slide are disclosed. In accordance with these techniques, objects on a slide may be assigned actions when generating or editing the slide. The effects of the actions on the slide are depicted using one or more respective representations which represent the slide as it will appear after implementation of one or more corresponding actions. | 05-16-2013 |
20150020021 | Device, Method, and Graphical User Interface for Scrolling a Multi-Section Document - A method for scrolling a multi-section document is disclosed, including displaying on a display an electronic document that includes a plurality of document sections separated by respective logical structure boundaries; detecting a gesture on a touch-sensitive surface, the gesture having an initial velocity that exceeds a predefined speed threshold such that the gesture will scroll the electronic document more than one document section; initiating scrolling of the electronic document on the display at the initial velocity in accordance with an initial scrolling speed versus scrolling distance function; while scrolling the electronic document, adjusting the scrolling speed versus scrolling distance function so that when the scrolling speed becomes zero, a first logical structure boundary in the electronic document is displayed at a predefined location on the display; and scrolling the electronic document in accordance with the adjusted scrolling speed versus scrolling distance function. | 01-15-2015 |
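The scrolling adjustment described in application 20150020021 can be illustrated with a small sketch. This is a hypothetical model, not the patented implementation: it assumes a linear speed-versus-distance deceleration curve, and the function names (`natural_stop_distance`, `snap_target`, `adjusted_friction`, `scroll_stop_position`) and parameters are invented for illustration. The idea is to compute where the scroll would stop naturally, pick the nearest section boundary, then rescale the deceleration curve so the speed reaches zero exactly at that boundary.

```python
# Hypothetical sketch of snap-to-boundary scrolling: a linear
# speed-versus-distance curve v(d) = v0 - friction * d is rescaled
# so that scrolling comes to rest exactly on a section boundary.

def natural_stop_distance(v0: float, friction: float) -> float:
    """Distance at which v(d) = v0 - friction * d reaches zero."""
    return v0 / friction

def snap_target(stop: float, boundaries: list[float]) -> float:
    """Pick the section boundary closest to the natural stopping point."""
    return min(boundaries, key=lambda b: abs(b - stop))

def adjusted_friction(v0: float, target: float) -> float:
    """Rescale friction so the same linear curve reaches zero at `target`."""
    return v0 / target

def scroll_stop_position(v0: float, friction: float,
                         boundaries: list[float]) -> float:
    """Adjust the speed/distance function, then return the final stop."""
    stop = natural_stop_distance(v0, friction)       # unadjusted stop
    target = snap_target(stop, boundaries)           # nearest boundary
    f = adjusted_friction(v0, target)                # adjusted curve
    return natural_stop_distance(v0, f)              # lands on the boundary
```

For example, a fling with initial velocity 1000 and friction 2.0 would naturally stop at distance 500; with boundaries at 200, 480, and 900, the adjusted curve brings the scroll to rest at 480, so the section boundary lands at the predefined display location.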