Patent application number | Description | Published |
20090192524 | SYNTHETIC REPRESENTATION OF A SURGICAL ROBOT - A synthetic representation of a robot tool for display on a user interface of a robotic system. The synthetic representation may be used to show the position of a view volume of an image capture device with respect to the robot. The synthetic representation may also be used to find a tool that is outside of the field of view, to display range of motion limits for a tool, to remotely communicate information about the robot, and to detect collisions. | 07-30-2009 |
20090212995 | DISTRIBUTED ITERATIVE MULTIMODAL SENSOR FUSION METHOD FOR IMPROVED COLLABORATIVE LOCALIZATION AND NAVIGATION - A computer implemented method for fusing position and range measurements to determine the position of at least one node of a plurality of distributed nodes is disclosed. The method includes (a) measuring the position of at least one node; (b) computing an estimate of the position of the at least one node based on the measured position using a filter that takes account of past estimates of position; (c) receiving an estimate of position of at least a second node; (d) measuring inter-node range to the at least a second node; (e) combining the measured inter-node range with the estimate of position of at least a second node using a second filter that takes account of past estimates of position to generate a refined estimate of the position of the at least one node; and (f) when a change in the position of the at least one node is above a predetermined threshold value, setting the refined estimate of the position of the at least one node to the estimate of the position of the at least one node and repeating (c), (e), and (f). | 08-27-2009 |
20090268010 | AUGMENTED STEREOSCOPIC VISUALIZATION FOR A SURGICAL ROBOT USING A CAPTURED FLUORESCENCE IMAGE AND CAPTURED STEREOSCOPIC VISIBLE IMAGES - An illumination channel, a stereoscopic optical channel and another optical channel are held and positioned by a robotic surgical system. A first capture unit captures a stereoscopic visible image from the first light from the stereoscopic optical channel while a second capture unit captures a fluorescence image from the second light from the other optical channel. An intelligent image processing system receives the captured stereoscopic visible image and the captured fluorescence image and generates a stereoscopic pair of fluorescence images. An augmented stereoscopic display system outputs a real-time stereoscopic image comprising a three-dimensional presentation of a blend of the stereoscopic visible image and the stereoscopic pair of fluorescence images. | 10-29-2009 |
20090268012 | AUGMENTED STEREOSCOPIC VISUALIZATION FOR A SURGICAL ROBOT USING A CAPTURED VISIBLE IMAGE COMBINED WITH A FLUORESCENCE IMAGE AND A CAPTURED VISIBLE IMAGE - An endoscope with a stereoscopic optical channel is held and positioned by a robotic surgical system. A capture unit captures (1) a visible first image and (2) a visible second image combined with a fluorescence second image from the light. An intelligent image processing system receives (1) the visible first image and (2) the visible second image combined with the fluorescence second image and generates at least one fluorescence image of a stereoscopic pair of fluorescence images and a visible second image. An augmented stereoscopic display system outputs a real-time stereoscopic image including a three-dimensional presentation including in one eye, a blend of the at least one fluorescence image of a stereoscopic pair of fluorescence images and one of the visible first and second images; and in the other eye, the other of the visible first and second images. | 10-29-2009 |
20090268015 | AUGMENTED STEREOSCOPIC VISUALIZATION FOR A SURGICAL ROBOT - A robotic surgical system positions and holds an endoscope. A visible imaging system is coupled to the endoscope. The visible imaging system captures a visible image of tissue. An alternate imaging system is also coupled to the endoscope. The alternate imaging system captures a fluorescence image of at least a portion of the tissue. A stereoscopic video display system is coupled to the visible imaging system and to the alternate imaging system. The stereoscopic video display system outputs a real-time stereoscopic image comprising a three-dimensional presentation of a blend of a fluorescence image associated with the captured fluorescence image, and the visible image. | 10-29-2009 |
20100164950 | EFFICIENT 3-D TELESTRATION FOR LOCAL ROBOTIC PROCTORING - An apparatus is configured to show telestration in 3-D to a surgeon in real time. A proctor is shown one side of a stereo image pair, such that the proctor can draw a telestration line on the one side with an input device. Points of interest are identified for matching to the other side of the stereo image pair. In response to the identified points of interest, regions and features are identified and used to match the points of interest to the other side. Regions can be used to match the points of interest. Features of the first image can be matched to the second image and used to match the points of interest to the second image, for example when the confidence scores for the regions are below a threshold value. Constraints can be used to evaluate the matched points of interest, for example by excluding bad points. | 07-01-2010 |
20100166323 | ROBUST SPARSE IMAGE MATCHING FOR ROBOTIC SURGERY - Systems, methods, and devices are used to match images. Points of interest from a first image are identified for matching to a second image. In response to the identified points of interest, regions and features can be identified and used to match the points of interest to a corresponding second image or second series of images. Regions can be used to match the points of interest when regions of the first image are matched to the second image with high confidence scores, for example above a threshold. Features of the first image can be matched to the second image, and these matched features may be used to match the points of interest to the second image, for example when the confidence scores for the regions are below the threshold value. Constraints can be used to evaluate the matched points of interest, for example by excluding bad points. | 07-01-2010 |
20100168562 | FIDUCIAL MARKER DESIGN AND DETECTION FOR LOCATING SURGICAL INSTRUMENT IN IMAGES - The present disclosure relates to systems, methods, and tools for tool tracking using image-derived data from one or more tool-located reference features. A method includes: capturing a first image of a tool that includes multiple features that define a first marker, where at least one of the features of the first marker includes an identification feature; determining a position for the first marker by processing the first image; determining an identification for the first marker by using the at least one identification feature by processing the first image; and determining a tool state for the tool by using the position and the identification of the first marker. | 07-01-2010 |
20100168763 | CONFIGURATION MARKER DESIGN AND DETECTION FOR INSTRUMENT TRACKING - The present disclosure relates to systems, methods, and tools for tool tracking using image-derived data from one or more tool-located reference features. A method includes: directing illuminating light from a light source onto a robotic surgical tool within a patient body, wherein the tool includes a plurality of primitive features having known positions on the tool, and wherein each feature includes a spherical reflective surface; capturing stereo images of the plurality of primitive features when the tool is within the patient body, wherein the stereo images are captured by an image capture device adjacent the illumination source so that the illumination light reflected from the imaged primitive features toward the image capture device substantially aligns with spherical centers of the surfaces of the imaged primitive features; and determining a position for the tool by processing the stereo images so as to locate the spherical centers of the imaged primitive features by using the reflected light. | 07-01-2010 |
20100168918 | OBTAINING FORCE INFORMATION IN A MINIMALLY INVASIVE SURGICAL PROCEDURE - Methods of and a system for providing force information for a robotic surgical system. The method includes storing first kinematic position information and first actual position information for a first position of an end effector; moving the end effector via the robotic surgical system from the first position to a second position; storing second kinematic position information and second actual position information for the second position; and providing force information regarding force applied to the end effector at the second position utilizing the first actual position information, the second actual position information, the first kinematic position information, and the second kinematic position information. Visual force feedback is also provided via superimposing an estimated position of an end effector without force over an image of the actual position of the end effector. Similarly, tissue elasticity visual displays may be shown. | 07-01-2010 |
20100169815 | VISUAL FORCE FEEDBACK IN A MINIMALLY INVASIVE SURGICAL PROCEDURE - Methods of and a system for providing a visual representation of force information in a robotic surgical system. A real position of a surgical end effector is determined. A projected position of the surgical end effector if no force were applied against the end effector is also determined. Images representing the real and projected positions are output superimposed on a display. The offset between the two images provides a visual indication of a force applied to the end effector or to the kinematic chain that supports the end effector. In addition, tissue deformation information is determined and displayed. | 07-01-2010 |
20100245541 | TARGETS, FIXTURES, AND WORKFLOWS FOR CALIBRATING AN ENDOSCOPIC CAMERA - The present disclosure relates to calibration assemblies and methods for use with an imaging system, such as an endoscopic imaging system. A calibration assembly includes: an interface for constraining engagement with an endoscopic imaging system; a target coupled with the interface so as to be within the field of view of the imaging system, the target including multiple markers having calibration features that include identification features; and a processor configured to identify, from first and second images obtained at first and second relative spatial arrangements between the imaging system and the target, respectively, at least some of the markers from the identification features, and to use the identified markers and calibration feature positions within the images to generate calibration data. | 09-30-2010 |
20100317965 | VIRTUAL MEASUREMENT TOOL FOR MINIMALLY INVASIVE SURGERY - Robotic and/or measurement devices, systems, and methods for telesurgical and other applications employ input devices operatively coupled to tools so as to allow a system user to manipulate tissues and other structures being measured. The system may make use of three dimensional position information from stereoscopic images. Two or more discrete points can be designated in three dimensions so as to provide a cumulative length along a straight or curving structure, an area measurement, a volume measurement, or the like. The discrete points may be identified by a single surgical tool or by distances separating two or more surgical tools, with the user optionally measuring a structure longer than a field of view of the stereoscopic image capture device by walking a pair of tools “hand-over-hand” along the structure. By allowing the system user to interact with the tissues while designating the tissue locations, and by employing imaging data to determine the measurements, the measurement accuracy and ease of measurement may be enhanced. | 12-16-2010 |
20100318099 | VIRTUAL MEASUREMENT TOOL FOR MINIMALLY INVASIVE SURGERY - Robotic and/or measurement devices, systems, and methods for telesurgical and other applications employ input devices operatively coupled to tools so as to allow a system user to manipulate tissues and other structures being measured. The system may make use of three dimensional position information from stereoscopic images. Two or more discrete points can be designated in three dimensions so as to provide a cumulative length along a straight or curving structure, an area measurement, a volume measurement, or the like. The discrete points may be identified by a single surgical tool or by distances separating two or more surgical tools, with the user optionally measuring a structure longer than a field of view of the stereoscopic image capture device by walking a pair of tools “hand-over-hand” along the structure. By allowing the system user to interact with the tissues while designating the tissue locations, and by employing imaging data to determine the measurements, the measurement accuracy and ease of measurement may be enhanced. | 12-16-2010 |
20100331855 | Efficient Vision and Kinematic Data Fusion For Robotic Surgical Instruments and Other Applications - Robotic devices, systems, and methods for use in telesurgical therapies through minimally invasive apertures make use of joint-based data throughout much of the robotic kinematic chain, but selectively rely on information from an image capture device to determine location and orientation along the linkage adjacent a pivotal center at which a shaft of the robotic surgical tool enters the patient. A bias offset may be applied to a pose (including both an orientation and a location) at the pivotal center to enhance accuracy. The bias offset may be applied as a simple rigid transformation from the image-based pivotal center pose to a joint-based pivotal center pose. | 12-30-2010 |
20110118752 | METHOD AND SYSTEM FOR HAND CONTROL OF A TELEOPERATED MINIMALLY INVASIVE SLAVE SURGICAL INSTRUMENT - In a minimally invasive surgical system, a hand tracking system tracks a location of a sensor element mounted on part of a human hand. A system control parameter is generated based on the location of the part of the human hand. Operation of the minimally invasive surgical system is controlled using the system control parameter. Thus, the minimally invasive surgical system includes a hand tracking system. The hand tracking system tracks a location of part of a human hand. A controller coupled to the hand tracking system converts the location to a system control parameter, and injects into the minimally invasive surgical system a command based on the system control parameter. | 05-19-2011 |
20110282140 | METHOD AND SYSTEM OF HAND SEGMENTATION AND OVERLAY USING DEPTH DATA - In a minimally invasive surgical system, a plurality of video images is acquired. Each image includes a hand pose image. Depth data for the hand pose image is also acquired or synthesized. The hand pose image is segmented from the image using the depth data. The segmented image is combined with an acquired surgical site image using the depth data. The combined image is displayed to a person at a surgeon's console of the minimally invasive surgical system. Processing each of the video images in the plurality of video images in this way reproduces the hand gesture overlaid on the video of the surgical site in the display. | 11-17-2011 |
20110282141 | METHOD AND SYSTEM OF SEE-THROUGH CONSOLE OVERLAY - In a minimally invasive surgical system, a plurality of video images is acquired. Each video image includes images of the surgeon's hand(s), and of a master manipulator. The images of the surgeon's hand(s) and the master manipulator are segmented from the video image. The segmented images are combined with an acquired surgical site image. The combined image is displayed to the person at the surgeon's console so that the console functions as a see-through console. | 11-17-2011 |
20120071891 | METHOD AND APPARATUS FOR HAND GESTURE CONTROL IN A MINIMALLY INVASIVE SURGICAL SYSTEM - In a minimally invasive surgical system, a hand tracking system tracks a location of a sensor element mounted on part of a human hand. A system control parameter is generated based on the location of the part of the human hand. Operation of the minimally invasive surgical system is controlled using the system control parameter. Thus, the minimally invasive surgical system includes a hand tracking system. The hand tracking system tracks a location of part of a human hand. A controller coupled to the hand tracking system converts the location to a system control parameter, and injects into the minimally invasive surgical system a command based on the system control parameter. | 03-22-2012 |
20120071892 | METHOD AND SYSTEM FOR HAND PRESENCE DETECTION IN A MINIMALLY INVASIVE SURGICAL SYSTEM - In a minimally invasive surgical system, a hand tracking system tracks a location of a sensor element mounted on part of a human hand. A system control parameter is generated based on the location of the part of the human hand. Operation of the minimally invasive surgical system is controlled using the system control parameter. Thus, the minimally invasive surgical system includes a hand tracking system. The hand tracking system tracks a location of part of a human hand. A controller coupled to the hand tracking system converts the location to a system control parameter, and injects into the minimally invasive surgical system a command based on the system control parameter. | 03-22-2012 |
20120209287 | METHOD AND STRUCTURE FOR IMAGE LOCAL CONTRAST ENHANCEMENT - A local contrast enhancement method transforms a first plurality of color components of a first visual color image into a modified brightness component by using a first transformation. The first plurality of color components are in a first color space. The modified brightness component is a brightness component of a second color space. The second color space also includes a plurality of chromatic components. The method transforms all the color components of the first color space into the chromatic components of the second color space. The method then transforms the modified brightness component and the chromatic components of the second color space into a plurality of new color components, in the first color space, of a second visual color image. The method transmits the plurality of new color components to a device such as a display device. The second visual color image has enhanced contrast in comparison to the first visual color image. | 08-16-2012 |
20120237095 | ROBUST SPARSE IMAGE MATCHING FOR ROBOTIC SURGERY - Systems, methods, and devices are used to match images. Points of interest from a first image are identified for matching to a second image. In response to the identified points of interest, regions and features can be identified and used to match the points of interest to a corresponding second image or second series of images. Regions can be used to match the points of interest when regions of the first image are matched to the second image with high confidence scores, for example above a threshold. Features of the first image can be matched to the second image, and these matched features may be used to match the points of interest to the second image, for example when the confidence scores for the regions are below the threshold value. Constraints can be used to evaluate the matched points of interest, for example by excluding bad points. | 09-20-2012 |
20120268579 | TARGETS, FIXTURES, AND WORKFLOWS FOR CALIBRATING AN ENDOSCOPIC CAMERA - The present disclosure relates to calibration assemblies and methods for use with an imaging system, such as an endoscopic imaging system. A calibration assembly includes: an interface for constraining engagement with an endoscopic imaging system; a target coupled with the interface so as to be within the field of view of the imaging system, the target including multiple markers having calibration features that include identification features; and a processor configured to identify, from first and second images obtained at first and second relative spatial arrangements between the imaging system and the target, respectively, at least some of the markers from the identification features, and to use the identified markers and calibration feature positions within the images to generate calibration data. | 10-25-2012 |
20120290134 | ESTIMATION OF A POSITION AND ORIENTATION OF A FRAME USED IN CONTROLLING MOVEMENT OF A TOOL - A robotic system includes a camera having an image frame whose position and orientation relative to a fixed frame is determinable through one or more image frame transforms, a tool disposed within a field of view of the camera and having a tool frame whose position and orientation relative to the fixed frame is determinable through one or more tool frame transforms, and at least one processor programmed to identify pose indicating points of the tool from one or more camera captured images, determine an estimated transform for an unknown one of the image and tool frame transforms using the identified pose indicating points and known ones of the image and tool frame transforms, update a master-to-tool transform using the estimated and known ones of the image and tool frame transforms, and command movement of the tool in response to movement of a master using the updated master-to-tool transform. | 11-15-2012 |
20130046137 | SURGICAL INSTRUMENT AND METHOD WITH MULTIPLE IMAGE CAPTURE SENSORS - A surgical instrument has a distal end portion with an outer surface with an outer radius. One or more image capture elements are movably mounted in the distal end portion. In a first state, the one or more image capture elements are un-deployed. In the first state, a surface having an aperture of at least one of the one or more image capture elements is enclosed within the outer surface of the surgical instrument so that the surface having the aperture does not extend beyond the outer surface. In a second state, the one or more image capture elements are deployed. In the second state the surface having the aperture of the at least one of the one or more image capture elements extends beyond the outer surface. | 02-21-2013 |
20130085329 | THREE-DIMENSIONAL TARGET DEVICES, ASSEMBLIES AND METHODS FOR CALIBRATING AN ENDOSCOPIC CAMERA - The present disclosure relates to calibration target devices, assemblies and methods for use with imaging systems, such as a stereoscopic endoscope. A calibration assembly includes a target surface that extends in three dimensions with calibration markers, and a body with an interface that engages an endoscope so that the markers are within the field of view. A first calibration marker extends along a first plane of the target surface and a second marker extends along a second plane of the target surface. The planes are different and asymmetric relative to the field of view as seen through the endoscope. Three-dimensional targets, in particular, enable endoscopic calibration using a single image (or pair of images for a stereoscopic endoscope) to reduce the calibration process complexity, calibration time and chance of error, as well as to allow the efficient calibration of cameras at different focus positions. | 04-04-2013 |
20130166070 | OBTAINING FORCE INFORMATION IN A MINIMALLY INVASIVE SURGICAL PROCEDURE - Methods of and a system for providing force information for a robotic surgical system. The method includes storing first kinematic position information and first actual position information for a first position of an end effector; moving the end effector via the robotic surgical system from the first position to a second position; storing second kinematic position information and second actual position information for the second position; and providing force information regarding force applied to the end effector at the second position utilizing the first actual position information, the second actual position information, the first kinematic position information, and the second kinematic position information. Visual force feedback is also provided via superimposing an estimated position of an end effector without force over an image of the actual position of the end effector. Similarly, tissue elasticity visual displays may be shown. | 06-27-2013 |
20130209208 | COMPACT NEEDLE MANIPULATOR FOR TARGETED INTERVENTIONS - Embodiments of an instrument manipulator are disclosed. An instrument manipulator can include a track; a translational carriage coupled to ride along the track; a shoulder yaw joint coupled to the translational carriage; a shoulder pitch joint coupled to the shoulder yaw joint, the shoulder pitch joint including an arm, a wrist mount coupled to the arm, struts coupled between the wrist mount and the shoulder yaw joint, and a shoulder pitch mechanism coupled to the arm; a yaw-pitch-roll wrist coupled to the wrist mount, the yaw-pitch-roll wrist including a yaw joint and a differentially driven pitch-roll joint; and an instrument mount coupled to the wrist. The various joints and carriages can be driven by motors. | 08-15-2013 |
20130303892 | Systems and Methods for Navigation Based on Ordered Sensor Records - A method of tracking a medical instrument comprises receiving a model of an anatomical passageway formation and receiving a set of ordered sensor records for the medical instrument. The set of ordered sensor records provide a path history of the medical instrument. The method further comprises registering the medical instrument with the model of the anatomical passageway formation based on the path history. | 11-14-2013 |
20130303893 | Systems and Methods for Deformation Compensation Using Shape Sensing - A method and medical system for estimating the deformation of an anatomic structure that comprises generating a first model of at least one anatomical passageway from anatomical data describing a patient anatomy and determining a shape of a device positioned within the branched anatomical passageways. The method and medical system also comprise generating a second model of the plurality of branched anatomical passageways by adjusting the first model relative to the determined shape of the device. | 11-14-2013 |
20140051986 | Systems and Methods for Registration of Multiple Vision Systems - A method comprises generating a model of an anatomic region and receiving a true image from an endoscopic image capture probe positioned within the anatomic region. The method further comprises identifying a true fiducial region in the true image and identifying a plurality of virtual tissue structures in the model of the anatomic region. The method further comprises matching one of the plurality of the virtual tissue structures with the true fiducial region and determining a probe pose of the endoscopic image capture probe from the matched one of the plurality of virtual tissue structures. | 02-20-2014 |
20140055489 | RENDERING TOOL INFORMATION AS GRAPHIC OVERLAYS ON DISPLAYED IMAGES OF TOOLS - An operator telerobotically controls tools to perform a procedure on an object at a work site while viewing real-time images of the work site on a display. Tool information is provided in the operator's current gaze area on the display by rendering the tool information over the tool so as not to obscure objects being worked on at the time by the tool or to require the eyes of the user to refocus when looking at the tool information and the image of the tool on a stereo viewer. | 02-27-2014 |
20140058564 | VISUAL FORCE FEEDBACK IN A MINIMALLY INVASIVE SURGICAL PROCEDURE - Methods of and a system for providing a visual representation of force information in a robotic surgical system. A real position of a surgical end effector is determined. A projected position of the surgical end effector if no force were applied against the end effector is also determined. Images representing the real and projected positions are output superimposed on a display. The offset between the two images provides a visual indication of a force applied to the end effector or to the kinematic chain that supports the end effector. In addition, tissue deformation information is determined and displayed. | 02-27-2014 |
20140180063 | DETERMINING POSITION OF MEDICAL DEVICE IN BRANCHED ANATOMICAL STRUCTURE - Information extracted from sequential images captured from the perspective of a distal end of a medical device moving through an anatomical structure is compared with corresponding information extracted from a computer model of the anatomical structure. A most likely match between the information extracted from the sequential images and the corresponding information extracted from the computer model is then determined using probabilities associated with a set of potential matches so as to register the computer model of the anatomical structure to the medical device and thereby determine the lumen of the anatomical structure which the medical device is currently in. Sensor information may be used to limit the set of potential matches. Feature attributes associated with the sequence of images and the set of potential matches may be quantitatively compared as part of the determination of the most likely match. | 06-26-2014 |
20140187949 | Systems and Methods For Interventional Procedure Planning - A method of deploying an interventional instrument is performed by a processing system. The method comprises receiving an image of an anatomic structure from an imaging probe. The image has an image frame of reference. The imaging probe is extendable distally beyond a distal end of a guide catheter. The distal end of the guide catheter has a catheter frame of reference. The method further includes receiving location data associated with a target structure identified in the image. The received location data is in the image frame of reference. The method further comprises transforming the location data associated with the target structure from the image frame of reference to the catheter frame of reference. | 07-03-2014 |
20140188440 | Systems And Methods For Interventional Procedure Planning - A method of planning a procedure to deploy an interventional instrument comprises receiving a model of an anatomic structure. The anatomic structure includes a plurality of passageways. The method further includes identifying a target structure in the model and receiving information about an operational capability of the interventional instrument within the plurality of passageways. The method further comprises identifying a planned deployment location for positioning a distal tip of the interventional instrument to perform the procedure on the target structure based upon the operational capability of the interventional instrument. | 07-03-2014 |
20140232824 | PROVIDING INFORMATION OF TOOLS BY FILTERING IMAGE AREAS ADJACENT TO OR ON DISPLAYED IMAGES OF THE TOOLS - An operator telerobotically controls tools to perform a procedure on an object at a work site while viewing real-time images of the object, tools and work site on a display. Tool information is provided by filtering a part of the real-time images for enhancement or degradation to indicate a state of a tool and displaying the filtered images on the display. | 08-21-2014 |
20140235943 | Vision Probe with Access Port - An anatomical probe system comprises an elongated flexible body and an elongated probe extending within the flexible body. The probe has a distal end and includes an outer wall defining a channel. The probe also includes an access port in the outer wall in communication with the channel. The access port is spaced proximally of the distal end. | 08-21-2014 |
20140275997 | SHAPE SENSOR SYSTEMS FOR TRACKING INTERVENTIONAL INSTRUMENTS AND METHODS OF USE - A medical tracking system comprises a fiducial apparatus that includes a sensor docking feature configured to mate with a mating portion of a sensor device. The sensor docking feature retains the mating portion in a known configuration. The fiducial apparatus also includes at least one imageable fiducial marker and a surface configured for attachment to an anatomy of a patient. | 09-18-2014 |
20140282196 | ROBOTIC SYSTEM PROVIDING USER SELECTABLE ACTIONS ASSOCIATED WITH GAZE TRACKING - A robotic system provides user selectable actions associated with gaze tracking according to user interface types. User initiated correction and/or recalibration of the gaze tracking may be performed during the processing of individual of the user selectable actions. | 09-18-2014 |
20140343416 | SYSTEMS AND METHODS FOR ROBOTIC MEDICAL SYSTEM INTEGRATION WITH EXTERNAL IMAGING - A medical robotic system and method of operating such comprises taking intraoperative external image data of a patient anatomy, and using that image data to generate a modeling adjustment for a control system of the medical robotic system (e.g., updating anatomic model and/or refining instrument registration), and/or adjust a procedure control aspect (e.g., regulating substance or therapy delivery, improving targeting, and/or tracking performance). | 11-20-2014 |
20150025392 | EFFICIENT 3-D TELESTRATION FOR LOCAL AND REMOTE ROBOTIC PROCTORING - An apparatus is configured to show telestration in 3-D to a surgeon in real time. A proctor is shown one side of a stereo image pair, such that the proctor can draw a telestration line on the one side with an input device. Points of interest are identified for matching to the other side of the stereo image pair. In response to the identified points of interest, regions and features are identified and used to match the points of interest to the other side. Regions can be used to match the points of interest. Features of the first image can be matched to the second image and used to match the points of interest to the second image, for example when the confidence scores for the regions are below a threshold value. Constraints can be used to evaluate the matched points of interest, for example by excluding bad points. | 01-22-2015 |
20150077519 | AUGMENTED STEREOSCOPIC VISUALIZATION FOR A SURGICAL ROBOT USING A CAPTURED VISIBLE IMAGE COMBINED WITH A FLUORESCENCE IMAGE AND A CAPTURED VISIBLE IMAGE - An endoscope with a stereoscopic optical channel is held and positioned by a robotic surgical system. A capture unit captures (1) a visible first image and (2) a visible second image combined with a fluorescence second image from the light. An intelligent image processing system receives (1) the visible first image and (2) the visible second image combined with the fluorescence second image and generates at least one fluorescence image of a stereoscopic pair of fluorescence images and a visible second image. An augmented stereoscopic display system outputs a real-time stereoscopic image including a three-dimensional presentation including in one eye, a blend of the at least one fluorescence image of a stereoscopic pair of fluorescence images and one of the visible first and second images; and in the other eye, the other of the visible first and second images. | 03-19-2015 |
20150080909 | Method and System for Hand Presence Detection in a Minimally Invasive Surgical System - In a minimally invasive surgical system, a hand tracking system tracks a location of a sensor element mounted on part of a human hand. A system control parameter is generated based on the location of the part of the human hand. Operation of the minimally invasive surgical system is controlled using the system control parameter. Thus, the minimally invasive surgical system includes a hand tracking system. The hand tracking system tracks a location of part of a human hand. A controller coupled to the hand tracking system converts the location to a system control parameter, and injects into the minimally invasive surgical system a command based on the system control parameter. | 03-19-2015 |