Patent application number | Description | Published |
20120147176 | ADAPTATION FOR CLEAR PATH DETECTION WITH ADDITIONAL CLASSIFIERS - A method and system for vehicular clear path detection using adaptive machine learning techniques including additional classifiers. Digital camera images are segmented into patches, from which characteristic features are extracted representing attributes such as color and texture. The patch features are analyzed by a Support Vector Machine (SVM) or other machine learning classifier, which has been previously trained to recognize clear path image regions. For image regions or patches which result in a low confidence value, an additional classifier can be used, where the additional classifier is adaptively trained using real-world test samples which were previously classified with high confidence as clear path roadway. Outputs from the original, offline-trained classifier and the additional, adaptively updated classifier are then used to make a joint decision about clear path existence in subsequent image patches. | 06-14-2012 |
20130128044 | VISION-BASED SCENE DETECTION - A method of distinguishing between daytime lighting conditions and nighttime lighting conditions based on an image captured by a vision-based imaging device along a path of travel. An image is captured by a vision-based imaging device. A region of interest is selected in the captured image. A light intensity value is determined for each pixel within the region of interest. A cumulative histogram is generated based on light intensity values within the region of interest. The cumulative histogram includes a plurality of category bins representing the light intensity values. Each category bin identifies an aggregate value of light intensity values assigned to each respective category bin. The aggregate value within a predetermined category bin of the histogram is compared to a first predetermined threshold. A determination is made whether the image is captured during daytime lighting conditions as a function of the aggregate value within the predetermined category bin. | 05-23-2013 |
20130202152 | Selecting Visible Regions in Nighttime Images for Performing Clear Path Detection - A method provides for determining visible regions in a captured image during a nighttime lighting condition. An image is captured from an image capture device mounted to a vehicle. An intensity histogram of the captured image is generated. An intensity threshold is applied to the intensity histogram for identifying visible candidate regions of a path of travel. The intensity threshold is determined from a training technique that utilizes a plurality of training-based captured images of various scenes. An objective function is used to determine objective function values for each correlating intensity value of each training-based captured image. The objective function values and associated intensity values for each of the training-based captured images are processed for identifying a minimum objective function value and associated optimum intensity threshold for identifying the visible candidate regions of the captured image. | 08-08-2013 |
20130265424 | RECONFIGURABLE CLEAR PATH DETECTION SYSTEM - A reconfigurable clear path detection system includes an image capture device and a primary clear path detection module for determining corresponding probability values of identified patches within a captured image representing a likelihood of whether a respective patch is a clear path of the road. A plurality of secondary clear path detection modules are each used to assist in identifying a respective clear path of the traveled road in the input image. One or more of the secondary clear path detection modules are selectively enabled for identifying the clear path. The selectively enabled secondary clear path detection modules are used to identify the clear path of the road of travel in the input image. A fusion module collectively analyzes the clear path detection results of the primary clear path detection module and the selectively enabled secondary clear path detection modules for identifying the clear path in the input image. | 10-10-2013 |
20130266175 | ROAD STRUCTURE DETECTION AND TRACKING - A method for detecting road edges in a road of travel for clear path detection. Input images are captured at various time step frames. An illumination intensity image and a yellow image are generated from the captured image. Edge analysis is performed. The line candidates identified in a next frame are tracked. A vanishing point is estimated in the next frame based on the tracked line candidates. Respective line candidates are selected in the next frame. A region of interest is identified in the captured image for each line candidate. Features relating to the line candidate are extracted from the region of interest and input to a classifier. The classifier assigns a confidence value to the line candidate identifying a probability of whether the line candidate is a road edge. A line candidate is identified as a reliable road edge if its confidence value is greater than a predetermined value. | 10-10-2013 |
20130266186 | TOP-DOWN VIEW CLASSIFICATION IN CLEAR PATH DETECTION - A method of detecting a clear path in a road of travel for a vehicle utilizing a top-down view classification technique. An input image of a scene exterior of the vehicle is captured. The captured input image represents a perspective view of the road of travel. The captured input image is analyzed. A segmented top-down image that includes potential clear path regions and potential non-clear path regions is generated. The segmented top-down image represents a viewing angle perpendicular to a ground plane. The segmented regions of the segmented top-down view are input to a classifier for identifying the clear path regions of travel. The identified clear path regions are utilized for navigating the road of travel. | 10-10-2013 |
20130266226 | TEMPORAL COHERENCE IN CLEAR PATH DETECTION - A method of detecting a clear path of travel. Input images are captured at various time step frames. Clear path probability maps of a current and previous time step frames are generated. A corresponding clear path probability map is generated for the current time step frame derived as a function of the clear path probability map of the previous time step frame and of a corresponding mapping that coherently links the previous time step frame to the current time step frame. A weight-matching map is generated. The probability values of the current time step frame are updated as a function of the corresponding probability map. A current frame probability decision map is generated based on updated probability values of the current time step frame. The clear path in the image of the current time step is identified based on the current frame probability decision map. | 10-10-2013 |
20140085409 | WIDE FOV CAMERA IMAGE CALIBRATION AND DE-WARPING - A system and method for providing calibration and de-warping for ultra-wide FOV cameras. The method includes estimating intrinsic parameters such as the focal length of the camera and an image center of the camera using multiple measurements of the near optical axis object points and a pinhole camera model. The method further includes estimating distortion parameters of the camera using an angular distortion model that defines an angular relationship between an incident optical ray passing an object point in an object space and an image point on an image plane that is an image of the object point on the incident optical ray. The method can include a parameter optimization process to refine the parameter estimation. | 03-27-2014 |
20140104424 | IMAGING SURFACE MODELING FOR CAMERA MODELING AND VIRTUAL VIEW SYNTHESIS - A method for displaying a captured image on a display device. A real image is captured by a vision-based imaging device. A virtual image is generated from the captured real image based on a mapping by a processor. The mapping utilizes a virtual camera model with a non-planar imaging surface. The virtual image formed on the non-planar imaging surface of the virtual camera model is projected to the display device. | 04-17-2014 |
20140111637 | Dynamic Rearview Mirror Adaptive Dimming Overlay Through Scene Brightness Estimation - A vehicle imaging system includes an image capture device capturing an image exterior of a vehicle. The captured image includes at least a portion of a sky scene. A processor generates a virtual image of a virtual sky scene from the portion of the sky scene captured by the image capture device. The processor determines a brightness of the virtual sky scene from the virtual image. The processor dynamically adjusts a brightness of the captured image based on the determined brightness of the virtual image. A rear view mirror display device displays the adjusted captured image. | 04-24-2014 |
20140114534 | DYNAMIC REARVIEW MIRROR DISPLAY FEATURES - A method for displaying a captured image on a display device. A scene is captured by at least one vision-based imaging device. A virtual image of the captured scene is generated by a processor using a camera model. A view synthesis technique is applied to the captured image by the processor for generating a de-warped virtual image. A dynamic rearview mirror display mode is actuated for enabling a viewing mode of the de-warped image on the rearview mirror display device. The de-warped image is displayed in the enabled viewing mode on the rearview mirror display device. | 04-24-2014 |
20140176724 | SPLIT SUB-PIXEL IMAGING CHIP WITH IR-PASS FILTER COATING APPLIED ON SELECTED SUB-PIXELS - An apparatus for capturing an image includes a plurality of lens elements coaxially encompassed within a lens housing. A split-sub-pixel imaging chip includes an IR-pass filter coating applied on selected sub-pixels. The sub-pixels include a long-exposure sub-pixel and a short-exposure sub-pixel for each of a plurality of green, blue and red pixels. | 06-26-2014 |
20140176781 | CAMERA HARDWARE DESIGN FOR DYNAMIC REARVIEW MIRROR - An apparatus for capturing an image includes a plurality of lens elements coaxially encompassed within a lens housing. One of the lens elements includes an aspheric lens element having a surface profile configured to enhance a desired region of a captured image. At least one glare-reducing element coaxial with the plurality of lens elements receives light subsequent to the light sequentially passing through each of the lens elements. An imaging chip receives the light subsequent to the light passing through the at least one glare-reducing element. The imaging chip includes a plurality of green, blue and red pixels. | 06-26-2014 |
20140192227 | GLARING REDUCTION FOR DYNAMIC REARVIEW MIRROR - A method for generating a glare-reduced image from images captured by a camera device of a subject vehicle includes obtaining a short-exposure image and a long-exposure image and generating a resulting high dynamic range image based on the short-exposure and long-exposure images. Pixel values are monitored within both the short- and long-exposure images. A light source region is identified within both the short- and long-exposure images based on the monitored pixel values. A glaring region is identified based on the identified light source region and one of calculated pixel ratios and calculated pixel differences between the monitored pixel values of the long- and short-exposure images. The identified glaring region in the resulting high dynamic range image is modified using the identified light source region from the short-exposure image. The glare-reduced image is generated based on the modified glaring region in the resulting HDR image. | 07-10-2014 |
20140193032 | IMAGE SUPER-RESOLUTION FOR DYNAMIC REARVIEW MIRROR - A method for applying super-resolution to images captured by a camera device of a vehicle includes receiving a plurality of image frames captured by the camera device. For each image frame, a region of interest requiring increased per-pixel detail is identified within the image frame. Spatially-implemented super-resolution is applied to the region of interest within each image frame to enhance image sharpness within the region of interest. | 07-10-2014 |
20140347469 | ENHANCED PERSPECTIVE VIEW GENERATION IN A FRONT CURB VIEWING SYSTEM - A system and method for creating an enhanced perspective view of an area in front of a vehicle, using images from left-front and right-front cameras. The enhanced perspective view removes the distortion and exaggerated perspective effects which are inherent in wide-angle lens images. The enhanced perspective view uses a camera model including a virtual image surface, along with other processing techniques, to correct two types of problems typically present in de-warped perspective images: a stretching effect at the peripheral area of a wide-angle image de-warped by rectilinear projection, and a double image of objects in the area where the left-front and right-front camera images overlap. | 11-27-2014 |
20140347470 | ENHANCED TOP-DOWN VIEW GENERATION IN A FRONT CURB VIEWING SYSTEM - A system and method for creating an enhanced virtual top-down view of an area in front of a vehicle, using images from left-front and right-front cameras. The enhanced virtual top-down view not only provides the driver with a top-down view perspective which is not directly available from raw camera images, but also removes the distortion and exaggerated perspective effects which are inherent in wide-angle lens images. The enhanced virtual top-down view also includes corrections for three types of problems which are typically present in de-warped images—including artificial protrusion of vehicle body parts into the image, low resolution and noise around the edges of the image, and a “double vision” effect for objects above ground level. | 11-27-2014 |
20140347485 | ENHANCED FRONT CURB VIEWING SYSTEM - A system and method for determining when to display frontal curb view images to a driver of a vehicle, and what types of images to display. A variety of factors—such as vehicle speed, GPS/location data, the existence of a curb in forward-view images, and vehicle driving history—are evaluated as potential triggers for the curb view display, which is intended for situations where the driver is pulling the vehicle into a parking spot which is bounded in front by a curb or other structure. When forward curb-view display is triggered, a second evaluation is performed to determine what image or images to display which will provide the best view of the vehicle's position relative to the curb. The selected images are digitally synthesized or enhanced, and displayed on a console-mounted or in-dash display device. | 11-27-2014 |
20150042799 | OBJECT HIGHLIGHTING AND SENSING IN VEHICLE IMAGE DISPLAY SYSTEMS - A method of displaying a captured image on a display device of a driven vehicle. A scene exterior of the driven vehicle is captured by at least one vision-based imaging device mounted on the driven vehicle. Objects in a vicinity of the driven vehicle are sensed. An image of the captured scene is generated by a processor. The image is dynamically expanded to include sensed objects in the image. The sensed objects are highlighted in the dynamically expanded image. The highlighted objects identify vehicles proximate to the driven vehicle that pose potential collision threats to the driven vehicle. The dynamically expanded image is displayed with highlighted objects in the display device. | 02-12-2015 |
20150077560 | FRONT CURB VIEWING SYSTEM BASED UPON DUAL CAMERAS - Methods and systems are provided for generating a curb view virtual image to assist a driver of a vehicle. The method includes capturing a first and second real image from a first and second camera having a forward-looking field of view. The first and second images are de-warped and combined to form a curb view virtual image view of the vehicle, which is displayed on a display within the vehicle. The system includes a first and second camera having a forward-looking field of view to provide a first and second real image. A processor coupled to the first camera and the second camera is configured to de-warp and combine the first and second real images to form a curb view virtual image view for display within the vehicle. The curb view virtual image may be a top-down virtual image view or a perspective virtual image view. | 03-19-2015 |
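The cumulative-histogram scene test described in 20130128044 can be sketched as follows. This is a minimal illustration, not the patented procedure: the dark-bin index (`dark_bin`) and the cumulative-fraction threshold (`threshold`) are assumed values chosen for the example.

```python
import numpy as np

def classify_daytime(gray_roi, dark_bin=64, threshold=0.5):
    """Classify lighting condition from a grayscale region of interest.

    Builds a cumulative histogram of pixel intensities and compares the
    cumulative fraction of pixels at or below `dark_bin` to `threshold`:
    if most pixels fall in the dark bins, the scene is judged nighttime.
    Both parameters are illustrative assumptions, not values from the
    patent application.
    """
    hist, _ = np.histogram(gray_roi, bins=256, range=(0, 256))
    # Cumulative fraction of pixels per intensity bin (the "aggregate value").
    cumulative = np.cumsum(hist) / gray_roi.size
    # Daytime when fewer than `threshold` of the pixels are dark.
    return bool(cumulative[dark_bin] < threshold)

# A bright synthetic ROI classifies as daytime, a dark one as nighttime.
bright = np.full((100, 100), 180, dtype=np.uint8)
dark = np.full((100, 100), 20, dtype=np.uint8)
print(classify_daytime(bright), classify_daytime(dark))  # True False
```

In practice the region of interest would be selected to exclude headlights and dashboard reflections, so the histogram reflects ambient illumination rather than point sources.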
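The angular distortion model in 20140085409, which relates an incident ray's angle to its image-plane radius, can be sketched with a common polynomial fisheye parameterization, r = f(θ + k₁θ³ + k₂θ⁵). This form is an assumption made for illustration; the application's exact model may differ, and the Newton inversion shown here is just one way to implement de-warping.

```python
import numpy as np

def project_angular(point_3d, f, center, k=(0.0, 0.0)):
    """Project a camera-frame 3D point through an angular (fisheye) model.

    The image radius is a polynomial in the incident angle theta between
    the optical axis and the ray: r = f*(theta + k1*theta^3 + k2*theta^5).
    """
    x, y, z = point_3d
    theta = np.arctan2(np.hypot(x, y), z)   # angle of the incident ray
    r = f * (theta + k[0] * theta**3 + k[1] * theta**5)
    phi = np.arctan2(y, x)                  # azimuth in the image plane
    return center[0] + r * np.cos(phi), center[1] + r * np.sin(phi)

def undistort_radius(r, f, k=(0.0, 0.0), iters=20):
    """Invert r(theta) by Newton's method, then return the rectilinear
    (pinhole) radius f*tan(theta) used when de-warping the image."""
    theta = r / f                           # initial guess: no distortion
    for _ in range(iters):
        g = f * (theta + k[0] * theta**3 + k[1] * theta**5) - r
        dg = f * (1 + 3 * k[0] * theta**2 + 5 * k[1] * theta**4)
        theta -= g / dg
    return f * np.tan(theta)
```

With the distortion coefficients set to zero the model reduces to the equidistant fisheye projection r = fθ, which is why near-axis measurements suffice to estimate the focal length and image center before the distortion parameters are refined.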
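The pixel-ratio route to glare identification in 20140192227 can be sketched as below, assuming aligned grayscale long- and short-exposure frames. The saturation level, ratio threshold, exposure ratio, and the simple patching rule are all illustrative assumptions, not the patented method.

```python
import numpy as np

def reduce_glare(short_img, long_img, exposure_ratio=4.0,
                 sat_level=250.0, ratio_thresh=4.0):
    """Find a glaring region from long/short pixel ratios and patch it.

    Light-source candidates are pixels saturated in the long exposure;
    glare pixels are those whose long/short ratio exceeds `ratio_thresh`.
    Glare pixels are replaced with brightness-compensated short-exposure
    values. All thresholds here are assumed example values.
    """
    short_f = np.asarray(short_img, dtype=np.float64)
    long_f = np.asarray(long_img, dtype=np.float64)
    light_source = long_f >= sat_level            # saturated in long exposure
    ratio = long_f / np.maximum(short_f, 1.0)     # guard against divide-by-zero
    glaring = light_source & (ratio > ratio_thresh)
    result = long_f.copy()
    # Recover detail: scale short-exposure values up by the exposure ratio.
    result[glaring] = np.minimum(short_f[glaring] * exposure_ratio, 255.0)
    return result, glaring
```

A real pipeline would operate on the fused HDR frame and feather the patched boundary, but the ratio test above captures the core idea: a true light source stays bright in both exposures, while bloomed glare collapses in the short one.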