This tutorial covers usage of the CARS OTB version through Docker, and adds an advanced tutorial on AI-based MC-CNN matching through the Pandora configuration.
CARS Team
Check Docker install:
docker -v
Get the CARS tutorial Docker images:
docker pull cnes/cars
docker pull cnes/cars-jupyter
docker pull cnes/cars-tutorial
Show docker images:
docker images
You should get the cnes/cars, cnes/cars-jupyter and cnes/cars-tutorial Docker images.
Run the tutorial:
docker run -p 8000:8000 cnes/cars-tutorial
Go to http://localhost:8000 for this tutorial.
Run jupyter notebook:
docker run -p 8888:8888 cnes/cars-jupyter
Go to the provided output URL and navigate to the CARS tutorial notebooks. The URL is of the form http://localhost:8888 followed by a token.
CARS produces Digital Surface Models from satellite imaging by photogrammetry.
Main goals:
Be aware that CARS is still young and evolving towards maturity, following CNES roadmaps
License: Apache-2.0
Web sites:
[Figure: sensor images → elevation model]

Pandora, dense matching tool
Orfeo ToolBox (OTB), general-purpose image processing library
VLFeat, sparse matching (SIFT)
Demcompare, to compare DEMs
Shareloc, a simple geometry library
and others to come :)
See Authors.md for the full list of contributions in the Github repository.
See Contributing.md
DEM: Digital Elevation Model. Generic term for all raster elevation models: DSM, DTM, …
DSM: Digital Surface Model. Represents the earth's surface and includes all objects on it. CARS generates DSMs.
DTM: Digital Terrain Model. Represents the bare ground surface without any objects such as plants and buildings.
ROI: Region of Interest. In CARS, a subpart of the DSM raster.
3D: geometric setting in which three values are required to determine the position of an element (typically a point)
A lot of applications in many fields: 3D printing, biology, architecture, ...
Our application here:
GIS 3D Earth cartography with Digital Surface Models!
DSM = a raster grid image where each pixel contains an elevation value.
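As a minimal illustration of this raster representation (synthetic values and a hypothetical georeferencing, not CARS code):

```python
import numpy as np

# A DSM is just a 2D grid: dsm[row, col] = elevation in meters.
# Synthetic 4x4 DSM standing in for a GeoTIFF band.
dsm = np.array([
    [10.0, 10.5, 11.0, 11.5],
    [10.2, 10.8, 11.3, 11.9],
    [10.4, 11.0, 11.6, 12.2],
    [10.6, 11.2, 11.9, 12.5],
])

# Georeferencing maps grid indices to map coordinates: here a simple
# affine transform with origin (x0, y0) and 0.5 m pixel size (assumed values).
x0, y0, res = 1000.0, 2000.0, 0.5

def elevation_at(x, y):
    """Nearest-neighbour elevation lookup in map coordinates."""
    col = int(round((x - x0) / res))
    row = int(round((y0 - y) / res))  # the y axis points down in raster space
    return dsm[row, col]

print(elevation_at(1001.0, 1999.0))  # pixel (row=2, col=2) -> 11.6
```

Real DSM files additionally carry a coordinate reference system and no-data values, which libraries such as rasterio or GDAL expose.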
Several methods exist.

Inputs: N raster sensor images + geometric models (altitude, satellite position).
Output: a raster terrain DSM.
From one image point and its geometric viewing direction, how do we get the altitude? At least two images are needed!
For each point in one image, find the correspondent point in the other image.
Triangulation generates point clouds.
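The triangulation step can be sketched as a least-squares intersection of two viewing rays (a simplified midpoint illustration; CARS triangulates using the sensors' geometric models):

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint triangulation sketch: for two 3D rays o + t*d, find
    the parameters t1, t2 minimizing the distance between the two
    ray points, and return the midpoint of that shortest segment."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve [d1 -d2] [t1 t2]^T = o2 - o1 in the least-squares sense
    A = np.stack([d1, -d2], axis=1)  # 3x2 system
    t, *_ = np.linalg.lstsq(A, o2 - o1, rcond=None)
    p1 = o1 + t[0] * d1
    p2 = o2 + t[1] * d2
    return (p1 + p2) / 2.0

# Two hypothetical viewing rays converging near (0, 0, 500):
o1, d1 = np.array([-100.0, 0.0, 0.0]), np.array([100.0, 0.0, 500.0])
o2, d2 = np.array([100.0, 0.0, 0.0]), np.array([-100.0, 0.0, 500.0])
print(triangulate(o1, d1, o2, d2))  # ~ [0, 0, 500]
```

With real sensors the two rays rarely intersect exactly (calibration and matching errors), which is why a least-squares formulation is used.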
To be usable as a raster image, a rasterization process projects each point onto a 2D grid to generate the DSM.
Many methods are possible here too.
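One family of methods accumulates point altitudes per grid cell with distance-based weights. A minimal sketch, loosely inspired by a Gaussian-weighted approach (illustrative only, not the CARS implementation):

```python
import numpy as np

def rasterize(points, origin, res, shape, sigma=0.3):
    """Project 3D points (x, y, z) onto a regular grid: each point
    contributes its altitude z to its containing cell, weighted by a
    Gaussian of its distance to the cell centre."""
    acc = np.zeros(shape)
    weights = np.zeros(shape)
    x0, y0 = origin
    for x, y, z in points:
        col = int((x - x0) / res)
        row = int((y0 - y) / res)  # raster y axis points down
        if 0 <= row < shape[0] and 0 <= col < shape[1]:
            cx = x0 + (col + 0.5) * res  # cell centre
            cy = y0 - (row + 0.5) * res
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            w = np.exp(-d2 / (2 * sigma ** 2))
            acc[row, col] += w * z
            weights[row, col] += w
    dsm = np.full(shape, np.nan)  # cells with no points stay no-data
    valid = weights > 0
    dsm[valid] = acc[valid] / weights[valid]
    return dsm

pts = [(0.25, -0.25, 10.0), (0.25, -0.25, 12.0), (1.5, -1.5, 20.0)]
dsm = rasterize(pts, origin=(0.0, 0.0), res=1.0, shape=(2, 2))
print(dsm)  # cell (0,0) ~ 11.0 (weighted mean of 10 and 12), cell (1,1) = 20.0
```

Variants differ in the neighbourhood radius, the weighting kernel, and how empty or conflicting cells are handled.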
Objectives:
Technologies used:
[1] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2(60):91-110, 2004.
[2] H. Hirschmuller, "Stereo Processing by Semiglobal Matching and Mutual Information," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 328-341, Feb. 2008. doi: 10.1109/TPAMI.2007.1166
a generic "CarsDataset" has been designed as the internal data structure. It:
can contain arrays (e.g. for images) or lists of points (e.g. for sparse matches);
can contain georeference and geometry information;
can be used for parallel/distributed computation, i.e. tiled data;
can contain overlaps between tiled data.
An orchestration framework has been set up to manage the computing distribution.
The DASK framework can be used locally (local_dask) or through PBS on an HPC cluster (pbs_dask). The orchestrator framework separates the 3D pipeline from the computing distribution. Features:
cars -h
usage: cars [-h] [--loglevel {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [--version] conf
CARS: CNES Algorithms to Reconstruct Surface
positional arguments:
conf Inputs Configuration File
optional arguments:
-h, --help show this help message and exit
--loglevel {DEBUG,INFO,WARNING,ERROR,CRITICAL}
Logger level (default: WARNING). Should be one of (DEBUG, INFO, WARNING, ERROR, CRITICAL)
--version, -v show program's version number and exit
cars configfile.json
{
"pipeline": ...,
"inputs": {
...
},
"applications":{
...
},
"orchestrator": {
...
},
"output": {
...
}
}
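Since the configuration file is plain JSON, it can also be generated programmatically. A minimal sketch, reusing the demo file names that appear later in this tutorial (adapt the paths to your own data):

```python
import json

# Build the CARS configuration as a plain dict, then dump it to JSON.
conf = {
    "inputs": {
        "sensors": {
            "one": {"image": "img1.tif", "geomodel": "img1.geom", "no_data": 0},
            "two": {"image": "img2.tif", "geomodel": "img2.geom", "no_data": 0},
        },
        "pairing": [["one", "two"]],
        "initial_elevation": "srtm_dir",
    },
    "output": {"out_dir": "outresults"},
}

with open("configfile.json", "w") as f:
    json.dump(conf, f, indent=4)

# Then run: cars configfile.json
```

This is handy when generating many configurations (e.g. one per image pair) from a script.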
Two possibilities:
Set sensors, geometric models, pairing, initial_elevation.
{
"inputs": {
"sensors" : {
"one": {
"image": "img1.tif",
"geomodel": "img1.geom",
"no_data": 0
},
"two": {
"image": "img2.tif",
"geomodel": "img2.geom",
"no_data": 0
}
},
"pairing": [["one", "two"]],
"initial_elevation": "srtm_dir"
},
Allows redefining the default parameters of each application used by the pipeline, and configuring the pipeline itself.
Define orchestrator parameters that control the distributed computation:
mode: parallelization mode: "local_dask", "pbs_dask" or "mp"
nb_workers: number of workers
walltime: depends on the mode
Depends on the pipeline. For the main pipeline, for example:
"output": {
"out_dir": "myoutputfolder",
"dsm_basename": "mydsm.tif"
}
Follows general 3D concepts
Epipolar resampling |
Dense Matching | Triangulation | Rasterization | |
First SIFT Sparse matching steps for each pair:
use an adapted epipolar geometry: the null disparity is based on a reference DTM (typically SRTM)
Web site: https://github.com/CNES/pandora
[1] A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms, D. Scharstein and R. Szeliski, vol. 47, International Journal of Computer Vision, 2002
For each stereo sensor pair:
Compute the stereo-rectification grids of the input pair's images.
Resample the image pairs in epipolar geometry.
Compute SIFT matches between the left and right images in epipolar geometry.
Predict an optimal disparity range from the filtered point cloud resulting from the triangulation of the SIFT matches.
Create a bilinear correction model of the right image's stereo-rectification grid in order to minimize the epipolar error. Apply the estimated correction to the right grid.
Resample the stereo pair again in epipolar geometry (using the corrected grid for the right image), using the input DTM (such as SRTM) to reduce the disparity intervals to explore.
Compute the disparity for each image pair in epipolar geometry.
Triangulate the matches to get, for each pixel of the reference image, a latitude, longitude and altitude coordinate.
For all pairs:
Merge the point clouds coming from each stereo pair.
Filter the 3D point cloud with two consecutive filters: the first removes small groups of 3D points; the second removes the points whose neighbors are the most scattered.
Project these altitudes, together with the associated color, onto a regular grid.
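The second (statistical) filter can be illustrated with a minimal sketch: drop points whose mean distance to their nearest neighbours is abnormally large. This is a simplified stand-in, not the CARS implementation:

```python
import numpy as np

def statistical_filter(points, k_neighbors=3, std_factor=1.0):
    """Keep points whose mean distance to their k nearest neighbours
    is below mean + std_factor * std, computed over the whole cloud."""
    pts = np.asarray(points, dtype=float)
    # Brute-force pairwise distances (fine for a small example)
    diff = pts[:, None, :] - pts[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)  # ignore self-distances
    knn = np.sort(dists, axis=1)[:, :k_neighbors]
    mean_d = knn.mean(axis=1)
    threshold = mean_d.mean() + std_factor * mean_d.std()
    return pts[mean_d <= threshold]

# A tight cluster of 4 points plus one isolated outlier:
cloud = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0), (0.1, 0.1, 0), (10, 10, 10)]
print(statistical_filter(cloud).shape[0])  # 4: the outlier is removed
```

Production implementations use spatial indexes (k-d trees) instead of brute-force distances to scale to millions of points.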
Download CARS Quick Start
mkdir /tmp/cars-tuto/
cd /tmp/cars-tuto/
wget https://raw.githubusercontent.com/CNES/cars/master/tutorials/quick_start.sh
Warning: Internet needed to download demo data.
Run quick_start.sh script
./quick_start.sh
==== Demo CARS (with Docker) =====
- Docker must be installed:
# docker -v
Docker version 20.10.17, build 100c701
- Get CARS dockerfile image:
# docker pull cnes/cars
- Get and extract data samples from CARS repository:
! File data_gizeh.tar.bz2 already exists.
# md5sum --status -c data_gizeh.tar.bz2.md5sum
md5sum: data_gizeh.tar.bz2.md5sum: No such file or directory
! Md5sum check: KO. Exit.
# tar xvfj data_gizeh.tar.bz2
data_gizeh/srtm_dir/N29E031_KHEOPS.tif
data_gizeh/configfile.json
data_gizeh/img1.geom
data_gizeh/img2.geom
data_gizeh/color1.geom
data_gizeh/img3.tif
data_gizeh/img2.tif
data_gizeh/open-licence-etalab-v2.0-fr.pdf
data_gizeh/img3.geom
data_gizeh/color1.tif
data_gizeh/open-licence-etalab-v2.0-en.pdf
data_gizeh/img1.tif
data_gizeh/srtm_dir/
data_gizeh/
Run quick_start.sh script
./quick_start.sh
- Launch CARS with sensor_to_full_resolution_dsm pipeline for img1+img2 and img1+img3 pairs:
# cars configfile.json
Processing Tiles : [ epi_matches_left ] ...: 100%|█| 16/16 [00:13<00:00, 1.20it
Processing Tiles : [ epi_matches_left ] ...: 100%|██████████████████████████████████████████████████████████| 16/16 [00:13<00:00, 1.22it/s]
Processing Tiles : [ color , dsm ] ...: 100%|█████████████████████████████████████████████████████████████████| 4/4 [01:12<00:00, 18.13s/it]
Run quick_start.sh script
./quick_start.sh
- Show resulting DSM:
# ls -al outresults/
total 37556
-rw-rw-r-- 1 duboise duboise 0 Jun 16 15:36 23-06-16_15h36m_sensor_to_dense_dsm.log
-rw-rw-r-- 1 duboise duboise 25166744 Jun 16 15:38 clr.tif
-rw-rw-r-- 1 duboise duboise 7806 Jun 16 15:37 content.json
-rw-rw-r-- 1 duboise duboise 9621 Jun 16 15:36 dask_config_unknown.yaml
-rw-rw-r-- 1 duboise duboise 16778119 Jun 16 15:38 dsm.tif
drwxrwxr-x 2 duboise duboise 4096 Jun 16 15:38 one_three
drwxrwxr-x 2 duboise duboise 4096 Jun 16 15:38 one_two
-rw-rw-r-- 1 duboise duboise 8616 Jun 16 15:37 used_conf.json
drwxrwxr-x 2 duboise duboise 4096 Jun 16 15:36 workers_log
dsm.tif | clr.tif | clr and dsm colored composition
See configuration
cat configfile.json
{
"inputs": {
"sensors" : {
"one": {
"image": "img1.tif",
"geomodel": "img1.geom",
"color": "color1.tif",
"no_data": 0
},
"two": {
"image": "img2.tif",
"geomodel": "img2.geom",
"no_data": 0
},
"three": {
"image": "img3.tif",
"geomodel": "img3.geom",
"no_data": 0
}
},
"pairing": [["one", "two"],["one", "three"]],
"initial_elevation": "srtm_dir"
},
"output": {
"out_dir": "outresults"
}
}
When installing CARS directly (which requires installing OTB and VLFeat), a quick_start_advanced.sh script runs the same example without Docker:
./quick_start_advanced.sh
For more details, see Dockerfile on Github repo.
From cars-jupyter docker:
docker run -p 8888:8888 cnes/cars-jupyter
This runs a Jupyter notebook directly at http://localhost:8888/
Add orchestration configuration in input json file:
"orchestrator": {
"mode": "local_dask",
"nb_workers": 4
},
Run CARS again to see the 4 workers: cars --loglevel INFO configfile.json
Add orchestration configuration in input json file:
"orchestrator": {
"mode": "sequential"
},
Run CARS again: cars --loglevel INFO configfile.json
Add the application configuration in the input json file and define parameters for the dense matching application:
"applications": {
"dense_matching":{
"method": "census_sgm",
"loader": "pandora",
"save_disparity_map": true
}
},
Run CARS again: cars --loglevel INFO configfile.json
# ls -l data_gizeh/outresults/
total 44580
-rw-r--r-- 1 carcars carcars 0 Aug 6 00:42 22-08-05_22h42m_sensor_to_full_res_dsm.log
-rw-r--r-- 1 carcars carcars 33555362 Aug 6 00:46 clr.tif
-rw-r--r-- 1 carcars carcars 9120 Aug 6 00:43 content.json
-rw-r--r-- 1 carcars carcars 7864 Aug 6 00:42 dask_config_unknown.yaml
-rw-r--r-- 1 carcars carcars 16778119 Aug 6 00:46 dsm.tif
drwxr-xr-x 2 carcars carcars 4096 Aug 6 00:46 one_three
drwxr-xr-x 2 carcars carcars 4096 Aug 6 00:46 one_two
# ls -l data_gizeh/outresults/one_two
-rw-r--r-- 1 carcars carcars 9120 Aug 6 00:43 epi_disp_color_left.tif
-rw-r--r-- 1 carcars carcars 7864 Aug 6 00:42 epi_disp_left.tif
-rw-r--r-- 1 carcars carcars 16778119 Aug 6 00:46 epi_disp_mask_left.tif
Add the application configuration in the input json file and define parameters for the rasterization application:
"applications": {
"point_cloud_rasterization": {
"method": "simple_gaussian",
"dsm_radius": 3,
"sigma": 0.3
}
},
Run CARS again: cars --loglevel INFO configfile.json
Step-by-step tutorial of sensor_to_dense_dsm_pipeline for one pair.
The tutorial can be run directly through the "sensor_to_dense_dsm_step_by_step.ipynb" notebook (no presentation mode)
# import external notebooks helpers function for tutorial
from notebook_helpers import get_full_data, show_data, save_data
from notebook_helpers import get_dir_path, set_up_demo_inputs
# CARS imports
from cars.applications.application import Application
from cars.applications.grid_generation import grid_correction
from cars.applications.sparse_matching import sparse_matching_tools
import cars.pipelines.sensor_to_dense_dsm.sensor_dense_dsm_constants as sens_cst
from cars.pipelines.sensor_to_dense_dsm import sensors_inputs
from cars.pipelines.sensor_to_dense_dsm import dsm_output
from cars.core import cars_logging
from cars.core import inputs, preprocessing
from cars.core.utils import safe_makedirs
from cars.orchestrator import orchestrator
from cars.core.utils import make_relative_path_absolute
from cars import __version__
#print("CARS version used : {}".format(__version__))
# Modify with your own output path if needed
output_dir = os.path.join(get_dir_path(), "output_tutorial")
#print(output_dir)
# By default, the tutorial use data_gizeh_small.tar.bz2
input_dir_path = set_up_demo_inputs("data_gizeh_small")
inputs_conf = {
"sensors": {
"left": {
"image": os.path.join(input_dir_path, "img1.tif"),
"geomodel": os.path.join(input_dir_path, "img1.geom"),
"color": os.path.join(input_dir_path, "color1.tif"),
"no_data": 0,
},
"right": {
"image": os.path.join(input_dir_path, "img2.tif"),
"geomodel": os.path.join(input_dir_path, "img2.geom"),
"no_data": 0,
},
},
"pairing": [["left", "right"]],
"initial_elevation": os.path.join(input_dir_path, "srtm_dir/N29E031_KHEOPS.tif")
}
inputs = sensors_inputs.sensors_check_inputs(inputs_conf)
#pp.pprint(inputs)
{'check_inputs': False,
'default_alt': 0,
'epsg': None,
'geoid': 'PATH_CARS_GEOID',
'initial_elevation': 'PATH_TUTORIAL/data_gizeh_small/srtm_dir',
'pairing': [['left', 'right']],
'roi': None,
'sensors': {
'left': { 'color': 'PATH_TUTORIAL/data_gizeh_small/color1.tif',
'geomodel': 'PATH_TUTORIAL/data_gizeh_small/img1.geom',
'image': 'PATH_TUTORIAL/data_gizeh_small/img1.tif',
'mask': None,
'classification': None,
'no_data': 0},
'right': { 'color': 'PATH_TUTORIAL/data_gizeh_small/img2.tif',
'geomodel': 'PATH_TUTORIAL/data_gizeh_small/img2.geom',
'image': 'PATH_TUTORIAL/data_gizeh_small/img2.tif',
'mask': None,
'classification': None,
'no_data': 0}}}
This application generates epipolar grids corresponding to sensor pair
epipolar_grid_generation_application = Application("grid_generation")
This application generates epipolar images from epipolar grids
resampling_application = Application("resampling")
This application generates sparse matches of stereo image pairs
sparse_matching_application = Application("sparse_matching")
This application generates dense matches of stereo image pairs
dense_matching_application = Application("dense_matching")
This application triangulates matches, in order to get each (X, Y, Z) point position
triangulation_application = Application("triangulation")
This application performs the fusion of epipolar points from pairs to a terrain point cloud
pc_fusion_application = Application("point_cloud_fusion")
This application removes outlier points. The method used here is "small components" removal
conf_outlier_removing_small_components = {"method": "small_components", "activated": True}
(
pc_outlier_removing_small_comp_application
) = Application("point_cloud_outliers_removing",
cfg=conf_outlier_removing_small_components)
This application removes outlier points. The method used here is "statistical" removal
conf_outlier_removing_small_statistical = {"method": "statistical", "activated": True}
pc_outlier_removing_stats_application = Application(
"point_cloud_outliers_removing",
cfg=conf_outlier_removing_small_statistical,
)
This application performs the rasterization of a terrain point cloud.
conf_rasterization = {
"method": "simple_gaussian",
"dsm_radius": 3,
"sigma": 0.3
}
rasterization_application = Application("point_cloud_rasterization",
cfg=conf_rasterization)
# Example with dense matching application
dense_matching_application.print_config()
{'epipolar_tile_margin_in_percent': 60,
'loader': 'pandora',
'loader_conf': {
'input': {'nodata_left': -9999, 'nodata_right': -9999},
'pipeline': { 'disparity': { 'disparity_method': 'wta',
'invalid_disparity': nan},
'filter': { 'filter_method': 'median',
'filter_size': 3},
'matching_cost': { 'matching_cost_method': 'census',
'subpix': 1,
'window_size': 5},
'optimization': { 'P1': 8,
'P2': 32,
'min_cost_paths': False,
'optimization_method': 'sgm',
'overcounting': False,
'p2_method': 'constant',
'penalty_method': 'sgm_penalty',
'piecewise_optimization_layer': 'None',
'sgm_version': 'c++',
'use_confidence': False},
'refinement': { 'refinement_method': 'vfit'},
'right_disp_map': {'method': 'accurate'},
'validation': { 'cross_checking_threshold': 1.0,
'validation_method': 'cross_checking'}}},
'max_elevation_offset': None,
'max_epi_tile_size': 1500,
'method': 'census_sgm',
'min_elevation_offset': None,
'min_epi_tile_size': 300,
'save_disparity_map': False}
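The 'matching_cost_method': 'census' entry above refers to the census transform. As a rough illustration only (not Pandora's implementation), a minimal census transform and its Hamming matching cost could look like:

```python
import numpy as np

def census_transform(img, w=3):
    """Census transform sketch: encode each pixel as a bit string
    comparing its w x w neighbours to the centre value.
    Borders wrap around here for simplicity."""
    r = w // 2
    out = np.zeros(img.shape, dtype=np.int64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            out = (out << 1) | (neighbour < img).astype(np.int64)
    return out

def hamming_cost(c1, c2):
    """Matching cost = number of differing census bits per pixel."""
    x = c1 ^ c2
    return np.array([bin(int(v)).count("1") for v in x.ravel()]).reshape(x.shape)

left = np.arange(25, dtype=float).reshape(5, 5)
cl, cr = census_transform(left), census_transform(left.copy())
print(hamming_cost(cl, cr).max())  # 0: identical images match perfectly
```

Because the census code only encodes the ordering of neighbouring intensities, the resulting cost is robust to radiometric differences between the two images, which is why it pairs well with SGM optimization.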
# Use sequential mode in notebook
orchestrator_conf = {"mode": "sequential"}
cars_orchestrator = orchestrator.Orchestrator(
orchestrator_conf=orchestrator_conf,
out_dir=output_dir)
From the input configuration "inputs" seen before:
(
_,
sensor_image_left,
sensor_image_right
) = sensors_inputs.generate_inputs(inputs)[0]
grid_left, grid_right = epipolar_grid_generation_application.run(
sensor_image_left,
sensor_image_right,
orchestrator=cars_orchestrator,
srtm_dir=inputs[sens_cst.INITIAL_ELEVATION],
default_alt=inputs[sens_cst.DEFAULT_ALT],
geoid_path=inputs[sens_cst.GEOID],
)
Computing epipolar grids ...: 100% [**************************************************] (0s)
(
epipolar_image_left,
epipolar_image_right
) = resampling_application.run(
sensor_image_left,
sensor_image_right,
grid_left,
grid_right,
orchestrator=cars_orchestrator,
margins=sparse_matching_application.get_margins()
)
data_image_left = get_full_data(epipolar_image_left, "im")
show_data(data_image_left, mode="image")
epipolar_matches_left, _ = sparse_matching_application.run(
epipolar_image_left,
epipolar_image_right,
grid_left.attributes["disp_to_alt_ratio"],
orchestrator=cars_orchestrator
)
Find the correction to apply:
matches_array = sparse_matching_application.filter_matches(
epipolar_matches_left,
orchestrator=cars_orchestrator)
(
grid_correction_coef,
corrected_matches_array, _, _, _
) = grid_correction.estimate_right_grid_correction(
matches_array, grid_right)
Generate the new right epipolar grid from the correction:
corrected_grid_right = grid_correction.correct_grid(
grid_right, grid_correction_coef)
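To illustrate what a bilinear correction model means here: the epipolar error measured on the matches is fit by a surface a + b·x + c·y + d·x·y via least squares. A sketch with synthetic residuals standing in for the measured errors (not the CARS code):

```python
import numpy as np

# Synthetic epipolar errors at hypothetical match positions (x, y);
# in CARS these residuals come from the filtered sparse matches.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1000, 200)
y = rng.uniform(0, 1000, 200)
true_coef = np.array([0.5, 1e-3, -2e-3, 1e-6])  # a, b, c, d
err = true_coef[0] + true_coef[1] * x + true_coef[2] * y + true_coef[3] * x * y

# Fit err ~ a + b*x + c*y + d*x*y by least squares
A = np.stack([np.ones_like(x), x, y, x * y], axis=1)
coef, *_ = np.linalg.lstsq(A, err, rcond=None)
print(np.allclose(coef, true_coef, rtol=1e-4, atol=1e-8))  # True
```

Evaluating the fitted surface at every grid node and subtracting it from the right stereo-rectification grid is what "apply the estimated correction to the right grid" amounts to.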
dmin, dmax = sparse_matching_tools.compute_disp_min_disp_max(
sensor_image_left,
sensor_image_right,
grid_left,
corrected_grid_right,
grid_right,
corrected_matches_array,
orchestrator=cars_orchestrator,
disp_margin=(
sparse_matching_application.get_disparity_margin()
),
disp_to_alt_ratio=grid_left.attributes["disp_to_alt_ratio"],
geometry_loader=triangulation_application.get_geometry_loader(),
srtm_dir=inputs[sens_cst.INITIAL_ELEVATION],
default_alt=inputs[sens_cst.DEFAULT_ALT],
)
(
dense_matching_margins,
disp_min,
disp_max
) = dense_matching_application.get_margins(
grid_left, disp_min=dmin, disp_max=dmax)
(
new_epipolar_image_left,
new_epipolar_image_right
) = resampling_application.run(
sensor_image_left,
sensor_image_right,
grid_left,
corrected_grid_right,
orchestrator=cars_orchestrator,
margins=dense_matching_margins,
optimum_tile_size=(
dense_matching_application.get_optimal_tile_size(
disp_min,
disp_max,
cars_orchestrator.cluster.checked_conf_cluster[
"max_ram_per_worker"
],
)
),
add_color=True,
)
epipolar_disparity_map = dense_matching_application.run(
new_epipolar_image_left,
new_epipolar_image_right,
orchestrator=cars_orchestrator,
disp_min=disp_min,
disp_max=disp_max,
)
data_disparity = get_full_data(epipolar_disparity_map, "disp")
show_data(data_disparity)
epsg = preprocessing.compute_epsg(
sensor_image_left,
sensor_image_right,
grid_left,
corrected_grid_right,
triangulation_application.get_geometry_loader(),
orchestrator=cars_orchestrator,
srtm_dir=inputs[sens_cst.INITIAL_ELEVATION],
default_alt=inputs[sens_cst.DEFAULT_ALT],
disp_min=disp_min,
disp_max=disp_max
)
epipolar_points_cloud = triangulation_application.run(
sensor_image_left,
sensor_image_right,
new_epipolar_image_left,
grid_left,
corrected_grid_right,
epipolar_disparity_map,
epsg,
orchestrator=cars_orchestrator,
uncorrected_grid_right=grid_right,
geoid_path=inputs[sens_cst.GEOID],
disp_min=disp_min,
disp_max=disp_max,
)
current_terrain_roi_bbox = preprocessing.compute_terrain_bbox(
inputs[sens_cst.INITIAL_ELEVATION],
inputs[sens_cst.DEFAULT_ALT],
inputs[sens_cst.GEOID],
sensor_image_left,
sensor_image_right,
new_epipolar_image_left,
grid_left,
corrected_grid_right,
epsg,
triangulation_application.get_geometry_loader(),
resolution=rasterization_application.get_resolution(),
disp_min=disp_min,
disp_max=disp_max,
orchestrator=cars_orchestrator
)
(
terrain_bounds,
optimal_terrain_tile_width
) = preprocessing.compute_terrain_bounds(
[current_terrain_roi_bbox],
resolution=rasterization_application.get_resolution()
)
merged_points_clouds = pc_fusion_application.run(
[epipolar_points_cloud],
terrain_bounds,
epsg,
orchestrator=cars_orchestrator,
margins=rasterization_application.get_margins(),
optimal_terrain_tile_width=optimal_terrain_tile_width
)
(
filtered_sc_merged_points_clouds
) = pc_outlier_removing_small_comp_application.run(
merged_points_clouds,
orchestrator=cars_orchestrator)
(
filtered_stats_merged_points_clouds
) = pc_outlier_removing_stats_application.run(
filtered_sc_merged_points_clouds,
orchestrator=cars_orchestrator)
dsm = rasterization_application.run(
filtered_stats_merged_points_clouds,
epsg,
orchestrator=cars_orchestrator
)
data_dsm = get_full_data(dsm, "hgt")
show_data(data_dsm, mode="dsm")
data_ortho = get_full_data(dsm, "img")[..., 0:3]
show_data(data_ortho, mode='image')
save_data(dsm, os.path.join(output_dir, "dsm.tif"), "hgt")
Web site: https://github.com/CNES/pandora
[1] A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms, D. Scharstein and R. Szeliski, vol. 47, International Journal of Computer Vision, 2002
MC-CNN [3] is a neural network that computes a similarity measure on pairs of small image patches.
It is implemented in the MC-CNN project, used by Pandora as a plugin.
Pretrained weights for the MC-CNN neural networks are available in the MC-CNN plugin repository.
[3] Zbontar, J., & LeCun, Y. (2016). Stereo matching by training a convolutional neural network to compare image patches. J. Mach. Learn. Res., 17(1), 2287-2318.
The tutorial can be run directly through the "sensor_to_dense_dsm_matching_methods_comparison.ipynb" notebook (no presentation mode)
# CARS imports
# Applications
from cars.applications.application import Application
from cars.applications.grid_generation import grid_correction
from cars.applications.sparse_matching import sparse_matching_tools
# Pipelines
import cars.pipelines.sensor_to_dense_dsm.sensor_dense_dsm_constants as sens_cst
from cars.pipelines.sensor_to_dense_dsm import sensors_inputs
from cars.pipelines.sensor_to_dense_dsm import dsm_output
# Conf, core, orchestrator
from cars.core import cars_logging
from cars.core import inputs, preprocessing
from cars.core.utils import safe_makedirs
from cars.orchestrator import orchestrator
from cars.core.utils import make_relative_path_absolute
epipolar_grid_generation_application = Application("grid_generation")
resampling_application = Application("resampling")
sparse_matching_application = Application("sparse_matching")
From the input configuration "inputs" seen before:
_, sensor_image_left, sensor_image_right = sensors_inputs.generate_inputs(inputs)[0]
grid_left, grid_right = epipolar_grid_generation_application.run(
sensor_image_left,
sensor_image_right,
orchestrator=cars_orchestrator,
srtm_dir=inputs[sens_cst.INITIAL_ELEVATION],
default_alt=inputs[sens_cst.DEFAULT_ALT],
geoid_path=inputs[sens_cst.GEOID],
)
Computing epipolar grids ...: 100% [**************************************************] (0s)
epipolar_image_left, epipolar_image_right = resampling_application.run(
sensor_image_left,
sensor_image_right,
grid_left,
grid_right,
orchestrator=cars_orchestrator,
margins=sparse_matching_application.get_margins()
)
epipolar_matches_left, _ = sparse_matching_application.run(
epipolar_image_left,
epipolar_image_right,
grid_left.attributes["disp_to_alt_ratio"],
orchestrator=cars_orchestrator
)
Find the correction to apply, and generate the new right epipolar grid:
matches_array = sparse_matching_application.filter_matches(epipolar_matches_left, orchestrator=cars_orchestrator)
grid_correction_coef, corrected_matches_array,_, _, _ = grid_correction.estimate_right_grid_correction(matches_array, grid_right)
corrected_grid_right = grid_correction.correct_grid(grid_right, grid_correction_coef)
dmin, dmax = sparse_matching_tools.compute_disp_min_disp_max(
sensor_image_left,
sensor_image_right,
grid_left,
corrected_grid_right,
grid_right,
corrected_matches_array,
orchestrator=cars_orchestrator,
disp_margin=(
sparse_matching_application.get_disparity_margin()
),
disp_to_alt_ratio=grid_left.attributes["disp_to_alt_ratio"],
geometry_loader=triangulation_application.get_geometry_loader(),
srtm_dir=inputs[sens_cst.INITIAL_ELEVATION],
default_alt=inputs[sens_cst.DEFAULT_ALT],
)
dense_matching_application = Application("dense_matching")
dense_matching_margins, disp_min, disp_max = dense_matching_application.get_margins(
grid_left, disp_min=dmin, disp_max=dmax)
new_epipolar_image_left, new_epipolar_image_right = resampling_application.run(
sensor_image_left,
sensor_image_right,
grid_left,
corrected_grid_right,
orchestrator=cars_orchestrator,
margins=dense_matching_margins,
optimum_tile_size=(
dense_matching_application.get_optimal_tile_size(
disp_min,
disp_max,
cars_orchestrator.cluster.checked_conf_cluster[
"max_ram_per_worker"
]
)
),
add_color=True,
)
dense_matching_census_application = Application("dense_matching")
epipolar_disparity_map_census = dense_matching_census_application.run(
new_epipolar_image_left,
new_epipolar_image_right,
orchestrator=cars_orchestrator,
disp_min=disp_min,
disp_max=disp_max,
)
data_disparity_census = get_full_data(epipolar_disparity_map_census, "disp")
show_data(data_disparity_census)
MC-CNN algorithm used by Pandora as plugin
dense_matching_mccnn_application = Application("dense_matching", cfg={"method": "mccnn_sgm"})
epipolar_disparity_map_mccnn = dense_matching_mccnn_application.run(
new_epipolar_image_left,
new_epipolar_image_right,
orchestrator=cars_orchestrator,
disp_min=disp_min,
disp_max=disp_max,
)
MC-CNN algorithm used by Pandora as plugin
data_disparity_mccnn = get_full_data(epipolar_disparity_map_mccnn, "disp")
show_data(data_disparity_mccnn)
One DSM is computed from the disparity map obtained with the Census similarity measure, the other from the disparity map obtained with the MC-CNN similarity measure.
triangulation_application = Application("triangulation")
conf_outlier_removing_small_components = {"method": "small_components", "activated": True}
pc_outlier_removing_small_comp_application = Application("point_cloud_outliers_removing", cfg=conf_outlier_removing_small_components)
conf_outlier_removing_small_statistical = {"method": "statistical", "activated": True}
pc_outlier_removing_stats_application = Application("point_cloud_outliers_removing", cfg=conf_outlier_removing_small_statistical)
pc_fusion_application = Application("point_cloud_fusion")
conf_rasterization = {
"method": "simple_gaussian",
"dsm_radius": 3,
"sigma": 0.3
}
rasterization_application = Application("point_cloud_rasterization", cfg=conf_rasterization)
From census disparity map
epipolar_points_cloud_census = triangulation_application.run(
sensor_image_left,
sensor_image_right,
new_epipolar_image_left,
grid_left,
corrected_grid_right,
epipolar_disparity_map_census,
epsg,
orchestrator=cars_orchestrator,
uncorrected_grid_right=grid_right,
geoid_path=inputs[sens_cst.GEOID],
disp_min=disp_min,
disp_max=disp_max,
)
From mccnn disparity map
epipolar_points_cloud_mccnn = triangulation_application.run(
sensor_image_left,
sensor_image_right,
new_epipolar_image_left,
grid_left,
corrected_grid_right,
epipolar_disparity_map_mccnn,
epsg,
orchestrator=cars_orchestrator,
uncorrected_grid_right=grid_right,
geoid_path=inputs[sens_cst.GEOID],
disp_min=disp_min,
disp_max=disp_max,
)
From census disparity map
merged_points_clouds_census = pc_fusion_application.run(
[epipolar_points_cloud_census],
terrain_bounds,
epsg,
orchestrator=cars_orchestrator,
margins=rasterization_application.get_margins(),
optimal_terrain_tile_width=optimal_terrain_tile_width
)
From mccnn disparity map
merged_points_clouds_mccnn = pc_fusion_application.run(
[epipolar_points_cloud_mccnn],
terrain_bounds,
epsg,
orchestrator=cars_orchestrator,
margins=rasterization_application.get_margins(),
optimal_terrain_tile_width=optimal_terrain_tile_width
)
From census disparity map
filtered_sc_merged_points_clouds_census = pc_outlier_removing_small_comp_application.run(
merged_points_clouds_census,
orchestrator=cars_orchestrator,
)
From mccnn disparity map
filtered_sc_merged_points_clouds_mccnn = pc_outlier_removing_small_comp_application.run(
merged_points_clouds_mccnn,
orchestrator=cars_orchestrator,
)
From census disparity map
filtered_stats_merged_points_clouds_census = pc_outlier_removing_stats_application.run(
filtered_sc_merged_points_clouds_census,
orchestrator=cars_orchestrator,
)
From mccnn disparity map
filtered_stats_merged_points_clouds_mccnn = pc_outlier_removing_stats_application.run(
filtered_sc_merged_points_clouds_mccnn,
orchestrator=cars_orchestrator,
)
From census disparity map
dsm_census = rasterization_application.run(
filtered_stats_merged_points_clouds_census,
epsg,
orchestrator=cars_orchestrator
)
From mccnn disparity map
dsm_mccnn = rasterization_application.run(
filtered_stats_merged_points_clouds_mccnn,
epsg,
orchestrator=cars_orchestrator
)
From census disparity map
data_dsm_census = get_full_data(dsm_census, "hgt")
show_data(data_dsm_census, mode="dsm")
From mccnn disparity map
data_dsm_mccnn = get_full_data(dsm_mccnn, "hgt")
show_data(data_dsm_mccnn, mode="dsm")
From census disparity map
data_ortho_census = get_full_data(dsm_census, "img")[..., 0:3]
show_data(data_ortho_census, mode='image')
From mccnn disparity map
data_ortho_mccnn = get_full_data(dsm_mccnn, "img")[..., 0:3]
show_data(data_ortho_mccnn, mode='image')