The bracket detection performed in Meshroom used to differ from the one
performed in AliceVision.
The metadata retrieved to perform the exposure comparisons were not the
same: where AliceVision computed an actual exposure value, Meshroom
merely compared the shutter speed, fnumber and ISO values, which
resulted in less accurate groups.
The `getExposure` static method added to the `LdrToHdrSampling`,
`LdrToHdrCalibration` and `LdrToHdrMerge` nodes is the pythonic version
of the `getExposure` method from the View class in AliceVision.
Prior to this commit, only the shutter speed was compared between two
images to determine whether they belonged to the same group. The fnumber
and ISO were assumed to be fixed within a group, which is not always true
and differs from what is done on AliceVision's side.
This commit aligns Meshroom's bracket detection with AliceVision's.
If there are several groups with different bracket numbers but identical
counts (e.g. 3 groups with 7 brackets, and 3 groups with 3 brackets),
select the groups with the largest bracket number (e.g. groups with 7
brackets instead of 3).
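This tie-breaking rule can be sketched as follows (an illustrative helper, not Meshroom's actual code):

```python
from collections import Counter

def select_bracket_size(group_sizes):
    """Pick the bracket size to keep among candidate groups.

    group_sizes: per-group bracket counts, e.g. [7, 7, 7, 3, 3, 3].
    The size whose groups are the most numerous wins; on a tie in the
    number of groups, the largest bracket size is preferred.
    """
    counts = Counter(group_sizes)  # bracket size -> number of groups
    best = max(counts.values())    # highest group count
    # among sizes sharing the highest group count, take the largest
    return max(size for size, count in counts.items() if count == best)
```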
Outliers are now supported by the HDR fusion nodes and are excluded
from the computations as soon as they are detected. However, the chunk
size computation still includes them, as it simply uses the number of
detected brackets and the total number of input images.
With this commit, the detected outliers are excluded from the chunk size
computation, preventing errors that might occur when there are too many
chunks compared to the number of images that actually need to be
processed.
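The fix amounts to removing the outliers from the image count before dividing by the bracket size. A minimal sketch, assuming one chunk per exposure group (Meshroom's actual chunk sizing may batch several groups per chunk):

```python
import math

def nb_chunks(total_images: int, nb_brackets: int, nb_outliers: int) -> int:
    """Number of chunks once detected outliers are excluded.

    Illustrative sketch: outliers no longer inflate the image count,
    so no empty chunks are allocated.
    """
    usable = total_images - nb_outliers
    return math.ceil(usable / nb_brackets)
```

For instance, 23 input images with 7 brackets and 2 outliers yield 3 chunks, whereas counting the outliers would have allocated a fourth, mostly empty chunk.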
Reset the command line at every iteration to ensure that no
"--nbBrackets" leftover remains when switching between the automatic
bracket detection and a manually specified number of brackets.
In addition to comparing the exposure levels of the sorted input images
to determine whether a new exposure group must be created, this commit
adds a detection based on the path of the image: if the directory
containing the current image differs from that of the previous image,
we assume the dataset is different, and there is no point in comparing
the current image's exposure level with the previous one's. Instead, we
directly create a new exposure group.
In cases where images from different datasets have very similar
exposure levels, this prevents having outliers identified as being part
of the current exposure group.
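The dataset check described above can be expressed with a simple directory comparison (an illustrative helper, assuming one directory per dataset):

```python
import os

def same_dataset(prev_path: str, cur_path: str) -> bool:
    """Illustrative sketch: two images are assumed to belong to the
    same dataset when they are stored in the same directory, so
    exposure levels are only compared within a single directory."""
    return os.path.dirname(prev_path) == os.path.dirname(cur_path)
```

When this returns False, a new exposure group is started unconditionally, regardless of how close the two exposure levels are.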
The detection of the number of brackets used to only work when there was
a single dataset / a single set of camera intrinsics. If two datasets
with the same number of brackets were provided, the detection failed
because we expected the exposure levels to be uniform across all the
images.
If more than one dataset is provided, there is no guarantee that the
exposure groups will be identical even if the number of brackets is the
same.
The inputs are however sorted, and the shutter speeds are expected to be
decreasing, meaning that a shutter speed N greater than shutter speed N-1
indicates a new group. Similarly, ISO or aperture values that change
from one input to the next indicate a new group.
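The group-boundary rule can be sketched as a predicate over consecutive setting tuples (illustrative code, not Meshroom's actual implementation):

```python
def is_new_group(prev, cur):
    """prev/cur: (fnumber, shutter_speed, iso) tuples of floats.

    Sketch of the rule above: within a group the shutter speeds
    decrease, so a shutter speed that rises again, or any change of
    aperture or ISO, marks the start of a new group.
    """
    prev_fnumber, prev_shutter, prev_iso = prev
    cur_fnumber, cur_shutter, cur_iso = cur
    return (cur_shutter > prev_shutter      # shutter speed rose again
            or cur_fnumber != prev_fnumber  # aperture changed
            or cur_iso != prev_iso)         # ISO changed
```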
For the comparison between exposure levels to be valid, the aperture,
shutter speed and ISO values need to be stored in tuples as floats instead
of strings.
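Comparing the raw strings would order "10" before "9", so the conversion matters. A minimal sketch, with illustrative metadata keys (not necessarily the exact ones Meshroom reads):

```python
def exposure_tuple(metadata: dict) -> tuple:
    """Build a comparable (fnumber, shutter_speed, iso) tuple of floats
    from string metadata. The keys below are placeholders."""
    return (
        float(metadata.get("FNumber", -1.0)),
        float(metadata.get("ExposureTime", -1.0)),
        float(metadata.get("ISO", -1.0)),
    )
```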
"Robertson" is not an available calibration method in the
LdrToHdrCalibration and LdrToHdrSampling nodes, so it should be removed
from the nodes' documentation.
Add calibrationMethod as a parameter to the sampling node.
Link the calibrationMethod parameter of the calibration node to the sampling node's calibrationMethod parameter in both HDR pipelines.
Add the working color space as an input parameter to the Sampling, Calibration and Merging HDR nodes (unused in calibration, but useful to transfer from sampling to merging in pipelines).
Update rawColorInterpretation default value and add some comments in cameraInit.