* Column widths are now provided by a column width provider
* Text inputs now cover the full column width
* Text inputs are now right-aligned for easier reading/comparing
Added new properties and updated the calls to the previous ones.
The properties are now (a sketch follows the list):
* label: getLabel
* fullLabel: getFullLabel
* fullLabelToNode: getFullLabelToNode
* fullLabelToGraph: getFullLabelToGraph
The same applies to the name properties.
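A minimal sketch of this property/getter mapping, assuming a PySide2-based Attribute class (the class shape and getter bodies are assumptions; only the property and getter names come from the list above):

```python
from PySide2.QtCore import Property, QObject


class Attribute(QObject):
    """Sketch only: each QML-exposed property is backed by an explicit getter."""

    def __init__(self, name, label, parent=None):
        super().__init__(parent)
        self._name = name
        self._label = label

    def getLabel(self):
        return self._label

    def getFullLabel(self):
        # Placeholder: the real getter prefixes the labels of enclosing attributes.
        return self._label

    def getFullLabelToNode(self):
        return self.getFullLabel()

    def getFullLabelToGraph(self):
        return self.getFullLabel()

    label = Property(str, getLabel, constant=True)
    fullLabel = Property(str, getFullLabel, constant=True)
    fullLabelToNode = Property(str, getFullLabelToNode, constant=True)
    fullLabelToGraph = Property(str, getFullLabelToGraph, constant=True)
```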
Add the possibility to filter the images inside the image gallery (a sketch of the filter logic follows the list).
* Clickable buttons
* All images filter
* Reconstructed images filter
* Non-reconstructed images filter
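A minimal sketch of the filter logic, with hypothetical names (`is_reconstructed` is an assumed flag; the real filtering is wired to the gallery buttons in the UI):

```python
def filter_gallery(images, mode="all"):
    """Return the images matching the selected filter button.

    mode is one of: 'all', 'reconstructed', 'non_reconstructed'.
    """
    if mode == "all":
        return list(images)
    if mode == "reconstructed":
        return [img for img in images if img.is_reconstructed]
    if mode == "non_reconstructed":
        return [img for img in images if not img.is_reconstructed]
    raise ValueError("Unknown filter mode: {}".format(mode))
```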
* According to Meshroom issue #1179 (https://github.com/alicevision/meshroom/issues/1179),
add the describer type "tag16h5" to the following modules (a sketch of the parameter change follows the list):
- ConvertSfmFormat (e.g., to be able to export the 3D AprilTag positions in a human-readable format as .sfm,
or to see only the AprilTag marker positions in the 3D view via .abc)
- FeatureExtraction (to be able to detect AprilTag markers from the tag16h5 family)
- FeatureMatching (to be able to match AprilTag markers)
- SfmTransform (to be able to use AprilTag markers, e.g., for the auto_from_markers transform)
- StructureFromMotion (to be able to compute the 3D positions of AprilTag markers)
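In each node, the change amounts to extending the describerTypes choice parameter. A sketch following Meshroom's desc.ChoiceParam convention (the surrounding value lists differ per node and are abbreviated here):

```python
from meshroom.core import desc

describerTypes = desc.ChoiceParam(
    name='describerTypes',
    label='Describer Types',
    description='Describer types used to describe an image.',
    value=['sift'],
    values=['sift', 'akaze', 'cctag3', 'cctag4',
            'tag16h5'],  # 'tag16h5' is the newly added AprilTag family
    exclusive=False,
    uid=[0],
    joinChar=',',
)
```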
* Added a new input to sfmTransform: markerDistances, which is a pair of marker IDs associated with the distance between them.
Added a corresponding new transform: from_marker_distances, which scales the model according to the given distances between pairs of markers.
Added another transform: auto_from_markers, which uses the existing markers parameter (ignoring their x, y, z positions) and applies the auto_from_... function based only on these given markers. The latter transform can, e.g., be used to align a set of markers with the ground plane.
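The scaling idea behind from_marker_distances, as a standalone sketch (the averaging strategy and data layout are assumptions; in Meshroom the actual computation is delegated to AliceVision):

```python
import numpy as np


def scale_from_marker_distances(marker_positions, marker_distances):
    """Compute a uniform scale factor from (idA, idB, distance) constraints.

    marker_positions: dict mapping marker ID -> np.array of shape (3,).
    marker_distances: list of (idA, idB, target_distance) tuples.
    """
    ratios = []
    for id_a, id_b, target in marker_distances:
        measured = np.linalg.norm(marker_positions[id_a] - marker_positions[id_b])
        ratios.append(target / measured)
    # Average the per-pair ratios into a single uniform scale.
    return float(np.mean(ratios))
```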
* Revert "Added a new input to sfmTransform: markerDistances, which is a pair of marker IDs associated with the distance between them."
This reverts commit ed87c68f39.
Co-authored-by: jarne <jarne@ieee.org>
Co-authored-by: Fabien Castan <fabcastan@gmail.com>
Added the possibility of rendering the output of the Meshing node as an edge-detection render of the OBJ. Added an option to activate and deactivate the background images. Improved the way the arguments are shown, with conditional display of some arguments.
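A minimal bpy sketch of such an edge-detection render using Freestyle, assuming the OBJ is already in the scene (Freestyle is one way to get edge renders in Blender; the exact settings used by the node are assumptions):

```python
import bpy

scene = bpy.context.scene
view_layer = bpy.context.view_layer

# Render Freestyle lines for an edge-detection look.
scene.render.use_freestyle = True
scene.render.line_thickness = 1.0
if not view_layer.freestyle_settings.linesets:
    view_layer.freestyle_settings.linesets.new("Edges")

# Keep the rest of the frame transparent so only the edges remain.
scene.render.film_transparent = True
```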
Many minor bug fixes, and added the possibility to change the particle color of the rendering to let the user choose the clearest color for their case. Commented a lot of my code to make it readable to someone other than myself.
Almost complete version of the node: I added a background that can render with Eevee and replaced the cubes used as particles with a plane that always follows the camera. One of the only things left is the option to change the color of the particles (among other things).
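The camera-facing plane can be achieved with a Track To constraint; a bpy sketch with assumed object names:

```python
import bpy

cam = bpy.data.objects["Camera"]           # assumed name of the animated camera
plane = bpy.data.objects["ParticlePlane"]  # assumed name of the particle plane

# Keep the plane oriented towards the camera on every frame.
track = plane.constraints.new(type='TRACK_TO')
track.target = cam
track.track_axis = 'TRACK_Z'
track.up_axis = 'UP_Y'
```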
I had to use the node graph to render the image in the background. I also made the node much more adaptable. I'll verify whether it works with another set of images.
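A sketch of the background image via the compositor node graph (the image path is a placeholder):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

render = tree.nodes.new(type='CompositorNodeRLayers')
background = tree.nodes.new(type='CompositorNodeImage')
background.image = bpy.data.images.load("/path/to/undistorted.exr")  # placeholder
over = tree.nodes.new(type='CompositorNodeAlphaOver')
composite = tree.nodes.new(type='CompositorNodeComposite')

# Put the render layer over the background image.
tree.links.new(background.outputs['Image'], over.inputs[1])
tree.links.new(render.outputs['Image'], over.inputs[2])
tree.links.new(over.outputs['Image'], composite.inputs['Image'])
```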
The outputs of ExportAnimatedCamera didn't include the path to the undistorted images, so I added it.
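The added output could be declared as a desc.File attribute in the node description; a sketch (the attribute name and value are assumptions):

```python
from meshroom.core import desc

# Declared in the node's outputs list.
outputUndistorted = desc.File(
    name='outputUndistorted',
    label='Undistorted Images',
    description='Output folder containing the undistorted images.',
    value=desc.Node.internalFolder + 'undistort/',
    uid=[],
)
```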
The Blender Rendition nodes can now (among other things) display point clouds. The code is cleaned up, and only the background image sequence remains to be implemented...
The node is almost functional. The animated camera works, but the imported point cloud isn't visible in the rendering... I'll need to find a way to display it.
(For now, there is a cube as a placeholder in the scene to show the movement of the camera.)
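One way to make the imported point cloud visible is to instance a small mesh on its vertices; a bpy sketch with an assumed object name:

```python
import bpy

cloud = bpy.data.objects["PointCloud"]  # assumed name of the imported point cloud

# Create a tiny plane and instance it on every vertex of the cloud.
bpy.ops.mesh.primitive_plane_add(size=0.01)
billboard = bpy.context.active_object
billboard.parent = cloud
cloud.instance_type = 'VERTS'
```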