Merge branch 'develop' of github.com:alicevision/meshroom into dev/nodesAndTaskManager

Conflicts:
	meshroom/core/graph.py
	meshroom/ui/qml/main.qml
Fabien Castan 2020-12-01 20:02:43 +01:00
commit 1102ce84e0
61 changed files with 1241 additions and 345 deletions

.github/FUNDING.yml (vendored, new file, +2)

@@ -0,0 +1,2 @@
github: [alicevision]
custom: ['https://alicevision.org/association/#donate']

GitHub Actions workflow (new file, +45)

@@ -0,0 +1,45 @@
name: Continuous Integration
on:
push:
branches:
- master
- develop
# Skip jobs when only documentation files are changed
paths-ignore:
- '**.md'
- '**.rst'
- 'docs/**'
pull_request:
paths-ignore:
- '**.md'
- '**.rst'
- 'docs/**'
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [2.7, 3.8]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8 pytest
pip install -r requirements.txt -r dev_requirements.txt --timeout 45
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
pytest tests/
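The `paths-ignore` lists above skip the CI jobs when only documentation changes. A rough sketch of that filtering logic (this is not GitHub's actual matcher; `docs_only` and `IGNORED` are illustrative names, and GitHub's `**.md` pattern is approximated with `fnmatch`):

```python
from fnmatch import fnmatch

# Approximation of the workflow's paths-ignore patterns
IGNORED = ('*.md', '*.rst')

def docs_only(changed_paths):
    """True if every changed file is documentation, i.e. the CI jobs
    would be skipped under the paths-ignore rules above."""
    def ignored(p):
        return p.startswith('docs/') or any(fnmatch(p, pat) for pat in IGNORED)
    return all(ignored(p) for p in changed_paths)

docs_only(['README.md', 'docs/install.rst'])        # True  -> CI skipped
docs_only(['README.md', 'meshroom/core/graph.py'])  # False -> CI runs
```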

.gitignore (vendored, 2 changes)

@@ -25,7 +25,7 @@ __pycache__
 /scripts
 /build
 /dist
-/*.sh
+/dl
 # tests
 /.tests


@@ -3,6 +3,88 @@
 For algorithmic changes related to the photogrammetric pipeline,
 please refer to [AliceVision changelog](https://github.com/alicevision/AliceVision/blob/develop/CHANGES.md).
## Release 2020.1.1 (2020.10.14)
Based on [AliceVision 2.3.1](https://github.com/alicevision/AliceVision/tree/v2.3.1).
- [core] Fix crashes on process statistics (windows-only) [PR](https://github.com/alicevision/meshroom/pull/1096)
## Release 2020.1.0 (2020.10.09)
Based on [AliceVision 2.3.0](https://github.com/alicevision/AliceVision/tree/v2.3.0).
### Release Notes Summary
- [nodes] New Panorama Stitching nodes with support for fisheye lenses [PR](https://github.com/alicevision/meshroom/pull/639) [PR](https://github.com/alicevision/meshroom/pull/808)
- [nodes] HDR: Largely improved HDR calibration, including new LdrToHdrSampling for optimal sample selection [PR](https://github.com/alicevision/meshroom/pull/808) [PR](https://github.com/alicevision/meshroom/pull/1016) [PR](https://github.com/alicevision/meshroom/pull/990)
- [ui] Viewer3D: Input bounding box (Meshing) & manual transformation (SfMTransform) thanks to a new 3D Gizmo [PR](https://github.com/alicevision/meshroom/pull/978)
- [ui] Sync 3D camera with image selection [PR](https://github.com/alicevision/meshroom/pull/633)
- [ui] New HDR (floating point) Image Viewer [PR](https://github.com/alicevision/meshroom/pull/795)
- [ui] Ability to load depth maps into 2D and 3D Viewers [PR](https://github.com/alicevision/meshroom/pull/769) [PR](https://github.com/alicevision/meshroom/pull/657)
- [ui] New features overlay in Viewer2D allows displaying tracks and landmarks [PR](https://github.com/alicevision/meshroom/pull/873) [PR](https://github.com/alicevision/meshroom/pull/1001)
- [ui] Add SfM statistics [PR](https://github.com/alicevision/meshroom/pull/873)
- [ui] Visual interface for node resources usage [PR](https://github.com/alicevision/meshroom/pull/564)
- [nodes] Coordinate system alignment to specific markers or between scenes [PR](https://github.com/alicevision/meshroom/pull/652)
- [nodes] New Sketchfab upload node [PR](https://github.com/alicevision/meshroom/pull/712)
- [ui] Dynamic Parameters: add a new 'enabled' property to node's attributes [PR](https://github.com/alicevision/meshroom/pull/1007) [PR](https://github.com/alicevision/meshroom/pull/1027)
- [ui] Viewer: add Camera Response Function display [PR](https://github.com/alicevision/meshroom/pull/1020) [PR](https://github.com/alicevision/meshroom/pull/1041)
- [ui] UI improvements in the Viewer2D and ImageGallery [PR](https://github.com/alicevision/meshroom/pull/823)
- [bin] Improve Meshroom command line [PR](https://github.com/alicevision/meshroom/pull/759) [PR](https://github.com/alicevision/meshroom/pull/632)
- [nodes] New ImageProcessing node [PR](https://github.com/alicevision/meshroom/pull/839) [PR](https://github.com/alicevision/meshroom/pull/970) [PR](https://github.com/alicevision/meshroom/pull/941)
- [nodes] `FeatureMatching` Add `fundamental_with_distortion` option [PR](https://github.com/alicevision/meshroom/pull/931)
- [multiview] Declare more recognized image file extensions [PR](https://github.com/alicevision/meshroom/pull/965)
- [multiview] More generic metadata support [PR](https://github.com/alicevision/meshroom/pull/957)
### Other Improvements and Bug Fixes
- [nodes] CameraInit: New viewId generation and selection of allowed intrinsics [PR](https://github.com/alicevision/meshroom/pull/973)
- [core] Avoid error during project load on border cases [PR](https://github.com/alicevision/meshroom/pull/991)
- [core] Compatibility: improve the update of the list of groups [PR](https://github.com/alicevision/meshroom/pull/791)
- [core] Invalidation hooks [PR](https://github.com/alicevision/meshroom/pull/732)
- [core] Log manager for Python based nodes [PR](https://github.com/alicevision/meshroom/pull/631)
- [core] new Node Update Hooks mechanism [PR](https://github.com/alicevision/meshroom/pull/733)
- [core] Option to make chunks optional [PR](https://github.com/alicevision/meshroom/pull/778)
- [nodes] Add methods in ImageMatching and features in StructureFromMotion and FeatureMatching [PR](https://github.com/alicevision/meshroom/pull/768)
- [nodes] FeatureExtraction: add maxThreads argument [PR](https://github.com/alicevision/meshroom/pull/647)
- [nodes] Fix python nodes being blocked by log [PR](https://github.com/alicevision/meshroom/pull/783)
- [nodes] ImageProcessing: add new option to fix non finite pixels [PR](https://github.com/alicevision/meshroom/pull/1057)
- [nodes] Meshing: simplify input depth map folders [PR](https://github.com/alicevision/meshroom/pull/951)
- [nodes] PanoramaCompositing: add a new graphcut option to improve seams [PR](https://github.com/alicevision/meshroom/pull/1026)
- [nodes] PanoramaCompositing: option to select the percentage of upscaled pixels [PR](https://github.com/alicevision/meshroom/pull/1049)
- [nodes] PanoramaInit: add debug circle detection option [PR](https://github.com/alicevision/meshroom/pull/1069)
- [nodes] PanoramaInit: New parameter to set an extra image rotation to each camera declared in the input xml [PR](https://github.com/alicevision/meshroom/pull/1046)
- [nodes] SfmTransfer: New option to transfer intrinsics parameters [PR](https://github.com/alicevision/meshroom/pull/1053)
- [nodes] StructureFromMotion: Add features scale as an option [PR](https://github.com/alicevision/meshroom/pull/822) [PR](https://github.com/alicevision/meshroom/pull/817)
- [nodes] Texturing: add options for retopoMesh & reorganise options [PR](https://github.com/alicevision/meshroom/pull/571)
- [nodes] Texturing: put downscale to 2 by default [PR](https://github.com/alicevision/meshroom/pull/1048)
- [sfm] Add option to include 'unknown' feature types in ConvertSfMFormat, needed to use the dense point cloud from the Meshing node [PR](https://github.com/alicevision/meshroom/pull/584)
- [ui] Automatically update layout when needed [PR](https://github.com/alicevision/meshroom/pull/989)
- [ui] Avoid crash in 3D with large panoramas [PR](https://github.com/alicevision/meshroom/pull/1061)
- [ui] Fix graph axes naming for ram statistics [PR](https://github.com/alicevision/meshroom/pull/1033)
- [ui] NodeEditor: minor improvements with single tab group and status table [PR](https://github.com/alicevision/meshroom/pull/637)
- [ui] Viewer3D: Display equirectangular images as environment maps [PR](https://github.com/alicevision/meshroom/pull/731)
- [windows] Fix open recent broken on windows and remove unnecessary warnings [PR](https://github.com/alicevision/meshroom/pull/940)
### Build, CI, Documentation
- [build] Fix cxFreeze version for Python 2.7 compatibility [PR](https://github.com/alicevision/meshroom/pull/634)
- [ci] Add github Actions [PR](https://github.com/alicevision/meshroom/pull/1051)
- [ci] AppVeyor: Update build environment and save artifacts [PR](https://github.com/alicevision/meshroom/pull/875)
- [ci] Travis: Update environment, remove Python 2.7 & add 3.8 [PR](https://github.com/alicevision/meshroom/pull/874)
- [docker] Clean Dockerfiles [PR](https://github.com/alicevision/meshroom/pull/1054)
- [docker] Move to PySide2 / Qt 5.14.1
- [docker] Fix some packaging issues of the release 2019.2.0 [PR](https://github.com/alicevision/meshroom/pull/627)
- [github] Add exemptLabels [PR](https://github.com/alicevision/meshroom/pull/801)
- [github] Add issue templates [PR](https://github.com/alicevision/meshroom/pull/579)
- [github] Add template for questions / help only [PR](https://github.com/alicevision/meshroom/pull/629)
- [github] Added automatic stale detection and closing for issues [PR](https://github.com/alicevision/meshroom/pull/598)
- [python] Import ABC from collections.abc [PR](https://github.com/alicevision/meshroom/pull/983)
For more details, see all merged PRs: https://github.com/alicevision/meshroom/milestone/10
See [AliceVision 2.3.0 Release Notes](https://github.com/alicevision/AliceVision/blob/v2.3.0/CHANGES.md) for more details about algorithmic changes.
## Release 2019.2.0 (2019.08.08)


@@ -1,39 +0,0 @@
ARG MR_VERSION
ARG CUDA_VERSION=9.0
ARG OS_VERSION=7
FROM alicevision/meshroom-deps:${MR_VERSION}-centos${OS_VERSION}-cuda${CUDA_VERSION}
LABEL maintainer="AliceVision Team alicevision-team@googlegroups.com"
# Execute with nvidia docker (https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0))
# docker run -it --runtime=nvidia meshroom
ENV MESHROOM_DEV=/opt/Meshroom \
MESHROOM_BUILD=/tmp/Meshroom_build \
MESHROOM_BUNDLE=/opt/Meshroom_bundle \
QT_DIR=/opt/Qt5.14.1/5.14.1/gcc_64 \
PATH="${PATH}:${MESHROOM_BUNDLE}"
COPY . "${MESHROOM_DEV}"
WORKDIR ${MESHROOM_DEV}
RUN source scl_source enable rh-python36 && python setup.py install_exe -d "${MESHROOM_BUNDLE}" && \
find ${MESHROOM_BUNDLE} -name "*Qt5Web*" -delete && \
find ${MESHROOM_BUNDLE} -name "*Qt5Designer*" -delete && \
rm -rf ${MESHROOM_BUNDLE}/lib/PySide2/typesystems/ ${MESHROOM_BUNDLE}/lib/PySide2/examples/ ${MESHROOM_BUNDLE}/lib/PySide2/include/ ${MESHROOM_BUNDLE}/lib/PySide2/Qt/translations/ ${MESHROOM_BUNDLE}/lib/PySide2/Qt/resources/ && \
rm ${MESHROOM_BUNDLE}/lib/PySide2/QtWeb* && \
rm ${MESHROOM_BUNDLE}/lib/PySide2/pyside2-lupdate ${MESHROOM_BUNDLE}/lib/PySide2/rcc ${MESHROOM_BUNDLE}/lib/PySide2/designer
WORKDIR ${MESHROOM_BUILD}
# Build Meshroom plugins
RUN cmake "${MESHROOM_DEV}" -DALICEVISION_ROOT="${AV_INSTALL}" -DQT_DIR="${QT_DIR}" -DCMAKE_INSTALL_PREFIX="${MESHROOM_BUNDLE}/qtPlugins"
# RUN make -j8 qtOIIO
# RUN make -j8 qmlAlembic
# RUN make -j8 qtAliceVision
RUN make -j8 && cd /tmp && rm -rf ${MESHROOM_BUILD}
RUN mv "${AV_BUNDLE}" "${MESHROOM_BUNDLE}/aliceVision"
RUN rm -rf ${MESHROOM_BUNDLE}/aliceVision/share/doc ${MESHROOM_BUNDLE}/aliceVision/share/eigen3 ${MESHROOM_BUNDLE}/aliceVision/share/fonts ${MESHROOM_BUNDLE}/aliceVision/share/lemon ${MESHROOM_BUNDLE}/aliceVision/share/libraw ${MESHROOM_BUNDLE}/aliceVision/share/man/ aliceVision/share/pkgconfig


@@ -1,74 +0,0 @@
ARG CUDA_TAG=7.0
ARG OS_TAG=7
FROM alicevision/alicevision:2.2.0-centos${OS_TAG}-cuda${CUDA_TAG}
LABEL maintainer="AliceVision Team alicevision-team@googlegroups.com"
# Execute with nvidia docker (https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0))
# docker run -it --runtime=nvidia meshroom
ENV MESHROOM_DEV=/opt/Meshroom \
MESHROOM_BUILD=/tmp/Meshroom_build \
MESHROOM_BUNDLE=/opt/Meshroom_bundle \
QT_DIR=/opt/qt/5.13.0/gcc_64 \
PATH="${PATH}:${MESHROOM_BUNDLE}"
# Workaround for qmlAlembic/qtAliceVision builds: fuse lib/lib64 folders
RUN cp -rf ${AV_INSTALL}/lib/* ${AV_INSTALL}/lib64 && rm -rf ${AV_INSTALL}/lib && ln -s ${AV_INSTALL}/lib64 ${AV_INSTALL}/lib
# Install libs needed by Qt
RUN yum install -y \
flex \
fontconfig \
freetype \
glib2 \
libICE \
libX11 \
libxcb \
libXext \
libXi \
libXrender \
libSM \
libXt-devel \
libGLU-devel \
mesa-libOSMesa-devel \
mesa-libGL-devel \
mesa-libGLU-devel \
xcb-util-keysyms \
xcb-util-image
# Install Python2
RUN yum install -y python-devel && curl https://bootstrap.pypa.io/get-pip.py -o /tmp/get-pip.py && python /tmp/get-pip.py && pip install --upgrade pip
COPY . "${MESHROOM_DEV}"
WORKDIR "${MESHROOM_DEV}"
# Install Meshroom requirements and freeze bundle
RUN pip install -r dev_requirements.txt -r requirements.txt && python setup.py install_exe -d "${MESHROOM_BUNDLE}" && \
find ${MESHROOM_BUNDLE} -name "*Qt5Web*" -delete && \
find ${MESHROOM_BUNDLE} -name "*Qt5Designer*" -delete && \
rm -rf ${MESHROOM_BUNDLE}/lib/PySide2/typesystems/ ${MESHROOM_BUNDLE}/lib/PySide2/examples/ ${MESHROOM_BUNDLE}/lib/PySide2/include/ ${MESHROOM_BUNDLE}/lib/PySide2/Qt/translations/ ${MESHROOM_BUNDLE}/lib/PySide2/Qt/resources/ && \
rm ${MESHROOM_BUNDLE}/lib/PySide2/QtWeb* && \
rm ${MESHROOM_BUNDLE}/lib/PySide2/pyside2-lupdate ${MESHROOM_BUNDLE}/lib/PySide2/pyside2-rcc
# Install Qt (to build plugins)
WORKDIR /tmp/qt
# Qt version in specified in docker/qt-installer-noninteractive.qs
RUN curl -LO http://download.qt.io/official_releases/online_installers/qt-unified-linux-x64-online.run && \
chmod u+x qt-unified-linux-x64-online.run && \
./qt-unified-linux-x64-online.run --verbose --platform minimal --script "${MESHROOM_DEV}/docker/qt-installer-noninteractive.qs" && \
rm ./qt-unified-linux-x64-online.run
WORKDIR ${MESHROOM_BUILD}
# Build Meshroom plugins
RUN cmake "${MESHROOM_DEV}" -DALICEVISION_ROOT="${AV_INSTALL}" -DQT_DIR="${QT_DIR}" -DCMAKE_INSTALL_PREFIX="${MESHROOM_BUNDLE}/qtPlugins"
# RUN make -j8 qtOIIO
# RUN make -j8 qmlAlembic
# RUN make -j8 qtAliceVision
RUN make -j8 && cd /tmp && rm -rf ${MESHROOM_BUILD}
RUN mv "${AV_BUNDLE}" "${MESHROOM_BUNDLE}/aliceVision"
RUN rm -rf ${MESHROOM_BUNDLE}/aliceVision/share/doc ${MESHROOM_BUNDLE}/aliceVision/share/eigen3 ${MESHROOM_BUNDLE}/aliceVision/share/fonts ${MESHROOM_BUNDLE}/aliceVision/share/lemon ${MESHROOM_BUNDLE}/aliceVision/share/libraw ${MESHROOM_BUNDLE}/aliceVision/share/man/ aliceVision/share/pkgconfig


@@ -11,7 +11,7 @@ import meshroom.core.graph
 import meshroom.core.taskManager
 from meshroom import multiview
-parser = argparse.ArgumentParser(description='Launch the full photogrammetry or HDRI pipeline.')
+parser = argparse.ArgumentParser(description='Launch the full photogrammetry or Panorama HDR pipeline.')
 parser.add_argument('-i', '--input', metavar='SFM/FOLDERS/IMAGES', type=str, nargs='*',
                     default=[],
                     help='Input folder containing images or folders of images or file (.sfm or .json) '
@@ -20,8 +20,8 @@ parser.add_argument('-I', '--inputRecursive', metavar='FOLDERS/IMAGES', type=str
                     default=[],
                     help='Input folders containing all images recursively.')
-parser.add_argument('-p', '--pipeline', metavar='photogrammetry/hdri/MG_FILE', type=str, default='photogrammetry',
-                    help='"photogrammetry" pipeline, "hdri" pipeline or a Meshroom file containing a custom pipeline to run on input images. '
+parser.add_argument('-p', '--pipeline', metavar='photogrammetry/panoramaHdr/panoramaFisheyeHdr/MG_FILE', type=str, default='photogrammetry',
+                    help='"photogrammetry" pipeline, "panoramaHdr" pipeline, "panoramaFisheyeHdr" pipeline or a Meshroom file containing a custom pipeline to run on input images. '
                     'Requirements: the graph must contain one CameraInit node, '
                     'and one Publish node if --output is set.')
@@ -113,12 +113,12 @@ with multiview.GraphModification(graph):
     if args.pipeline.lower() == "photogrammetry":
         # default photogrammetry pipeline
         multiview.photogrammetry(inputViewpoints=views, inputIntrinsics=intrinsics, output=args.output, graph=graph)
-    elif args.pipeline.lower() == "hdri":
-        # default hdri pipeline
-        multiview.hdri(inputViewpoints=views, inputIntrinsics=intrinsics, output=args.output, graph=graph)
-    elif args.pipeline.lower() == "hdrifisheye":
-        # default hdriFisheye pipeline
-        multiview.hdriFisheye(inputViewpoints=views, inputIntrinsics=intrinsics, output=args.output, graph=graph)
+    elif args.pipeline.lower() == "panoramahdr":
+        # default panorama HDR pipeline
+        multiview.panoramaHdr(inputViewpoints=views, inputIntrinsics=intrinsics, output=args.output, graph=graph)
+    elif args.pipeline.lower() == "panoramafisheyehdr":
+        # default panorama fisheye HDR pipeline
+        multiview.panoramaFisheyeHdr(inputViewpoints=views, inputIntrinsics=intrinsics, output=args.output, graph=graph)
     else:
         # custom pipeline
         graph.load(args.pipeline)
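The renamed dispatch above reduces to a case-insensitive lookup. A hypothetical sketch (`select_pipeline` and `BUILTIN_PIPELINES` are illustrative names, not Meshroom API); note that callers still passing the old `hdri` name now fall through to the custom-pipeline branch and will fail unless updated:

```python
# Hypothetical sketch (not Meshroom code) of the --pipeline dispatch
# after the hdri -> panoramaHdr rename.
BUILTIN_PIPELINES = {
    "photogrammetry": "multiview.photogrammetry",
    "panoramahdr": "multiview.panoramaHdr",                # was "hdri"
    "panoramafisheyehdr": "multiview.panoramaFisheyeHdr",  # was "hdrifisheye"
}

def select_pipeline(name):
    # Any unknown name, including the old "hdri", is treated as a path
    # to a custom Meshroom graph file (graph.load in the script above).
    return BUILTIN_PIPELINES.get(name.lower(), "graph.load")
```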


@@ -1,6 +1,8 @@
 # packaging
-cx_Freeze
-# use cx_Freeze==5.1.1 for Python-2
+cx_Freeze==5.1.1;python_version<"3.5"
+# Problem with cx_freeze-6.2, see https://github.com/marcelotduarte/cx_Freeze/issues/652
+cx_Freeze==6.1;python_version>="3.5"
 # testing
 pytest
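The two pins above rely on PEP 508 environment markers, so pip keeps only the line matching the running interpreter. A minimal illustration of which pin applies (`cx_freeze_pin` is a hypothetical helper, not part of the project):

```python
import sys

def cx_freeze_pin(version_info=sys.version_info):
    # Mirrors the markers in dev_requirements.txt:
    #   cx_Freeze==5.1.1 ; python_version < "3.5"
    #   cx_Freeze==6.1   ; python_version >= "3.5"
    return "cx_Freeze==5.1.1" if version_info < (3, 5) else "cx_Freeze==6.1"

cx_freeze_pin((2, 7))  # "cx_Freeze==5.1.1"
cx_freeze_pin((3, 8))  # "cx_Freeze==6.1"
```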

docker/Dockerfile_centos (new file, +72)

@@ -0,0 +1,72 @@
ARG MESHROOM_VERSION
ARG AV_VERSION
ARG CUDA_VERSION
ARG CENTOS_VERSION
FROM alicevision/meshroom-deps:${MESHROOM_VERSION}-av${AV_VERSION}-centos${CENTOS_VERSION}-cuda${CUDA_VERSION}
LABEL maintainer="AliceVision Team alicevision-team@googlegroups.com"
# Execute with nvidia docker (https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0))
# docker run -it --runtime nvidia -p 2222:22 --name meshroom -v</path/to/your/data>:/data alicevision/meshroom:develop-av2.2.8.develop-ubuntu20.04-cuda11.0
# ssh -p 2222 -X root@<docker host> /opt/Meshroom_bundle/Meshroom # Password is 'meshroom'
ENV MESHROOM_DEV=/opt/Meshroom \
MESHROOM_BUILD=/tmp/Meshroom_build \
MESHROOM_BUNDLE=/opt/Meshroom_bundle \
AV_INSTALL=/opt/AliceVision_install \
QT_DIR=/opt/Qt5.14.1/5.14.1/gcc_64 \
PATH="${PATH}:${MESHROOM_BUNDLE}" \
OPENIMAGEIO_LIBRARY=/opt/AliceVision_install/lib
COPY *.txt *.md *.py ${MESHROOM_DEV}/
COPY ./docs ${MESHROOM_DEV}/docs
COPY ./meshroom ${MESHROOM_DEV}/meshroom
COPY ./tests ${MESHROOM_DEV}/tests
COPY ./bin ${MESHROOM_DEV}/bin
WORKDIR ${MESHROOM_DEV}
RUN source scl_source enable rh-python36 && python setup.py install_exe -d "${MESHROOM_BUNDLE}" && \
find ${MESHROOM_BUNDLE} -name "*Qt5Web*" -delete && \
find ${MESHROOM_BUNDLE} -name "*Qt5Designer*" -delete && \
rm -rf ${MESHROOM_BUNDLE}/lib/PySide2/typesystems/ \
${MESHROOM_BUNDLE}/lib/PySide2/examples/ \
${MESHROOM_BUNDLE}/lib/PySide2/include/ \
${MESHROOM_BUNDLE}/lib/PySide2/Qt/translations/ \
${MESHROOM_BUNDLE}/lib/PySide2/Qt/resources/ \
${MESHROOM_BUNDLE}/lib/PySide2/QtWeb* \
${MESHROOM_BUNDLE}/lib/PySide2/pyside2-lupdate \
${MESHROOM_BUNDLE}/lib/PySide2/rcc \
${MESHROOM_BUNDLE}/lib/PySide2/designer
WORKDIR ${MESHROOM_BUILD}
# Build Meshroom plugins
RUN cmake "${MESHROOM_DEV}" -DALICEVISION_ROOT="${AV_INSTALL}" -DCMAKE_INSTALL_PREFIX="${MESHROOM_BUNDLE}/qtPlugins"
RUN make "-j$(nproc)" qtOIIO
RUN make "-j$(nproc)" qmlAlembic
RUN make "-j$(nproc)" qtAliceVision
RUN make "-j$(nproc)" && \
rm -rf "${MESHROOM_BUILD}" "${MESHROOM_DEV}" \
${MESHROOM_BUNDLE}/aliceVision/share/doc \
${MESHROOM_BUNDLE}/aliceVision/share/eigen3 \
${MESHROOM_BUNDLE}/aliceVision/share/fonts \
${MESHROOM_BUNDLE}/aliceVision/share/lemon \
${MESHROOM_BUNDLE}/aliceVision/share/libraw \
${MESHROOM_BUNDLE}/aliceVision/share/man/ \
aliceVision/share/pkgconfig
# Enable SSH X11 forwarding, needed when the Docker image
# is run on a remote machine
RUN yum -y install openssh-server xauth mesa-dri-drivers && \
systemctl enable sshd && \
mkdir -p /run/sshd
RUN sed -i "s/^.*X11Forwarding.*$/X11Forwarding yes/; s/^.*X11UseLocalhost.*$/X11UseLocalhost no/; s/^.*PermitRootLogin prohibit-password/PermitRootLogin yes/; s/^.*X11UseLocalhost.*/X11UseLocalhost no/;" /etc/ssh/sshd_config
RUN echo "root:meshroom" | chpasswd
WORKDIR /root
EXPOSE 22
CMD bash -c "test -s /etc/machine-id || systemd-machine-id-setup; sshd-keygen; /usr/sbin/sshd -D"
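The `sed` line above rewrites `sshd_config` in place to allow remote X11 sessions (it applies the `X11UseLocalhost` substitution twice, which is redundant but harmless). Its effect can be checked on a scratch copy, assuming GNU sed:

```shell
# Apply the Dockerfile's sshd_config substitutions to a scratch file
cfg=$(mktemp)
printf '%s\n' \
  '#X11Forwarding no' \
  '#X11UseLocalhost yes' \
  'PermitRootLogin prohibit-password' > "$cfg"
sed -i "s/^.*X11Forwarding.*$/X11Forwarding yes/; s/^.*X11UseLocalhost.*$/X11UseLocalhost no/; s/^.*PermitRootLogin prohibit-password/PermitRootLogin yes/" "$cfg"
cat "$cfg"
# X11Forwarding yes
# X11UseLocalhost no
# PermitRootLogin yes
```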


@@ -1,7 +1,7 @@
 ARG AV_VERSION
-ARG CUDA_VERSION=9.0
-ARG OS_VERSION=7
-FROM alicevision/alicevision:${AV_VERSION}-centos${OS_VERSION}-cuda${CUDA_VERSION}
+ARG CUDA_VERSION
+ARG CENTOS_VERSION=7
+FROM alicevision/alicevision:${AV_VERSION}-centos${CENTOS_VERSION}-cuda${CUDA_VERSION}
 LABEL maintainer="AliceVision Team alicevision-team@googlegroups.com"
 # Execute with nvidia docker (https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0))
@@ -9,12 +9,20 @@ LABEL maintainer="AliceVision Team alicevision-team@googlegroups.com"
 ENV MESHROOM_DEV=/opt/Meshroom \
     MESHROOM_BUILD=/tmp/Meshroom_build \
     MESHROOM_BUNDLE=/opt/Meshroom_bundle \
     QT_DIR=/opt/Qt5.14.1/5.14.1/gcc_64 \
     QT_CI_LOGIN=alicevisionjunk@gmail.com \
     QT_CI_PASSWORD=azerty1.
-# Workaround for qmlAlembic/qtAliceVision builds: fuse lib/lib64 folders
-RUN cp -rf ${AV_INSTALL}/lib/* ${AV_INSTALL}/lib64 && rm -rf ${AV_INSTALL}/lib && ln -s ${AV_INSTALL}/lib64 ${AV_INSTALL}/lib
+WORKDIR ${MESHROOM_BUNDLE}
+RUN mv "${AV_BUNDLE}" "${MESHROOM_BUNDLE}/aliceVision" && \
+    rm -rf ${MESHROOM_BUNDLE}/aliceVision/share/doc \
+           ${MESHROOM_BUNDLE}/aliceVision/share/eigen3 \
+           ${MESHROOM_BUNDLE}/aliceVision/share/fonts \
+           ${MESHROOM_BUNDLE}/aliceVision/share/lemon \
+           ${MESHROOM_BUNDLE}/aliceVision/share/libraw \
+           ${MESHROOM_BUNDLE}/aliceVision/share/man \
+           ${MESHROOM_BUNDLE}/aliceVision/share/pkgconfig
 # Install libs needed by Qt
 RUN yum install -y \
@@ -41,22 +49,17 @@ RUN yum install -y \
 # Install Python3
 RUN yum install -y centos-release-scl && yum install -y rh-python36 && source scl_source enable rh-python36 && pip install --upgrade pip
-COPY ./*requirements.txt ${MESHROOM_DEV}/
+COPY ./*requirements.txt ./setup.py "${MESHROOM_DEV}/"
 # Install Meshroom requirements and freeze bundle
 WORKDIR "${MESHROOM_DEV}"
 RUN source scl_source enable rh-python36 && pip install -r dev_requirements.txt -r requirements.txt
-COPY ./docker/qt-installer-noninteractive.qs "${MESHROOM_DEV}/docker/"
 # Install Qt (to build plugins)
-ENV QT_VERSION_A=5.14 \
-    QT_VERSION_B=5.14.1
 WORKDIR /tmp/qt
-RUN wget https://download.qt.io/archive/qt/${QT_VERSION_A}/${QT_VERSION_B}/qt-opensource-linux-x64-${QT_VERSION_B}.run && \
-    chmod +x qt-opensource-linux-x64-${QT_VERSION_B}.run && \
-    ./qt-opensource-linux-x64-${QT_VERSION_B}.run --verbose --platform minimal --script "${MESHROOM_DEV}/docker/qt-installer-noninteractive.qs" && \
-    rm qt-opensource-linux-x64-${QT_VERSION_B}.run
+COPY dl/qt.run /tmp/qt
+COPY ./docker/qt-installer-noninteractive.qs ${MESHROOM_DEV}/docker/
+RUN chmod +x qt.run && \
+    ./qt.run --verbose --platform minimal --script "${MESHROOM_DEV}/docker/qt-installer-noninteractive.qs" && \
+    rm qt.run


@@ -0,0 +1,68 @@
ARG AV_VERSION
ARG CUDA_VERSION
ARG CENTOS_VERSION=7
FROM alicevision/alicevision:${AV_VERSION}-centos${CENTOS_VERSION}-cuda${CUDA_VERSION}
LABEL maintainer="AliceVision Team alicevision-team@googlegroups.com"
# Execute with nvidia docker (https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0))
# docker run -it --runtime=nvidia meshroom
ENV MESHROOM_DEV=/opt/Meshroom \
MESHROOM_BUILD=/tmp/Meshroom_build \
MESHROOM_BUNDLE=/opt/Meshroom_bundle \
QT_DIR=/opt/Qt5.14.1/5.14.1/gcc_64 \
QT_CI_LOGIN=alicevisionjunk@gmail.com \
QT_CI_PASSWORD=azerty1.
WORKDIR ${MESHROOM_BUNDLE}
RUN mv "${AV_BUNDLE}" "${MESHROOM_BUNDLE}/aliceVision" && \
rm -rf ${MESHROOM_BUNDLE}/aliceVision/share/doc \
${MESHROOM_BUNDLE}/aliceVision/share/eigen3 \
${MESHROOM_BUNDLE}/aliceVision/share/fonts \
${MESHROOM_BUNDLE}/aliceVision/share/lemon \
${MESHROOM_BUNDLE}/aliceVision/share/libraw \
${MESHROOM_BUNDLE}/aliceVision/share/man \
${MESHROOM_BUNDLE}/aliceVision/share/pkgconfig
# Install libs needed by Qt
RUN yum install -y \
flex \
fontconfig \
freetype \
glib2 \
libICE \
libX11 \
libxcb \
libXext \
libXi \
libXrender \
libSM \
libXt-devel \
libGLU-devel \
mesa-libOSMesa-devel \
mesa-libGL-devel \
mesa-libGLU-devel \
xcb-util-keysyms \
xcb-util-image \
libxkbcommon-x11
# Install Python2
RUN yum install -y python-devel && \
curl https://bootstrap.pypa.io/get-pip.py -o /tmp/get-pip.py && \
python /tmp/get-pip.py && \
pip install --upgrade pip
COPY ./*requirements.txt ${MESHROOM_DEV}/
# Install Meshroom requirements and freeze bundle
WORKDIR "${MESHROOM_DEV}"
RUN pip install -r dev_requirements.txt -r requirements.txt
# Install Qt (to build plugins)
WORKDIR /tmp/qt
COPY dl/qt.run /tmp/qt
COPY ./docker/qt-installer-noninteractive.qs ${MESHROOM_DEV}/docker/
RUN chmod +x qt.run && \
./qt.run --verbose --platform minimal --script "${MESHROOM_DEV}/docker/qt-installer-noninteractive.qs" && \
rm qt.run


@@ -0,0 +1,72 @@
ARG MESHROOM_VERSION
ARG AV_VERSION
ARG CUDA_VERSION
ARG CENTOS_VERSION
FROM alicevision/meshroom-deps:${MESHROOM_VERSION}-av${AV_VERSION}-centos${CENTOS_VERSION}-cuda${CUDA_VERSION}-py2
LABEL maintainer="AliceVision Team alicevision-team@googlegroups.com"
# Execute with nvidia docker (https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0))
# docker run -it --runtime nvidia -p 2222:22 --name meshroom -v</path/to/your/data>:/data alicevision/meshroom:develop-av2.2.8.develop-ubuntu20.04-cuda11.0
# ssh -p 2222 -X root@<docker host> /opt/Meshroom_bundle/Meshroom # Password is 'meshroom'
ENV MESHROOM_DEV=/opt/Meshroom \
MESHROOM_BUILD=/tmp/Meshroom_build \
MESHROOM_BUNDLE=/opt/Meshroom_bundle \
AV_INSTALL=/opt/AliceVision_install \
QT_DIR=/opt/Qt5.14.1/5.14.1/gcc_64 \
PATH="${PATH}:${MESHROOM_BUNDLE}" \
OPENIMAGEIO_LIBRARY=/opt/AliceVision_install/lib
COPY *.txt *.md *.py ${MESHROOM_DEV}/
COPY ./docs ${MESHROOM_DEV}/docs
COPY ./meshroom ${MESHROOM_DEV}/meshroom
COPY ./tests ${MESHROOM_DEV}/tests
COPY ./bin ${MESHROOM_DEV}/bin
WORKDIR ${MESHROOM_DEV}
RUN python setup.py install_exe -d "${MESHROOM_BUNDLE}" && \
find ${MESHROOM_BUNDLE} -name "*Qt5Web*" -delete && \
find ${MESHROOM_BUNDLE} -name "*Qt5Designer*" -delete && \
rm -rf ${MESHROOM_BUNDLE}/lib/PySide2/typesystems/ \
${MESHROOM_BUNDLE}/lib/PySide2/examples/ \
${MESHROOM_BUNDLE}/lib/PySide2/include/ \
${MESHROOM_BUNDLE}/lib/PySide2/Qt/translations/ \
${MESHROOM_BUNDLE}/lib/PySide2/Qt/resources/ \
${MESHROOM_BUNDLE}/lib/PySide2/QtWeb* \
${MESHROOM_BUNDLE}/lib/PySide2/pyside2-lupdate \
${MESHROOM_BUNDLE}/lib/PySide2/rcc \
${MESHROOM_BUNDLE}/lib/PySide2/designer
WORKDIR ${MESHROOM_BUILD}
# Build Meshroom plugins
RUN cmake "${MESHROOM_DEV}" -DALICEVISION_ROOT="${AV_INSTALL}" -DCMAKE_INSTALL_PREFIX="${MESHROOM_BUNDLE}/qtPlugins"
RUN make "-j$(nproc)" qtOIIO
RUN make "-j$(nproc)" qmlAlembic
RUN make "-j$(nproc)" qtAliceVision
RUN make "-j$(nproc)" && \
rm -rf "${MESHROOM_BUILD}" "${MESHROOM_DEV}" \
${MESHROOM_BUNDLE}/aliceVision/share/doc \
${MESHROOM_BUNDLE}/aliceVision/share/eigen3 \
${MESHROOM_BUNDLE}/aliceVision/share/fonts \
${MESHROOM_BUNDLE}/aliceVision/share/lemon \
${MESHROOM_BUNDLE}/aliceVision/share/libraw \
${MESHROOM_BUNDLE}/aliceVision/share/man \
${MESHROOM_BUNDLE}/aliceVision/share/pkgconfig
# Enable SSH X11 forwarding, needed when the Docker image
# is run on a remote machine
RUN yum -y install openssh-server xauth mesa-dri-drivers && \
systemctl enable sshd && \
mkdir -p /run/sshd
RUN sed -i "s/^.*X11Forwarding.*$/X11Forwarding yes/; s/^.*X11UseLocalhost.*$/X11UseLocalhost no/; s/^.*PermitRootLogin prohibit-password/PermitRootLogin yes/; s/^.*X11UseLocalhost.*/X11UseLocalhost no/;" /etc/ssh/sshd_config
RUN echo "root:meshroom" | chpasswd
WORKDIR /root
EXPOSE 22
CMD bash -c "test -s /etc/machine-id || systemd-machine-id-setup; sshd-keygen; /usr/sbin/sshd -D"

docker/Dockerfile_ubuntu (new file, +71)

@@ -0,0 +1,71 @@
ARG MESHROOM_VERSION
ARG AV_VERSION
ARG CUDA_VERSION
ARG UBUNTU_VERSION
FROM alicevision/meshroom-deps:${MESHROOM_VERSION}-av${AV_VERSION}-ubuntu${UBUNTU_VERSION}-cuda${CUDA_VERSION}
LABEL maintainer="AliceVision Team alicevision-team@googlegroups.com"
# Execute with nvidia docker (https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0))
# docker run -it --runtime nvidia -p 2222:22 --name meshroom -v</path/to/your/data>:/data alicevision/meshroom:develop-av2.2.8.develop-ubuntu20.04-cuda11.0
# ssh -p 2222 -X root@<docker host> /opt/Meshroom_bundle/Meshroom # Password is 'meshroom'
ENV MESHROOM_DEV=/opt/Meshroom \
MESHROOM_BUILD=/tmp/Meshroom_build \
MESHROOM_BUNDLE=/opt/Meshroom_bundle \
AV_INSTALL=/opt/AliceVision_install \
QT_DIR=/opt/Qt5.14.1/5.14.1/gcc_64 \
PATH="${PATH}:${MESHROOM_BUNDLE}" \
OPENIMAGEIO_LIBRARY=/opt/AliceVision_install/lib
COPY *.txt *.md *.py ${MESHROOM_DEV}/
COPY ./docs ${MESHROOM_DEV}/docs
COPY ./meshroom ${MESHROOM_DEV}/meshroom
COPY ./tests ${MESHROOM_DEV}/tests
COPY ./bin ${MESHROOM_DEV}/bin
WORKDIR ${MESHROOM_DEV}
RUN python3 setup.py install_exe -d "${MESHROOM_BUNDLE}" && \
find ${MESHROOM_BUNDLE} -name "*Qt5Web*" -delete && \
find ${MESHROOM_BUNDLE} -name "*Qt5Designer*" -delete && \
rm -rf ${MESHROOM_BUNDLE}/lib/PySide2/typesystems/ \
${MESHROOM_BUNDLE}/lib/PySide2/examples/ \
${MESHROOM_BUNDLE}/lib/PySide2/include/ \
${MESHROOM_BUNDLE}/lib/PySide2/Qt/translations/ \
${MESHROOM_BUNDLE}/lib/PySide2/Qt/resources/ \
${MESHROOM_BUNDLE}/lib/PySide2/QtWeb* \
${MESHROOM_BUNDLE}/lib/PySide2/pyside2-lupdate \
${MESHROOM_BUNDLE}/lib/PySide2/rcc \
${MESHROOM_BUNDLE}/lib/PySide2/designer
WORKDIR ${MESHROOM_BUILD}
# Build Meshroom plugins
RUN cmake "${MESHROOM_DEV}" -DALICEVISION_ROOT="${AV_INSTALL}" -DCMAKE_INSTALL_PREFIX="${MESHROOM_BUNDLE}/qtPlugins"
RUN make "-j$(nproc)" qtOIIO
RUN make "-j$(nproc)" qmlAlembic
RUN make "-j$(nproc)" qtAliceVision
RUN make "-j$(nproc)" && \
rm -rf "${MESHROOM_BUILD}" "${MESHROOM_DEV}" \
${MESHROOM_BUNDLE}/aliceVision/share/doc \
${MESHROOM_BUNDLE}/aliceVision/share/eigen3 \
${MESHROOM_BUNDLE}/aliceVision/share/fonts \
${MESHROOM_BUNDLE}/aliceVision/share/lemon \
${MESHROOM_BUNDLE}/aliceVision/share/libraw \
${MESHROOM_BUNDLE}/aliceVision/share/man/ \
aliceVision/share/pkgconfig
# Enable SSH X11 forwarding, needed when the Docker image
# is run on a remote machine
RUN apt install -y ssh xauth && \
systemctl enable ssh && \
mkdir -p /run/sshd
RUN sed -i "s/^.*X11Forwarding.*$/X11Forwarding yes/; s/^.*X11UseLocalhost.*$/X11UseLocalhost no/; s/^.*PermitRootLogin prohibit-password/PermitRootLogin yes/; s/^.*X11UseLocalhost.*/X11UseLocalhost no/;" /etc/ssh/sshd_config
RUN echo "root:meshroom" | chpasswd
WORKDIR /root
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]


@ -0,0 +1,75 @@
ARG AV_VERSION
ARG CUDA_VERSION
ARG UBUNTU_VERSION
FROM alicevision/alicevision:${AV_VERSION}-ubuntu${UBUNTU_VERSION}-cuda${CUDA_VERSION}
LABEL maintainer="AliceVision Team alicevision-team@googlegroups.com"
# Execute with nvidia docker (https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0))
# docker run -it --runtime=nvidia meshroom
ENV MESHROOM_DEV=/opt/Meshroom \
MESHROOM_BUILD=/tmp/Meshroom_build \
QT_DIR=/opt/Qt5.14.1/5.14.1/gcc_64 \
QT_CI_LOGIN=alicevisionjunk@gmail.com \
QT_CI_PASSWORD=azerty1.
# Workaround for qmlAlembic/qtAliceVision builds: fuse lib/lib64 folders
#RUN ln -s ${AV_INSTALL}/lib ${AV_INSTALL}/lib64
# Install libs needed by Qt
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
flex \
fontconfig \
libfreetype6 \
libglib2.0-0 \
libice6 \
libx11-6 \
libxcb1 \
libxext6 \
libxi6 \
libxrender1 \
libsm6 \
libxt-dev \
libglu-dev \
libosmesa-dev \
libgl-dev \
libglu-dev \
libqt5charts5-dev \
libxcb-keysyms1 \
libxcb-image0 \
libxkbcommon-x11-0 \
libz-dev \
systemd \
ssh
# Disabled as qtOIIO requires at least Qt 5.13 (only 5.12 is available in Ubuntu 20.04)
# qtdeclarative5-dev \
# qt3d-assimpsceneimport-plugin \
# qt3d-defaultgeometryloader-plugin \
# qt3d-gltfsceneio-plugin \
# qt3d-scene2d-plugin \
# qt53dextras5 \
# qt3d5-dev
RUN apt-get install -y --no-install-recommends \
software-properties-common
# Install Python3
RUN apt install python3-pip -y && pip3 install --upgrade pip
# Install Qt (to build plugins)
WORKDIR /tmp/qt
COPY dl/qt.run /tmp/qt
COPY ./docker/qt-installer-noninteractive.qs ${MESHROOM_DEV}/docker/
RUN chmod +x qt.run && \
./qt.run --verbose --platform minimal --script "${MESHROOM_DEV}/docker/qt-installer-noninteractive.qs" && \
rm qt.run
COPY ./*requirements.txt ./setup.py ${MESHROOM_DEV}/
# Install Meshroom requirements and freeze bundle
WORKDIR "${MESHROOM_DEV}"
RUN pip install -r dev_requirements.txt -r requirements.txt

docker/build-all.sh Executable file

@ -0,0 +1,16 @@
#!/bin/sh
set -e
test -d docker || (
echo This script must be run from the top level Meshroom directory
exit 1
)
CUDA_VERSION=11.0 UBUNTU_VERSION=20.04 docker/build-ubuntu.sh
CUDA_VERSION=11.0 UBUNTU_VERSION=18.04 docker/build-ubuntu.sh
CUDA_VERSION=10.2 UBUNTU_VERSION=18.04 docker/build-ubuntu.sh
CUDA_VERSION=9.2 UBUNTU_VERSION=18.04 docker/build-ubuntu.sh
CUDA_VERSION=10.2 CENTOS_VERSION=7 docker/build-centos.sh
CUDA_VERSION=9.2 CENTOS_VERSION=7 docker/build-centos.sh

docker/build-centos.sh Executable file

@ -0,0 +1,42 @@
#!/bin/bash
set -ex
test -z "$MESHROOM_VERSION" && MESHROOM_VERSION="$(git rev-parse --abbrev-ref HEAD)-$(git rev-parse --short HEAD)"
test -z "$AV_VERSION" && echo "AliceVision version not specified, set AV_VERSION in the environment" && exit 1
test -z "$CUDA_VERSION" && CUDA_VERSION="10.2"
test -z "$CENTOS_VERSION" && CENTOS_VERSION="7"
test -z "$MESHROOM_PYTHON2" || echo "========== Build for Python 2 =========="
test -z "$MESHROOM_PYTHON2" || export PYTHON2_DOCKER_EXT="-py2"
test -z "$MESHROOM_PYTHON2" || export PYTHON2_DOCKERFILE_EXT="_py2"
test -z "$MESHROOM_PYTHON2" && echo "========== Build for Python 3 =========="
test -d docker || (
echo This script must be run from the top level Meshroom directory
exit 1
)
test -d dl || \
mkdir dl
test -f dl/qt.run || \
wget "https://download.qt.io/archive/qt/5.14/5.14.1/qt-opensource-linux-x64-5.14.1.run" -O "dl/qt.run"
# DEPENDENCIES
docker build \
--rm \
--build-arg "CUDA_VERSION=${CUDA_VERSION}" \
--build-arg "CENTOS_VERSION=${CENTOS_VERSION}" \
--build-arg "AV_VERSION=${AV_VERSION}" \
--tag "alicevision/meshroom-deps:${MESHROOM_VERSION}-av${AV_VERSION}-centos${CENTOS_VERSION}-cuda${CUDA_VERSION}${PYTHON2_DOCKER_EXT}" \
-f docker/Dockerfile_centos_deps${PYTHON2_DOCKERFILE_EXT} .
# Meshroom
docker build \
--rm \
--build-arg "MESHROOM_VERSION=${MESHROOM_VERSION}" \
--build-arg "CUDA_VERSION=${CUDA_VERSION}" \
--build-arg "CENTOS_VERSION=${CENTOS_VERSION}" \
--build-arg "AV_VERSION=${AV_VERSION}" \
--tag "alicevision/meshroom:${MESHROOM_VERSION}-av${AV_VERSION}-centos${CENTOS_VERSION}-cuda${CUDA_VERSION}${PYTHON2_DOCKER_EXT}" \
-f docker/Dockerfile_centos${PYTHON2_DOCKERFILE_EXT} .

docker/build-ubuntu.sh Executable file

@ -0,0 +1,37 @@
#!/bin/bash
set -e
test -z "$MESHROOM_VERSION" && MESHROOM_VERSION="$(git rev-parse --abbrev-ref HEAD)-$(git rev-parse --short HEAD)"
test -z "$AV_VERSION" && echo "AliceVision version not specified, set AV_VERSION in the environment" && exit 1
test -z "$CUDA_VERSION" && CUDA_VERSION=11.0
test -z "$UBUNTU_VERSION" && UBUNTU_VERSION=20.04
test -d docker || (
echo This script must be run from the top level Meshroom directory
exit 1
)
test -d dl || \
mkdir dl
test -f dl/qt.run || \
wget "https://download.qt.io/archive/qt/5.14/5.14.1/qt-opensource-linux-x64-5.14.1.run" -O "dl/qt.run"
# DEPENDENCIES
docker build \
--rm \
--build-arg "CUDA_VERSION=${CUDA_VERSION}" \
--build-arg "UBUNTU_VERSION=${UBUNTU_VERSION}" \
--build-arg "AV_VERSION=${AV_VERSION}" \
--tag "alicevision/meshroom-deps:${MESHROOM_VERSION}-av${AV_VERSION}-ubuntu${UBUNTU_VERSION}-cuda${CUDA_VERSION}" \
-f docker/Dockerfile_ubuntu_deps .
# Meshroom
docker build \
--rm \
--build-arg "MESHROOM_VERSION=${MESHROOM_VERSION}" \
--build-arg "CUDA_VERSION=${CUDA_VERSION}" \
--build-arg "UBUNTU_VERSION=${UBUNTU_VERSION}" \
--build-arg "AV_VERSION=${AV_VERSION}" \
--tag "alicevision/meshroom:${MESHROOM_VERSION}-av${AV_VERSION}-ubuntu${UBUNTU_VERSION}-cuda${CUDA_VERSION}" \
-f docker/Dockerfile_ubuntu .
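As a sanity check, the image tag composed by the script above can be reproduced in isolation (version values below are illustrative; the script derives `MESHROOM_VERSION` from git when it is unset):

```shell
# Illustrative values only
MESHROOM_VERSION="2020.1.1"
AV_VERSION="2.2.10"
UBUNTU_VERSION="20.04"
CUDA_VERSION="11.0"
# Tag layout used by both docker build invocations above
TAG="alicevision/meshroom:${MESHROOM_VERSION}-av${AV_VERSION}-ubuntu${UBUNTU_VERSION}-cuda${CUDA_VERSION}"
echo "${TAG}"
```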

docker/extract.sh Executable file

@ -0,0 +1,28 @@
#!/bin/bash
set -ex
test -z "$AV_VERSION" && AV_VERSION="2.2.10.hdri"
test -z "$MESHROOM_VERSION" && MESHROOM_VERSION="2020.0.1.hdri"
test -z "$MESHROOM_VERSION" && MESHROOM_VERSION="$(git rev-parse --abbrev-ref HEAD)-$(git rev-parse --short HEAD)"
test -z "$AV_VERSION" && echo "AliceVision version not specified, set AV_VERSION in the environment" && exit 1
test -z "$CUDA_VERSION" && CUDA_VERSION="10.2"
test -z "$CENTOS_VERSION" && CENTOS_VERSION="7"
test -z "$MESHROOM_PYTHON2" || echo "========== Build for Python 2 =========="
test -z "$MESHROOM_PYTHON2" || export PYTHON2_DOCKER_EXT="-py2"
test -z "$MESHROOM_PYTHON2" || export PYTHON2_DOCKERFILE_EXT="_py2"
test -z "$MESHROOM_PYTHON2" && echo "========== Build for Python 3 =========="
test -d docker || (
echo This script must be run from the top level Meshroom directory
exit 1
)
VERSION_NAME=${MESHROOM_VERSION}-av${AV_VERSION}-centos${CENTOS_VERSION}-cuda${CUDA_VERSION}${PYTHON2_DOCKER_EXT}
# Retrieve the Meshroom bundle folder
rm -rf ./Meshroom-${VERSION_NAME}
CID=$(docker create alicevision/meshroom:${VERSION_NAME})
docker cp ${CID}:/opt/Meshroom_bundle ./Meshroom-${VERSION_NAME}
docker rm ${CID}


@ -28,6 +28,8 @@ Controller.prototype.ComponentSelectionPageCallback = function() {
var widget = gui.currentPageWidget();
widget.deselectAll();
widget.selectComponent("qt.qt5.5141.gcc_64");
widget.selectComponent("qt.qt5.5141.qtcharts");
widget.selectComponent("qt.qt5.5141.qtcharts.gcc_64");
gui.clickButton(buttons.NextButton);
}
Controller.prototype.IntroductionPageCallback = function() {


@ -1,4 +1,4 @@
__version__ = "2020.1.1"
__version_name__ = __version__
from distutils import util


@ -1204,3 +1204,4 @@ def loadGraph(filepath):
graph.load(filepath)
graph.update()
return graph


@ -10,6 +10,7 @@ import platform
import re
import shutil
import time
import types
import uuid
from collections import defaultdict, namedtuple
from enum import Enum
@ -540,6 +541,7 @@ class BaseNode(BaseObject):
def getAttributes(self):
return self._attributes
@Slot(str, result=bool)
def hasAttribute(self, name):
return name in self._attributes.keys()
@ -607,15 +609,15 @@ class BaseNode(BaseObject):
def _buildCmdVars(self):
def _buildAttributeCmdVars(cmdVars, name, attr):
if attr.enabled:
group = attr.attributeDesc.group(attr.node) if isinstance(attr.attributeDesc.group, types.FunctionType) else attr.attributeDesc.group
if group is not None:
# if there is a valid command line "group"
v = attr.getValueStr()
cmdVars[name] = '--{name} {value}'.format(name=name, value=v)
cmdVars[name + 'Value'] = str(v)
if v:
cmdVars[group] = cmdVars.get(group, '') + ' ' + cmdVars[name]
elif isinstance(attr, GroupAttribute):
assert isinstance(attr.value, DictModel)
# if the GroupAttribute is not set in a single command line argument,

@ -44,7 +44,7 @@ class ComputerStatistics:
self.gpuMemoryTotal = 0
self.gpuName = ''
self.curves = defaultdict(list)
self.nvidia_smi = None
self._isInit = False
def initOnFirstTime(self):
@ -53,40 +53,21 @@ class ComputerStatistics:
self._isInit = True
self.cpuFreq = psutil.cpu_freq().max
self.ramTotal = psutil.virtual_memory().total / (1024*1024*1024)
if platform.system() == "Windows":
from distutils import spawn
# If the platform is Windows and nvidia-smi
self.nvidia_smi = spawn.find_executable('nvidia-smi')
if self.nvidia_smi is None:
# could not be found from the environment path,
# try to find it from system drive with default installation path
default_nvidia_smi = "%s\\Program Files\\NVIDIA Corporation\\NVSMI\\nvidia-smi.exe" % os.environ['systemdrive']
if os.path.isfile(default_nvidia_smi):
self.nvidia_smi = default_nvidia_smi
else:
self.nvidia_smi = "nvidia-smi"
def _addKV(self, k, v):
if isinstance(v, tuple):
for ki, vi in v._asdict().items():
@ -98,18 +79,23 @@ class ComputerStatistics:
self.curves[k].append(v)
def update(self):
try:
self.initOnFirstTime()
self._addKV('cpuUsage', psutil.cpu_percent(percpu=True))  # interval=None => non-blocking (percentage since last call)
self._addKV('ramUsage', psutil.virtual_memory().percent)
self._addKV('swapUsage', psutil.swap_memory().percent)
self._addKV('vramUsage', 0)
self._addKV('ioCounters', psutil.disk_io_counters())
self.updateGpu()
except Exception as e:
logging.debug('Failed to get statistics: "{}".'.format(str(e)))
def updateGpu(self):
if not self.nvidia_smi:
return
try:
p = subprocess.Popen([self.nvidia_smi, "-q", "-x"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
xmlGpu, stdError = p.communicate(timeout=10)  # 10 seconds
smiTree = ET.fromstring(xmlGpu)
gpuTree = smiTree.find('gpu')
@ -129,7 +115,11 @@ class ComputerStatistics:
except Exception as e:
logging.debug('Failed to get gpuTemperature: "{}".'.format(str(e)))
pass
except subprocess.TimeoutExpired as e:
logging.debug('Timeout when retrieving information from nvidia_smi: "{}".'.format(str(e)))
p.kill()
outs, errs = p.communicate()
return
except Exception as e:
logging.debug('Failed to get information from nvidia_smi: "{}".'.format(str(e)))
return
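The timeout handling above follows the standard `subprocess` pattern: catch `TimeoutExpired` from `communicate(timeout=...)`, kill the child, then call `communicate()` once more to drain the pipes and reap the process. A self-contained sketch of the same pattern (function name is illustrative):

```python
import subprocess

def query_with_timeout(cmd, timeout=10):
    """Run cmd, returning stdout bytes, or None if it exceeds the timeout."""
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    try:
        out, err = p.communicate(timeout=timeout)
        return out
    except subprocess.TimeoutExpired:
        p.kill()
        p.communicate()  # drain the pipes and reap the killed child
        return None
```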
@ -201,15 +191,19 @@ class ProcStatistics:
data = proc.as_dict(self.dynamicKeys)
for k, v in data.items():
self._addKV(k, v)
## Note: Do not collect stats about open files for now,
# as there is a bug in psutil-5.7.2 on Windows which crashes the application.
# https://github.com/giampaolo/psutil/issues/1763
#
# files = [f.path for f in proc.open_files()]
# if self.lastIterIndexWithFiles != -1:
#     if set(files) != set(self.openFiles[self.lastIterIndexWithFiles]):
#         self.openFiles[self.iterIndex] = files
#         self.lastIterIndexWithFiles = self.iterIndex
# elif files:
#     self.openFiles[self.iterIndex] = files
#     self.lastIterIndexWithFiles = self.iterIndex
self.iterIndex += 1
def toDict(self):
@ -234,7 +228,7 @@ class Statistics:
self.computer = ComputerStatistics()
self.process = ProcStatistics()
self.times = []
self.interval = 10  # refresh interval in seconds
def update(self, proc):
'''


@ -143,9 +143,9 @@ def findFilesByTypeInFolder(folder, recursive=False):
return output
def panoramaHdr(inputImages=None, inputViewpoints=None, inputIntrinsics=None, output='', graph=None):
"""
Create a new Graph with a Panorama HDR pipeline.
Args:
inputImages (list of str, optional): list of image file paths
@ -156,9 +156,9 @@ def hdri(inputImages=None, inputViewpoints=None, inputIntrinsics=None, output=''
Graph: the created graph
"""
if not graph:
graph = Graph('PanoramaHDR')
with GraphModification(graph):
nodes = panoramaHdrPipeline(graph)
cameraInit = nodes[0]
if inputImages:
cameraInit.viewpoints.extend([{'path': image} for image in inputImages])
@ -173,18 +173,22 @@ def hdri(inputImages=None, inputViewpoints=None, inputIntrinsics=None, output=''
return graph
def panoramaFisheyeHdr(inputImages=None, inputViewpoints=None, inputIntrinsics=None, output='', graph=None):
if not graph:
graph = Graph('PanoramaFisheyeHDR')
with GraphModification(graph):
panoramaHdr(inputImages, inputViewpoints, inputIntrinsics, output, graph)
for panoramaInit in graph.nodesByType("PanoramaInit"):
panoramaInit.attribute("useFisheye").value = True
# when using fisheye images, the overlap between images can be small
# and thus requires many features to get enough correspondences for camera estimation
for featureExtraction in graph.nodesByType("FeatureExtraction"):
featureExtraction.attribute("describerPreset").value = 'high'
return graph
def panoramaHdrPipeline(graph):
"""
Instantiate a PanoramaHDR pipeline inside 'graph'.
Args:
graph (Graph/UIGraph): the graph in which nodes should be instantiated
@ -214,7 +218,7 @@ def hdriPipeline(graph):
featureExtraction = graph.addNewNode('FeatureExtraction',
input=ldr2hdrMerge.outSfMData,
describerQuality='high')
panoramaInit = graph.addNewNode('PanoramaInit',
input=featureExtraction.input,
@ -249,6 +253,7 @@ def hdriPipeline(graph):
imageProcessing = graph.addNewNode('ImageProcessing',
input=panoramaCompositing.output,
fixNonFinite=True,
fillHoles=True,
extension='exr')


@ -188,6 +188,13 @@ The metadata needed are:
joinChar=',',
advanced=True,
),
desc.BoolParam(
name='useInternalWhiteBalance',
label='Apply internal white balance',
description='Apply image white balance (only for raw images).',
value=True,
uid=[0],
),
desc.ChoiceParam(
name='viewIdMethod',
label='ViewId Method',


@ -41,7 +41,7 @@ class CameraLocalization(desc.CommandLineNode):
label='Match Desc Types',
description='''Describer types to use for the matching.''',
value=['sift'],
values=['sift', 'sift_float', 'sift_upright', 'dspsift', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4', 'sift_ocv', 'akaze_ocv'],
exclusive=False,
uid=[0],
joinChar=',',


@ -48,7 +48,7 @@ class CameraRigCalibration(desc.CommandLineNode):
label='Match Describer Types',
description='''The describer types to use for the matching''',
value=['sift'],
values=['sift', 'sift_float', 'sift_upright', 'dspsift', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4', 'sift_ocv', 'akaze_ocv'],
exclusive=False,
uid=[0],
joinChar=',',


@ -35,7 +35,7 @@ It can also be used to remove specific parts of from an SfM scene (like filter a
label='Describer Types',
description='Describer types to keep.',
value=['sift'],
values=['sift', 'sift_float', 'sift_upright', 'dspsift', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4', 'sift_ocv', 'akaze_ocv', 'unknown'],
exclusive=False,
uid=[0],
joinChar=',',


@ -20,7 +20,7 @@ class ExportMatches(desc.CommandLineNode):
label='Describer Types',
description='Describer types used to describe an image.',
value=['sift'],
values=['sift', 'sift_float', 'sift_upright', 'dspsift', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4', 'sift_ocv', 'akaze_ocv'],
exclusive=False,
uid=[0],
joinChar=',',


@ -42,20 +42,76 @@ It is robust to motion-blur, depth-of-field, occlusion. Be careful to have enoug
label='Describer Types',
description='Describer types used to describe an image.',
value=['sift'],
values=['sift', 'sift_float', 'sift_upright', 'dspsift', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4', 'sift_ocv', 'akaze_ocv'],
exclusive=False,
uid=[0],
joinChar=',',
),
desc.ChoiceParam(
name='describerPreset',
label='Describer Density',
description='Control the ImageDescriber density (low, medium, normal, high, ultra).\n'
'Warning: Use ULTRA only on small datasets.',
value='normal',
values=['low', 'medium', 'normal', 'high', 'ultra', 'custom'],
exclusive=True,
uid=[0],
group=lambda node: 'allParams' if node.describerPreset.value != 'custom' else None,
),
desc.IntParam(
name='maxNbFeatures',
label='Max Nb Features',
description='Max number of features extracted (0 means default value based on Describer Density).',
value=0,
range=(0, 100000, 1000),
uid=[0],
advanced=True,
enabled=lambda node: (node.describerPreset.value == 'custom'),
),
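Parameters such as `maxNbFeatures` above are gated with `enabled=lambda node: ...`, so their availability follows the current preset. A minimal sketch of how such a predicate could be evaluated (the `Attr` and `Node` classes here are hypothetical stand-ins, not Meshroom's actual API):

```python
class Attr:
    def __init__(self, value, enabled=True):
        self.value = value
        self.enabled = enabled  # plain bool, or a callable taking the node

def is_enabled(attr, node):
    # Evaluate the predicate lazily against the current node state
    return attr.enabled(node) if callable(attr.enabled) else attr.enabled

class Node:  # hypothetical container of attributes
    def __init__(self):
        self.describerPreset = Attr('custom')
        self.maxNbFeatures = Attr(0, enabled=lambda node: node.describerPreset.value == 'custom')
```

Changing `describerPreset` away from `'custom'` immediately disables `maxNbFeatures` on the next evaluation.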
desc.ChoiceParam(
name='describerQuality',
label='Describer Quality',
description='Control the ImageDescriber quality (low, medium, normal, high, ultra).',
value='normal',
values=['low', 'medium', 'normal', 'high', 'ultra'],
exclusive=True,
uid=[0],
),
desc.ChoiceParam(
name='contrastFiltering',
label='Contrast Filtering',
description="Contrast filtering method to ignore features with too low contrast that can be considered as noise:\n"
"* Static: Fixed threshold.\n"
"* AdaptiveToMedianVariance: Based on image content analysis.\n"
"* NoFiltering: Disable contrast filtering.\n"
"* GridSortOctaves: Grid Sort but per octaves (and only per scale at the end).\n"
"* GridSort: Grid sort per octaves and at the end (scale * peakValue).\n"
"* GridSortScaleSteps: Grid sort per octaves and at the end (scale and then peakValue).\n"
"* NonExtremaFiltering: Filter non-extrema peakValues.\n",
value='GridSort',
values=['Static', 'AdaptiveToMedianVariance', 'NoFiltering', 'GridSortOctaves', 'GridSort', 'GridSortScaleSteps', 'GridSortOctaveSteps', 'NonExtremaFiltering'],
exclusive=True,
advanced=True,
uid=[0],
),
desc.FloatParam(
name='relativePeakThreshold',
label='Relative Peak Threshold',
description='Peak Threshold relative to median of gradients.',
value=0.01,
range=(0.01, 1.0, 0.001),
advanced=True,
uid=[0],
enabled=lambda node: (node.contrastFiltering.value == 'AdaptiveToMedianVariance'),
),
desc.BoolParam(
name='gridFiltering',
label='Grid Filtering',
description='Enable grid filtering. Highly recommended to ensure usable number of features.',
value=True,
advanced=True,
uid=[0],
),
desc.BoolParam(
name='forceCpuExtraction',
label='Force CPU Extraction',


@ -63,7 +63,7 @@ then it checks the number of features that validates this model and iterate thro
label='Describer Types',
description='Describer types used to describe an image.',
value=['sift'],
values=['sift', 'sift_float', 'sift_upright', 'dspsift', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4', 'sift_ocv', 'akaze_ocv'],
exclusive=False,
uid=[0],
joinChar=',',


@ -0,0 +1,131 @@
__version__ = "1.1"
from meshroom.core import desc
class FeatureRepeatability(desc.CommandLineNode):
commandLine = 'aliceVision_samples_repeatabilityDataset {allParams}'
size = desc.DynamicNodeSize('input')
# parallelization = desc.Parallelization(blockSize=40)
# commandLineRange = '--rangeStart {rangeStart} --rangeSize {rangeBlockSize}'
documentation = '''
'''
inputs = [
desc.File(
name='input',
label='Input Folder',
description='Input Folder with evaluation datasets.',
value='',
uid=[0],
),
desc.ChoiceParam(
name='describerTypes',
label='Describer Types',
description='Describer types used to describe an image.',
value=['sift'],
values=['sift', 'sift_float', 'sift_upright', 'dspsift', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4', 'sift_ocv', 'akaze_ocv'],
exclusive=False,
uid=[0],
joinChar=',',
),
desc.ChoiceParam(
name='describerPreset',
label='Describer Density',
description='Control the ImageDescriber density (low, medium, normal, high, ultra).\n'
'Warning: Use ULTRA only on small datasets.',
value='normal',
values=['low', 'medium', 'normal', 'high', 'ultra'],
exclusive=True,
uid=[0],
),
desc.ChoiceParam(
name='describerQuality',
label='Describer Quality',
description='Control the ImageDescriber quality (low, medium, normal, high, ultra).',
value='normal',
values=['low', 'medium', 'normal', 'high', 'ultra'],
exclusive=True,
uid=[0],
),
desc.ChoiceParam(
name='contrastFiltering',
label='Contrast Filtering',
description="Contrast filtering method to ignore features with too low contrast that can be considered as noise:\n"
"* Static: Fixed threshold.\n"
"* AdaptiveToMedianVariance: Based on image content analysis.\n"
"* NoFiltering: Disable contrast filtering.\n"
"* GridSortOctaves: Grid Sort but per octaves (and only per scale at the end).\n"
"* GridSort: Grid sort per octaves and at the end (scale * peakValue).\n"
"* GridSortScaleSteps: Grid sort per octaves and at the end (scale and then peakValue).\n"
"* NonExtremaFiltering: Filter non-extrema peakValues.\n",
value='Static',
values=['Static', 'AdaptiveToMedianVariance', 'NoFiltering', 'GridSortOctaves', 'GridSort', 'GridSortScaleSteps', 'GridSortOctaveSteps', 'NonExtremaFiltering'],
exclusive=True,
advanced=True,
uid=[0],
),
desc.FloatParam(
name='relativePeakThreshold',
label='Relative Peak Threshold',
description='Peak Threshold relative to median of gradients.',
value=0.01,
range=(0.01, 1.0, 0.001),
advanced=True,
uid=[0],
enabled=lambda node: (node.contrastFiltering.value == 'AdaptiveToMedianVariance'),
),
desc.BoolParam(
name='gridFiltering',
label='Grid Filtering',
description='Enable grid filtering. Highly recommended to ensure usable number of features.',
value=True,
advanced=True,
uid=[0],
),
desc.BoolParam(
name='forceCpuExtraction',
label='Force CPU Extraction',
description='Use only CPU feature extraction.',
value=True,
uid=[],
advanced=True,
),
desc.IntParam(
name='invalidate',
label='Invalidate',
description='Invalidate.',
value=0,
range=(0, 10000, 1),
group="",
uid=[0],
),
desc.StringParam(
name="comments",
label="Comments",
description="Comments",
value="",
group="",
uid=[],
),
desc.ChoiceParam(
name='verboseLevel',
label='Verbose Level',
description='verbosity level (fatal, error, warning, info, debug, trace).',
value='info',
values=['fatal', 'error', 'warning', 'info', 'debug', 'trace'],
exclusive=True,
uid=[],
)
]
outputs = [
desc.File(
name='output',
label='Output Folder',
description='Output path for the features and descriptors files (*.feat, *.desc).',
value=desc.Node.internalFolder,
uid=[],
),
]


@ -52,7 +52,7 @@ It is known to be faster but less robust to challenging datasets than the Increm
label='Describer Types',
description='Describer types used to describe an image.',
value=['sift'],
values=['sift', 'sift_float', 'sift_upright', 'dspsift', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4',
'sift_ocv', 'akaze_ocv'],
exclusive=False,
uid=[0],


@ -86,6 +86,13 @@ Convert or apply filtering to the input images.
value=False,
uid=[0],
),
desc.BoolParam(
name='fixNonFinite',
label='Fix Non-Finite',
description='Fix non-finite pixels based on neighboring pixels average.',
value=False,
uid=[0],
),
desc.BoolParam(
name='exposureCompensation',
label='Exposure Compensation',
@ -119,8 +126,9 @@ Convert or apply filtering to the input images.
),
desc.BoolParam(
name='fillHoles',
label='Fill Holes',
description='Fill holes based on the alpha channel.\n'
'Note: It will enable fixNonFinite, as it is required for the image pyramid construction used to fill holes.',
value=False,
uid=[0],
),
@ -280,6 +288,19 @@ Convert or apply filtering to the input images.
exclusive=True,
uid=[0],
),
desc.ChoiceParam(
name='storageDataType',
label='Storage Data Type for EXR output',
description='Storage image data type:\n'
' * float: Use full floating point (32 bits per channel)\n'
' * half: Use half float (16 bits per channel)\n'
' * halfFinite: Use half float, but clamp values to avoid non-finite values\n'
' * auto: Use half float if all values can fit, else use full float\n',
value='float',
values=['float', 'half', 'halfFinite', 'auto'],
exclusive=True,
uid=[0],
),
desc.ChoiceParam(
name='verboseLevel',
label='Verbose Level',


@ -1,4 +1,4 @@
__version__ = "3.0"
import json
@ -27,6 +27,9 @@ class LdrToHdrCalibration(desc.CommandLineNode):
commandLine = 'aliceVision_LdrToHdrCalibration {allParams}'
size = desc.DynamicNodeSize('input')
cpu = desc.Level.INTENSIVE
ram = desc.Level.NORMAL
documentation = '''
Calibrate LDR to HDR response curve from samples
'''
@ -46,6 +49,15 @@ class LdrToHdrCalibration(desc.CommandLineNode):
value=desc.Node.internalFolder, value=desc.Node.internalFolder,
uid=[0], uid=[0],
), ),
desc.BoolParam(
name='byPass',
label='Bypass',
description="Bypass HDR creation and use the medium bracket as the source for the next steps",
value=False,
uid=[0],
group='internal',
enabled= lambda node: node.nbBrackets.value != 1,
),
desc.ChoiceParam( desc.ChoiceParam(
name='calibrationMethod', name='calibrationMethod',
label='Calibration Method', label='Calibration Method',
@ -59,6 +71,7 @@ class LdrToHdrCalibration(desc.CommandLineNode):
value='debevec', value='debevec',
exclusive=True, exclusive=True,
uid=[0], uid=[0],
enabled= lambda node: node.byPass.enabled and not node.byPass.value,
), ),
desc.ChoiceParam( desc.ChoiceParam(
name='calibrationWeight', name='calibrationWeight',
@ -72,6 +85,7 @@ class LdrToHdrCalibration(desc.CommandLineNode):
values=['default', 'gaussian', 'triangle', 'plateau'], values=['default', 'gaussian', 'triangle', 'plateau'],
exclusive=True, exclusive=True,
uid=[0], uid=[0],
enabled= lambda node: node.byPass.enabled and not node.byPass.value,
), ),
desc.IntParam( desc.IntParam(
name='userNbBrackets', name='userNbBrackets',
@ -79,7 +93,7 @@ class LdrToHdrCalibration(desc.CommandLineNode):
description='Number of exposure brackets per HDR image (0 for automatic detection).', description='Number of exposure brackets per HDR image (0 for automatic detection).',
value=0, value=0,
range=(0, 15, 1), range=(0, 15, 1),
uid=[0], uid=[],
group='user', # not used directly on the command line group='user', # not used directly on the command line
), ),
desc.IntParam( desc.IntParam(
@ -88,7 +102,7 @@ class LdrToHdrCalibration(desc.CommandLineNode):
description='Number of exposure brackets used per HDR image. It is detected automatically from input Viewpoints metadata if "userNbBrackets" is 0, else it is equal to "userNbBrackets".', description='Number of exposure brackets used per HDR image. It is detected automatically from input Viewpoints metadata if "userNbBrackets" is 0, else it is equal to "userNbBrackets".',
value=0, value=0,
range=(0, 10, 1), range=(0, 10, 1),
uid=[], uid=[0],
), ),
desc.IntParam( desc.IntParam(
name='channelQuantizationPower', name='channelQuantizationPower',
@ -98,17 +112,18 @@ class LdrToHdrCalibration(desc.CommandLineNode):
range=(8, 14, 1), range=(8, 14, 1),
uid=[0], uid=[0],
advanced=True, advanced=True,
enabled= lambda node: node.byPass.enabled and not node.byPass.value,
), ),
desc.IntParam( desc.IntParam(
name='maxTotalPoints', name='maxTotalPoints',
label='Max Number of Points', label='Max Number of Points',
description='Max number of points selected by the sampling strategy.\n' description='Max number of points used from the sampling. This ensures that the number of pixels values extracted by the sampling\n'
'This ensures that this sampling step will extract a number of pixels values\n' 'can be managed by the calibration step (in term of computation time and memory usage).',
'that the calibration step can manage (in term of computation time and memory usage).',
value=1000000, value=1000000,
range=(8, 10000000, 1000), range=(8, 10000000, 1000),
uid=[0], uid=[0],
advanced=True, advanced=True,
enabled= lambda node: node.byPass.enabled and not node.byPass.value,
), ),
desc.ChoiceParam( desc.ChoiceParam(
name='verboseLevel', name='verboseLevel',
@ -131,6 +146,11 @@ class LdrToHdrCalibration(desc.CommandLineNode):
) )
] ]
def processChunk(self, chunk):
if chunk.node.nbBrackets.value == 1 or chunk.node.byPass.value:
return
super(LdrToHdrCalibration, self).processChunk(chunk)
@classmethod @classmethod
def update(cls, node): def update(cls, node):
if not isinstance(node.nodeDesc, cls): if not isinstance(node.nodeDesc, cls):

View file

@@ -1,4 +1,4 @@
-__version__ = "3.0"
+__version__ = "4.0"

 import json
@@ -53,7 +53,7 @@ class LdrToHdrMerge(desc.CommandLineNode):
             description='Number of exposure brackets per HDR image (0 for automatic detection).',
             value=0,
             range=(0, 15, 1),
-            uid=[0],
+            uid=[],
             group='user', # not used directly on the command line
         ),
         desc.IntParam(
@@ -62,7 +62,7 @@ class LdrToHdrMerge(desc.CommandLineNode):
             description='Number of exposure brackets used per HDR image. It is detected automatically from input Viewpoints metadata if "userNbBrackets" is 0, else it is equal to "userNbBrackets".',
             value=0,
             range=(0, 10, 1),
-            uid=[],
+            uid=[0],
         ),
         desc.IntParam(
             name='offsetRefBracketIndex',
@@ -123,23 +123,36 @@ class LdrToHdrMerge(desc.CommandLineNode):
             description='This is an arbitrary target value (in Lux) used to replace the unknown luminance value of the saturated pixels.\n'
                         '\n'
                         'Some Outdoor Reference Light Levels:\n'
-                        ' * 120,000 lux : Brightest sunlight\n'
-                        ' * 110,000 lux : Bright sunlight\n'
-                        ' * 20,000 lux : Shade illuminated by entire clear blue sky, midday\n'
-                        ' * 1,000 lux : Typical overcast day, midday\n'
-                        ' * 400 lux : Sunrise or sunset on a clear day\n'
-                        ' * 40 lux : Fully overcast, sunset/sunrise\n'
+                        ' * 120,000 lux: Brightest sunlight\n'
+                        ' * 110,000 lux: Bright sunlight\n'
+                        ' * 20,000 lux: Shade illuminated by entire clear blue sky, midday\n'
+                        ' * 1,000 lux: Typical overcast day, midday\n'
+                        ' * 400 lux: Sunrise or sunset on a clear day\n'
+                        ' * 40 lux: Fully overcast, sunset/sunrise\n'
                         '\n'
                         'Some Indoor Reference Light Levels:\n'
-                        ' * 20000 lux : Max Usually Used Indoor\n'
-                        ' * 750 lux : Supermarkets\n'
-                        ' * 500 lux : Office Work\n'
-                        ' * 150 lux : Home\n',
+                        ' * 20000 lux: Max Usually Used Indoor\n'
+                        ' * 750 lux: Supermarkets\n'
+                        ' * 500 lux: Office Work\n'
+                        ' * 150 lux: Home\n',
             value=120000.0,
             range=(1000.0, 150000.0, 1.0),
             uid=[0],
             enabled= lambda node: node.byPass.enabled and not node.byPass.value and node.highlightCorrectionFactor.value != 0,
         ),
+        desc.ChoiceParam(
+            name='storageDataType',
+            label='Storage Data Type',
+            description='Storage image data type:\n'
+                        ' * float: Use full floating point (32 bits per channel)\n'
+                        ' * half: Use half float (16 bits per channel)\n'
+                        ' * halfFinite: Use half float, but clamp values to avoid non-finite values\n'
+                        ' * auto: Use half float if all values can fit, else use full float\n',
+            value='float',
+            values=['float', 'half', 'halfFinite', 'auto'],
+            exclusive=True,
+            uid=[0],
+        ),
         desc.ChoiceParam(
             name='verboseLevel',
             label='Verbose Level',


@@ -1,4 +1,4 @@
-__version__ = "3.0"
+__version__ = "4.0"

 import json
@@ -63,7 +63,7 @@ class LdrToHdrSampling(desc.CommandLineNode):
             description='Number of exposure brackets per HDR image (0 for automatic detection).',
             value=0,
             range=(0, 15, 1),
-            uid=[0],
+            uid=[],
             group='user', # not used directly on the command line
         ),
         desc.IntParam(
@@ -72,7 +72,7 @@ class LdrToHdrSampling(desc.CommandLineNode):
             description='Number of exposure brackets used per HDR image. It is detected automatically from input Viewpoints metadata if "userNbBrackets" is 0, else it is equal to "userNbBrackets".',
             value=0,
             range=(0, 10, 1),
-            uid=[],
+            uid=[0],
         ),
         desc.BoolParam(
             name='byPass',


@@ -10,6 +10,9 @@ class PanoramaCompositing(desc.CommandLineNode):
     commandLine = 'aliceVision_panoramaCompositing {allParams}'
     size = desc.DynamicNodeSize('input')
+    cpu = desc.Level.INTENSIVE
+    ram = desc.Level.INTENSIVE
+
     documentation = '''
 Once the images have been transformed geometrically (in PanoramaWarping),
 they have to be fused together in a single panorama image which looks like a single photography.
@@ -54,15 +57,36 @@ Multiple cameras are contributing to the low frequencies and only the best one c
             exclusive=True,
             uid=[0]
         ),
+        desc.BoolParam(
+            name='useGraphCut',
+            label='Use Smart Seams',
+            description='Use a graphcut algorithm to optmize seams for better transitions between images.',
+            value=True,
+            uid=[0],
+        ),
+        desc.ChoiceParam(
+            name='storageDataType',
+            label='Storage Data Type',
+            description='Storage image data type:\n'
+                        ' * float: Use full floating point (32 bits per channel)\n'
+                        ' * half: Use half float (16 bits per channel)\n'
+                        ' * halfFinite: Use half float, but clamp values to avoid non-finite values\n'
+                        ' * auto: Use half float if all values can fit, else use full float\n',
+            value='float',
+            values=['float', 'half', 'halfFinite', 'auto'],
+            exclusive=True,
+            uid=[0],
+        ),
         desc.ChoiceParam(
             name='overlayType',
             label='Overlay Type',
             description='Overlay on top of panorama to analyze transitions:\n'
                         ' * none: no overlay\n'
                         ' * borders: display image borders\n'
-                        ' * seams: display transitions between images\n',
+                        ' * seams: display transitions between images\n'
+                        ' * all: display borders and seams\n',
             value='none',
-            values=['none', 'borders', 'seams'],
+            values=['none', 'borders', 'seams', 'all'],
             exclusive=True,
             advanced=True,
             uid=[0]


@@ -51,7 +51,7 @@ Estimate relative camera rotations between input images.
             label='Describer Types',
             description='Describer types used to describe an image.',
             value=['sift'],
-            values=['sift', 'sift_float', 'sift_upright', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4',
+            values=['sift', 'sift_float', 'sift_upright', 'dspsift', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4',
                     'sift_ocv', 'akaze_ocv'],
             exclusive=False,
             uid=[0],


@@ -90,6 +90,24 @@ This node allows to setup the Panorama:
             uid=[0],
             enabled=lambda node: node.useFisheye.value and not node.estimateFisheyeCircle.value,
         ),
+        desc.ChoiceParam(
+            name='inputAngle',
+            label='input Angle offset',
+            description='Add a rotation to the input XML given poses (CCW).',
+            value='None',
+            values=['None', 'rotate90', 'rotate180', 'rotate270'],
+            exclusive=True,
+            uid=[0]
+        ),
+        desc.BoolParam(
+            name='debugFisheyeCircleEstimation',
+            label='Debug Fisheye Circle Detection',
+            description='Debug fisheye circle detection.',
+            value=False,
+            uid=[0],
+            enabled=lambda node: node.useFisheye.value,
+            advanced=True,
+        ),
         desc.ChoiceParam(
             name='verboseLevel',
             label='Verbose Level',


@@ -25,15 +25,59 @@ Compute the image warping for each input image in the panorama coordinate system
             value='',
             uid=[0],
         ),
+        desc.BoolParam(
+            name='estimateResolution',
+            label='Estimate Resolution',
+            description='Estimate output panorama resolution automatically based on the input images resolution.',
+            value=True,
+            uid=[0],
+            group=None, # skip group from command line
+        ),
         desc.IntParam(
             name='panoramaWidth',
             label='Panorama Width',
-            description='Panorama Width (in pixels).\n'
-                        'Set 0 to let the software choose the size automatically, so that on average the input resolution is kept (to limit over/under sampling).',
+            description='Choose the output panorama width (in pixels).',
             value=10000,
             range=(0, 50000, 1000),
+            uid=[0],
+            enabled=lambda node: (not node.estimateResolution.value),
+        ),
+        desc.IntParam(
+            name='percentUpscale',
+            label='Upscale ratio',
+            description='Percentage of upscaled pixels.\n'
+                        '\n'
+                        'How many percent of the pixels will be upscaled (compared to its original resolution):\n'
+                        ' * 0: all pixels will be downscaled\n'
+                        ' * 50: on average the input resolution is kept (optimal to reduce over/under-sampling)\n'
+                        ' * 100: all pixels will be upscaled\n',
+            value=50,
+            range=(0, 100, 1),
+            enabled=lambda node: (node.estimateResolution.value),
             uid=[0]
         ),
+        desc.IntParam(
+            name='maxPanoramaWidth',
+            label='Max Panorama Width',
+            description='Choose the maximal output panorama width (in pixels). Zero means no limit.',
+            value=35000,
+            range=(0, 100000, 1000),
+            uid=[0],
+            enabled=lambda node: (node.estimateResolution.value),
+        ),
+        desc.ChoiceParam(
+            name='storageDataType',
+            label='Storage Data Type',
+            description='Storage image data type:\n'
+                        ' * float: Use full floating point (32 bits per channel)\n'
+                        ' * half: Use half float (16 bits per channel)\n'
+                        ' * halfFinite: Use half float, but clamp values to avoid non-finite values\n'
+                        ' * auto: Use half float if all values can fit, else use full float\n',
+            value='float',
+            values=['float', 'half', 'halfFinite', 'auto'],
+            exclusive=True,
+            uid=[0],
+        ),
         desc.ChoiceParam(
             name='verboseLevel',
             label='Verbose Level',


@@ -34,9 +34,10 @@ This node allows to transfer poses and/or intrinsics form one SfM scene onto ano
             description="Matching Method:\n"
                         " * from_viewid: Align cameras with same view Id\n"
                         " * from_filepath: Align cameras with a filepath matching, using 'fileMatchingPattern'\n"
-                        " * from_metadata: Align cameras with matching metadata, using 'metadataMatchingList'\n",
+                        " * from_metadata: Align cameras with matching metadata, using 'metadataMatchingList'\n"
+                        " * from_intrinsicid: Copy intrinsics parameters\n",
             value='from_viewid',
-            values=['from_viewid', 'from_filepath', 'from_metadata'],
+            values=['from_viewid', 'from_filepath', 'from_metadata', 'from_intrinsicid'],
             exclusive=True,
             uid=[0],
         ),


@@ -112,8 +112,8 @@ The transformation can be based on:
             joinChar=","
         ),
         desc.FloatParam(
             name="manualScale",
             label="Scale",
             description="Uniform Scale.",
             value=1.0,
             uid=[0],
@@ -127,8 +127,8 @@ The transformation can be based on:
             name='landmarksDescriberTypes',
             label='Landmarks Describer Types',
             description='Image describer types used to compute the mean of the point cloud. (only for "landmarks" method).',
-            value=['sift', 'akaze'],
-            values=['sift', 'sift_float', 'sift_upright', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4', 'sift_ocv', 'akaze_ocv'],
+            value=['sift', 'dspsift', 'akaze'],
+            values=['sift', 'sift_float', 'sift_upright', 'dspsift', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4', 'sift_ocv', 'akaze_ocv', 'unknown'],
             exclusive=False,
             uid=[0],
             joinChar=',',


@@ -97,7 +97,7 @@ It iterates like that, adding cameras and triangulating new 2D features into 3D
             label='Describer Types',
             description='Describer types used to describe an image.',
             value=['sift'],
-            values=['sift', 'sift_float', 'sift_upright', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4', 'sift_ocv', 'akaze_ocv'],
+            values=['sift', 'sift_float', 'sift_upright', 'dspsift', 'akaze', 'akaze_liop', 'akaze_mldb', 'cctag3', 'cctag4', 'sift_ocv', 'akaze_ocv'],
             exclusive=False,
             uid=[0],
             joinChar=',',


@@ -56,7 +56,7 @@ Many cameras are contributing to the low frequencies and only the best ones cont
             name='downscale',
             label='Texture Downscale',
             description='''Texture downscale factor''',
-            value=1,
+            value=2,
             values=(1, 2, 4, 8),
             exclusive=True,
             uid=[0],


@@ -2,13 +2,13 @@
     "BASE": ["mikrosRender"],
     "CPU": {
         "NONE": [],
-        "NORMAL": [],
+        "NORMAL": ["@.nCPUs>8"],
         "INTENSIVE": ["@.nCPUs>30"]
     },
     "RAM": {
         "NONE": [],
         "NORMAL": ["@.mem>8"],
-        "INTENSIVE": ["@.mem>30"]
+        "INTENSIVE": ["@.mem>80"]
     },
     "GPU": {
         "NONE": [],


@@ -71,7 +71,7 @@ class MeshroomApp(QApplication):
                             help='Import images to reconstruct from specified folder and sub-folders.')
         parser.add_argument('-s', '--save', metavar='PROJECT.mg', type=str, default='',
                             help='Save the created scene.')
-        parser.add_argument('-p', '--pipeline', metavar='MESHROOM_FILE/photogrammetry/hdri', type=str, default=os.environ.get("MESHROOM_DEFAULT_PIPELINE", "photogrammetry"),
+        parser.add_argument('-p', '--pipeline', metavar='MESHROOM_FILE/photogrammetry/panoramaHdr/panoramaFisheyeHdr', type=str, default=os.environ.get("MESHROOM_DEFAULT_PIPELINE", "photogrammetry"),
                             help='Override the default Meshroom pipeline with this external graph.')
         parser.add_argument("--verbose", help="Verbosity level", default='warning',
                             choices=['fatal', 'error', 'warning', 'info', 'debug', 'trace'],)


@@ -5,6 +5,8 @@ from PySide2.QtCharts import QtCharts
 import csv
 import os
+import logging

 class CsvData(QObject):
     """Store data from a CSV file."""
@@ -20,13 +22,18 @@ class CsvData(QObject):
     def getColumn(self, index):
         return self._data.at(index)

+    @Slot(result=str)
     def getFilepath(self):
         return self._filepath

     @Slot(result=int)
     def getNbColumns(self):
-        return len(self._data) if self._ready else 0
+        if self._ready:
+            return len(self._data)
+        else:
+            return 0

+    @Slot(str)
     def setFilepath(self, filepath):
         if self._filepath == filepath:
             return
@@ -40,6 +47,7 @@ class CsvData(QObject):
         self._ready = ready
         self.readyChanged.emit()

+    @Slot()
     def updateData(self):
         self.setReady(False)
         self._data.clear()
@@ -53,23 +61,23 @@ class CsvData(QObject):
         if not self._filepath or not self._filepath.lower().endswith(".csv") or not os.path.isfile(self._filepath):
             return []

-        csvRows = []
-        with open(self._filepath, "r") as fp:
-            reader = csv.reader(fp)
-            for row in reader:
-                csvRows.append(row)
-
         dataList = []
-        # Create the objects in dataList
-        # with the first line elements as objects' title
-        for elt in csvRows[0]:
-            dataList.append(CsvColumn(elt, parent=self._data))
-
-        # Populate the content attribute
-        for elt in csvRows[1:]:
-            for idx, value in enumerate(elt):
-                dataList[idx].appendValue(value)
+        try:
+            csvRows = []
+            with open(self._filepath, "r") as fp:
+                reader = csv.reader(fp)
+                for row in reader:
+                    csvRows.append(row)
+            # Create the objects in dataList
+            # with the first line elements as objects' title
+            for elt in csvRows[0]:
+                dataList.append(CsvColumn(elt))  # , parent=self._data
+            # Populate the content attribute
+            for elt in csvRows[1:]:
+                for idx, value in enumerate(elt):
+                    dataList[idx].appendValue(value)
+        except Exception as e:
+            logging.error("CsvData: Failed to load file: {}\n{}".format(self._filepath, str(e)))

         return dataList
@@ -114,4 +122,4 @@ class CsvColumn(QObject):
         serie.append(float(index), float(value))

     title = Property(str, lambda self: self._title, constant=True)
     content = Property("QStringList", lambda self: self._content, constant=True)
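The rewritten loading logic above reads all rows, uses the first row as column titles, fills each column value by value, and logs instead of raising on a malformed file. The same flow as a standalone sketch (`parseCsvColumns` is a hypothetical helper, not part of `CsvData`):

```python
# Sketch of the guarded, column-wise CSV load: first row gives the columns'
# titles, remaining rows populate each column; any parse failure is logged
# and an empty result is returned instead of crashing the caller.
import csv
import io
import logging

def parseCsvColumns(text):
    columns = {}
    try:
        csvRows = list(csv.reader(io.StringIO(text)))
        titles = csvRows[0]          # first line: column titles
        columns = {t: [] for t in titles}
        for row in csvRows[1:]:      # populate the content of each column
            for title, value in zip(titles, row):
                columns[title].append(value)
    except Exception as e:
        logging.error("Failed to parse CSV: %s", str(e))
    return columns

print(parseCsvColumns("time,cpu\n0,12\n1,37"))
```

An empty or malformed input simply logs the error and returns an empty dict, mirroring the new try/except around `updateData`'s parsing.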


@@ -2,7 +2,7 @@
 # coding:utf-8
 from meshroom.core import pyCompatibility

-from PySide2.QtCore import QUrl
+from PySide2.QtCore import QUrl, QFileInfo
 from PySide2.QtCore import QObject, Slot

 import os
@@ -89,3 +89,8 @@ class FilepathHelper(QObject):
         if fileList:
             return fileList[0]
         return ""
+
+    @Slot(QUrl, result=int)
+    def fileSizeMB(self, path):
+        """ Returns the file size in MB. """
+        return QFileInfo(self.asStr(path)).size() / (1024*1024)
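The new `fileSizeMB` slot is a plain bytes-to-MiB division on `QFileInfo.size()`. The same conversion with only the standard library, as a sketch (no Qt required):

```python
# Standard-library equivalent of the QFileInfo-based helper: byte count
# divided by 1024*1024 gives the size in MB (MiB).
import os
import tempfile

def fileSizeMB(path):
    """Return the file size in MB, mirroring size() / (1024*1024)."""
    return os.path.getsize(path) / (1024 * 1024)

# Demonstrate on a temporary 3 MiB file of zero bytes.
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as f:
    f.write(b"\0" * (3 * 1024 * 1024))
print(fileSizeMB(f.name))
os.remove(f.name)
```

Note the QML slot declares `result=int`, so the fractional part is dropped on the Qt side; the raw division itself yields a float.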


@@ -67,7 +67,14 @@ Dialog {
                 font.pointSize: 21
                 palette.buttonText: root.palette.link
                 ToolTip.text: "AliceVision Website"
-                onClicked: Qt.openUrlExternally("https://alicevision.github.io")
+                onClicked: Qt.openUrlExternally("https://alicevision.org")
+            }
+            MaterialToolButton {
+                text: MaterialIcons.favorite
+                font.pointSize: 21
+                palette.buttonText: root.palette.link
+                ToolTip.text: "Donate to get a better software"
+                onClicked: Qt.openUrlExternally("https://alicevision.org/association/#donate")
             }
             ToolButton {
                 icon.source: "../img/GitHub-Mark-Light-32px.png"


@@ -164,7 +164,7 @@ Item {
         root.nbReads = categories[0].length-1

         for(var j = 0; j < nbCores; j++) {
-            var lineSerie = cpuChart.createSeries(ChartView.SeriesTypeLine, "CPU" + j, valueAxisX, valueAxisY)
+            var lineSerie = cpuChart.createSeries(ChartView.SeriesTypeLine, "CPU" + j, valueCpuX, valueCpuY)

             if(categories[j].length === 1) {
                 lineSerie.append(0, categories[j][0])
@@ -177,7 +177,7 @@ Item {
             lineSerie.color = colors[j % colors.length]
         }

-        var averageLine = cpuChart.createSeries(ChartView.SeriesTypeLine, "AVERAGE", valueAxisX, valueAxisY)
+        var averageLine = cpuChart.createSeries(ChartView.SeriesTypeLine, "AVERAGE", valueCpuX, valueCpuY)
         var average = []

         for(var l = 0; l < categories[0].length; l++) {
@@ -227,7 +227,7 @@ Item {
             root.ramLabel = "RAM Max Peak: "
         }

-        var ramSerie = ramChart.createSeries(ChartView.SeriesTypeLine, root.ramLabel + root.ramTotal + "GB", valueAxisX2, valueAxisRam)
+        var ramSerie = ramChart.createSeries(ChartView.SeriesTypeLine, root.ramLabel + root.ramTotal + "GB", valueRamX, valueRamY)

         if(ram.length === 1) {
             // Create 2 entries if we have only one input value to create a segment that can be display
@@ -253,9 +253,9 @@ Item {
         var gpuUsed = getPropertyWithDefault(jsonObject.computer.curves, 'gpuUsed', 0)
         var gpuTemperature = getPropertyWithDefault(jsonObject.computer.curves, 'gpuTemperature', 0)

-        var gpuUsedSerie = gpuChart.createSeries(ChartView.SeriesTypeLine, "GPU", valueAxisX3, valueAxisY3)
-        var gpuUsedMemorySerie = gpuChart.createSeries(ChartView.SeriesTypeLine, "Memory", valueAxisX3, valueAxisY3)
-        var gpuTemperatureSerie = gpuChart.createSeries(ChartView.SeriesTypeLine, "Temperature", valueAxisX3, valueAxisY3)
+        var gpuUsedSerie = gpuChart.createSeries(ChartView.SeriesTypeLine, "GPU", valueGpuX, valueGpuY)
+        var gpuUsedMemorySerie = gpuChart.createSeries(ChartView.SeriesTypeLine, "Memory", valueGpuX, valueGpuY)
+        var gpuTemperatureSerie = gpuChart.createSeries(ChartView.SeriesTypeLine, "Temperature", valueGpuX, valueGpuY)

         if(gpuUsedMemory.length === 1) {
             gpuUsedSerie.append(0, gpuUsed[0])
@@ -384,7 +384,7 @@ Item {
             title: "CPU: " + root.nbCores + " cores, " + root.cpuFrequency + "Hz"

             ValueAxis {
-                id: valueAxisY
+                id: valueCpuY
                 min: 0
                 max: 100
                 titleText: "<span style='color: " + textColor + "'>%</span>"
@@ -397,7 +397,7 @@ Item {
             }

             ValueAxis {
-                id: valueAxisX
+                id: valueCpuX
                 min: 0
                 max: root.deltaTime * Math.max(1, root.nbReads)
                 titleText: "<span style='color: " + textColor + "'>Minutes</span>"
@@ -439,7 +439,7 @@ Item {
             title: root.ramLabel + root.ramTotal + "GB"

             ValueAxis {
-                id: valueAxisY2
+                id: valueRamY
                 min: 0
                 max: 100
                 titleText: "<span style='color: " + textColor + "'>%</span>"
@@ -452,20 +452,7 @@ Item {
             }

             ValueAxis {
-                id: valueAxisRam
-                min: 0
-                max: root.ramTotal
-                titleText: "<span style='color: " + textColor + "'>GB</span>"
-                color: textColor
-                gridLineColor: textColor
-                minorGridLineColor: textColor
-                shadesColor: textColor
-                shadesBorderColor: textColor
-                labelsColor: textColor
-            }
-
-            ValueAxis {
-                id: valueAxisX2
+                id: valueRamX
                 min: 0
                 max: root.deltaTime * Math.max(1, root.nbReads)
                 titleText: "<span style='color: " + textColor + "'>Minutes</span>"
@@ -507,7 +494,7 @@ Item {
             title: (root.gpuName || root.gpuTotalMemory) ? ("GPU: " + root.gpuName + ", " + root.gpuTotalMemory + "MB") : "No GPU"

             ValueAxis {
-                id: valueAxisY3
+                id: valueGpuY
                 min: 0
                 max: root.gpuMaxAxis
                 titleText: "<span style='color: " + textColor + "'>%, °C</span>"
@@ -520,7 +507,7 @@ Item {
             }

             ValueAxis {
-                id: valueAxisX3
+                id: valueGpuX
                 min: 0
                 max: root.deltaTime * Math.max(1, root.nbReads)
                 titleText: "<span style='color: " + textColor + "'>Minutes</span>"


@@ -345,6 +345,7 @@ Panel {
     footerContent: RowLayout {
         // Images count
         MaterialToolLabel {
+            Layout.minimumWidth: childrenRect.width
             ToolTip.text: grid.model.count + " Input Images"
             iconText: MaterialIcons.image
             label: grid.model.count.toString()
@@ -353,6 +354,7 @@ Panel {
         }
         // cameras count
         MaterialToolLabel {
+            Layout.minimumWidth: childrenRect.width
             ToolTip.text: label + " Estimated Cameras"
             iconText: MaterialIcons.videocam
             label: _reconstruction ? _reconstruction.nbCameras.toString() : "0"
@@ -364,6 +366,7 @@ Panel {
         MaterialToolLabelButton {
             id: displayHDR
+            Layout.minimumWidth: childrenRect.width

             property var activeNode: _reconstruction.activeNodes.get("LdrToHdrMerge").node
             ToolTip.text: "Visualize HDR images: " + (activeNode ? activeNode.label : "No Node")
             iconText: MaterialIcons.filter
@@ -405,6 +408,8 @@ Panel {
         MaterialToolButton {
             id: imageProcessing
+            Layout.minimumWidth: childrenRect.width
+
             property var activeNode: _reconstruction.activeNodes.get("ImageProcessing").node
             font.pointSize: 15
             padding: 0
@@ -449,6 +454,8 @@ Panel {
         // Thumbnail size icon and slider
         MaterialToolButton {
+            Layout.minimumWidth: childrenRect.width
+
             text: MaterialIcons.photo_size_select_large
             ToolTip.text: "Thumbnails Scale"
             padding: 0


@@ -22,7 +22,8 @@ FloatingPane {
     CsvData {
         id: csvData
-        filepath: ldrHdrCalibrationNode ? ldrHdrCalibrationNode.attribute("response").value : ""
+        property bool hasAttr: (ldrHdrCalibrationNode && ldrHdrCalibrationNode.hasAttribute("response"))
+        filepath: hasAttr ? ldrHdrCalibrationNode.attribute("response").value : ""
     }

     // To avoid interaction with components in background
@@ -34,7 +35,8 @@ FloatingPane {
         onWheel: {}
     }

-    property bool crfReady: csvData.ready && csvData.nbColumns >= 4
+    // note: We need to use csvData.getNbColumns() slot instead of the csvData.nbColumns property to avoid a crash on linux.
+    property bool crfReady: csvData && csvData.ready && (csvData.getNbColumns() >= 4)
     onCrfReadyChanged: {
         if(crfReady)
         {
View file

@@ -10,10 +10,11 @@ FloatingPane {
     padding: 5
     radius: 0
+    property real gainDefaultValue: 1
     property real gammaDefaultValue: 1
-    property real offsetDefaultValue: 0
-    property real gammaValue: gammaCtrl.value
-    property real offsetValue: offsetCtrl.value
+    property real slidersPowerValue: 4
+    property real gainValue: Math.pow(gainCtrl.value, slidersPowerValue)
+    property real gammaValue: Math.pow(gammaCtrl.value, slidersPowerValue)
     property string channelModeValue: channelsCtrl.value
     property variant colorRGBA: null
@@ -44,7 +45,7 @@ FloatingPane {
             model: channels
         }
-        // offset slider
+        // gain slider
         RowLayout {
             spacing: 5
@@ -56,30 +57,30 @@ FloatingPane {
                 ToolTip.text: "Reset Gain"
                 onClicked: {
-                    offsetCtrl.value = offsetDefaultValue;
+                    gainCtrl.value = gainDefaultValue;
                 }
             }
             TextField {
-                id: offsetLabel
+                id: gainLabel
                 ToolTip.visible: ToolTip.text && hovered
                 ToolTip.delay: 100
                 ToolTip.text: "Color Gain (in linear colorspace)"
-                text: offsetValue.toFixed(2)
-                Layout.preferredWidth: textMetrics_offsetValue.width
+                text: gainValue.toFixed(2)
+                Layout.preferredWidth: textMetrics_gainValue.width
                 selectByMouse: true
                 validator: doubleValidator
                 onAccepted: {
-                    offsetCtrl.value = Number(offsetLabel.text)
+                    gainCtrl.value = Math.pow(Number(gainLabel.text), 1.0/slidersPowerValue)
                 }
             }
             Slider {
-                id: offsetCtrl
+                id: gainCtrl
                 Layout.fillWidth: true
-                from: -1
-                to: 1
-                value: 0
+                from: 0.01
+                to: 2
+                value: gainDefaultValue
                 stepSize: 0.01
             }
         }
@@ -107,19 +108,19 @@ FloatingPane {
                 ToolTip.text: "Apply Gamma (after Gain and in linear colorspace)"
                 text: gammaValue.toFixed(2)
-                Layout.preferredWidth: textMetrics_offsetValue.width
+                Layout.preferredWidth: textMetrics_gainValue.width
                 selectByMouse: true
                 validator: doubleValidator
                 onAccepted: {
-                    gammaCtrl.value = Number(offsetLabel.text)
+                    gammaCtrl.value = Math.pow(Number(gammaLabel.text), 1.0/slidersPowerValue)
                 }
             }
             Slider {
                 id: gammaCtrl
                 Layout.fillWidth: true
                 from: 0.01
-                to: 16
-                value: 1
+                to: 2
+                value: gammaDefaultValue
                 stepSize: 0.01
             }
         }
@@ -131,7 +132,7 @@ FloatingPane {
             color: root.colorRGBA ? Qt.rgba(red.value_gamma, green.value_gamma, blue.value_gamma, 1.0) : "black"
         }
-        // gamma slider
+        // RGBA colors
         RowLayout {
             spacing: 1
             TextField {
@@ -230,8 +231,8 @@ FloatingPane {
             text: "1.2345" // use one more than expected to get the correct value (probably needed due to TextField margin)
         }
         TextMetrics {
-            id: textMetrics_offsetValue
-            font: offsetLabel.font
-            text: "-10.01"
+            id: textMetrics_gainValue
+            font: gainLabel.font
+            text: "1.2345"
         }
     }
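The toolbar change above maps slider positions through `Math.pow(value, slidersPowerValue)` so a short slider range (0.01 to 2) covers a wide effective gain/gamma range (up to 16) with fine control near zero, and the inverse power is applied when the user types a value. A minimal Python sketch of that mapping (illustrative names, not Meshroom API):

```python
# Non-linear slider mapping: slider position -> effective value and back.
# The exponent 4 matches slidersPowerValue in the diff above.
SLIDERS_POWER = 4.0

def slider_to_value(slider_pos):
    """Map a slider position to the effective gain/gamma value."""
    return slider_pos ** SLIDERS_POWER

def value_to_slider(value):
    """Inverse mapping, used when a value is typed in the text field."""
    return value ** (1.0 / SLIDERS_POWER)

# The neutral position 1 maps to the neutral value 1, and the slider
# maximum of 2 reaches an effective value of 16.
```

Note that the default position of 1 is a fixed point of the mapping, so the reset buttons behave the same before and after the change.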

View file

@@ -210,7 +210,7 @@ FocusScope {
                 setSource("FloatImage.qml", {
                     'source': Qt.binding(function() { return getImageFile(imageType.type); }),
                     'gamma': Qt.binding(function() { return hdrImageToolbar.gammaValue; }),
-                    'offset': Qt.binding(function() { return hdrImageToolbar.offsetValue; }),
+                    'gain': Qt.binding(function() { return hdrImageToolbar.gainValue; }),
                     'channelModeString': Qt.binding(function() { return hdrImageToolbar.channelModeValue; }),
                 })
             } else {
@@ -558,10 +558,15 @@ FocusScope {
             anchors.fill: parent
             property var activeNode: _reconstruction.activeNodes.get('LdrToHdrCalibration').node
-            active: activeNode && activeNode.isComputed && displayLdrHdrCalibrationGraph.checked
+            property var isEnabled: displayLdrHdrCalibrationGraph.checked && activeNode && activeNode.isComputed
+            // active: isEnabled
+            // Setting "active" from true to false creates a crash on linux with Qt 5.14.2.
+            // As a workaround, we clear the CameraResponseGraph with an empty node
+            // and hide the loader content.
+            visible: isEnabled
             sourceComponent: CameraResponseGraph {
-                ldrHdrCalibrationNode: activeNode
+                ldrHdrCalibrationNode: isEnabled ? activeNode : null
             }
         }
     }

View file

@@ -203,14 +203,12 @@ Entity {
     property string rawSource: attribute ? attribute.value : model.source
     // whether dependencies are satisfied (applies for output/connected input attributes only)
     readonly property bool dependencyReady: {
-        if(!attribute)
-            // if the node is removed, the attribute will be invalid
-            return false
-        const rootAttribute = attribute.isLink ? attribute.rootLinkParam : attribute
-        if(rootAttribute.isOutput)
-            return rootAttribute.node.globalStatus === "SUCCESS"
-        return true // is an input param so no dependency
+        if(attribute) {
+            const rootAttribute = attribute.isLink ? attribute.rootLinkParam : attribute
+            if(rootAttribute.isOutput)
+                return rootAttribute.node.globalStatus === "SUCCESS"
+        }
+        return true // is an input param without link (so no dependency) or an external file
     }
     // source based on raw source + dependency status
     property string currentSource: dependencyReady ? rawSource : ""
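The new `dependencyReady` logic treats a missing attribute and a plain (unlinked) input as "no dependency", and only gates on node status for output attributes. A hedged Python port of that rule, with simplified dataclasses standing in for Meshroom's attribute model:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    # Illustrative stand-in for a Meshroom node's computation status.
    global_status: str = "NONE"

@dataclass
class Attribute:
    node: Node = field(default_factory=Node)
    is_output: bool = False
    is_link: bool = False
    root_link_param: Optional["Attribute"] = None

def dependency_ready(attribute):
    """Unset attributes and plain inputs have no dependency; outputs
    (direct or reached through a link) are ready only once their
    producing node finished successfully."""
    if attribute is not None:
        root = attribute.root_link_param if attribute.is_link else attribute
        if root.is_output:
            return root.node.global_status == "SUCCESS"
    return True  # plain input without link, or an external file
```

Following the diff, a linked input defers to the status of the output it is connected to (`root_link_param`).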

View file

@@ -110,6 +110,14 @@ import Utils 1.0
             MediaLoaderEntity {
                 id: exrLoaderEntity
                 Component.onCompleted: {
+                    var fSize = Filepath.fileSizeMB(source)
+                    if(fSize > 500)
+                    {
+                        // Do not load images that are larger than 500MB
+                        console.warn("Viewer3D: Do not load the EXR in 3D as the file size is too large: " + fSize + "MB")
+                        root.status = SceneLoader.Error;
+                        return;
+                    }
                     // EXR loading strategy:
                     // - [1] as a depth map
                     var obj = Viewer3DSettings.depthMapLoaderComp.createObject(
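The guard above refuses to load EXR files above a hard 500 MB threshold before attempting to build the 3D entity. A small Python sketch of the same check (the threshold comes from the diff; the helper names are illustrative):

```python
import os

MAX_EXR_SIZE_MB = 500  # threshold taken from the Viewer3D guard above

def exr_too_large(size_mb):
    """Pure size check, kept separate so it is easy to test."""
    return size_mb > MAX_EXR_SIZE_MB

def should_load_exr(path):
    """Refuse EXRs whose size would make the 3D viewer unresponsive."""
    size_mb = os.path.getsize(path) / (1024 * 1024)
    if exr_too_large(size_mb):
        print("Viewer3D: EXR too large to load in 3D: %.1fMB" % size_mb)
        return False
    return True
```

Like the QML version, the caller is expected to fail fast (set an error status and return) rather than attempt a partial load.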

View file

@@ -413,19 +413,24 @@ ApplicationWindow {
                 onTriggered: ensureSaved(function() { _reconstruction.new("photogrammetry") })
             }
             Action {
-                text: "HDRI"
-                onTriggered: ensureSaved(function() { _reconstruction.new("hdri") })
+                text: "Panorama HDR"
+                onTriggered: ensureSaved(function() { _reconstruction.new("panoramahdr") })
             }
             Action {
-                text: "HDRI Fisheye"
-                onTriggered: ensureSaved(function() { _reconstruction.new("hdriFisheye") })
+                text: "Panorama Fisheye HDR"
+                onTriggered: ensureSaved(function() { _reconstruction.new("panoramafisheyehdr") })
             }
         }
         Action {
             id: openActionItem
             text: "Open"
             shortcut: "Ctrl+O"
-            onTriggered: ensureSaved(function() { openFileDialog.open() })
+            onTriggered: ensureSaved(function() {
+                if(_reconstruction.graph && _reconstruction.graph.filepath) {
+                    openFileDialog.folder = Filepath.stringToUrl(Filepath.dirname(_reconstruction.graph.filepath))
+                }
+                openFileDialog.open()
+            })
         }
         Menu {
             id: openRecentMenu
@@ -477,14 +482,27 @@ ApplicationWindow {
             id: saveAction
             text: "Save"
             shortcut: "Ctrl+S"
-            enabled: _reconstruction.graph && (!_reconstruction.graph.filepath || !_reconstruction.undoStack.clean)
-            onTriggered: _reconstruction.graph.filepath ? _reconstruction.save() : saveFileDialog.open()
+            enabled: (_reconstruction.graph && !_reconstruction.graph.filepath) || !_reconstruction.undoStack.clean
+            onTriggered: {
+                if(_reconstruction.graph.filepath) {
+                    _reconstruction.save()
+                }
+                else
+                {
+                    saveFileDialog.open()
+                }
+            }
         }
         Action {
             id: saveAsAction
             text: "Save As..."
             shortcut: "Ctrl+Shift+S"
-            onTriggered: saveFileDialog.open()
+            onTriggered: {
+                if(_reconstruction.graph && _reconstruction.graph.filepath) {
+                    saveFileDialog.folder = Filepath.stringToUrl(Filepath.dirname(_reconstruction.graph.filepath))
+                }
+                saveFileDialog.open()
+            }
         }
         MenuSeparator { }
         Action {
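Both the Open and Save As actions above now point their file dialog at the directory of the currently loaded project, falling back to the dialog's previous behaviour when no project is open. A plain-path Python equivalent of that rule (assumption: `Filepath.stringToUrl`/`Filepath.dirname` reduce to `os.path.dirname` on local paths):

```python
import os

def default_dialog_folder(project_filepath, fallback="~"):
    """When a project file is loaded, open file dialogs next to it;
    otherwise keep the given fallback folder."""
    if project_filepath:
        return os.path.dirname(project_filepath)
    return fallback
```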

View file

@@ -111,7 +111,7 @@ class LiveSfmManager(QObject):
        to include those images to the reconstruction.
        """
        # Get all new images in the watched folder
-        imagesInFolder = multiview.findFilesByTypeInFolder(self._folder)
+        imagesInFolder = multiview.findFilesByTypeInFolder(self._folder).images
        newImages = set(imagesInFolder).difference(self.allImages)
        for imagePath in newImages:
            # print('[LiveSfmManager] New image file : {}'.format(imagePath))
@@ -484,12 +484,12 @@ class Reconstruction(UIGraph):
        if p.lower() == "photogrammetry":
            # default photogrammetry pipeline
            self.setGraph(multiview.photogrammetry())
-        elif p.lower() == "hdri":
-            # default hdri pipeline
-            self.setGraph(multiview.hdri())
-        elif p.lower() == "hdrifisheye":
-            # default hdri pipeline
-            self.setGraph(multiview.hdriFisheye())
+        elif p.lower() == "panoramahdr":
+            # default panorama hdr pipeline
+            self.setGraph(multiview.panoramaHdr())
+        elif p.lower() == "panoramafisheyehdr":
+            # default panorama fisheye hdr pipeline
+            self.setGraph(multiview.panoramaFisheyeHdr())
        else:
            # use the user-provided default photogrammetry project file
            self.load(p, setupProjectFile=False)
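The `elif` chain above dispatches a case-insensitive pipeline keyword to a builder in `meshroom.multiview`, with the old "hdri"/"hdrifisheye" names replaced by the panorama variants. A table-driven sketch of the same dispatch (the mapping mirrors the diff; anything not in the table is treated as a project file path, as in the `else` branch):

```python
# Pipeline keyword -> multiview builder name after the rename.
PIPELINES = {
    "photogrammetry": "photogrammetry",
    "panoramahdr": "panoramaHdr",                # formerly "hdri"
    "panoramafisheyehdr": "panoramaFisheyeHdr",  # formerly "hdrifisheye"
}

def resolve_pipeline(name):
    """Return the builder name for a pipeline keyword, or None when the
    argument should be loaded as a user-provided project file instead."""
    return PIPELINES.get(name.lower())
```

A dict lookup keeps the keyword list in one place, which makes renames like this one a single-line change.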

View file

@@ -6,6 +6,8 @@ from cx_Freeze import setup, Executable
 import meshroom

+currentDir = os.path.dirname(os.path.abspath(__file__))
+
 class PlatformExecutable(Executable):
     """
     Extend cx_Freeze.Executable to handle platform variations.
@@ -32,7 +34,6 @@ class PlatformExecutable(Executable):
         # get icon for platform if defined
         icon = icons.get(platform.system(), None) if icons else None
         if platform.system() in (self.Linux, self.Darwin):
-            currentDir = os.path.dirname(os.path.abspath(__file__))
             initScript = os.path.join(currentDir, "setupInitScriptUnix.py")
             super(PlatformExecutable, self).__init__(script, initScript, base, targetName, icon, shortcutName,
                                                      shortcutDir, copyright, trademarks)
@@ -46,6 +47,11 @@ build_exe_options = {
     ],
     "include_files": ["CHANGES.md", "COPYING.md", "LICENSE-MPL2.md", "README.md"]
 }
+if os.path.isdir(os.path.join(currentDir, "tractor")):
+    build_exe_options["packages"].append("tractor")
+if os.path.isdir(os.path.join(currentDir, "simpleFarm")):
+    build_exe_options["packages"].append("simpleFarm")
+
 if platform.system() == PlatformExecutable.Linux:
     # include required system libs
start.sh (new file, 3 additions)

@@ -0,0 +1,3 @@
+#!/bin/sh
+export PYTHONPATH="$(dirname "$(readlink -f "${BASH_SOURCE[0]}" )" )"
+python meshroom/ui