Renderfarm submitters

WIP: Sorry, there is no detailed tutorial or guide for this at the moment.

Meshroom supports two ways to compute nodes:

  • the compute process, which launches the computation on your local machine;
  • the submit process, which launches the computation on a renderfarm system. This can be more efficient, as each chunk can be launched on a different machine.

On startup, when Meshroom detects plugins, it can also register submitters. By default, submitters are located in meshroom/submitters. If any renderfarm submitter is detected, Meshroom shows an additional Submit button:

[Screenshot: graph editor with the additional Submit button; submitted chunks are shown in a darker shade of blue]

As you can see in the screenshot above, queued submitted chunks are displayed in a darker shade of blue than the nodes that are waiting for local computation.

Setting up the environment can be complicated and is not aimed at the casual user. You need one or more servers (physical or virtual) that can distribute jobs to the connected machines and manage them.

There will be a tutorial or guide on this in the future, but for now you are on your own. (So please don't open a new issue just to ask for help.)

Here are the submitters that are currently supported natively in Meshroom:

  1. SimpleFarm and Tractor: submitters for Pixar's Tractor (SimpleFarm is a private wrapper around Tractor; the goal is to replace it with the Tractor submitter directly)

Submitted status

On the Status tab of the node editor interface, you can see that for submitted nodes:

  • The execMode will be set to "EXTERN"
  • The status is set to "SUBMITTED" for nodes that are waiting for an instance on the farm to compute them.

This matters, for instance, if you open the same project files in multiple Meshroom instances (usually from multiple machines sharing the same storage), and in particular when you submit a job to a renderfarm that manages the scheduling across multiple machines. (https://github.com/alicevision/meshroom/issues/1165#issuecomment-733601561)
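
The status of each chunk is stored in a small JSON status file inside the node's folder in the MeshroomCache. As a rough illustration (the file naming and field names are assumptions based on the status values listed above, and may differ between Meshroom versions), you can scan a node's cache folder to see which chunks are still SUBMITTED:

import glob
import json
import os

def printChunkStatuses(nodeCacheFolder):
    # Each chunk writes a small "*status" JSON file containing (among other
    # fields) the "status" and "execMode" values shown in the node editor.
    # The file naming is an assumption; adapt it to your Meshroom version.
    for statusFile in sorted(glob.glob(os.path.join(nodeCacheFolder, "*status"))):
        with open(statusFile) as f:
            data = json.load(f)
        print(os.path.basename(statusFile), data.get("status"), data.get("execMode"))

# e.g. printChunkStatuses("MeshroomCache/DepthMap/<uid>")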

Submitters API

To set up a new submitter, first create this folder structure:

$ROOT
└── meshroom
    └── mySubmitter
        ├── __init__.py
        └── newSubmitter.py

Then, before launching Meshroom, register the submitter by setting this environment variable:

export MESHROOM_SUBMITTERS_PATH=$MESHROOM_SUBMITTERS_PATH:$ROOT/meshroom
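
The __init__.py can stay minimal. A sketch that re-exports the submitter class (whether Meshroom needs the explicit re-export or simply an importable package is an assumption here):

# mySubmitter/__init__.py
# Re-export the submitter class defined in newSubmitter.py (see template below).
from .newSubmitter import TemplateSubmitter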

Here is a template for newSubmitter.py:

import os
import logging
from meshroom.core.submitter import BaseSubmitter

currentDir = os.path.dirname(os.path.realpath(__file__))
binDir = os.path.dirname(os.path.dirname(os.path.dirname(currentDir)))


class Task:
    def __init__(self, name: str, command, **kwargs):
        self.name = name
        self.command = command
        self._parents, self._children = [], []
    
    def connect(self, parent):
        self._parents.append(parent)
        parent._children.append(self)


class Job:
    def __init__(self, name, **kwargs):
        self.name = name
        self._tasks = []
    
    def addTask(self, task: Task):
        self._tasks.append(task)
    
    def submit(self, dryRun: bool=False):
        # TODO: The actual code to create the tasks and run the job in the background goes here
        return True


class TemplateSubmitter(BaseSubmitter):
    def __init__(self, parent=None):
        super().__init__(name='Template', parent=parent)

    def createTask(self, meshroomFile, node):
        logging.info('node: %s', node.name)  # logging uses %-style formatting, not print-style arguments
        optionalArgs = {}
        if node.isParallelized:
            blockSize, fullSize, nbBlocks = node.nodeDesc.parallelization.getSizes(node)
            if nbBlocks > 1:
                optionalArgs["chunkInfo"] = {'start': 0, 'end': nbBlocks - 1, 'step': 1}
        exe = os.path.join(binDir, "meshroom_compute")
        taskCommand = f"{exe} --node {node.name} \"{meshroomFile}\" --extern"
        task = Task(name=node.name, command=taskCommand, **optionalArgs)
        return task

    def submit(self, nodes, edges, filepath, submitLabel="{projectName}"):
        projectName = os.path.splitext(os.path.basename(filepath))[0]
        name = submitLabel.format(projectName=projectName)
        job = Job(name)
        nodeUidToTask = {}
        for node in nodes:
            if node._uid in nodeUidToTask:
                continue
            task = self.createTask(filepath, node)
            job.addTask(task)
            nodeUidToTask[node._uid] = task

        for u, v in edges:
            nodeUidToTask[u._uid].connect(nodeUidToTask[v._uid])

        res = job.submit(dryRun=False)
        return res
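
In this template, the edges passed to submit() are turned into task dependencies through connect(), so the farm should only start a task once its parent tasks have finished. The dryRun flag is a convenient hook while developing: it lets you build the whole job structure without actually sending it to the farm.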

Available submitters

Pixar RenderMan - Tractor

https://renderman.pixar.com/tractor

https://renderman.pixar.com/forum/download.php (there is a non-commercial version)

https://renderman.pixar.com/store

Relevant files: meshroom/submitters/simpleFarmSubmitter.py and meshroom/submitters/simpleFarmConfig.json

https://github.com/alicevision/Meshroom/pull/2874

https://alicevision.org/img/meshroom/renderfarm.png
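
Submitting to Tractor from Python is typically done with Tractor's authoring API (tractor.api.author), which ships with Tractor. The following is only a hedged sketch of what the template's Job/Task could map to; the argument names follow the Tractor documentation and may differ between versions:

# Hedged sketch of Tractor's Python authoring API (tractor.api.author).
# The node name and project path are placeholder values.
import tractor.api.author as author

job = author.Job(title="myProject", priority=100, service="PixarRender")
task = job.newTask(title="CameraInit_1",
                   argv=["meshroom_compute", "--node", "CameraInit_1",
                         "/shared/project.mg", "--extern"])
job.spool()  # send the job to the Tractor engine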

Setup (draft)


Prerequisites

  • Install Meshroom on all participating machines
  • Install Pixar RenderMan on all participating machines https://www.youtube.com/watch?v=4zAkV2oXkwI
  • Install Tractor on all participating machines
  • All participating machines need to be in the same network/able to communicate

Set Up Tractor

  • Configure the Tractor server:

      • On one of the machines, start the Tractor server. This machine will act as the main server for job distribution.

      • Ensure that the server is accessible from the other machine.

  • Add worker machines:

      • On the server, add the second machine as a worker in the Tractor interface. This can usually be done through the Tractor GUI by navigating to the "Workers" section and adding the IP address or hostname of the second machine.

Meshroom

  • Meshroom project files need to be stored on a shared network drive, accessible from all machines

  • Configuration [WIP]

  • Submit [not yet fully implemented]

Fireworks

https://github.com/alicevision/meshroom/pull/81 https://materialsproject.github.io/fireworks

CGRU / AFANASY

No Meshroom submitter so far...

https://github.com/alicevision/meshroom/issues/1039 https://www.youtube.com/watch?v=OYVZvlXdBsg (reference) http://ramellij.blogspot.com/2015/06/how-to-build-open-source-renderfarm.html

OpenCue

[Experimental]

  1. Get the Meshroom OpenCue Submitter https://github.com/alicevision/meshroom/pull/992
  2. Place it in the Meshroom Submitters folder
  3. The Meshroom submitter uses OpenCue PyOutline (see the sketch after this list).
  4. A Cuebot server with the OpenCue database, RQD render agents on the target machines, and PyCue and PyOutline on the primary machine need to be set up. https://www.opencue.io/docs/concepts/opencue-overview/
  5. The process is described here: https://www.opencue.io/docs/getting-started/
  6. RQD can now run a submitted Meshroom job on the render-farm machines.
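
For orientation, here is a rough sketch of what submitting a single command through PyOutline can look like, based on the OpenCue documentation (module paths and argument names are assumptions and may differ between OpenCue versions):

# Hedged PyOutline sketch; the node name and project path are placeholders.
import outline
import outline.cuerun
from outline.modules.shell import Shell

ol = outline.Outline("meshroom_job", show="testing", shot="shot01", user="render")
layer = Shell("CameraInit_1",
              command=["meshroom_compute", "--node", "CameraInit_1",
                       "/shared/project.mg", "--extern"])
ol.add_layer(layer)
outline.cuerun.launch(ol, use_pycuerun=False)  # submits the job to Cuebot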

This can be set up in a Google Cloud environment as described here: https://cloud.google.com/solutions/creating-a-render-farm-on-gcp-using-opencue https://cloud.google.com/opencue

https://www.youtube.com/watch?v=uOi3azKJ3Xs (reference) https://www.youtube.com/watch?v=Vk-huejruG0 (reference for local setup)

Note: This has not been tested

Slurm farm HPC

https://github.com/alicevision/meshroom/issues/357#issuecomment-535851216

  • Slurm on a High-Performance Computing cluster / render farm

Slurm jobs are submitted as bash scripts that contain special #SBATCH comments (https://support.ceci-hpc.be/doc/_contents/QuickStart/SubmittingJobs/SlurmTutorial.html).

slurmFarmSubmitter converts the Meshroom nodes to those scripts through Python and a Jinja template (https://github.com/sergiy-nazarenko/hafarm/blob/master/slurm_job.schema).

slurmFarmSubmitter.py needs to be placed in meshroom/submitters/slurmFarmSubmitter.py


Alternatively, the Python subprocess module can be used to submit a Slurm sbatch job:

https://github.com/alicevision/meshroom/blob/dev/slurm/meshroom/submitters/slurmSubmitter.py
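
A minimal sketch of that approach (the #SBATCH options, resources, and the assumption that meshroom_compute is on the PATH are examples to adapt to your cluster):

import subprocess
import tempfile

def submitChunk(meshroomFile, nodeName, jobName="meshroom"):
    # Write a small sbatch script and hand it over to Slurm.
    # The #SBATCH options are examples only; adapt them to your cluster.
    script = f"""#!/bin/bash
#SBATCH --job-name={jobName}_{nodeName}
#SBATCH --cpus-per-task=8
#SBATCH --output={nodeName}_%j.log
meshroom_compute --node {nodeName} "{meshroomFile}" --extern
"""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script)
        scriptPath = f.name
    subprocess.run(["sbatch", scriptPath], check=True)

# e.g. submitChunk("/shared/project.mg", "CameraInit_1")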

Details on Slurm:

https://vsoch.github.io/lessons/slurm/ (good overview and tutorials)

https://slurm.schedmd.com/

https://aws.amazon.com/de/hpc/parallelcluster/

https://docs.aws.amazon.com/parallelcluster/latest/ug/schedulers.slurm.html

https://cloud.google.com/blog/products/gcp/easy-hpc-clusters-on-gcp-with-slurm

https://cloud.google.com/solutions/deploying-slurm-cluster-compute-engine

Amazon Web Services

https://github.com/alicevision/meshroom/wiki/Meshroom-on-AWS

Leverage AWS SQS (Simple Queue Service), S3 (Simple Storage Service), and boto (https://github.com/boto/boto).

To do: write a submitter (see the sketch below).

reference: https://www.youtube.com/watch?v=_Oqo383uviw
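
A hedged sketch of how per-chunk commands could be pushed to an SQS queue with boto3 (the queue URL, S3 path, and message format are made up for illustration; worker machines would poll the queue and run the command):

import json
import boto3

def enqueueChunk(queueUrl, meshroomFile, nodeName):
    # One message per chunk; a worker instance polls the queue, fetches the
    # project from S3 and runs meshroom_compute (message format is an assumption).
    sqs = boto3.client("sqs")
    message = {
        "meshroomFile": meshroomFile,
        "command": f'meshroom_compute --node {nodeName} "{meshroomFile}" --extern',
    }
    sqs.send_message(QueueUrl=queueUrl, MessageBody=json.dumps(message))

# e.g. enqueueChunk("https://sqs.eu-west-1.amazonaws.com/123456789012/meshroom",
#                   "s3://my-bucket/project.mg", "CameraInit_1")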

Autodesk Backburner

Untested. Starting point: https://www.jigsaw24.com/articles/use-backburner-to-queue-renders-from-non-autodesk-software


Other Render Managers to evaluate:

https://coalition.readthedocs.io/en/latest/

https://prism-pipeline.com/pandora/ (looks to be easy to set up) https://github.com/chadmv/plow