Using Snakemake on the HPC Cluster
Setup
- Install snakemake via mamba/conda (see https://snakemake.readthedocs.io/en/stable/getting_started/installation.html)
- In the working folder, create and activate a snakemake environment with
mamba create -c conda-forge -c bioconda -n snakemake snakemake
mamba activate snakemake
- Install the cluster-generic executor plugin, which lets Snakemake submit jobs to a cluster, from pip with
pip install snakemake-executor-plugin-cluster-generic
- Create a generic profile in ~/.config/snakemake/testprofile/config.yaml such as:
executor: slurm
jobs: 100
default-resources:
  mem_mb: max(1.5 * input.size_mb, 100)
  account: <ACCOUNT_NAME>
  partition: <PARTITION_NAME>
set-threads:
  myrule: max(input.size_mb / 5, 2)
set-resources:
  myrule:
    mem_mb: attempt * 200
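The resource expressions in the profile are evaluated by Snakemake per job, with `input.size_mb` and `attempt` filled in at runtime. As a plain-Python illustration (example values only, not Snakemake code), the same formulas behave like this:

```python
# Plain-Python sketch of the profile's resource formulas.
# input_size_mb and attempt are example values; Snakemake supplies
# the real ones per job when the profile is used.

def default_mem_mb(input_size_mb):
    # default-resources: mem_mb: max(1.5 * input.size_mb, 100)
    return max(1.5 * input_size_mb, 100)

def myrule_threads(input_size_mb):
    # set-threads: myrule: max(input.size_mb / 5, 2)
    return max(input_size_mb / 5, 2)

def myrule_mem_mb(attempt):
    # set-resources: myrule: mem_mb: attempt * 200
    return attempt * 200

print(default_mem_mb(10))    # small inputs are floored at 100 MB
print(default_mem_mb(1000))  # 1.5x scaling for larger inputs
print(myrule_mem_mb(2))      # a retried job (attempt 2) gets more memory
```

Note how `attempt * 200` grows memory on each retry, so jobs killed for exceeding their allocation can succeed on resubmission.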
Executing a Snakefile Workflow
- Create a Snakefile (see the Snakemake documentation for the workflow syntax)
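A minimal Snakefile might look like the sketch below. The file names are hypothetical; the rule name `myrule` matches the one referenced in the profile's set-threads and set-resources sections:

```
# Hypothetical minimal Snakefile: one rule that copies an input file.
rule all:
    input:
        "results/output.txt"

rule myrule:
    input:
        "data/input.txt"
    output:
        "results/output.txt"
    shell:
        "cp {input} {output}"
```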
- Flags passed on the command line override the corresponding settings; otherwise the values in testprofile are used
- Run the Snakefile with this command
snakemake --profile testprofile -j 1 --executor cluster-generic --cluster-generic-submit-cmd "sbatch" ...