nf-core/metatdenovo
Assembly and annotation of metatranscriptomic or metagenomic data from prokaryotes, eukaryotes and viruses.
Introduction
This document describes the output produced by the pipeline.
The directories listed below will be created in the results directory after the pipeline has finished. All paths are relative to the top-level results directory.
Pipeline overview
The pipeline is built using Nextflow and the results are organised as follows:
- Module output
- Preprocessing
- FastQC - Read quality control
- Trim galore! - Adapter trimming
- MultiQC - Aggregate report describing results
- BBduk - Filter out sequences from samples that match sequences in a user-provided fasta file (optional)
- BBnorm - Normalize the reads in the samples to reduce resource usage during assembly (optional)
- Assembly step - Generate contigs with an assembler program
- ORF Caller step - Identify protein-coding genes (ORFs) with an ORF caller
- Prodigal - Output from Prodigal (default)
- Prokka - Output from Prokka (optional)
- TransDecoder - Output from TransDecoder (optional)
- Functional and taxonomical annotation - Predict the function and the taxonomy of ORFs
- Custom metatdenovo output
- Summary tables folder - Tab separated tables ready for further analysis in tools like R and Python
- Pipeline information - Report metrics generated during the workflow execution
Module output
Preprocessing
FastQC
FastQC gives general quality metrics about your sequenced reads. It provides information about the quality score distribution across your reads, per base sequence content (%A/T/G/C), adapter contamination and overrepresented sequences. For further reading and documentation see the FastQC help pages. FastQC is run as part of Trim Galore!, so its output can be found in the trimgalore folder.
Output files
trimgalore/fastqc/
*_fastqc.html
: FastQC report containing quality metrics for your untrimmed raw fastq files.
Trim galore!
Trim Galore! is a fastq preprocessor for read/adapter trimming and quality control. It is used in this pipeline for trimming adapter sequences and discarding low-quality reads. Its output is in the results folder and part of the MultiQC report.
Output files
trimgalore/
: directory containing log files with retained reads, trimming percentage, etc. for each sample.
*trimming_report.txt
: report of read numbers that pass Trim Galore!.
MultiQC
MultiQC is a visualization tool that generates a single HTML report summarising all samples in your project. Most of the pipeline QC results are visualised in the report and further statistics are available in the report data directory.
Results generated by MultiQC collate pipeline QC from supported tools e.g. FastQC. The pipeline has special steps which also allow the software versions to be reported in the MultiQC output for future traceability. For more information about how to use MultiQC reports, see http://multiqc.info.
Output files
multiqc/
multiqc_report.html
: a standalone HTML file that can be viewed in your web browser.
multiqc_data/
: directory containing parsed statistics from the different tools used in the pipeline.
multiqc_plots/
: directory containing static images from the report in various formats.
The FastQC plots displayed in the MultiQC report show untrimmed reads. They may contain adapter sequence and potentially regions with low quality.
BBduk
BBduk is a filtering tool that removes specific sequences from the samples using a reference fasta file. BBduk is part of the BBMap suite.
Output files
bbmap/
*.bbduk.log
: a text file with the results from BBduk analysis. Number of filtered reads can be seen in this log.
BBnorm
BBnorm is a tool from the BBMap suite that reduces the coverage of highly abundant kmers and removes reads representing kmers below a threshold. It can be useful if the data set is too large to assemble, and may also improve an assembly. N.B. the digital normalization is done only for the assembly; the non-normalized sequences are used for quantification.
Output files
bbmap/bbnorm/logs/
*.logs
: log file of the BBnorm run.
Assembly step
Megahit
Megahit is used to assemble the cleaned and trimmed FastQ reads into contigs.
Output files
megahit/megahit_out/
*.log
: log file of the Megahit run.
megahit_assembly.contigs.fa.gz
: the assembly created by Megahit, used as reference in downstream steps.
intermediate_contigs
: folder containing the intermediate steps of the Megahit run.
SPAdes
Optionally, you can use SPAdes to assemble reads into contigs.
Output files
spades/
spades.assembly.gfa.gz
: assembly graph (GFA) output from SPAdes.
spades.spades.log
: log file from the SPAdes run.
spades.transcripts.fa.gz
: the assembly created by SPAdes, used as reference in downstream steps.
ORF caller step
Prodigal
By default, Prodigal is used to identify ORFs in the assembly.
Output files
prodigal/
*.fna.gz
: nucleotide fasta file output.
*.faa.gz
: amino acid fasta file output.
*.gff.gz
: genome feature file output.
Prokka
As one alternative, you can use Prokka to identify ORFs in the assembly. In addition to calling ORFs (done with Prodigal), Prokka will filter ORFs to retain only quality ORFs and will functionally annotate them. N.B. Prodigal and Prokka are recommended for prokaryotic samples.
Output files
prokka/
*.ffn.gz
: nucleotide fasta file output.
*.faa.gz
: amino acid fasta file output.
*.gff.gz
: genome feature file output.
TransDecoder
Another alternative is TransDecoder, which finds ORFs in the assembly. N.B. TransDecoder is recommended for eukaryotic samples.
Output files
transdecoder/
*.cds
: nucleotide fasta file output.
*.pep
: amino acid fasta file output.
*.gff3
: genome feature file output.
Functional and taxonomical annotation
EggNOG
EggNOG-mapper assigns functional annotations to the ORFs.
Output files
eggnog/
*.emapper.annotations.gz
: a file with the results from the annotation phase; see the EggNOG-mapper documentation.
*.emapper.hits.gz
: a file with the results from the search phase, from HMMER, Diamond or MMseqs2.
*.emapper.seed_orthologs.gz
: a file with the results from parsing the hits. Each row links a query with a seed ortholog. This file has the same format independently of which searcher was used, except that it can be in short format (4 fields) or full.
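The emapper tables can be loaded into pandas for downstream work. Below is a minimal sketch, assuming the usual EggNOG-mapper conventions that comment lines start with '##' and the column header line starts with '#query'; the synthetic demo file and its column subset are illustrative only, not the full annotation schema:

```python
import gzip
import io
import tempfile

import pandas as pd

def load_emapper_annotations(path):
    """Load an *.emapper.annotations.gz file into a DataFrame.

    Assumes comment lines start with '##' and the header line
    starts with '#query', as in EggNOG-mapper output.
    """
    with gzip.open(path, "rt") as handle:
        lines = [line for line in handle if not line.startswith("##")]
    if lines and lines[0].startswith("#"):
        # Strip the leading '#' so 'query' becomes a normal column name.
        lines[0] = lines[0][1:]
    return pd.read_csv(io.StringIO("".join(lines)), sep="\t")

# Demonstration on a tiny synthetic file (illustrative columns only).
demo = (
    "## emapper version: x.y\n"
    "#query\tseed_ortholog\tevalue\n"
    "orf_1\t1148.SL003\t1e-50\n"
)
with tempfile.NamedTemporaryFile(suffix=".gz", delete=False) as tmp:
    tmp.write(gzip.compress(demo.encode()))

df = load_emapper_annotations(tmp.name)
print(df.columns.tolist())  # ['query', 'seed_ortholog', 'evalue']
```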
KOfamScan
KOfamScan assigns KEGG Orthologs (KOs) to the ORFs.
Output files
kofamscan/
*.kofamscan_output.tsv.gz
: kofamscan output.
EUKulele
EUKulele will perform an analysis to assign taxonomy to the ORFs. A number of databases are supported: MMETSP, PhyloDB and GTDB. GTDB currently only works as a user-provided database, i.e. the data must be downloaded before running nf-core/metatdenovo.
Output files
eukulele/assembler.orfcaller/mets_full/diamond/
*.diamond.out.gz
: Diamond output
eukulele/assembler.orfcaller/taxonomy_estimation/
*-estimated-taxonomy.out.gz
: EUKulele output
Hmmsearch
You can run hmmsearch on ORFs using a set of HMM profiles provided to the pipeline (see the --hmmdir, --hmmpattern and --hmmfiles parameters).
Output files
hmmer/
*.tbl.gz
: gzipped table output from the hmmsearch run.
After the search, hits for each ORF and HMM will be summarised and ranked based on scores for the hits (see also output in summary tables).
Output files
hmmrank/
*.tsv.gz
: tab-separated file with the ranked ORFs for each HMM profile.
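The ranking idea can be sketched with pandas: given a table of hmmsearch hits, profiles are ordered per ORF by descending score. The column names ('orf', 'hmm', 'score') are hypothetical stand-ins, not the pipeline's exact output schema:

```python
import pandas as pd

# Toy hit table; 'orf', 'hmm' and 'score' are hypothetical column
# names standing in for the pipeline's hmmsearch results.
hits = pd.DataFrame({
    "orf":   ["orf_1", "orf_1", "orf_2"],
    "hmm":   ["PF00001", "PF00002", "PF00001"],
    "score": [120.5, 88.0, 64.2],
})

# Rank HMM profiles per ORF by descending bit score.
hits["rank"] = (
    hits.groupby("orf")["score"]
        .rank(ascending=False, method="first")
        .astype(int)
)

# Keep only the best-scoring profile for each ORF.
best = hits[hits["rank"] == 1]
print(best[["orf", "hmm"]])
```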
Metatdenovo output
Summary tables
Consistently named and formatted output tables in TSV format, ready for further analysis in tools like R and Python. Filenames start with the assembly program and ORF caller, to allow reruns of the pipeline with different parameter settings without overwriting output files.
Output files
summary_tables/
{assembler}.{orf_caller}.overall_stats.tsv.gz
: overall statistics from the pipeline, e.g. number of reads, number of called ORFs, number of reads mapping back to contigs/ORFs etc.
{assembler}.{orf_caller}.counts.tsv.gz
: read counts per ORF and sample.
{assembler}.{orf_caller}.emapper.tsv.gz
: reformatted output from EggNOG-mapper.
{assembler}.{orf_caller}.{db}_eukulele.tsv.gz
: taxonomic annotation per ORF for a specific database.
{assembler}.{orf_caller}.prokka-annotations.tsv.gz
: reformatted annotation output from Prokka.
{assembler}.{orf_caller}.hmmrank.tsv.gz
: ranked summary table from HMMER results.
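As a sketch of downstream use, the counts table can be joined with a taxonomy table on a shared ORF identifier and then aggregated, here with pandas. The toy DataFrames and the 'orf' key column are assumptions about the table schema rather than the exact pipeline output:

```python
import pandas as pd

# Toy stand-ins for two summary tables; real files would be read with
# e.g. pd.read_csv("megahit.prodigal.counts.tsv.gz", sep="\t").
# The shared 'orf' key is an assumption about the table schema.
counts = pd.DataFrame({
    "orf":    ["orf_1", "orf_2"],
    "sample": ["s1", "s1"],
    "count":  [10, 3],
})
taxonomy = pd.DataFrame({
    "orf":    ["orf_1", "orf_2"],
    "phylum": ["Proteobacteria", "Bacteroidota"],
})

# Join counts with taxonomy, then sum read counts per phylum and sample.
merged = counts.merge(taxonomy, on="orf", how="left")
per_phylum = (
    merged.groupby(["sample", "phylum"], as_index=False)["count"].sum()
)
print(per_phylum)
```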
Pipeline information
Output files
pipeline_info/
- Reports generated by Nextflow: execution_report.html, execution_timeline.html, execution_trace.txt and pipeline_dag.dot/pipeline_dag.svg.
- Reports generated by the pipeline: pipeline_report.html, pipeline_report.txt and software_versions.yml. The pipeline_report* files will only be present if the --email/--email_on_fail parameters are used when running the pipeline.
- Reformatted samplesheet files used as input to the pipeline: samplesheet.valid.csv.
- Parameters used by the pipeline run: params.json.