beta2: segfault in utgcns and bug in Snakemake #53

Hi,

I successfully processed two samples with the beta2 release (installed via bioconda), but for a third one, I am running into the following issue: […]

I am not sure whether the above simply triggered the Snakemake error below (known and supposedly fixed; see snakemake/snakemake#823) or whether it indicates another problem in the pipeline: […]

I can share the input data via Globus if needed.

Best,
Peter

Comments
Did it try the job twice? If so, I think Snakemake worked correctly despite the weird error reporting: it stopped because the output wasn't successful. If you can share the offending input, […]
Yes, the run was restarted once. I had read that as a "should no longer happen" error, but fair enough; the segfault seems more critical for now.
Globus reported a few connection problems, but the file seems complete (size-wise, at least); just ping me if it has nevertheless been corrupted during the transfer. Thanks!
Cute. The issue here is that canu's consensus has a maximum read length of 2,097,151 bp (2^21 - 1, i.e. a 21-bit length field), and your read is too long. We can either skip too-long reads on load or allow canu to store longer reads.
Quite unexpected. I assume that limit can only be changed by recompiling Canu?
There's actually no reason to enforce that limit in consensus. It affects the overlap and sequence data structures, but ONT reads are never loaded into those. If you built verkko from source, can you try the following patch in the canu code: […]
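(The patch itself is elided above. As a purely hypothetical sketch of the kind of change involved, assuming the 2,097,151 bp cap comes from a 21-bit length constant somewhere in the canu sources; every identifier below is invented for illustration and is not the real canu code:)

```cpp
// Hypothetical sketch only -- NOT the actual canu patch. It assumes the
// consensus read-length cap derives from a bit-width constant like the one
// below; the real identifiers and files in canu may differ.

#include <cstdint>

// Before: 21 bits -> (1 << 21) - 1 = 2,097,151 bp maximum read length.
// After:  widening to 22 bits doubles the cap to 4,194,303 bp.
constexpr std::uint32_t MAX_READLEN_BITS = 22;   // was 21
constexpr std::uint64_t MAX_READLEN =
    (std::uint64_t(1) << MAX_READLEN_BITS) - 1;

// Any packed struct storing a read length in MAX_READLEN_BITS bits picks up
// the new width, at the cost of a neighboring field, e.g.:
struct readInfo {
  std::uint64_t position : 64 - MAX_READLEN_BITS;  // illustrative companion field
  std::uint64_t length   : MAX_READLEN_BITS;       // now up to ~4.19 Mbp
};
```

Because such fields are usually packed, widening one means narrowing a neighbor or growing the word, which is presumably why the limit exists at all.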
It is running on my end, so if you can't change the code, I can also share the final fasta, assuming it runs to completion.
As a stopgap, that would be much appreciated. If this problem occurs with just the one sample, then I would not have to go through the process of deploying a source build of Verkko on our cluster.
I put the finished fasta on Globus under […]
Thanks a lot, it all worked as expected, with one exception: the […]
Yes, it will. In the meantime, you can check for and trim any read over 2 Mbp in your ONT datasets to avoid hitting it. In the case above, the read was at the start of the contig, and we trim back to at least 2x coverage, so most of it was erased from the final consensus anyway.
Great, and thanks for the advice. |
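For anyone hitting the same limit, a minimal sketch of the check-and-trim step suggested above, assuming plain uncompressed FASTA input (adapt for FASTQ or gzipped files as needed); it truncates any sequence over the 2,097,151 bp cap rather than dropping it:

```cpp
// Sketch: truncate any FASTA sequence longer than canu's consensus cap.
// Usage: trim_long_reads reads.fasta > trimmed.fasta
#include <cstddef>
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
  const std::size_t MAX_LEN = 2097151;   // (1 << 21) - 1, the limit above

  if (argc != 2) {
    std::cerr << "usage: " << argv[0] << " reads.fasta > trimmed.fasta\n";
    return 1;
  }

  std::ifstream in(argv[1]);
  std::string line, name, seq;

  auto emit = [&]() {   // write one record, trimming if it exceeds the cap
    if (name.empty())
      return;
    if (seq.size() > MAX_LEN) {
      std::cerr << name << " is " << seq.size()
                << " bp; trimming to " << MAX_LEN << " bp\n";
      seq.resize(MAX_LEN);
    }
    std::cout << name << '\n' << seq << '\n';
  };

  while (std::getline(in, line)) {
    if (!line.empty() && line[0] == '>') {   // header starts a new record
      emit();
      name = line;
      seq.clear();
    } else {
      seq += line;                           // sequence may span many lines
    }
  }
  emit();                                    // flush the final record
  return 0;
}
```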