Consensus job failed repeatedly #2374
You're running an older version, and there have been several fixes to consensus since v2.2. I'd suggest trying v2.3; you should be able to restart this assembly with v2.3, but make a backup of the assembly folder to be safe. If that still doesn't work, post the logs of the failed jobs.
Hello! Thank you so much for the reply and the suggestions. Really appreciate it! I ran canu 2.3 and hit the same problems. Here's the relevant part of the canu.out file. It looks like there's no memory associated with the Grid: cns tag. Could that be the problem?
As requested, here is the consensus.28289772_80.out log from one of the failed jobs in /unitigging/5-consensus (the other logs are almost identical):
It looks like an OOM error killed the process. In job 79, the consensus job actually starts but fails with the same sort of error; I've included a snippet of that below. Any ideas what I can do about this?
I am launching canu on the head node with the basic command:
Thank you so much for your help! Wei Han
Looks like your grid is killing these jobs for exceeding their memory limit. Canu estimates the memory it needs (it requested 4.5 GB in this case), but the estimate may have been too low here. You can try to increase the memory by adding the option
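The specific option got lost in the formatting above, so here is a sketch of what that might look like, assuming canu's stage-specific memory options (e.g. `cnsMemory` for the consensus tag; the 16g value is an arbitrary guess, not a recommendation — check the parameter reference for your canu version):

```shell
# Hedged sketch: re-run the same assembly, but request more memory
# for the consensus (cns) stage via canu's cnsMemory option.
# "$out_dir" and "$hifi_reads" are the same variables as in the
# original command; 16g is a placeholder value to tune for your data.
canu -p hicanu -d "$out_dir" \
  genomeSize=1.05g \
  cnsMemory=16g \
  -pacbio-hifi "$hifi_reads"
```

Because canu resumes from the existing assembly folder, re-running the same command with the extra option should only redo the failed consensus jobs.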
Hello! Thanks again for that. The consensus job appears to have run after bumping up the memory as you suggested. However, the pipeline still fails. Here is what the canu.out file shows:
I'm assuming this is an OOM issue as well? I can't seem to find the corresponding log file for the failed job under /unitigging/. Should I bump up the memory for each tag of each corresponding module (e.g. …)? Or would simply using the … option be enough? Thank you so much! Really appreciate your help here. Wei Han
This is the canu executor script, and the problem isn't memory this time; it's time. You should add a default time limit to canu's submit command via the gridOptions= option.
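A sketch of how that could look, assuming a Slurm grid (the walltime value and the scheduler flag are assumptions; other schedulers use different syntax, e.g. SGE's `-l h_rt=...`):

```shell
# Hedged sketch: pass a default walltime to every job canu submits
# to the grid, so the scheduler doesn't kill long-running jobs.
# --time=24:00:00 is a placeholder; set it to what your jobs need.
canu -p hicanu -d "$out_dir" \
  genomeSize=1.05g \
  gridOptions="--time=24:00:00" \
  -pacbio-hifi "$hifi_reads"
```

gridOptions is appended to every grid submission canu makes, so a single setting covers all stages.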
Hi everyone, I'm trying to run HiCanu (canu 2.2) with some PacBio HiFi reads, but I repeatedly run into a failure at the consensus job stage. Does anyone know how this can be resolved?
Here's my command for it:
canu -p hicanu -d "$out_dir" genomeSize=1.05g -pacbio-hifi "$hifi_reads"
And here's what is shown in the canu.out log:
If it helps, my input file to the main command is a .fq.gz, which canu should be able to read.
Thank you so much for your help!
Wei Han