Hi @wondey-sh, have you estimated the expected memory usage?
Hi @czkkkkkk, the example code is part of my model inference code, and I expect it to run on a g4dn.xlarge machine. However, OOM happens even when batch_size is 1. The graphs in my dataset are much larger than those in the example code, but they train perfectly fine with batch_size=16. Thanks.
🐛 Bug
The GPU memory keeps increasing while conducting dgl.unbatch on batched graphs on GPU and copying the split graphs to CPU.
To Reproduce
Run the following script and note that the allocated GPU memory keeps increasing. With a much larger graph dataset, this eventually causes an OOM crash.
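The reporter's script did not survive this scrape. A minimal sketch of the reported pattern, repeatedly batching graphs onto the GPU, calling dgl.unbatch there, and copying each part back to CPU, might look like the following. The graph sizes, loop count, and use of dgl.rand_graph are assumptions for illustration, not the reporter's original code.

```python
# Hypothetical reconstruction of the reported pattern -- not the
# reporter's original script. Requires dgl and torch with CUDA to
# observe the GPU memory growth described in the issue.
import dgl
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Small random graphs standing in for the reporter's dataset.
graphs = [dgl.rand_graph(100, 500) for _ in range(16)]

for step in range(1000):
    bg = dgl.batch(graphs).to(device)          # batch and move to GPU
    parts = dgl.unbatch(bg)                    # split the batch on GPU
    cpu_parts = [g.to('cpu') for g in parts]   # copy each part back to CPU
    if device.type == 'cuda':
        # The issue reports this number growing across iterations.
        print(step, torch.cuda.memory_allocated())
```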
Expected behavior
How could the GPU memory increase be prevented?
Environment
How you installed DGL (conda, pip, source): conda
Additional context