SDE PINN solver #897
Conversation
As promised, this solver will be completed. I got busy with another project, so I had to put this on hold. Initial reviews on this would be great. The polynomial chaos expansion for SPDEs will be done later.
That looks to be on the right track.
Would sub-batching, as added here, be a good idea? For sub_batch=2 I get:
So I tried solving again for sub_batch = 1, 2, and 5: the mean_fit error increases as the sub-batch size increases, while the MSE training loss converges smoothly for lower numbers of sub-batches. This could be due to a lack of the z_i's distribution information being reflected in the dataset (the ideal case involves a large number of z_i samples). I suspect a NN with probabilistic weights might be better suited to this problem (input t, output u), where we have a random loss function (the number of z_i must be chosen beforehand in the KKL loss approximation). This would be similar to my BPINN solvers, with the exception that we have a stochastic objective. But a doubt arises: would the optimization become too difficult then, or is there a workaround?
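To make the sub-batching idea concrete, here is a minimal, hypothetical sketch (not the PR's actual code) of what a sub-batched loss can look like, assuming the Wiener path is smoothed by a truncated Kosambi–Karhunen–Loève (KKL) expansion and the pathwise residual is taken in the Stratonovich-corrected form (Wong–Zakai); a tiny closed-form surrogate stands in for the neural network:

```julia
# Hypothetical sketch (not the PR's code): a sub-batched residual loss for
# GBM  du = mu*u*dt + sigma*u*dW. Each noise draw z_i ~ N(0, I_K) gives a
# smooth approximate Brownian path via a truncated KKL expansion, so the
# pathwise residual is well defined. Smooth approximations converge to the
# Stratonovich SDE (Wong-Zakai), hence the mu - sigma^2/2 drift correction.
using Random, Statistics

mu, sigma, u0, T, K = 1.0, 0.5, 1.0, 1.0, 8   # K = KKL truncation order

# KKL basis for Brownian motion on [0, T] and its time derivative
phi(k, t)  = sqrt(2T) * sin((k - 0.5) * pi * t / T) / ((k - 0.5) * pi)
dphi(k, t) = sqrt(2 / T) * cos((k - 0.5) * pi * t / T)
Wdot(t, z) = sum(z[k] * dphi(k, t) for k in eachindex(z))

# Tiny closed-form surrogate u(t, z; theta) standing in for the neural net,
# with a finite-difference time derivative (a real solver would use AD).
u(t, z, theta) = u0 * exp(theta[1] * t +
                          theta[2] * sum(z[k] * phi(k, t) for k in eachindex(z)))
ut(t, z, theta; h = 1e-5) = (u(t + h, z, theta) - u(t - h, z, theta)) / (2h)

# Sub-batched loss: average the pathwise (Stratonovich-form) residual over
# `sub_batch` independent draws of the KKL coefficients z_i.
function sde_loss(theta; sub_batch = 2, nt = 32, rng = Random.default_rng())
    ts = range(1e-3, T; length = nt)
    zs = [randn(rng, K) for _ in 1:sub_batch]
    mean((ut(t, z, theta) - (mu - sigma^2 / 2) * u(t, z, theta) -
          sigma * u(t, z, theta) * Wdot(t, z))^2 for z in zs, t in ts)
end

@show sde_loss([mu - sigma^2 / 2, sigma]; sub_batch = 2)  # ~0 at the exact GBM map
```

In this toy setting the loss is ~0 at the exact GBM map for any sub_batch, while a misspecified theta is penalized on every sampled path; increasing sub_batch mainly reduces the variance of the loss estimate, which is consistent with needing many z_i draws before the z-distribution is well represented.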
I would think you'd need sample sizes of at least 100 to be able to converge smoothly?
Ohh yeah. I'll set up some tests for this. I had tried n=10 locally, but it was too slow.
I had tried with sub_batch = 2, 5, and 10, but these were not better than the sub_batch = 1 solution. I've added the sub_batch = 250 tests, but I don't understand why they fail. (Locally, a sub_batch = 100 solve call takes ~3 hrs of runtime and fails the tests.) In case I'm testing incorrectly or some code can be sped up, do let me know. Thanks.
Fixed the issue and made the solver faster! Tests should pass.
@ChrisRackauckas All the tests pass. Let me know if anything can be improved so we can finally get this merged.
This looks correct, though it needs a few more things tested:
@ChrisRackauckas Sorry for the delay; the solver was failing miserably on the additive noise tests. It turns out I was performing the initial condition transform twice! I finally figured out where the code was going wrong. I'll be posting the results shortly (yes, the new results are pretty good as well).
@ChrisRackauckas I have added tests for strong and weak convergence for both GBM and the additive noise test equation in the testing file. Note that we can also train the SDEPINN via strong or weak convergence; I am defaulting to weak convergence (basis: strong convergence implies weak convergence, but the weak solution is easier to optimize over the same n iterations), though the user can always choose to train with a total pathwise loss as well. Let me know what you think about this, or whether you want more testing on this part. I've used MonteCarloMeasurements.Particles for the returned solutions (the BPINNs use this as well). With this, the difference between larger and smaller sub-sample sizes is much clearer: the confidence intervals are better for more sub-samples (in almost all cases they are equal to, or within, the confidence intervals of the truncated/analytical solutions). The diffusion term is learnt much better with more sub-samples (this makes sense, as we essentially learn the Wiener basis mapping along with the solutions better). See the sketch below for the two error notions.
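For reference, here is a self-contained, hypothetical sketch of the strong vs. weak error notions such tests check; the toy `u_pred` is only a stand-in for the trained SDEPINN evaluated on the same noise draws:

```julia
# Hypothetical sketch of strong vs. weak error at time T for GBM,
# du = mu*u*dt + sigma*u*dW, whose exact solution at T is
# u(T) = u0 * exp((mu - sigma^2/2)*T + sigma*sqrt(T)*xi) with xi ~ N(0, 1).
# `u_pred` is a toy stand-in for the trained SDEPINN on the same noise draw.
using Random, Statistics

mu, sigma, u0, T = 1.0, 0.5, 1.0, 1.0
u_true(xi) = u0 * exp((mu - sigma^2 / 2) * T + sigma * sqrt(T) * xi)
u_pred(xi) = u_true(xi) * (1 + 0.01 * xi)        # small pathwise defect

rng = Random.MersenneTwister(1)
xis = randn(rng, 250)                            # 250 shared noise sub-samples

strong_err = mean(abs.(u_pred.(xis) .- u_true.(xis)))      # E|u_pred - u_true|
weak_err   = abs(mean(u_pred.(xis)) - mean(u_true.(xis)))  # |E u_pred - E u_true|
@show strong_err weak_err   # weak_err <= strong_err by Jensen, matching
                            # "strong convergence implies weak convergence"
# The PR returns solutions as MonteCarloMeasurements.Particles; wrapping these
# samples the same way would look like Particles(u_pred.(xis)).
```

The weak error only compares the laws of the two solutions (here, the means), which is why it is always bounded by the strong (pathwise) error and is the easier objective to drive down.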
And all the related tests pass; requesting reviews.
Checklist
- The new code follows the contributor guidelines, in particular the SciML Style Guide and COLPRAC.