Implement the Inverse Dirichlet adaptive loss method proposed in the paper "Inverse Dirichlet weighting enables reliable training of physics informed neural networks" by Suryanarayana Maddu, Dominik Sturm, Christian L. Müller, and Ivo F. Sbalzarini.

The algorithm is described in Section 3.2 of the paper. It should be implemented as a concrete subtype of `AbstractAdaptiveLoss` so that it fits within our pre-existing code generation infrastructure in the `discretize_inner_functions` function. Note that the algorithm from Section 3.1 is already implemented in our repo as `GradientScaleAdaptiveLoss`, and that implementation is a good reference for the Inverse Dirichlet algorithm since both utilize many of the same quantities. I.e.:
```julia
struct InverseDirichletAdaptiveLoss <: AbstractAdaptiveLoss
    ...
end
```
This doesn't seem that difficult to implement, since most of the required quantities are already calculated in `GradientScaleAdaptiveLoss`; the new method just has to use those values differently. It would also be interesting to compare against, as it resembles some of the ADAM-like optimizers that track gradient variance.
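To make the intended behavior concrete, here is a rough standalone sketch of the inverse Dirichlet weight update from Section 3.2 of the paper: each loss term is weighted by the ratio of the largest gradient standard deviation across terms to that term's own gradient standard deviation, smoothed by an exponential moving average. This is only an illustration of the math, not the actual `NeuralPDE` implementation; the variable names (`grads`, `weights`, `α`) are placeholders.

```julia
using Statistics

# Stand-in gradients of three loss terms w.r.t. the network parameters,
# deliberately given very different scales.
grads = [randn(1000) .* s for s in (1.0, 0.1, 0.01)]

# Inverse Dirichlet target weight for term i: max_k std(∇L_k) / std(∇L_i),
# so terms with small gradient variance get boosted.
stds = std.(grads)
target = maximum(stds) ./ stds

# Exponential moving average of the weights across training iterations
# (α is a smoothing hyperparameter, here chosen arbitrarily).
α = 0.5
weights = ones(length(grads))
weights .= (1 - α) .* weights .+ α .* target
```

In contrast, the Section 3.1 algorithm (`GradientScaleAdaptiveLoss`) uses the mean absolute gradient magnitude in place of the standard deviation, which is why the two implementations share most of their bookkeeping.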