Thanks for the response,
Could you provide more details on training?
I am unable to get good results with the LSTM layer setup described in the paper.
Did you use stateful LSTMs?
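To be concrete, by stateful I mean something like this minimal Keras sketch (the layer sizes and shapes here are placeholders I picked, not values from the paper):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

# Stateful setup: the batch size is fixed, and hidden states carry over
# across batches instead of being reset after every batch.
batch_size, timesteps, features = 32, 10, 8
model = Sequential([
    LSTM(64, stateful=True, return_sequences=True,
         batch_input_shape=(batch_size, timesteps, features)),
    LSTM(64, stateful=True),
    Dense(features),
])
model.compile(optimizer="adam", loss="mse")

# With stateful=True the states must be cleared by hand,
# e.g. at the end of each epoch or each independent sequence.
model.reset_states()
```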
Did you organize the training dataset in some particular order when feeding the model, as is done with language models (shuffling, chunking the data, etc.)?
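For context, this is how I have been chunking the data myself (my own preprocessing, not something taken from the paper; `window` and `horizon` are placeholders):

```python
import numpy as np

def make_chunks(series, window, horizon):
    """Slice a long series into (input window, forecast horizon) pairs,
    keeping the temporal order inside every chunk."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window:i + window + horizon])
    return np.array(X), np.array(y)

# The resulting chunks can then be shuffled across (but not within)
# windows before training.
```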
How many LSTM layers are there in each?
How many forward steps did you predict at training time? Did you use a weighted MSE to account for the forecasting importance?
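By weighted MSE I mean something along these lines (just a sketch; the per-step 0.9 decay is a value I chose arbitrarily):

```python
import tensorflow as tf

def weighted_mse(y_true, y_pred):
    # Down-weight later forecast steps, assuming tensors of shape
    # (batch, horizon, features); the 0.9 decay rate is a guess.
    horizon = tf.shape(y_true)[1]
    weights = 0.9 ** tf.cast(tf.range(horizon), tf.float32)
    sq_err = tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)
    return tf.reduce_mean(sq_err * weights)
```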
Why did you not use a more traditional ResNet encoder? I find better performance with a more standard model.
Did you benchmark against the ConvLSTM layers available in Keras?
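For reference, the ConvLSTM baseline I tried looks roughly like this (a standard Keras ConvLSTM2D stack; the filter counts and input shape are my own choices):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv3D, ConvLSTM2D

# Simple ConvLSTM2D baseline on (time, height, width, channels) inputs.
model = Sequential([
    ConvLSTM2D(32, kernel_size=(3, 3), padding="same",
               return_sequences=True, input_shape=(None, 64, 64, 1)),
    ConvLSTM2D(32, kernel_size=(3, 3), padding="same",
               return_sequences=True),
    Conv3D(1, kernel_size=(3, 3, 3), padding="same",
           activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```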
Sorry for so many questions, but I have not been able to get good results with this model, and my intuition is that it should perform better.
I am trying to reproduce this paper's results with little success.
Could you provide the code and/or datasets?
Sincerely,
Thomas