RuntimeError: cudnn RNN backward can only be called in training mode

Problem description:

RuntimeError: cudnn RNN backward can only be called in training mode

Problem analysis:

The cause is that the cuDNN RNN kernel refuses to run backpropagation unless the module is in train mode.

My code placed an AlexNet in front of the LSTM and added a loss on the AlexNet, so backpropagation runs back through the whole network. At that point the LSTM hits this train-state problem during the backward pass.

Solution:

Using Baidu search:

I tried every method on the first page of results; none of them worked...

Thorough solution:

Add the following before training begins:

          torch.backends.cudnn.enabled = False

This completely solves it!!!
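For context, this flag makes PyTorch skip the cuDNN RNN kernels and fall back to the native implementation, which does not enforce train mode in backward() (at some cost in speed). A minimal sketch:

```python
import torch
import torch.nn as nn

# Disable cuDNN globally: RNNs fall back to the native (slower)
# implementation, which does not require train mode for backward().
torch.backends.cudnn.enabled = False

lstm = nn.LSTM(input_size=8, hidden_size=8, batch_first=True)
lstm.eval()  # even in eval mode, backward() succeeds with cuDNN off

out, _ = lstm(torch.randn(2, 5, 8))
out.sum().backward()
```

Note this disables cuDNN for everything, not just RNNs, so convolutions will slow down too.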


Update:

The method above only forcibly works around the model's current state; the core problem is in the code itself. Check carefully which places switch the model out of train mode — an eval call that is never undone is the usual trigger.
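A minimal sketch of the real fix, using a bare nn.LSTM as a stand-in for the AlexNet + LSTM model described above: restore train mode before the backward pass.

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=32, hidden_size=16, batch_first=True)

lstm.eval()           # e.g. left over from a validation pass
# With cuDNN enabled on a GPU, backward() would now raise
# "cudnn RNN backward can only be called in training mode".

lstm.train()          # restore train mode before backpropagating
out, _ = lstm(torch.randn(4, 10, 32))
out.sum().backward()  # safe: the module is back in train mode
```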

A good way to debug this is to record and print the model's train state at the start of every training-loop iteration, along with the current epoch. If it is not in train mode, the cause is often re-entering the loop without restoring it.
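A sketch of that debugging pattern (the `model` variable and the loop body here are illustrative, not from the original code):

```python
import torch.nn as nn

model = nn.LSTM(input_size=8, hidden_size=8)

for epoch in range(3):
    # Record the mode at the top of every iteration; a False here
    # pinpoints the epoch where an eval() call was not undone.
    print(f"epoch {epoch}: training={model.training}")
    model.train()
    # ... forward pass, loss, backward(), optimizer step ...
```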

If a tensor is kept around while still stored on the GPU, it often makes the GPU memory blow up first!!
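A common cause is accumulating loss tensors that still reference the computation graph; detaching and moving to CPU before storing avoids the leak. A sketch:

```python
import torch

losses = []
for step in range(100):
    loss = (torch.randn(4, requires_grad=True) ** 2).mean()  # stand-in loss
    # Appending `loss` itself would keep every step's graph (and its
    # GPU tensors) alive; store a detached Python float instead.
    losses.append(loss.detach().cpu().item())
```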

Copyright notice
This article was written by [A wind attorney fond to bicycles]. Please include a link to the original when reposting. Thanks.
https://chowdera.com/2021/08/20210825022738918e.html


Source: https://chowdera.com/2021/08/20210830070053548n.html
