【Overview of Contents】
traincgb — Conjugate Gradient with Powell/Beale Restarts
traincgf — Fletcher-Powell Conjugate Gradient
traincgp — Polak-Ribiére Conjugate Gradient
trainoss — One-Step Secant
traingdx — Variable Learning Rate Backpropagation

- trainlm:
  - Good: For function-fitting problems, when the network has only a few hundred adjustable parameters, LM converges the fastest.
  - Bad: As the number of weights grows, LM's advantage fades (its memory consumption rises sharply), and LM is not well suited to training pattern-recognition networks.
- trainrp:
  - Good: Fastest convergence when training pattern-recognition networks, and its memory consumption is modest (training uses only the direction of the descending gradient).
  - Bad: Performs poorly on function fitting, and its performance degrades as it approaches a minimum.
- trainscg (recommended):
  - Works well in many situations, especially on larger networks.
  - For function fitting it is almost as fast as LM (even faster on larger networks); for pattern-recognition training it is as fast as trainrp. Its performance does not degrade as quickly as trainrp's does when the error is reduced.
  - Conjugate gradient algorithms have relatively modest memory requirements.
- trainbfg:
  - Performance is close to trainlm, and it needs less memory than trainlm, but its computational cost grows geometrically with network size, since the equivalent of a matrix inverse must be computed at each iteration.
- traingdx:
  - Slower than the other algorithms, with memory requirements similar to trainrp, but it still has its uses: some situations call for a slowly converging algorithm. For example, when using early stopping you may have inconsistent results if you use an algorithm that converges too quickly; you may overshoot the point at which the error on the validation set is minimized.

A minimal example of selecting one of these training functions is sketched at the end of this section.

3. Pre-processing of the training data
- premnmx: rescales data to new values in [-1, 1]
- tramnmx: normalizes new inputs with the same mapping
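To make the comparison above concrete, here is a minimal sketch (not from the original article) of how a training function is chosen when building a network with the classic Neural Network Toolbox newff interface; the data, layer sizes, and parameter values are illustrative assumptions.

% Illustrative function-fitting example; swap 'trainscg' for 'trainlm',
% 'trainrp', 'trainbfg', ... to compare the algorithms discussed above.
p = -1:0.05:1;                          % sample inputs (assumed data)
t = sin(2*pi*p) + 0.1*randn(size(p));   % noisy targets (assumed data)
net = newff(minmax(p), [10 1], {'tansig','purelin'}, 'trainscg');
net.trainParam.epochs = 300;            % maximum number of training epochs
net.trainParam.goal   = 1e-4;           % error goal
net = train(net, p, t);                 % train with the chosen algorithm

When early stopping is used (by passing a validation set to train), a slower algorithm such as traingdx can give more consistent results, as noted above.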
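Likewise, a minimal sketch of the premnmx / tramnmx workflow, assuming the older (pre-mapminmax) Toolbox API; the variable pnew is an illustrative placeholder for new input data, and postmnmx (the companion un-normalization function, not mentioned in the article) is added only to complete the round trip.

[pn, minp, maxp, tn, mint, maxt] = premnmx(p, t);   % rescale inputs and targets to [-1, 1]
net = train(net, pn, tn);                           % train on the normalized data
pnewn = tramnmx(pnew, minp, maxp);                  % apply the SAME mapping to new inputs
anewn = sim(net, pnewn);                            % simulate the trained network
anew  = postmnmx(anewn, mint, maxt);                % map outputs back to the original scale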