Defining the file paths, loading the images, and building the model:

import os
import cv2
import numpy as np
from keras.models import Sequential
from keras.layers import InputLayer, Conv2D, MaxPooling2D, Flatten, Dense
from keras.utils import to_categorical

# Defining the file paths
cat = os.listdir("/mnt/hdd/datasets/dogs_cats/train/cat")
dog = os.listdir("/mnt/hdd/datasets/dogs_cats/train/dog")
filepath = "/mnt/hdd/datasets/dogs_cats/train/cat/"
filepath2 = "/mnt/hdd/datasets/dogs_cats/train/dog/"

# Loading the images
images = []
label = []
for i in cat:
    image = cv2.imread(filepath + i)
    images.append(image)
    label.append(0)  # 0 for cat images
for i in dog:
    image = cv2.imread(filepath2 + i)
    images.append(image)
    label.append(1)  # 1 for dog images

# Resizing all the images
for i in range(0, 23000):
    images[i] = cv2.resize(images[i], (300, 300))

# Converting images and labels to arrays
images = np.array(images)
label = np.array(label)

# Defining the hyperparameters
filters = 10
filtersize = (5, 5)
epochs = 5
batchsize = 128
input_shape = (300, 300, 3)

# Converting the target variable to the required size
label = to_categorical(label)

# Defining the model
model = Sequential()
model.add(InputLayer(input_shape=input_shape))
model.add(Conv2D(filters, filtersize, strides=(1, 1), padding='valid',
                 data_format='channels_last', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(units=2, activation='softmax'))  # two classes: cat and dog

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# The validation_split value is missing in the original text; 0.3 is a stand-in.
model.fit(images, label, epochs=epochs, batch_size=batchsize, validation_split=0.3)
model.summary()

In this model I used only a single convolution and pooling layer, and the number of trainable parameters is 219,801. I wonder how many parameters an MLP would have needed in this scenario (a rough comparison follows after the conclusion). By adding more convolution and pooling layers you can reduce the number of parameters even further, and the more convolution layers we add, the more specific and complex the extracted features become.

Conclusion

I hope this article has helped you get acquainted with convolutional neural networks; it does not go into the complex mathematics behind CNNs. If you want to deepen your understanding, try building your own convolutional neural network and use it to see how the network runs and makes its predictions.
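As a starting point for that kind of experimentation, here is a minimal sketch (not part of the original article; the 100-unit hidden layer is an arbitrary, illustrative assumption) that lets Keras answer the MLP question raised above by counting parameters for both architectures:

from keras.models import Sequential
from keras.layers import InputLayer, Conv2D, MaxPooling2D, Flatten, Dense

# The convolutional model from this article: one convolution + one pooling layer.
cnn = Sequential([
    InputLayer(input_shape=(300, 300, 3)),
    Conv2D(10, (5, 5), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(2, activation='softmax'),
])

# A comparable MLP working on the flattened image.
# The 100 hidden units are an arbitrary choice for illustration.
mlp = Sequential([
    InputLayer(input_shape=(300, 300, 3)),
    Flatten(),                      # 300 * 300 * 3 = 270,000 inputs
    Dense(100, activation='relu'),  # 270,000 * 100 + 100 = 27,000,100 parameters
    Dense(2, activation='softmax'),
])

print("CNN parameters:", cnn.count_params())
print("MLP parameters:", mlp.count_params())

Even this small MLP needs about 27 million weights in its first dense layer alone, which is the point of the comparison: convolution and pooling shrink the representation before any fully connected layer ever sees it.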