Reposted from: https://github.com/Keyird/ Today we bring you a featured community post, "Deep Learning → Semantic Segmentation in Practice (1): SegNet Explained and Implemented in TensorFlow 2.0". Starting from the SegNet algorithm, CSDN blog expert @AI 菌 walks you through building a SegNet network with TensorFlow 2.0 and segmenting objects in a scene.
As usual, this article unfolds in two parts, theory and practice. The theory part covers the essentials of the SegNet algorithm; the practice part uses the TensorFlow 2.0 framework to build a SegNet network and segment a target (an ore pile) in a scene. The segmentation results are shown below.

I. SegNet Theory

1. SegNet Overview

Back in 2015, Vijay Badrinarayanan, Alex Kendall, et al. proposed SegNet, a deep fully convolutional neural network architecture for semantic pixel-wise segmentation. It consists mainly of an encoder network, a corresponding decoder network, and a pixel-wise classification layer.
SegNet's novelty lies in the way the decoder upsamples its lower-resolution input feature maps: the decoder performs non-linear upsampling using the pooling indices computed in the max-pooling step of the corresponding encoder. SegNet targets scene-understanding applications. It has significantly fewer trainable parameters than other architectures and can be trained end-to-end with stochastic gradient descent. Evaluations show that, compared with other architectures, SegNet performs well on inference time and memory.

2. SegNet Network Architecture

(1) Overall structure. As shown in the figure below, SegNet consists of an encoder network, a corresponding decoder network, and a pixel-wise classification layer. The encoder network uses VGG for feature extraction; the decoder network performs three non-linear upsampling steps; and the pixel-wise classification layer uses a convolutional layer to shape the network output as required. (Figure source: SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation)

(2) Encoder network. In detail, the encoder uses the first 13 convolutional layers of VGG16 to extract features, and what is fed to the decoder is the feature map output by VGG16's fourth convolutional block (Conv_block). Since the input image has shape (416, 416, 3), after four convolutional blocks (i.e., four downsampling steps) the encoder's output feature has shape (26, 26, 512).

(3) Decoder network. Decoding is essentially an upsampling process, and SegNet's novelty lies in how it upsamples. As shown in the figure below, the decoder uses the pooling indices computed in the corresponding encoder's max-pooling step to perform non-linear upsampling. (Figure source: SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation)
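To make this index-based upsampling concrete, here is a minimal TF 2 sketch (my illustration, not code from the paper or from the project in Part II, which uses plain UpSampling2D instead): tf.nn.max_pool_with_argmax records where each maximum came from, and tf.scatter_nd writes the values back to those positions, leaving everything else zero.

import tensorflow as tf

def pool_with_indices(x):
    # max-pool and also record the flat position of every maximum
    return tf.nn.max_pool_with_argmax(x, ksize=2, strides=2, padding='SAME',
                                      include_batch_in_index=True)

def unpool_with_indices(pooled, argmax, output_shape):
    # scatter the pooled values back to where they came from;
    # every other position stays zero (SegNet's sparse upsampling)
    out = tf.scatter_nd(tf.reshape(argmax, [-1, 1]),
                        tf.reshape(pooled, [-1]),
                        tf.expand_dims(tf.reduce_prod(output_shape), 0))
    return tf.reshape(out, output_shape)

x = tf.random.normal([1, 4, 4, 3])
pooled, argmax = pool_with_indices(x)                     # (1, 2, 2, 3) values + indices
up = unpool_with_indices(pooled, argmax,
                         tf.cast(tf.shape(x), tf.int64))  # sparse (1, 4, 4, 3)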
(4) Pixel-wise classification layer. This layer is designed for pixel-wise classification: a convolutional layer adjusts the number of channels of the decoder's output tensor. For example, to classify into n_class classes, the convolutional layer's output shape becomes (208, 208, n_class).

3. SegNet Experimental Results

(1) Test results on day and night samples from the CamVid dataset. SegNet's predictions are clearly closer to the ground truth, so SegNet performs better on this dataset.

(2) The two tables below show SegNet's results on the CamVid and SUN RGB-D datasets; SegNet's overall accuracy beats the other segmentation networks.
(3) As the table below shows, SegNet keeps its accuracy competitive while retaining a clear advantage in inference time and memory footprint.
II. Building SegNet for Semantic Segmentation with TF 2.0

Only the project's core code is explained below. The complete code is on my GitHub (link at the end of this post); download it if you need it, and a star is welcome!

1. Dataset Preparation

For how to make a semantic-segmentation dataset and its labels, see: labelme 安裝以及使用教程——自制語(yǔ)義分割數(shù)據(jù)集. Once the dataset is ready, a make_txt script saves the file names of every image and its corresponding label. The code is as follows:

# coding:utf-8
import os
imgs_path = '/home/fmc/WX/Segmentation/SegNet-tf2/dataset/jpg'  # directory holding the images
for files in os.listdir(imgs_path):
    print(files)
    image_name = files + ';' + files[:-4] + '.png'
    with open('train.txt', 'a') as f:
        f.write(str(image_name) + '\n')  # the with block closes the file automatically
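Each line of the resulting train.txt pairs an image with its label mask, separated by a semicolon, e.g. 0001.jpg;0001.png (file names here are illustrative); the training code later splits each line on ';' to load both files.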
2. Building the Network

(1) Encoder network

import tensorflow as tf
from tensorflow.keras import layers

def vggnet_encoder(input_height=416, input_width=416, pretrained='imagenet'):
    img_input = tf.keras.Input(shape=(input_height, input_width, 3))

    # 416,416,3 -> 208,208,64
    x = layers.Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(img_input)
    x = layers.Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
    x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
    f1 = x

    # 208,208,64 -> 104,104,128
    x = layers.Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
    x = layers.Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
    x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
    f2 = x

    # 104,104,128 -> 52,52,256
    x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
    x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
    x = layers.Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
    x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
    f3 = x

    # 52,52,256 -> 26,26,512
    x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
    x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
    x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
    x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
    f4 = x

    # 26,26,512 -> 13,13,512
    x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = layers.Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    x = layers.MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
    f5 = x

    return img_input, [f1, f2, f3, f4, f5]
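As a quick shape check (illustrative only, assuming the function above), the five feature maps can be inspected directly:

img_input, feats = vggnet_encoder(416, 416)
print([tuple(f.shape) for f in feats])
# [(None, 208, 208, 64), (None, 104, 104, 128), (None, 52, 52, 256),
#  (None, 26, 26, 512), (None, 13, 13, 512)]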
(2) Decoder network and pixel-wise classification layer

Note that this implementation upsamples with plain UpSampling2D rather than the paper's pooling-index unpooling. IMAGE_ORDERING is not defined in this excerpt; it is assumed to be 'channels_last'.

IMAGE_ORDERING = 'channels_last'  # assumed; not defined in the original excerpt

# decoder
def decoder(feature_input, n_classes, n_upSample):
    # feature_input is the feature map from VGG's fourth conv block
    # 26,26,512
    output = (layers.ZeroPadding2D((1, 1), data_format=IMAGE_ORDERING))(feature_input)
    output = (layers.Conv2D(512, (3, 3), padding='valid', data_format=IMAGE_ORDERING))(output)
    output = (layers.BatchNormalization())(output)

    # one UpSampling2D: h and w are now 1/8 of the input size
    # 52,52,256
    output = (layers.UpSampling2D((2, 2), data_format=IMAGE_ORDERING))(output)
    output = (layers.ZeroPadding2D((1, 1), data_format=IMAGE_ORDERING))(output)
    output = (layers.Conv2D(256, (3, 3), padding='valid', data_format=IMAGE_ORDERING))(output)
    output = (layers.BatchNormalization())(output)

    # another UpSampling2D: h and w are now 1/4 of the input size
    # 104,104,128 (the loop runs n_upSample - 2 times, i.e. once for n_upSample = 3)
    for _ in range(n_upSample - 2):
        output = (layers.UpSampling2D((2, 2), data_format=IMAGE_ORDERING))(output)
        output = (layers.ZeroPadding2D((1, 1), data_format=IMAGE_ORDERING))(output)
        output = (layers.Conv2D(128, (3, 3), padding='valid', data_format=IMAGE_ORDERING))(output)
        output = (layers.BatchNormalization())(output)

    # a final UpSampling2D: h and w are now 1/2 of the input size
    # 208,208,64
    output = (layers.UpSampling2D((2, 2), data_format=IMAGE_ORDERING))(output)
    output = (layers.ZeroPadding2D((1, 1), data_format=IMAGE_ORDERING))(output)
    output = (layers.Conv2D(64, (3, 3), padding='valid', data_format=IMAGE_ORDERING))(output)
    output = (layers.BatchNormalization())(output)

    # pixel-wise classification layer
    # output is (h_input/2, w_input/2, n_classes), i.e. 208,208,2 here
    output = layers.Conv2D(n_classes, (3, 3), padding='same', data_format=IMAGE_ORDERING)(output)

    return output
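To sanity-check the decoder's shape arithmetic (illustrative only, with n_upSample=3):

feature = tf.keras.Input(shape=(26, 26, 512))  # stand-in for the VGG block-4 feature map
print(decoder(feature, n_classes=2, n_upSample=3).shape)  # (None, 208, 208, 2)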
(3) Overall structure

# the semantic segmentation network SegNet
def SegNet(input_height=416, input_width=416, n_classes=2, n_upSample=3, encoder_level=3):

    img_input, features = vggnet_encoder(input_height=input_height, input_width=input_width)
    feature = features[encoder_level]  # (26,26,512)
    output = decoder(feature, n_classes, n_upSample)

    # reshape so that softmax is applied per pixel
    output = tf.reshape(output, (-1, int(input_height / 2) * int(input_width / 2), n_classes))
    output = layers.Softmax()(output)

    model = tf.keras.Model(img_input, output)

    return model
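A quick end-to-end check (illustrative only) confirms the wiring:

model = SegNet(input_height=416, input_width=416, n_classes=2)
model.summary()  # final output shape: (None, 43264, 2), i.e. (None, 208*208, n_classes)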
3. Compiling and Training the Model

(1) Compiling the model

from tensorflow.keras import optimizers

model.compile(loss=loss_function,                             # cross-entropy loss
              optimizer=optimizers.Adam(learning_rate=1e-3),  # optimizer
              metrics=['accuracy'])                           # evaluation metric
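loss_function is referenced above but not defined in this excerpt; a minimal stand-in, assuming labels arrive one-hot with shape (batch, (H/2)*(W/2), n_classes) to match the softmax output:

def loss_function(y_true, y_pred):
    # mean per-pixel categorical cross-entropy
    return tf.reduce_mean(tf.keras.losses.categorical_crossentropy(y_true, y_pred))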
(2) Training the model

# start training (fit_generator; newer TF 2 releases let model.fit consume generators directly)
model.fit_generator(generate_arrays_from_file(lines[:num_train], batch_size),  # training set
                    steps_per_epoch=max(1, num_train // batch_size),           # steps per epoch
                    validation_data=generate_arrays_from_file(lines[num_train:], batch_size),  # validation set
                    validation_steps=max(1, num_val // batch_size),
                    epochs=50,
                    initial_epoch=0,
                    callbacks=[checkpoint_period, reduce_lr, early_stopping])  # callbacks
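The training call relies on generate_arrays_from_file and three standard Keras callbacks (checkpoint_period, reduce_lr, early_stopping) defined in the full project. A minimal sketch of the generator, assuming each line of train.txt reads name.jpg;name.png, that label PNGs store a class index per pixel, and hypothetical dataset/jpg and dataset/png directories:

import numpy as np
from PIL import Image

def generate_arrays_from_file(lines, batch_size, h=416, w=416, n_classes=2):
    i = 0
    while True:
        xs, ys = [], []
        for _ in range(batch_size):
            img_name, label_name = lines[i].strip().split(';')
            img = Image.open('dataset/jpg/' + img_name).resize((w, h))
            xs.append(np.array(img) / 255.0)
            # labels are shrunk to h/2 x w/2 to match the network output,
            # using nearest-neighbour so class indices are preserved
            lab = Image.open('dataset/png/' + label_name).resize((w // 2, h // 2), Image.NEAREST)
            lab = np.array(lab).reshape(-1)    # (h/2 * w/2,)
            ys.append(np.eye(n_classes)[lab])  # one-hot: (h/2 * w/2, n_classes)
            i = (i + 1) % len(lines)
        yield np.array(xs), np.array(ys)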
4. Testing

Each image in the img_test directory is tested in turn, and the segmented results are saved to the img_out directory. (Imports added for completeness; WIDTH, HEIGHT, NCLASSES, class_colors, imgs, and the trained model come from the full project.)

import copy
import numpy as np
from PIL import Image

for jpg in imgs:
    img = Image.open('./img_test/' + jpg)
    old_img = copy.deepcopy(img)
    original_h = np.array(img).shape[0]
    original_w = np.array(img).shape[1]

    img = img.resize((WIDTH, HEIGHT))
    img = np.array(img)
    img = img / 255
    img = img.reshape(-1, HEIGHT, WIDTH, 3)

    pr = model.predict(img)[0]
    pr = pr.reshape((int(HEIGHT / 2), int(WIDTH / 2), NCLASSES)).argmax(axis=-1)

    seg_img = np.zeros((int(HEIGHT / 2), int(WIDTH / 2), 3))
    colors = class_colors
    # paint every pixel of each predicted class with that class's RGB colour
    for c in range(NCLASSES):
        seg_img[:, :, 0] += ((pr[:, :] == c) * (colors[c][0])).astype('uint8')
        seg_img[:, :, 1] += ((pr[:, :] == c) * (colors[c][1])).astype('uint8')
        seg_img[:, :, 2] += ((pr[:, :] == c) * (colors[c][2])).astype('uint8')

    # Image.fromarray converts the array back into a PIL image
    seg_img = Image.fromarray(np.uint8(seg_img)).resize((original_w, original_h))
    # blend the original image and the mask into a single picture
    image = Image.blend(old_img, seg_img, 0.3)
    image.save('./img_out/' + jpg)
The final test results are shown below.

— References —

SegNet paper explained: 深度學(xué)習(xí)—語(yǔ)義分割(1):SegNet論文詳解 (https://blog.csdn.net/wjinjie/article/details/106732783)
Dataset tutorial: labelme 安裝以及使用教程——自制語(yǔ)義分割數(shù)據(jù)集 (https://blog.csdn.net/wjinjie/article/details/106735141)
Code download: https://github.com/Keyird/