

End-to-End Deep Learning for Autonomous Driving: An AirSim Tutorial (Including Fixes for Setting Up the AirSim Simulation Environment on Ubuntu 18.04)

netouch · Posted 2023-08-03 in Beijing

This is the first tutorial (of two, at the time of writing) in Microsoft's Autonomous Driving Cookbook. I had come across it before; this post documents my walkthrough.

https://github.com/microsoft/AutonomousDrivingCookbook

Preface

In this tutorial, you will learn how to use data collected from the AirSim simulation environment to train and test an end-to-end deep learning model for autonomous driving. You will train a model that learns to steer a car through part of the Landscape Mountains map in AirSim, using only the frames from a single front-facing webcam as input. A task like this is often considered the "hello world" of autonomous driving.

Tutorial Structure

The tutorial is built on the Keras framework.

Step 0 - Data Exploration and Preparation

Overview

We will train a deep learning model with:
Input: the camera frame and the vehicle's last known state
Output: a predicted steering angle.

End-to-End Autonomous Driving

The name speaks for itself: unlike traditional machine learning approaches that need feature engineering, the data goes into a neural network and the output comes straight out. The only drawback is that it takes a lot of data, but a simulator can be used to collect it, after which a small amount of real data suffices for fine-tuning (behavioral cloning), making end-to-end autonomous driving practical.

Download the dataset:

https:///AirSimTutorialDataset

Baidu Netdisk link for the dataset:

Link: https://pan.baidu.com/s/1l_YJ6c9VAJS_pkIJeSWSFw
Extraction code: fwr3

The code walkthrough follows:

Note: the << ... >> markers indicate places where you must adapt the code to your own paths.


%matplotlib inline
import numpy as np
import pandas as pd
import h5py
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw
import os
import Cooking
import random

# << Point this to the directory containing the raw data >>
RAW_DATA_DIR = 'data_raw/'

# << Point this to the desired output directory for the cooked (.h5) data >>
COOKED_DATA_DIR = 'data_cooked/'

# The folders to search for data under RAW_DATA_DIR
# For example, the first folder searched will be RAW_DATA_DIR/normal_1
DATA_FOLDERS = ['normal_1', 'normal_2', 'normal_3', 'normal_4', 'normal_5', 'normal_6', 'swerve_1', 'swerve_2', 'swerve_3']

# The size of the figures in this notebook
FIGURE_SIZE = (10,10)

The dataset consists of two parts: images and .tsv files. Let's look at the .tsv format first.


sample_tsv_path = os.path.join(RAW_DATA_DIR, 'normal_1/airsim_rec.txt')
sample_tsv = pd.read_csv(sample_tsv_path, sep='\t') # https://blog.csdn.net/b876144622/article/details/80781917
sample_tsv.head()

[Output: the first rows of airsim_rec.txt]
The dataset contains labels such as the steering angle and the image name.
Let's look at one image: img_0 from the normal_1 folder.


sample_image_path = os.path.join(RAW_DATA_DIR, 'normal_1/images/img_0.png')
sample_image = Image.open(sample_image_path)
plt.title('Sample Image')
plt.imshow(sample_image)
plt.show()

[Figure: the sample image]
We are only interested in a small part of the image; the ROI is the red box shown below:

sample_image_roi = sample_image.copy()

fillcolor=(255,0,0)
draw = ImageDraw.Draw(sample_image_roi)
points = [(1,76), (1,135), (255,135), (255,76)]
for i in range(0, len(points), 1): # step of 1 (the default; could be omitted). https://www.runoob.com/python/python-func-range.html
    draw.line([points[i], points[(i+1)%len(points)]], fill=fillcolor, width=3) # (i+1) % len(points) wraps around to close the polygon -- a handy idiom
del draw

plt.title('Image with sample ROI')
plt.imshow(sample_image_roi)
plt.show()

[Figure: the image with the sample ROI]
Extracting an ROI both shortens training time and reduces the amount of data needed to train the model. It also keeps the model from being confused by irrelevant features of the environment (mountains, trees, and so on).
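Cropping that ROI out of a frame is then a single NumPy slice. A quick sketch (my own illustration; the row range 76:135 and column range 0:255 match the red box above):

import numpy as np
from PIL import Image

# Crop the ROI out of a (144, 256, 3) frame; PNGs may carry an alpha
# channel, so keep only the first three channels.
sample_image_arr = np.asarray(Image.open(sample_image_path))[:, :, :3]
roi = sample_image_arr[76:135, 0:255, :]  # rows 76-134, columns 0-254
print(roi.shape)  # -> (59, 255, 3)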
Data Augmentation

  1. Mirror the image about the vertical axis (a horizontal flip), negating the steering angle at the same time (see the sketch after this list).
  2. Vary the global illumination.
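A minimal sketch of these two augmentations (my own illustration, not the tutorial's code; the real implementation appears in Generator.py in Step 1):

import numpy as np
import cv2

def augment(image, steering, brighten_range=0.4):
    # 1. Mirror about the vertical axis with probability 0.5; the steering
    #    label must flip sign along with the image.
    if np.random.random() < 0.5:
        image = image[:, ::-1, :].copy()
        steering = -steering
    # 2. Random global illumination: scale the V channel in HSV space.
    scale = np.random.uniform(1.0 - brighten_range, 1.0 + brighten_range)
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
    hsv[:, :, 2] = np.clip(hsv[:, :, 2] * scale, 0, 255).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB), steering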

Let's gather all the labels into a single variable to get a better look at them.

full_path_raw_folders = [os.path.join(RAW_DATA_DIR, f) for f in DATA_FOLDERS]

dataframes = []
for folder in full_path_raw_folders:
    current_dataframe = pd.read_csv(os.path.join(folder, 'airsim_rec.txt'), sep='\t')
    current_dataframe['Folder'] = folder
    dataframes.append(current_dataframe)
    
dataset = pd.concat(dataframes, axis=0) # concatenate the list of 9 DataFrames into one of shape (46738, 8)

print('Number of data points: {0}'.format(dataset.shape[0]))

dataset.head()

[Output: the first rows of the combined dataset]
Note the folder names: 'normal' and 'swerve'. These are two different driving strategies. Let's look at how they differ by plotting a slice of the data points from each driving style against the other.

min_index = 100
max_index = 1100
steering_angles_normal_1 = dataset[dataset['Folder'].apply(lambda v: 'normal_1' in v)]['Steering'][min_index:max_index] # filter rows by folder with a boolean mask -- a slick pandas idiom
steering_angles_swerve_1 = dataset[dataset['Folder'].apply(lambda v: 'swerve_1' in v)]['Steering'][min_index:max_index]

plot_index = [i for i in range(min_index, max_index, 1)]

fig = plt.figure(figsize=FIGURE_SIZE)
ax1 = fig.add_subplot(111)

ax1.scatter(plot_index, steering_angles_normal_1, c='b', marker='o', label='normal_1')
ax1.scatter(plot_index, steering_angles_swerve_1, c='r', marker='o', label='swerve_1')
plt.legend(loc='upper left');
plt.title('Steering Angles for normal_1 and swerve_1 runs')
plt.xlabel('Time')
plt.ylabel('Steering Angle')
plt.show()

[Figure: steering angles for normal_1 and swerve_1 runs]
The blue points show the normal driving strategy: the steering angle stays close to zero, and the car drives straight along the road most of the time.
The swerve strategy has the car weaving from side to side across the road almost constantly. When training an end-to-end deep learning model, since we do no feature engineering, the model relies almost entirely on the dataset for all the information it needs. So, for the model to handle any sharp turns it might encounter, and to give it the ability to correct itself once it starts drifting off the road, we need to supply enough such examples during training. That is why these extra datasets focused on those scenarios were created.

Now let's look at the number of data points in each category.

dataset['Is Swerve'] = dataset.apply(lambda r: 'swerve' in r['Folder'], axis=1) # row-wise apply. https://www.cnblogs.com/liulangmao/p/9342806.html
grouped = dataset.groupby(by=['Is Swerve']).size().reset_index() # pandas groupby makes this a one-liner
grouped.columns = ['Is Swerve', 'Count']

def make_autopct(values):
    def my_autopct(percent):
        total = sum(values)
        val = int(round(percent*total/100.0))
        return '{0:.2f}%  ({1:d})'.format(percent,val)
    return my_autopct

pie_labels = ['Normal', 'Swerve']
fig, ax = plt.subplots(figsize=FIGURE_SIZE) # returns a figure and its axes. https://www.cnblogs.com/komean/p/10670619.html
ax.pie(grouped['Count'], labels=pie_labels, autopct = make_autopct(grouped['Count'])) # https://www.cnblogs.com/biyoulin/p/9565350.html
plt.title('Number of data points per driving strategy')
plt.show()

[Figure: pie chart of data points per driving strategy]
About a quarter of the data comes from swerve runs and the rest from normal runs, and there are only about 47,000 data points in total, so the network cannot be too deep.

Let's look at how the labels are distributed under the two strategies.

bins = np.arange(-1, 1.05, 0.05)
normal_labels = dataset[dataset['Is Swerve'] == False]['Steering']
swerve_labels = dataset[dataset['Is Swerve'] == True]['Steering']

def steering_histogram(hist_labels, title, color):
    plt.figure(figsize=FIGURE_SIZE)
    n, b, p = plt.hist(hist_labels.as_matrix(), bins, normed=1, facecolor=color) # normed=1 plots a density (each bar's share of the total); on newer pandas/matplotlib, use .to_numpy() and density=True instead
    plt.xlabel('Steering Angle')
    plt.ylabel('Normalized Frequency')
    plt.title(title)
    plt.show()

steering_histogram(normal_labels, 'Normal label distribution', 'g') # https://blog.csdn.net/weixin_43085694/article/details/104147348
steering_histogram(swerve_labels, 'Swerve label distribution', 'r') # https://blog.csdn.net/m0_45408211/article/details/107583589

[Figures: normal and swerve label distributions]
Two conclusions:

  • When the car drives normally, the steering angle is almost always zero. This is a severe imbalance: if this portion of the data is not downsampled, the model will always predict zero and the car will never turn.
  • When the car is driven with the swerve strategy, we get examples of sharp turns that never appear in the normal-strategy dataset. This validates the reasoning behind collecting the data as described above (splitting it into normal and swerve categories).
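The fix for the first point is implemented in Step 1's custom generator via a zero_drop_percentage parameter; the idea boils down to a boolean mask like the following sketch (my own illustration, not the tutorial code):

import numpy as np

def keep_mask(labels, zero_drop_percentage=0.9):
    # Keep every sample with nonzero steering, but only ~10% of the
    # near-zero ones (the generator in Step 1 does this per batch).
    labels = np.asarray(labels, dtype=float)
    near_zero = np.isclose(labels, 0.0)
    survives = np.random.uniform(size=labels.shape) > zero_drop_percentage
    return ~near_zero | survives

# e.g.: balanced = dataset[keep_mask(dataset['Steering'].values)]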

At this point, we need to merge the raw data into compressed files suitable for training. We will use .h5 files here, since the format handles large datasets well without requiring everything to be read into memory at once, and it works seamlessly with Keras.
The code for cooking the dataset is simple, but long. When it finishes, the final dataset will have 4 parts:

  • image: the image data, as a numpy array
  • previous_state: the last known state of the car, as a numpy array of (steering, throttle, brake, speed) tuples
  • label: the steering angle (what we want to predict), normalized to [-1, 1], as a numpy array
  • metadata: metadata about the files (where they came from, etc.), as a numpy array

We split the data into train/eval/test parts.


train_eval_test_split = [0.7, 0.2, 0.1]
full_path_raw_folders = [os.path.join(RAW_DATA_DIR, f) for f in DATA_FOLDERS]
Cooking.cook(full_path_raw_folders, COOKED_DATA_DIR, train_eval_test_split)
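Once cooking finishes, the resulting files can be sliced lazily: h5py reads only the requested rows from disk. A quick sanity check (a sketch):

import h5py

with h5py.File('data_cooked/train.h5', 'r') as f:
    print(list(f.keys()))          # dataset names, e.g. ['image', 'label', 'previous_state']
    print(f['image'].shape)        # (num_examples, 144, 256, 3)
    first_batch = f['image'][:32]  # only these 32 frames are read into memory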

The local module imported above with `import Cooking` is explained below:

import random
import csv
from PIL import Image
import numpy as np
import pandas as pd
import sys
import os
import errno
from collections import OrderedDict
import h5py
from pathlib import Path
import copy
import re

def checkAndCreateDir(full_path):
    '''Checks if a given path exists and if not, creates the needed directories.
            Inputs:
                full_path: path to be checked
    '''
    if not os.path.exists(os.path.dirname(full_path)):
        try:
            os.makedirs(os.path.dirname(full_path))
        except OSError as exc:  # Guard against race condition
            if exc.errno != errno.EEXIST:
                raise
                
def readImagesFromPath(image_names):
    ''' Takes in a path and a list of image file names to be loaded and returns a list of all loaded images after resize.
           Inputs:
                image_names: list of image names
           Returns:
                List of all loaded and resized images
    '''
    returnValue = []
    for image_name in image_names:
        im = Image.open(image_name)
        imArr = np.asarray(im)
        
        #Remove alpha channel if exists
        if len(imArr.shape) == 3 and imArr.shape[2] == 4: # the array is 3-dimensional with 4 channels
            if (np.all(imArr[:, :, 3] == imArr[0, 0, 3])): # and the alpha channel is constant across the image
                imArr = imArr[:,:,0:3] # drop the alpha channel
        if len(imArr.shape) != 3 or imArr.shape[2] != 3:
            print('Error: Image', image_name, 'is not RGB.')
            sys.exit()            

        returnIm = np.asarray(imArr)

        returnValue.append(returnIm)
    return returnValue # returns a list of images, each of shape [144, 256, 3]
    
    
    
def splitTrainValidationAndTestData(all_data_mappings, split_ratio=(0.7, 0.2, 0.1)):
    '''Simple function to create train, validation and test splits on the data.
            Inputs:
                all_data_mappings: mappings from the entire dataset
                split_ratio: (train, validation, test) split ratio

            Returns:
                train_data_mappings: mappings for training data
                validation_data_mappings: mappings for validation data
                test_data_mappings: mappings for test data

    '''
    if round(sum(split_ratio), 5) != 1.0:
        print('Error: Your splitting ratio should add up to 1')
        sys.exit()

    train_split = int(len(all_data_mappings) * split_ratio[0])
    val_split = train_split + int(len(all_data_mappings) * split_ratio[1])

    train_data_mappings = all_data_mappings[0:train_split]
    validation_data_mappings = all_data_mappings[train_split:val_split]
    test_data_mappings = all_data_mappings[val_split:]

    return [train_data_mappings, validation_data_mappings, test_data_mappings]
    
def generateDataMapAirSim(folders):
    ''' Data map generator for simulator(AirSim) data. Reads the driving_log csv file and returns a list of 'center camera image name - label(s)' tuples
           Inputs:
               folders: list of folders to collect data from

           Returns:
               mappings: All data mappings as a dictionary. Key is the image filepath, the values are a 2-tuple:
                   0 -> label(s) as a list of double
                   1 -> previous state as a list of double
    '''

    all_mappings = {}
    for folder in folders:
        print('Reading data from {0}...'.format(folder))
        current_df = pd.read_csv(os.path.join(folder, 'airsim_rec.txt'), sep='\t')
        
        for i in range(1, current_df.shape[0] - 1, 1): # start at row 1 and stop before the last row, since each sample needs its previous and next rows
            previous_state = list(current_df.iloc[i-1][['Steering', 'Throttle', 'Brake', 'Speed (kmph)']])
            current_label = list((current_df.iloc[i][['Steering']] + current_df.iloc[i-1][['Steering']] + current_df.iloc[i+1][['Steering']]) / 3.0) # label = mean of the current, previous and next steering angles
            
            image_filepath = os.path.join(os.path.join(folder, 'images'), current_df.iloc[i]['ImageName']).replace('\\', '/')
            
            # Sanity check
            if (image_filepath in all_mappings):
                print('Error: attempting to add image {0} twice.'.format(image_filepath))
            
            all_mappings[image_filepath] = (current_label, previous_state) # all_mappings: dict of {image path: ([label (3-frame average steering)], [previous state: steering, throttle, brake, speed])}, e.g. {'data_raw/normal_1/images/img_1.png': ([-0.011840666666666668], [0.0, 0.0, 0.0, 0]), ...} -- 46720 entries in total
    
    mappings = [(key, all_mappings[key]) for key in all_mappings] # mappings: a list of (image path, (label, previous state)) tuples, e.g. [('data_raw/normal_1/images/img_1.png', ([-0.011840666666666668], [0.0, 0.0, 0.0, 0])), ...] -- 46720 in total
    
    random.shuffle(mappings)
    
    return mappings

def generatorForH5py(data_mappings, chunk_size=32):
    '''
    This function batches the data for saving to the H5 file
    '''
    for chunk_id in range(0, len(data_mappings), chunk_size):
        # Data is expected to be a dict of <image: (label, previousious_state)>
        # Extract the parts
        data_chunk = data_mappings[chunk_id:chunk_id + chunk_size]
        if (len(data_chunk) == chunk_size):
            image_names_chunk = [a for (a, b) in data_chunk]
            labels_chunk = np.asarray([b[0] for (a, b) in data_chunk])
            previous_state_chunk = np.asarray([b[1] for (a, b) in data_chunk])
            
            #Flatten and yield as tuple
            yield (image_names_chunk, labels_chunk.astype(float), previous_state_chunk.astype(float)) # yield acts like a resumable return: each next() call picks up here. https://blog.csdn.net/mieleizhi0522/article/details/82142856/
            if chunk_id + chunk_size > len(data_mappings): # discard any ragged tail
                raise StopIteration
    raise StopIteration
    
def saveH5pyData(data_mappings, target_file_path):
    '''
    Saves H5 data to file
    '''
    chunk_size = 32
    gen = generatorForH5py(data_mappings,chunk_size) # instantiate a generator

    image_names_chunk, labels_chunk, previous_state_chunk = next(gen)
    images_chunk = np.asarray(readImagesFromPath(image_names_chunk)) # read one chunk (32 images); images_chunk: [32, 144, 256, 3]
    row_count = images_chunk.shape[0] # one chunk's worth of rows, used as a running counter

    checkAndCreateDir(target_file_path) # make sure the target directory exists
    with h5py.File(target_file_path, 'w') as f: # open the file for writing. https://blog.csdn.net/qq_34859482/article/details/80115237

        # Initialize a resizable dataset to hold the output
        images_chunk_maxshape = (None,) + images_chunk.shape[1:]
        labels_chunk_maxshape = (None,) + labels_chunk.shape[1:]
        previous_state_maxshape = (None,) + previous_state_chunk.shape[1:]

        dset_images = f.create_dataset('image', shape=images_chunk.shape, maxshape=images_chunk_maxshape, chunks=images_chunk.shape, dtype=images_chunk.dtype) # create the dataset. 'image': name; shape: initial shape (32, 144, 256, 3); maxshape: the dataset may later be resized up to this shape ((None, 144, 256, 3)); chunks: the on-disk chunk shape (32, 144, 256, 3); dtype: element type

        dset_labels = f.create_dataset('label', shape=labels_chunk.shape, maxshape=labels_chunk_maxshape, chunks=labels_chunk.shape, dtype=labels_chunk.dtype) # for now this just reserves the space
        
        dset_previous_state = f.create_dataset('previous_state', shape=previous_state_chunk.shape, maxshape=previous_state_maxshape,
                                       chunks=previous_state_chunk.shape, dtype=previous_state_chunk.dtype)
                                       
        dset_images[:] = images_chunk # write the first chunk
        dset_labels[:] = labels_chunk
        dset_previous_state[:] = previous_state_chunk

        for image_names_chunk, label_chunk, previous_state_chunk in gen: 
            image_chunk = np.asarray(readImagesFromPath(image_names_chunk)) # the earlier next(gen) fixed the h5 layout; here we iterate over the remaining chunks
            
            # Resize the dataset to accommodate the next chunk of rows
            dset_images.resize(row_count + image_chunk.shape[0], axis=0) # grow the datasets to fit the next chunk
            dset_labels.resize(row_count + label_chunk.shape[0], axis=0)
            dset_previous_state.resize(row_count + previous_state_chunk.shape[0], axis=0)
            # Write the next chunk
            dset_images[row_count:] = image_chunk # append the new rows
            dset_labels[row_count:] = label_chunk
            dset_previous_state[row_count:] = previous_state_chunk

            # Increment the row count
            row_count += image_chunk.shape[0]
            
            
def cook(folders, output_directory, train_eval_test_split):
    ''' Primary function for data pre-processing. Reads and saves all data as h5 files.
            Inputs:
                folders: a list of all data folders
                output_directory: location for saving h5 files
                train_eval_test_split: dataset split ratio
    '''
    output_files = [os.path.join(output_directory, f) for f in ['train.h5', 'eval.h5', 'test.h5']]
    if (any([os.path.isfile(f) for f in output_files])):
       print('Preprocessed data already exists at: {0}. Skipping preprocessing.'.format(output_directory))

    else:
        all_data_mappings = generateDataMapAirSim(folders) # all_data_mappings: e.g. [('data_raw/normal_1/images/img_1662.png', ([-0.007812999999999999], [-0.007812999999999999, 0.501961, 0.0, 18])), ...] -- 46720 in total; a mapping over the whole dataset
        
        split_mappings = splitTrainValidationAndTestData(all_data_mappings, split_ratio=train_eval_test_split) # split the data into three parts ({list:3}; 32703, ...)
        
        for i in range(0, len(split_mappings), 1):
            print('Processing {0}...'.format(output_files[i]))
            saveH5pyData(split_mappings[i], output_files[i])
            print('Finished saving {0}.'.format(output_files[i]))

Another bug appeared; the error message is as follows:

Processing data_cooked/train.h5...
Traceback (most recent call last):
  File "/home/wqf/AutonomousDrivingCookbook-master/AirSimE2EDeepLearning/Cooking.py", line 130, in generatorForH5py
    raise StopIteration
StopIteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/wqf/AutonomousDrivingCookbook-master/AirSimE2EDeepLearning/DataExplorationAndPreparation.py", line 133, in <module>
    Cooking.cook(full_path_raw_folders, COOKED_DATA_DIR, train_eval_test_split)
  File "/home/wqf/AutonomousDrivingCookbook-master/AirSimE2EDeepLearning/Cooking.py", line 197, in cook
    saveH5pyData(split_mappings[i], output_files[i])
  File "/home/wqf/AutonomousDrivingCookbook-master/AirSimE2EDeepLearning/Cooking.py", line 163, in saveH5pyData
    for image_names_chunk, label_chunk, previous_state_chunk in gen:
RuntimeError: generator raised StopIteration

Process finished with exit code 1

This happens because the code raises the exception manually; commenting out that line fixes it.

def generatorForH5py(data_mappings, chunk_size=32):
    ...
    # raise StopIteration
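For the record, this error comes from PEP 479: from Python 3.7 onward, a StopIteration escaping a generator body is converted into exactly the RuntimeError shown above. Besides commenting the lines out, the idiomatic fix is to end the generator with a bare return; a minimal sketch of the pattern:

def batched(items, chunk_size=32):
    # Yield full chunks; `return` (or simply falling off the end) signals
    # exhaustion -- never raise StopIteration inside a generator.
    for start in range(0, len(items), chunk_size):
        chunk = items[start:start + chunk_size]
        if len(chunk) < chunk_size:
            return  # drop the ragged tail and end iteration cleanly
        yield chunk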

Step 1 - Training the Model

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense, Lambda, Input, concatenate
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import ELU
from keras.optimizers import Adam, SGD, Adamax, Nadam
from keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, CSVLogger, EarlyStopping
import keras.backend as K
from keras.preprocessing import image

from keras_tqdm import TQDMNotebookCallback

import json
import os
import numpy as np
import pandas as pd
from Generator import DriveDataGenerator
from Cooking import checkAndCreateDir
import h5py
from PIL import Image, ImageDraw
import math
import matplotlib.pyplot as plt

# << The directory containing the cooked data from the previous step >>
COOKED_DATA_DIR = 'data_cooked/'

# << The directory in which the model output will be placed >>
MODEL_OUTPUT_DIR = 'model'

Load the files:

train_dataset = h5py.File(os.path.join(COOKED_DATA_DIR, 'train.h5'), 'r') # https://www.jianshu.com/p/de9f33cdfba0
eval_dataset = h5py.File(os.path.join(COOKED_DATA_DIR, 'eval.h5'), 'r')
test_dataset = h5py.File(os.path.join(COOKED_DATA_DIR, 'test.h5'), 'r')

num_train_examples = train_dataset['image'].shape[0] # 32672
num_eval_examples = eval_dataset['image'].shape[0] # 9344
num_test_examples = test_dataset['image'].shape[0] # 4672

batch_size=32

For image data, loading the entire dataset into memory is too expensive. Keras has the concept of a DataGenerator: essentially an iterator that reads data from disk in chunks. This keeps both the CPU and GPU busy and raises throughput.
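In its simplest form the idea looks like this (a sketch of the concept only, not the DriveDataGenerator used below):

# Yield batches straight from the open .h5 dataset, so only one batch
# at a time is resident in memory.
def h5_batches(h5_dataset, batch_size=32):
    num_examples = h5_dataset.shape[0]
    while True:  # Keras expects training generators to loop indefinitely
        for start in range(0, num_examples - batch_size + 1, batch_size):
            yield h5_dataset[start:start + batch_size]

# e.g.: image_batches = h5_batches(train_dataset['image'])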

We apply the following training tricks:

  • Only a small part of each image is of interest; when generating batches, we can crop away the rest.
  • Randomly flip images horizontally, flipping the sign of the steering angle (label) along with them.
  • Randomly increase or decrease the global illumination.
  • Randomly drop a percentage of the data points whose steering angle is zero, so the model sees a balanced dataset during training.
  • Include examples from the swerve strategy so the model learns to turn sharply (the dataset was categorized for this in Step 0).

To implement these tricks, we create our own class on top of Keras's ImageDataGenerator. The code lives in Generator.py (explained at the end of this step).

Here we simply use the generator with the following parameters:

  • Zero_Drop_Percentage: 0.9 -- randomly drop 90% of the data points with label = 0
  • Brighten_Range: 0.4 -- the brightness of each image is modified by up to 40% (convert RGB to HSV, scale the V channel, convert back to RGB)
  • ROI: [76,135,0,255] -- the x1, x2, y1, y2 bounds of the image's region of interest
data_generator = DriveDataGenerator(rescale=1./255., horizontal_flip=True, brighten_range=0.4)
train_generator = data_generator.flow(train_dataset['image'], train_dataset['previous_state'], train_dataset['label'], batch_size=batch_size, zero_drop_percentage=0.95, roi=[76,135,0,255])
eval_generator = data_generator.flow(eval_dataset['image'], eval_dataset['previous_state'], eval_dataset['label'], batch_size=batch_size, zero_drop_percentage=0.95, roi=[76,135,0,255])

An explanation of the local module imported above with `from Generator import DriveDataGenerator`:

from keras.preprocessing import image
import numpy as np
import keras.backend as K
import os
import cv2

class DriveDataGenerator(image.ImageDataGenerator):
    def __init__(self, # constructor; these arguments are bound when the class is instantiated
                 featurewise_center=False,
                 samplewise_center=False,
                 featurewise_std_normalization=False,
                 samplewise_std_normalization=False,
                 zca_whitening=False,
                 zca_epsilon=1e-6,
                 rotation_range=0.,
                 width_shift_range=0.,
                 height_shift_range=0.,
                 shear_range=0.,
                 zoom_range=0.,
                 channel_shift_range=0.,
                 fill_mode='nearest',
                 cval=0.,
                 horizontal_flip=False,
                 vertical_flip=False,
                 rescale=None,
                 preprocessing_function=None,
                 data_format=None,
                 brighten_range=0):
        super(DriveDataGenerator, self).__init__(featurewise_center, # call the parent class's constructor with the values passed in above
                 samplewise_center,
                 featurewise_std_normalization,
                 samplewise_std_normalization,
                 zca_whitening,
                 zca_epsilon,
                 rotation_range,
                 width_shift_range,
                 height_shift_range,
                 shear_range,
                 zoom_range,
                 channel_shift_range,
                 fill_mode,
                 cval,
                 horizontal_flip,
                 vertical_flip,
                 rescale,
                 preprocessing_function,
                 data_format)
        self.brighten_range = brighten_range

    def flow(self, x_images, x_prev_states = None, y=None, batch_size=32, shuffle=True, seed=None,
             save_to_dir=None, save_prefix='', save_format='png', zero_drop_percentage=0.5, roi=None):
        return DriveIterator(
            x_images, x_prev_states, y, self,
            batch_size=batch_size,
            shuffle=shuffle,
            seed=seed,
            data_format=self.data_format,
            save_to_dir=save_to_dir,
            save_prefix=save_prefix,
            save_format=save_format,
            zero_drop_percentage=zero_drop_percentage,
            roi=roi)
    
    def random_transform_with_states(self, x, seed=None):
        '''Randomly augment a single image tensor.
        # Arguments
            x: 3D tensor, single image.
            seed: random seed.
        # Returns
            A tuple. 0 -> randomly transformed version of the input (same shape). 1 -> true if image was horizontally flipped, false otherwise
        '''
        # x is a single image, so it doesn't have image number at index 0
        img_row_axis = self.row_axis
        img_col_axis = self.col_axis
        img_channel_axis = self.channel_axis

        is_image_horizontally_flipped = False

        # use composition of homographies
        # to generate final transform that needs to be applied
        if self.rotation_range:
            theta = np.pi / 180 * np.random.uniform(-self.rotation_range, self.rotation_range)
        else:
            theta = 0

        if self.height_shift_range:
            tx = np.random.uniform(-self.height_shift_range, self.height_shift_range) * x.shape[img_row_axis]
        else:
            tx = 0

        if self.width_shift_range:
            ty = np.random.uniform(-self.width_shift_range, self.width_shift_range) * x.shape[img_col_axis]
        else:
            ty = 0

        if self.shear_range:
            shear = np.random.uniform(-self.shear_range, self.shear_range)
        else:
            shear = 0

        if self.zoom_range[0] == 1 and self.zoom_range[1] == 1:
            zx, zy = 1, 1
        else:
            zx, zy = np.random.uniform(self.zoom_range[0], self.zoom_range[1], 2)

        transform_matrix = None
        if theta != 0:
            rotation_matrix = np.array([[np.cos(theta), -np.sin(theta), 0],
                                        [np.sin(theta), np.cos(theta), 0],
                                        [0, 0, 1]])
            transform_matrix = rotation_matrix

        if tx != 0 or ty != 0:
            shift_matrix = np.array([[1, 0, tx],
                                     [0, 1, ty],
                                     [0, 0, 1]])
            transform_matrix = shift_matrix if transform_matrix is None else np.dot(transform_matrix, shift_matrix)

        if shear != 0:
            shear_matrix = np.array([[1, -np.sin(shear), 0],
                                    [0, np.cos(shear), 0],
                                    [0, 0, 1]])
            transform_matrix = shear_matrix if transform_matrix is None else np.dot(transform_matrix, shear_matrix)

        if zx != 1 or zy != 1:
            zoom_matrix = np.array([[zx, 0, 0],
                                    [0, zy, 0],
                                    [0, 0, 1]])
            transform_matrix = zoom_matrix if transform_matrix is None else np.dot(transform_matrix, zoom_matrix)

        if transform_matrix is not None:
            h, w = x.shape[img_row_axis], x.shape[img_col_axis]
            transform_matrix = image.transform_matrix_offset_center(transform_matrix, h, w)
            x = image.apply_transform(x, transform_matrix, img_channel_axis,
                                fill_mode=self.fill_mode, cval=self.cval)

        if self.channel_shift_range != 0:
            x = image.random_channel_shift(x,
                                     self.channel_shift_range,
                                     img_channel_axis)
        if self.horizontal_flip:
            if np.random.random() < 0.5:
                x = image.flip_axis(x, img_col_axis)
                is_image_horizontally_flipped = True

        if self.vertical_flip:
            if np.random.random() < 0.5:
                x = image.flip_axis(x, img_row_axis)
                
        if self.brighten_range != 0:
            random_bright = np.random.uniform(low = 1.0-self.brighten_range, high=1.0+self.brighten_range)
            
            #TODO: Write this as an apply to push operations into C for performance
            img = cv2.cvtColor(x, cv2.COLOR_RGB2HSV)
            img[:, :, 2] = np.clip(img[:, :, 2] * random_bright, 0, 255)
            x = cv2.cvtColor(img, cv2.COLOR_HSV2RGB)

        return (x, is_image_horizontally_flipped)

class DriveIterator(image.Iterator):
    '''Iterator yielding data from a Numpy array.

    # Arguments
        x: Numpy array of input data.
        y: Numpy array of targets data.
        image_data_generator: Instance of `ImageDataGenerator`
            to use for random transformations and normalization.
        batch_size: Integer, size of a batch.
        shuffle: Boolean, whether to shuffle the data between epochs.
        seed: Random seed for data shuffling.
        data_format: String, one of `channels_first`, `channels_last`.
        save_to_dir: Optional directory where to save the pictures
            being yielded, in a viewable format. This is useful
            for visualizing the random transformations being
            applied, for debugging purposes.
        save_prefix: String prefix to use for saving sample
            images (if `save_to_dir` is set).
        save_format: Format to use for saving sample images
            (if `save_to_dir` is set).
    '''

    def __init__(self, x_images, x_prev_states, y, image_data_generator,
                 batch_size=32, shuffle=False, seed=None,
                 data_format=None,
                 save_to_dir=None, save_prefix='', save_format='png', zero_drop_percentage = 0.5, roi = None):
        if y is not None and len(x_images) != len(y):
            raise ValueError('X (images tensor) and y (labels) '
                             'should have the same length. '
                             'Found: X.shape = %s, y.shape = %s' %
                             (np.asarray(x_images).shape, np.asarray(y).shape))

        if data_format is None:
            data_format = K.image_data_format()
        
        self.x_images = x_images
        
        self.zero_drop_percentage = zero_drop_percentage
        self.roi = roi
        
        if self.x_images.ndim != 4:
            raise ValueError('Input data in `NumpyArrayIterator` '
                             'should have rank 4. You passed an array '
                             'with shape', self.x_images.shape)
        channels_axis = 3 if data_format == 'channels_last' else 1
        if self.x_images.shape[channels_axis] not in {1, 3, 4}:
            raise ValueError('NumpyArrayIterator is set to use the '
                             'data format convention "' + data_format + '" '
                             '(channels on axis ' + str(channels_axis) + '), i.e. expected '
                             'either 1, 3 or 4 channels on axis ' + str(channels_axis) + '. '
                             'However, it was passed an array with shape ' + str(self.x_images.shape) +
                             ' (' + str(self.x_images.shape[channels_axis]) + ' channels).')
        if x_prev_states is not None:
            self.x_prev_states = x_prev_states
        else:
            self.x_prev_states = None

        if y is not None:
            self.y = y
        else:
            self.y = None
        self.image_data_generator = image_data_generator
        self.data_format = data_format
        self.save_to_dir = save_to_dir
        self.save_prefix = save_prefix
        self.save_format = save_format
        self.batch_size = batch_size
        super(DriveIterator, self).__init__(x_images.shape[0], batch_size, shuffle, seed)

    def next(self):
        '''For python 2.x.

        # Returns
            The next batch.
        '''
        # Keeps under lock only the mechanism which advances
        # the indexing of each batch.
        with self.lock:
            index_array = next(self.index_generator)
        # The transformation of images is not under thread lock
        # so it can be done in parallel

        return self.__get_indexes(index_array)

    def __get_indexes(self, index_array):
        index_array = sorted(index_array)
        if self.x_prev_states is not None:
            batch_x_images = np.zeros(tuple([self.batch_size] + list(self.x_images.shape)[1:]),
                                      dtype=K.floatx())
            batch_x_prev_states = np.zeros(tuple([self.batch_size] + list(self.x_prev_states.shape)[1:]), dtype=K.floatx())
        else:
            batch_x_images = np.zeros(tuple([self.batch_size] + list(self.x_images.shape)[1:]), dtype=K.floatx())

        if self.roi is not None:
            batch_x_images = batch_x_images[:, self.roi[0]:self.roi[1], self.roi[2]:self.roi[3], :]
            
        used_indexes = []
        is_horiz_flipped = []
        for i, j in enumerate(index_array):
            x_images = self.x_images[j]
            
            if self.roi is not None:
                x_images = x_images[self.roi[0]:self.roi[1], self.roi[2]:self.roi[3], :]
            
            transformed = self.image_data_generator.random_transform_with_states(x_images.astype(K.floatx()))
            x_images = transformed[0]
            is_horiz_flipped.append(transformed[1])
            x_images = self.image_data_generator.standardize(x_images)
            batch_x_images[i] = x_images

            if self.x_prev_states is not None:
                x_prev_states = self.x_prev_states[j]
                
                if (transformed[1]):
                    x_prev_states[0] *= -1.0
                
                batch_x_prev_states[i] = x_prev_states
            
            used_indexes.append(j)

        if self.x_prev_states is not None:
            batch_x = [np.asarray(batch_x_images), np.asarray(batch_x_prev_states)]
        else:
            batch_x = np.asarray(batch_x_images)
            
        if self.save_to_dir:
            for i in range(0, self.batch_size, 1):
                hash = np.random.randint(1e4)
               
                img = image.array_to_img(batch_x_images[i], self.data_format, scale=True)
                fname = '{prefix}_{index}_{hash}.{format}'.format(prefix=self.save_prefix,
                                                                        index=1,
                                                                        hash=hash,
                                                                        format=self.save_format)
                img.save(os.path.join(self.save_to_dir, fname))

        batch_y = self.y[list(sorted(used_indexes))]
        idx = []
        for i in range(0, len(is_horiz_flipped), 1):
            if batch_y.shape[1] == 1:
                if (is_horiz_flipped[i]):
                    batch_y[i] *= -1
                    
                if (np.isclose(batch_y[i], 0)):
                    if (np.random.uniform(low=0, high=1) > self.zero_drop_percentage):
                        idx.append(True)
                    else:
                        idx.append(False)
                else:
                    idx.append(True)
            else:
                if (batch_y[i][int(len(batch_y[i])/2)] == 1):
                    if (np.random.uniform(low=0, high=1) > self.zero_drop_percentage):
                        idx.append(True)
                    else:
                        idx.append(False)
                else:
                    idx.append(True)
                
                if (is_horiz_flipped[i]):
                    batch_y[i] = batch_y[i][::-1]

        batch_y = batch_y[idx]
        batch_x[0] = batch_x[0][idx]
        batch_x[1] = batch_x[1][idx]
        
        return batch_x, batch_y
        
    def _get_batches_of_transformed_samples(self, index_array):
        return self.__get_indexes(index_array)
        

The first bug:

Traceback (most recent call last):
  File "/home/wqf/下載/pycharm-community-2020.3/plugins/python-ce/helpers/pydev/pydevd.py", line 1477, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/wqf/下載/pycharm-community-2020.3/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents + '\n', file, 'exec'), glob, loc)
  File "/home/wqf/AutonomousDrivingCookbook-master/AirSimE2EDeepLearning/train.py", line 51, in <module>
    data_generator = DriveDataGenerator(rescale=1. / 255., horizontal_flip=True,
  File "/home/wqf/AutonomousDrivingCookbook-master/AirSimE2EDeepLearning/Generator.py", line 36, in __init__
    super(DriveDataGenerator, self).__init__(featurewise_center,
  File "/home/wqf/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/preprocessing/image.py", line 783, in __init__
    super(ImageDataGenerator, self).__init__(
  File "/home/wqf/anaconda3/lib/python3.8/site-packages/keras_preprocessing/image/image_data_generator.py", line 363, in __init__
    raise ValueError(
ValueError: `brightness_range` should be tuple or list of two floats. Received: 0.0

Process finished with exit code 1

The fix:

class DriveDataGenerator(image.ImageDataGenerator):
    ...
    brighten_range=None  # originally 0, which triggered the error above

Later it kept throwing errors anyway. Thanks to a pointer in a GitHub issue, I realized the package versions were wrong, so I switched the environment to Python 3.6 + Keras 2.1.2.
(Keras doesn't seem to be very compatible across versions.)

https://github.com/microsoft/AutonomousDrivingCookbook/issues/89

[Screenshot: the GitHub issue]
I discovered CSDN swallowed a lot of what I wrote... I'm sure I saved it. Lesson learned: publish posts instead of leaving them in the drafts folder.

def draw_image_with_label(img, label, prediction=None):
    theta = label * 0.69 # Steering range for the car is +/- 40 degrees -> 0.69 radians. The label was normalized to [-1, 1] in Step 0; here we convert it back to radians.
    line_length = 50
    line_thickness = 3
    label_line_color = (255, 0, 0)
    prediction_line_color = (0, 0, 255)
    pil_image = image.array_to_img(img, K.image_data_format(), scale=True)
    print('Actual Steering Angle = {0}'.format(label))
    draw_image = pil_image.copy()
    image_draw = ImageDraw.Draw(draw_image)
    first_point = (int(img.shape[1]/2),img.shape[0])
    second_point = (int((img.shape[1]/2) + (line_length * math.sin(theta))), int(img.shape[0] - (line_length * math.cos(theta))))
    image_draw.line([first_point, second_point], fill=label_line_color, width=line_thickness)
    
    if (prediction is not None):
        print('Predicted Steering Angle = {0}'.format(prediction))
        print('L1 Error: {0}'.format(abs(prediction-label)))
        theta = prediction * 0.69
        second_point = (int((img.shape[1]/2) + (line_length * math.sin(theta))), int(img.shape[0] - (line_length * math.cos(theta))))
        image_draw.line([first_point, second_point], fill=prediction_line_color, width=line_thickness)
    
    del image_draw
    plt.imshow(draw_image)
    plt.show()

[sample_batch_train_data, sample_batch_test_data] = next(train_generator)
for i in range(0, 3, 1):
    draw_image_with_label(sample_batch_train_data[0][i], sample_batch_test_data[i])

[Figure: sample frames with the steering label drawn as a red line]
Now we define the network:

image_input_shape = sample_batch_train_data[0].shape[1:]
state_input_shape = sample_batch_train_data[1].shape[1:]
activation = 'relu'

#Create the convolutional stacks
pic_input = Input(shape=image_input_shape)

img_stack = Conv2D(16, (3, 3), name='convolution0', padding='same', activation=activation)(pic_input)
img_stack = MaxPooling2D(pool_size=(2,2))(img_stack)
img_stack = Conv2D(32, (3, 3), activation=activation, padding='same', name='convolution1')(img_stack)
img_stack = MaxPooling2D(pool_size=(2, 2))(img_stack)
img_stack = Conv2D(32, (3, 3), activation=activation, padding='same', name='convolution2')(img_stack)
img_stack = MaxPooling2D(pool_size=(2, 2))(img_stack)
img_stack = Flatten()(img_stack)
img_stack = Dropout(0.2)(img_stack)

#Inject the state input
state_input = Input(shape=state_input_shape)
merged = concatenate([img_stack, state_input])

# Add a few dense layers to finish the model
merged = Dense(64, activation=activation, name='dense0')(merged)
merged = Dropout(0.2)(merged)
merged = Dense(10, activation=activation, name='dense2')(merged)
merged = Dropout(0.2)(merged)
merged = Dense(1, name='output')(merged)

adam = Nadam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
model = Model(inputs=[pic_input, state_input], outputs=merged)
model.compile(optimizer=adam, loss='mse')
model.summary()

[Output: model summary]
We use the following callbacks:

  • ReduceLROnPlateau
  • CSVLogger
  • ModelCheckpoint
  • EarlyStopping
plateau_callback = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, min_lr=0.0001, verbose=1)
checkpoint_filepath = os.path.join(MODEL_OUTPUT_DIR, 'models', '{0}_model.{1}-{2}.h5'.format('model', '{epoch:02d}', '{val_loss:.7f}'))
checkAndCreateDir(checkpoint_filepath)
checkpoint_callback = ModelCheckpoint(checkpoint_filepath, save_best_only=True, verbose=1)
csv_callback = CSVLogger(os.path.join(MODEL_OUTPUT_DIR, 'training_log.csv'))
early_stopping_callback = EarlyStopping(monitor='val_loss', patience=10, verbose=1)
callbacks=[plateau_callback, csv_callback, checkpoint_callback, early_stopping_callback, TQDMNotebookCallback()]

Start training the model:

history = model.fit_generator(train_generator, steps_per_epoch=num_train_examples//batch_size, epochs=500, callbacks=callbacks, validation_data=eval_generator, validation_steps=num_eval_examples//batch_size, verbose=2)
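Once training stops, the loss curves are worth a quick look. A sketch, assuming the standard Keras history keys loss and val_loss (the same numbers also land in training_log.csv via CSVLogger):

plt.figure(figsize=(10, 10))
plt.plot(history.history['loss'], label='train')     # standard Keras key
plt.plot(history.history['val_loss'], label='eval')  # present because validation_data was passed
plt.xlabel('Epoch')
plt.ylabel('MSE loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()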

Visualizing the results

[sample_batch_train_data, sample_batch_test_data] = next(train_generator)
predictions = model.predict([sample_batch_train_data[0], sample_batch_train_data[1]])
for i in range(0, 3, 1):
    draw_image_with_label(sample_batch_train_data[0][i], sample_batch_test_data[i], predictions[i])

[Figure: predicted (blue) vs. actual (red) steering overlaid on sample frames]

Step 2 - Testing the Model (feel free to skip this part)

First the environment has to be set up, which is another long road.
Fortunately there are guides along the way:

https://blog.csdn.net/mangohhhh/article/details/107215512

That said, if you are able, reading the official documentation is the most direct route and spares you some unnecessary detours:

https://microsoft./AirSim/build_linux/

Of course, some bugs along the way were unavoidable:

For example, `git clone` of AirSim would fail once the download hit 15%. With the following change, it finally made it past that 15% mark.

https://blog.csdn.net/haockl/article/details/103846695

Cause:
git's default HTTP buffer is too small; increase it with the command below.
Fix:

git config --global http.postBuffer 20000000

The second bug:
Running AirSim's ./setup.sh produced the following error:

[Screenshot: setup.sh error]
I consulted the following blog post:

https://blog.csdn.net/qq_44717317/article/details/103192013

Following the steps there solved it.

The third thing (not quite a bug): car_assets.zip, as the blog above notes, downloads extremely slowly. Options:

  • Skip installing it (if you don't need the car simulation)
  • Download it from the Baidu Netdisk link in that blog, then make the series of modifications it describes
  • Simply retry a few times

The fourth bug:
Running ./build.sh errors out.
The official docs say to use clang 8, so I installed it.

From my earlier GPU tinkering, clang was at 6.0; now version 8 is needed. For switching between the two versions, see:

https://blog.csdn.net/dumpdoctorwang/article/details/84567757

After all of that, ./build.sh still failed.
So I went looking on GitHub and found the following:

https://github.com/microsoft/AirSim/issues/2417

The problem description there isn't quite the same as mine, but both fail at the ./build.sh stage, so I gave it a try:

...

Still no luck. After two fruitless days of fiddling, I decided to leave myself a hole to fill later. So be it for now.

It seems related to a leftover issue: when I set up my GPU environment earlier, the 3060 Ti was already configured properly, yet the system still showed the following:
[Screenshot: GPU not listed]
And when running UE, this problem also appeared:

cannot find a compatible Vulkan device or driver. Try updating your video driver to a more recent version and make sure your video card supports Vulkan

With the AirSim build failing as well, I decided to set this aside for a while and come back to it later.

The ./build.sh error output, recorded here for reference:

+ debug=false
+ [[ 0 -gt 0 ]]
+ '[' '!' -d ./external/rpclib/rpclib-2.2.1 ']'
+ '[' -d ./cmake_build ']'
++ which cmake
+ CMAKE=/usr/bin/cmake
+ false
+ build_dir=build_release
++ uname
+ '[' Linux == Darwin ']'
+ export CC=clang-8
+ CC=clang-8
+ export CXX=clang++-8
+ CXX=clang++-8
+ [[ -d ./AirLib/deps/eigen3/Eigen ]]
+ echo 'putting build in build_release folder, to clean, just delete the directory...'
putting build in build_release folder, to clean, just delete the directory...
+ [[ -f ./cmake/CMakeCache.txt ]]
+ [[ -d ./cmake/CMakeFiles ]]
+ folder_name=
+ [[ ! -d build_release ]]
+ mkdir -p build_release
+ pushd build_release
+ false
+ folder_name=Release
+ /usr/bin/cmake ../cmake -DCMAKE_BUILD_TYPE=Release
-- The C compiler identification is Clang 8.0.0
-- The CXX compiler identification is Clang 8.0.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/clang-8 - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - failed
-- Check for working CXX compiler: /usr/bin/clang++-8
-- Check for working CXX compiler: /usr/bin/clang++-8 - broken
CMake Error at /usr/share/cmake-3.19/Modules/CMakeTestCXXCompiler.cmake:59 (message):
  The C++ compiler

    "/usr/bin/clang++-8"

  is not able to compile a simple test program.

  It fails with the following output:

    Change Dir: /home/wqf/AirSim/build_release/CMakeFiles/CMakeTmp
    
    Run Build Command(s):/usr/bin/make cmTC_e514f/fast && /usr/bin/make  -f CMakeFiles/cmTC_e514f.dir/build.make CMakeFiles/cmTC_e514f.dir/build
    make[1]: Entering directory '/home/wqf/AirSim/build_release/CMakeFiles/CMakeTmp'
    Building CXX object CMakeFiles/cmTC_e514f.dir/testCXXCompiler.cxx.o
    /usr/bin/clang++-8    -o CMakeFiles/cmTC_e514f.dir/testCXXCompiler.cxx.o -c /home/wqf/AirSim/build_release/CMakeFiles/CMakeTmp/testCXXCompiler.cxx
    Linking CXX executable cmTC_e514f
    /usr/bin/cmake -E cmake_link_script CMakeFiles/cmTC_e514f.dir/link.txt --verbose=1
    /usr/bin/clang++-8 CMakeFiles/cmTC_e514f.dir/testCXXCompiler.cxx.o -o cmTC_e514f 
    /usr/bin/ld: cannot find -lstdc++
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    CMakeFiles/cmTC_e514f.dir/build.make:105: recipe for target 'cmTC_e514f' failed
    make[1]: *** [cmTC_e514f] Error 1
    make[1]: Leaving directory '/home/wqf/AirSim/build_release/CMakeFiles/CMakeTmp'
    Makefile:140: recipe for target 'cmTC_e514f/fast' failed
    make: *** [cmTC_e514f/fast] Error 2
    
    

  

  CMake will not be able to correctly generate this project.
Call Stack (most recent call first):
  CMakeLists.txt:2 (project)


-- Configuring incomplete, errors occurred!
See also '/home/wqf/AirSim/build_release/CMakeFiles/CMakeOutput.log'.
See also '/home/wqf/AirSim/build_release/CMakeFiles/CMakeError.log'.
+ popd
~/AirSim ~/AirSim
+ rm -r build_release
+ exit 1

I also recommend two good environment-setup write-ups:

https://blog.csdn.net/weixin_39059031/article/details/84028487
https://blog.csdn.net/mangohhhh/article/details/107215512

(Recommended above already.) How do other people make these steps look so effortless... sigh.






Earlier, I had successfully installed AirSim on Windows (ha, though I've forgotten how I did it...).

Maybe things are better integrated on Windows, with officially prebuilt packages, whereas on Linux AirSim is built as a plugin for UE? If that's the case, then Linux really isn't for me.

Or maybe it's because the environment was already in place from when I installed Carla and Unity earlier?

Sigh... exhausting.

------------------------------ Divider ------------------------------
Update, 2020.3.10
I'm back to fill the hole.
My earlier attempt built everything from source. There are actually two ways to set up the environment:
1. Build from source (suited to people who really know their way around a computer)
(the two environment-setup write-ups referenced above both took this route)
2. Use the prebuilt binaries directly.

I hadn't read the official documentation carefully before. This time I did, and I suggest that beginners like me choose the "Download Binaries" option.
Official site:

[Screenshot: the releases/download page]
Once downloaded, unzip the archive and run it directly.

[Screenshot: the simulator running]
With that, the environment counts as set up.

Back to the main thread.

Step 2 - Testing the Model

On Windows:

The code walkthrough follows:

from keras.models import load_model
import sys
import numpy as np
import glob
import os

if ('../../PythonClient/' not in sys.path): # sys.path is Python's list of module search paths
    sys.path.insert(0, '../../PythonClient/')
# Before the insert, sys.path holds only the usual interpreter and site-packages entries;
# after it, '../../PythonClient/' sits at the front of the list.

# In short: we prepend a path so the local AirSim client module can be found

from AirSimClient import * # import the local AirSim client module

# << Set this to the path of the model >>
# If None, then the model with the lowest validation loss from training will be used
MODEL_PATH = None

if (MODEL_PATH == None):
    models = glob.glob('model/models/*.h5') # glob.glob finds all paths matching a pattern. https://blog.csdn.net/georgeai/article/details/81035422
    best_model = max(models, key=os.path.getctime) # pick the most recently created model
    MODEL_PATH = best_model
    
print('Using model {0} for testing.'.format(MODEL_PATH))

Before running the code below, make sure your simulator is up and running.

model = load_model(MODEL_PATH)

client = CarClient() # instantiate a car client
client.confirmConnection() # confirm the connection succeeded
client.enableApiControl(True) # switch from keyboard control to API control
car_controls = CarControls() # in API mode, the car is driven through this class
print('Connection established!')

The code below sets the car's initial state, along with some buffers to hold the model's input and output:

car_controls.steering = 0
car_controls.throttle = 0
car_controls.brake = 0

image_buf = np.zeros((1, 59, 255, 3)) # 59 x 255 matches the training ROI [76:135, 0:255]
state_buf = np.zeros((1,4))

We define a helper function that reads an RGB image from AirSim and prepares it for the model:

def get_image():
    image_response = client.simGetImages([ImageRequest(0, AirSimImageType.Scene, False, False)])[0]
    image1d = np.fromstring(image_response.image_data_uint8, dtype=np.uint8)
    image_rgba = image1d.reshape(image_response.height, image_response.width, 4)
    
    return image_rgba[76:135,0:255,0:3].astype(float) # crop to the training ROI and drop the alpha channel

Finally, the control loop that drives the car. Since our model does not predict speed, we try to hold the car at a constant 5 m/s. Running the block below lets the model drive!

while (True):
    car_state = client.getCarState()
    
    if (car_state.speed < 5):
        car_controls.throttle = 1.0
    else:
        car_controls.throttle = 0.0
    
    image_buf[0] = get_image()
    state_buf[0] = np.array([car_controls.steering, car_controls.throttle, car_controls.brake, car_state.speed])
    model_output = model.predict([image_buf, state_buf])
    car_controls.steering = round(0.5 * float(model_output[0][0]), 2) # damp the prediction by half and round to two decimals
    
    print('Sending steering = {0}, throttle = {1}'.format(car_controls.steering, car_controls.throttle))
    
    client.setCarControls(car_controls)
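One small convenience: the loop above runs forever. If you want Ctrl+C to hand control back to the keyboard, wrap it like this (a sketch; enableApiControl(False) is the same call used above with the opposite flag, and drive_one_step is a hypothetical helper standing in for the loop body):

try:
    while (True):
        drive_one_step()  # hypothetical helper wrapping the loop body above
except KeyboardInterrupt:
    pass
finally:
    client.enableApiControl(False)  # hand control back to the keyboard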

As for why someone in the GitHub issue suggested switching compilers, it probably comes from this page:
[Screenshot: the referenced documentation page]

About that leftover "problem": there was actually nothing wrong. It's just that over a remote connection, the protocol in use cannot drive the GPU to render a graphical interface. See the end of my post:

Setting up a deep reinforcement learning environment (Ubuntu 18.04, 3060 Ti GPU, remote control)

Looking back, this model is more of a proof of concept than anything else; there is nothing particularly complicated about it. And with that, we're done.
