

Getting Started: Real-Time Object Detection in Python (Code Included)

Posted by 新用戶0935snDB on 2022-08-12 from Henan

Full text: 6,821 characters; estimated reading time: 20 minutes

[Image source: Pexels]

From self-driving cars detecting objects on the road to spotting potential criminal activity through complex facial and body-language recognition, researchers have spent years exploring how to let machines recognize objects through vision.

This particular field is known as Computer Vision (CV), and it is used widely in modern life.

It is beyond dispute that Object Detection is one of the coolest applications of computer vision.

Today's CV tools make it easy to apply object detection to images and even live video. This article gives a simple walkthrough of how to build a real-time object detector with TensorFlow.

Building a Simple Object Detector

Setup requirements:

TensorFlow version 1.15.0 or higher

Run pip install tensorflow to install the latest version
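
To confirm that the installed version meets the requirement, a quick sanity check (nothing here beyond the standard tf.__version__ attribute):

import tensorflow as tf
print(tf.__version__)  # should print 1.15.0 or higher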

Everything is in place, so let's get started!

Setting Up the Environment

Step 1: Download or clone the TensorFlow object detection code from GitHub to your local machine

Run the following command in a terminal:

git clone https://github.com/tensorflow/models.git

Step 2: Install the dependencies

The next step is to make sure the machine has all the libraries and components needed to run the object detector.

The libraries this project depends on are listed below. (Most of them ship with TensorFlow.)

· Cython

· contextlib2

· pillow

· lxml

· matplotlib

If any of them are missing, simply run pip install for that package in your environment (for example, pip install Cython contextlib2 pillow lxml matplotlib).

Step 3: Install the Protobuf compiler

Google's Protobuf, short for Protocol Buffers, is a language-neutral, platform-neutral, extensible mechanism for serializing structured data. Protobuf lets programmers define a data structure once and then easily write and read that structured data, in a variety of languages, across all kinds of data streams.

Protobuf is also one of this project's dependencies, so the next step is to install it on the machine.

Open a terminal or command prompt, change into the cloned repository, and run the following commands:

cd models/research
wget -O protobuf.zip https://github.com/protocolbuffers/protobuf/releases/download/v3.9.1/protoc-3.9.1-osx-x86_64.zip
unzip protobuf.zip

Note: make sure to unzip protobuf.zip inside the models/research directory.

[Image source: Pexels]

Step 4: Compile the Protobuf files

From the research/ directory, run the following command to compile the Protobuf files:

./bin/protoc object_detection/protos/*.proto --python_out=.

Implementing Object Detection in Python

Now that all the dependencies are installed, it is time to implement object detection in Python.

Inside the cloned repository, change directory to:

models/research/object_detection

This directory contains an iPython notebook named object_detection_tutorial.ipynb. It is a demo of the object detection algorithm, and when run it uses the following model:

ssd_mobilenet_v1_coco_2017_11_17

The demo runs detection on the two test images that come with the repository. Below is one of the results:

[Image: detection result on one of the bundled test images]

Detecting objects in a live video feed takes a few more tweaks. Create a new Jupyter notebook in the same folder and follow the code below:

[1]:

import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from distutils.version import StrictVersion
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append('..')
from utils import ops as utils_ops
if StrictVersion(tf.__version__) < StrictVersion('1.12.0'):
    raise ImportError('Please upgrade your TensorFlow installation to v1.12.*.')

[2]:

# This is needed to display the images.
get_ipython().run_line_magic('matplotlib', 'inline')

[3]:

# Object detection imports
# Here are the imports from the object detection module.
from utils import label_map_util
from utils import visualization_utils as vis_util

[4]:

# Model preparation
# Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_FROZEN_GRAPH` to point to a new .pb file.
# By default we use an 'SSD with Mobilenet' model here.
# See https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
# for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that are used to add the correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
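
Only MODEL_NAME needs to change in order to try a different pre-trained detector from the model zoo linked above. As an example (the model name below is taken from the zoo's listing and should be checked against the current list), a heavier but generally more accurate option could be swapped in like this:

# Example (sketch): swap in a Faster R-CNN model from the detection model zoo
# (slower but generally more accurate than SSD MobileNet)
# MODEL_NAME = 'faster_rcnn_inception_v2_coco_2018_01_28'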

[5]:

# Download Model
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
    file_name = os.path.basename(file.name)
    if 'frozen_inference_graph.pb' in file_name:
        tar_file.extract(file, os.getcwd())
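
On later runs, the download above can be skipped once the frozen graph has already been extracted; a minimal optional check (sketch, relying on PATH_TO_FROZEN_GRAPH defined earlier):

# Optional (sketch): check for an existing frozen graph before running the download cell above
if os.path.exists(PATH_TO_FROZEN_GRAPH):
    print('Frozen graph already extracted; the download cell can be skipped')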

[6]:

# Load a (frozen) Tensorflow model into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
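
As a quick, optional sanity check, the imported graph should now expose the detector's input tensor by name (standard Graph.get_tensor_by_name lookup):

# Optional sanity check: look up the detector's input tensor in the freshly imported graph
print(detection_graph.get_tensor_by_name('image_tensor:0'))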

[7]:

# Loading label map
# Label maps map indices to category names, so that when our convolution network predicts `5`,
# we know that this corresponds to `airplane`. Here we use internal utility functions,
# but anything that returns a dictionary mapping integers to appropriate string labels would be fine
category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)
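
For reference, category_index maps each class ID to a small dict; with the COCO label map used here, ID 1 typically corresponds to 'person'. A quick way to inspect one entry:

# Quick inspection of one label-map entry (for the COCO label map, ID 1 should be 'person')
print(category_index[1])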

[8]:

def run_inference_for_single_image(image, graph):
    with graph.as_default():
        with tf.Session() as sess:
            # Get handles to input and output tensors
            ops = tf.get_default_graph().get_operations()
            all_tensor_names = {output.name for op in ops for output in op.outputs}
            tensor_dict = {}
            for key in [
                    'num_detections', 'detection_boxes', 'detection_scores',
                    'detection_classes', 'detection_masks']:
                tensor_name = key + ':0'
                if tensor_name in all_tensor_names:
                    tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(tensor_name)
            if 'detection_masks' in tensor_dict:
                # The following processing is only for a single image
                detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
                # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
                real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
                detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
                detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
                detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
                    detection_masks, detection_boxes, image.shape[1], image.shape[2])
                detection_masks_reframed = tf.cast(
                    tf.greater(detection_masks_reframed, 0.5), tf.uint8)
                # Follow the convention by adding back the batch dimension
                tensor_dict['detection_masks'] = tf.expand_dims(
                    detection_masks_reframed, 0)
            image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
            # Run inference
            output_dict = sess.run(tensor_dict, feed_dict={image_tensor: image})
            # All outputs are float32 numpy arrays, so convert types as appropriate
            output_dict['num_detections'] = int(output_dict['num_detections'][0])
            output_dict['detection_classes'] = output_dict['detection_classes'][0].astype(np.int64)
            output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
            output_dict['detection_scores'] = output_dict['detection_scores'][0]
            if 'detection_masks' in output_dict:
                output_dict['detection_masks'] = output_dict['detection_masks'][0]
        return output_dict
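
Before moving on to live video, the function can be sanity-checked on one of the test images bundled with the repository; a minimal sketch, assuming the test_images folder that sits next to this notebook:

# Minimal sketch: run the detector once on a bundled test image
image = Image.open('test_images/image1.jpg')
image_np = np.array(image)
output_dict = run_inference_for_single_image(np.expand_dims(image_np, 0), detection_graph)
print(output_dict['num_detections'])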

[9]:

import cv2
cam = cv2.VideoCapture(0)
rolling = True
while rolling:
    ret, image_np = cam.read()
    image_np_expanded = np.expand_dims(image_np, axis=0)
    # Actual detection.
    output_dict = run_inference_for_single_image(image_np_expanded, detection_graph)
    # Visualization of the results of a detection.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        instance_masks=output_dict.get('detection_masks'),
        use_normalized_coordinates=True,
        line_thickness=8)
    cv2.imshow('image', cv2.resize(image_np, (1000, 800)))
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break
cam.release()
cv2.destroyAllWindows()

When the Jupyter notebook is run, the webcam will open and detect every object class the original model was trained on.
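
To see in text form which classes are currently being detected, the class IDs in output_dict can be mapped back to names via category_index; a small sketch that could be dropped inside the capture loop above:

# Sketch: print human-readable labels for confident detections (e.g. inside the capture loop)
for cls, score in zip(output_dict['detection_classes'], output_dict['detection_scores']):
    if score > 0.5:
        name = category_index.get(cls, {}).get('name', str(cls))
        print(name, round(float(score), 2))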


Thanks for reading! If you have any suggestions, feel free to share them in the comments~

