Here is the link to the related Facebook post. Since I don't use KakaoTalk, all videos are posted on Facebook.

www.facebook.com/permalink.php?story_fbid=4051251598328692&id=100003316754962

 

Original project: https://stereopi.com/blog/opencv-and-depth-map-stereopi-tutorial

 


The project above was originally written for the Raspberry Pi and uses the PiCamera library, which makes it awkward to use on general Linux, so I converted the code to use a standard USB camera on Linux.

 

Here is the modified first script, 1_test.py. If you compare this code with the code downloaded from that site, it is easy to see how to convert the rest. Also, the ultra-cheap USB dual camera was much nicer to use than the dual Pi cameras, and since no resize is needed it was also much faster.

Modified 1_test.py

# Modified by 현자 to work with non-Raspberry Pi PC's
# Cam used: OV9732 Binocular Sync Camera Module, 72 Degree, 1 Million Pixel

import time
import cv2
import os
from datetime import datetime


# File for captured image
filename = './scenes/photo.png'

# Camera settings (at 640x240, the default frame rate is 25)
cam_width  = 640  # Width must be divisible by 32
cam_height = 240  # Height must be divisible by 16

print ("Camera Resolution: "+str(cam_width)+" x "+str(cam_height))

# Initialize the camera
camera = cv2.VideoCapture(0)
# Must set a WORKING camera resolution to get the Left/Right side-by-side image
camera.set(cv2.CAP_PROP_FRAME_WIDTH, cam_width)
camera.set(cv2.CAP_PROP_FRAME_HEIGHT, cam_height)

t2 = datetime.now()
counter = 0
avgtime = 0
# Capture frames from the camera
while camera.isOpened():  
    ret, frame = camera.read()
    counter+=1
    t1 = datetime.now()
    timediff = t1-t2
    avgtime = avgtime + (timediff.total_seconds())
    cv2.imshow("Both Eyes", frame)
    key = cv2.waitKey(1) & 0xFF
    t2 = datetime.now()
    # if the `q` key was pressed, break from the loop and save last image
    if key == ord("q") :
        avgtime = avgtime/counter
        print ("Average time between frames: " + str(avgtime))
        print ("Average FPS: " + str(1/avgtime))
        if (os.path.isdir("./scenes")==False):
            os.makedirs("./scenes")
        cv2.imwrite(filename, frame)
        break
   
camera.release()
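
Since the OV9732 module delivers both eyes in a single side-by-side frame, a quick way to check each eye separately is to split the captured image down the middle. The small sketch below is my own addition (the same split appears later in 6_dm_video.py); it assumes the 640x240 photo saved by 1_test.py above.

import cv2

# Load the side-by-side photo saved by 1_test.py and split it into two 320x240 halves
pair = cv2.imread('./scenes/photo.png')
h, w = pair.shape[:2]
imgLeft  = pair[0:h, 0:w // 2]    # left eye
imgRight = pair[0:h, w // 2:w]    # right eye

cv2.imshow("left", imgLeft)
cv2.imshow("right", imgRight)
cv2.waitKey(0)
cv2.destroyAllWindows()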

Modified 2_chess_cycle.py

# Copyright (C) 2019 Eugene Pomazov, <stereopi.com>, virt2real team
#
# This file is part of StereoPi tutorial scripts.
#
# StereoPi tutorial is free software: you can redistribute it 
# and/or modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation, either version 3 of the 
# License, or (at your option) any later version.
#
# StereoPi tutorial is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with StereoPi tutorial.  
# If not, see <http://www.gnu.org/licenses/>.
#
# Most of this code is updated version of 3dberry.org project by virt2real
# 
# Thanks to Adrian and http://pyimagesearch.com, as there are lot of
# code in this tutorial was taken from his lessons.
# 
# ================================================
# Modified by 현자 to work with non-Raspberry Pi PC's
# Cam used: OV9732 Binocular Sync Camera Module, 72 Degree, 1 Million Pixel

import os
import time
from datetime import datetime
import cv2
import numpy as np

# Photo session settings
total_photos = 30             # Number of images to take
countdown = 5                 # Interval for count-down timer, seconds
font=cv2.FONT_HERSHEY_SIMPLEX # Countdown timer font
 
# Camera settings (at 640x240, the default frame rate is 25)
cam_width  = 640  # Width must be divisible by 32
cam_height = 240  # Height must be divisible by 16

#capture = np.zeros((img_height, img_width, 4), dtype=np.uint8)
print ("Final resolution: "+str(cam_width)+" x "+str(cam_height))

# Initialize the camera
camera = cv2.VideoCapture(0)
# Must set a WORKING camera resolution to get the Left/Right side-by-side image
camera.set(cv2.CAP_PROP_FRAME_WIDTH, cam_width)
camera.set(cv2.CAP_PROP_FRAME_HEIGHT, cam_height)

# Lets start taking photos! 
counter = 0
t2 = datetime.now()
print ("Starting photo sequence")
while camera.isOpened():  
   ret, frame = camera.read()
   t1 = datetime.now()
   cntdwn_timer = countdown - int ((t1-t2).total_seconds())
   # If the countdown is zero - let's record the next image
   if cntdwn_timer == -1:
      counter += 1
      filename = './scenes/scene_'+str(cam_width)+'x'+str(cam_height)+'_'+\
                  str(counter) + '.png'
      cv2.imwrite(filename, frame)
      print (' ['+str(counter)+' of '+str(total_photos)+'] '+filename)
      t2 = datetime.now()
      time.sleep(1)
      cntdwn_timer = 0      # To avoid "-1" timer display
   # Draw countdown counter, seconds
   cv2.putText(frame, str(cntdwn_timer), (50,50), font, 2.0, (0,0,255),4, cv2.LINE_AA)
   cv2.imshow("pair", frame)
   key = cv2.waitKey(1) & 0xFF
    
   # Press 'Q' key to quit, or wait till all photos are taken
   if (key == ord("q")) | (counter == total_photos):
      break

 
print ("Photo sequence finished")
camera.release()

The following scripts do not open the camera, so they do not need to be changed.

3_pairs_cut.py

4_calibration.py

5_dm_tune.py

 

Modified 6_dm_video.py

# Copyright (C) 2019 Eugene Pomazov, <stereopi.com>, virt2real team
#
# This file is part of StereoPi tutorial scripts.
#
# StereoPi tutorial is free software: you can redistribute it 
# and/or modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation, either version 3 of the 
# License, or (at your option) any later version.
#
# StereoPi tutorial is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with StereoPi tutorial.  
# If not, see <http://www.gnu.org/licenses/>.
#
# Most of this code is updated version of 3dberry.org project by virt2real
# 
# Thanks to Adrian and http://pyimagesearch.com, as there are lot of
# code in this tutorial was taken from his lessons.
# 
# ================================================
# Modified by 현자 to work with non-Raspberry Pi PC's
# Cam used: OV9732 Binocular Sync Camera Module, 72 Degree, 1 Million Pixel

import time
import cv2
import numpy as np
import json
from stereovision.calibration import StereoCalibrator
from stereovision.calibration import StereoCalibration
from datetime import datetime

# Depth map default preset
SWS = 5
PFS = 5
PFC = 29
MDS = -30
NOD = 160
TTH = 100
UR = 10
SR = 14
SPWS = 100

# Camera settings (at 640x240, the default frame rate is 25)
cam_width  = 640  # Width must be divisible by 32
cam_height = 240  # Height must be divisible by 16

#capture = np.zeros((img_height, img_width, 4), dtype=np.uint8)
print ("Final resolution: "+str(cam_width)+" x "+str(cam_height))

# Initialize the camera
camera = cv2.VideoCapture(0)
# Must set a WORKING camera resolution to get the Left/Right side-by-side image
camera.set(cv2.CAP_PROP_FRAME_WIDTH, cam_width)
camera.set(cv2.CAP_PROP_FRAME_HEIGHT, cam_height)

# Implementing calibration data
print('Read calibration data and rectifying stereo pair...')
calibration = StereoCalibration(input_folder='calib_result')

# Initialize interface windows
cv2.namedWindow("Image")
cv2.moveWindow("Image", 50,100)
cv2.namedWindow("left")
cv2.moveWindow("left", 450,100)
cv2.namedWindow("right")
cv2.moveWindow("right", 850,100)


disparity = np.zeros((cam_height, cam_width), np.uint8)
sbm = cv2.StereoBM_create(numDisparities=0, blockSize=21)

def stereo_depth_map(rectified_pair):
    dmLeft = rectified_pair[0]
    dmRight = rectified_pair[1]
    disparity = sbm.compute(dmLeft, dmRight)
    local_max = disparity.max()
    local_min = disparity.min()
    disparity_grayscale = (disparity-local_min)*(65535.0/(local_max-local_min))
    disparity_fixtype = cv2.convertScaleAbs(disparity_grayscale, alpha=(255.0/65535.0))
    disparity_color = cv2.applyColorMap(disparity_fixtype, cv2.COLORMAP_JET)
    cv2.imshow("Image", disparity_color)
    key = cv2.waitKey(1) & 0xFF   
    if key == ord("q"):
        quit();
    return disparity_color

def load_map_settings( fName ):
    global SWS, PFS, PFC, MDS, NOD, TTH, UR, SR, SPWS, loading_settings
    print('Loading parameters from file...')
    f=open(fName, 'r')
    data = json.load(f)
    SWS=data['SADWindowSize']
    PFS=data['preFilterSize']
    PFC=data['preFilterCap']
    MDS=data['minDisparity']
    NOD=data['numberOfDisparities']
    TTH=data['textureThreshold']
    UR=data['uniquenessRatio']
    SR=data['speckleRange']
    SPWS=data['speckleWindowSize']    
    #sbm.setSADWindowSize(SWS)
    sbm.setPreFilterType(1)
    sbm.setPreFilterSize(PFS)
    sbm.setPreFilterCap(PFC)
    sbm.setMinDisparity(MDS)
    sbm.setNumDisparities(NOD)
    sbm.setTextureThreshold(TTH)
    sbm.setUniquenessRatio(UR)
    sbm.setSpeckleRange(SR)
    sbm.setSpeckleWindowSize(SPWS)
    f.close()
    print ('Parameters loaded from file '+fName)


load_map_settings ("3dmap_set.txt")

# capture frames from the camera
while camera.isOpened():  
    ret, frame = camera.read()
    t1 = datetime.now()
    pair_img = cv2.cvtColor (frame, cv2.COLOR_BGR2GRAY)
    imgLeft = pair_img [0:cam_height,0:int(cam_width/2)] #Y+H and X+W
    imgRight = pair_img [0:cam_height,int(cam_width/2):cam_width] #Y+H and X+W
    rectified_pair = calibration.rectify((imgLeft, imgRight))
    disparity = stereo_depth_map(rectified_pair)
    # show the frame
    cv2.imshow("left", imgLeft)
    cv2.imshow("right", imgRight)    

    t2 = datetime.now()
    print ("DM build time: " + str(t2-t1))

camera.release()

On the Pi and other Linux-family SBCs, handling most of the sensors coming out these days is a real headache unless you use C or Python. But if you modify a simple Python UDP server program so that the control program just receives the sensor values over a socket, the problem seems to become very simple.

 

Below is the Python UDP server code.

Source: pythontic.com/modules/socket/udp-client-server-example

I ran the code below and checked the CPU and RAM load just in case; there was almost no change. Haha.

  

import socket

localIP     = "127.0.0.1"
localPort   = 20001
bufferSize  = 1024

msgFromServer       = "Hello UDP Client"
bytesToSend         = str.encode(msgFromServer)

# Create a datagram socket
UDPServerSocket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM)

# Bind to address and ip
UDPServerSocket.bind((localIP, localPort))

print("UDP server up and listening")

# Listen for incoming datagrams
while(True):
    bytesAddressPair = UDPServerSocket.recvfrom(bufferSize)
    message = bytesAddressPair[0]
    address = bytesAddressPair[1]

    clientMsg = "Message from Client:{}".format(message)
    clientIP  = "Client IP Address:{}".format(address)

    print(clientMsg)
    print(clientIP)

    # Sending a reply to client
    UDPServerSocket.sendto(bytesToSend, address)
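
For testing, here is a minimal UDP client sketch of my own; it assumes the server above is listening on 127.0.0.1:20001 as configured. Run it in another terminal while the server is running.

import socket

serverAddress = ("127.0.0.1", 20001)   # must match the server's localIP / localPort
bufferSize    = 1024

# Create a datagram socket and send a test message
UDPClientSocket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM)
UDPClientSocket.sendto(str.encode("Hello UDP Server"), serverAddress)

# Wait for the server's reply ("Hello UDP Client") and print it
reply, _ = UDPClientSocket.recvfrom(bufferSize)
print("Reply from Server: {}".format(reply.decode("utf-8")))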

If you mix the code that reads the VL53L0X distance sensor into the chat-style server above, so that it sends the sensor value when commanded to, then the control program only has to manage the communication. The code below reads this sensor.

Source: github.com/johnbryanmoore/VL53L0X_rasp_python

#!/usr/bin/python

# MIT License
# 
# Copyright (c) 2017 John Bryan Moore

import time
import VL53L0X

# Create a VL53L0X object
tof = VL53L0X.VL53L0X()

# Start ranging
tof.start_ranging(VL53L0X.VL53L0X_BETTER_ACCURACY_MODE)

timing = tof.get_timing()
if (timing < 20000):
    timing = 20000
# print ("Timing %d ms" % (timing/1000))

for count in range(1,2):
    distance = tof.get_distance()
    if (distance > 0):
        print ("%d mm, %d cm, %d" % (distance, (distance/10), count))

    time.sleep(timing/1000000.00)

tof.stop_ranging()

====================================================================

For Lazarus IDE's UDP Component for Orange Pi Zero or Raspberry Pi, download from github.com/almindor/lnet

====================================================================

 

This is an experimental model I knocked together in a few hours; once the features are filled in and it is polished up, it should be quite usable.

 

==============================================================================

Completed Basic Structure: all parts related to the VL53L0X library are currently commented out with #.

#!/usr/bin/python

import socket
import time
#import VL53L0X

localIP     = "127.0.0.1"
localPort   = 6767
bufferSize  = 1024

msgFromServer       = "Sensor service server connected!"
bytesToSend         = str.encode(msgFromServer)
cmd                 = ""
respStr             = ""

# Create a datagram socket
UDPServerSocket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM)

# Bind to address and ip
UDPServerSocket.bind((localIP, localPort))

print("UDP server up and listening")

# Create a VL53L0X object
#tof = VL53L0X.VL53L0X()

# Start ranging
#tof.start_ranging(VL53L0X.VL53L0X_BETTER_ACCURACY_MODE)

#timing = tof.get_timing()
#if (timing < 20000):
#    timing = 20000

# Listen for incoming datagrams
while(True):
    data, client = UDPServerSocket.recvfrom(bufferSize) 
    cmd = data.decode('utf-8')
    
    #print(cmd)  # for debug purpose

    if cmd == '1' :  # Test by sending '1' from Client Application 
       #for count in range(1,2):
          #distance = tof.get_distance()
          #if (distance > 0):
          #    print ("%d mm, %d cm, %d" % (distance, (distance/10), count))
          #    respStr = f"{distance} mm, {distance/10}"

          #time.sleep(timing/1000000.00)
       respStr = "VL53L0X result"
       bytesToSend = str.encode(respStr)
       UDPServerSocket.sendto(bytesToSend, client)
    else :
       # Sending a reply to client
       respStr = "Server received %s " % cmd 
       bytesToSend = str.encode(respStr)
       UDPServerSocket.sendto(bytesToSend, client)
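
As a quick check, below is a minimal test client sketch of my own (not part of the original sources); it assumes the sensor server above is running on 127.0.0.1:6767 and sends the command '1' to request a reading.

import socket

serverAddress = ("127.0.0.1", 6767)   # must match the sensor server's localIP / localPort
bufferSize    = 1024

UDPClientSocket = socket.socket(family=socket.AF_INET, type=socket.SOCK_DGRAM)

# '1' asks the server for a VL53L0X reading; any other text is simply echoed back
UDPClientSocket.sendto(str.encode("1"), serverAddress)
reply, _ = UDPClientSocket.recvfrom(bufferSize)
print(reply.decode('utf-8'))   # prints "VL53L0X result" while the sensor code is commented out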

 
