
I am trying to get my Raspberry Pi B+ to use a USB webcam to measure the distance between it and an object of fixed width (11.0 inches).

I am following this guide now. However, instead of using static images, I am using a video feed from my webcam.

This is the code I am trying to run:

import argparse
import datetime
import imutils
import time
import cv2
import numpy as np

def find_marker(frame):
    # convert the image to grayscale, blur it, and detect edges
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(gray, 35, 125)

    # find the contours in the edged image and keep the largest one;
    # we'll assume that this is our piece of paper in the image
    (cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    c = max(cnts, key = cv2.contourArea)

    # compute the bounding box of the paper region and return it
    return cv2.minAreaRect(c)

def distance_to_camera(knownWidth, focalLength, perWidth):
    # compute and return the distance from the marker to the camera
    return (knownWidth * focalLength) / perWidth


#======================================================================
#main is here

# initialize the known distance from the camera to the object, which
# in this case is 24 inches
KNOWN_DISTANCE = 24.0

# initialize the known object width, which in this case, the piece of
# paper is 11 inches wide
KNOWN_WIDTH = 11.0

frame = cv2.VideoCapture(0)
marker = find_marker(frame)
focalLength = (marker[1][0] * KNOWN_DISTANCE) / KNOWN_WIDTH

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())

# if the video argument is None, then we are reading from webcam
if args.get("video", None) is None:
    camera = cv2.VideoCapture(0)
    time.sleep(0.25)

# otherwise, we are reading from a video file
else:
    camera = cv2.VideoCapture(args["video"])

# loop over the frames of the video
while True:
    # grab the current frame and initialize the occupied/unoccupied
    # text
    (grabbed, frame) = camera.read()

    # if the frame could not be grabbed, then we have reached the end
    # of the video
    if not grabbed:
        break

    # resize the frame, find the marker in it, and compute the distance to it
    frame = imutils.resize(frame, width=500)
    marker = find_marker(frame)
    inches = distance_to_camera(KNOWN_WIDTH, focalLength, marker[1][0])

    # draw a bounding box around the image and display it
    box = np.int0(cv2.cv.BoxPoints(marker))
    cv2.drawContours(frame, [box], -1, (0, 255, 0), 2)
    cv2.putText(frame, "%.2fft" % (inches / 12),
        (frame.shape[1] - 200, frame.shape[0] - 20), cv2.FONT_HERSHEY_SIMPLEX,
        2.0, (0, 255, 0), 3)
    cv2.imshow("Frame", frame)
    cv2.waitKey(0)
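
For context, the distance math in the script is simple triangle similarity: the focal length is calibrated once from a frame in which the 11-inch paper sits at the known 24-inch distance, and every later distance is recovered from the paper's perceived width in pixels. A quick sanity check with a made-up perceived width of 248 px:

# calibration: perceived width (px) * known distance (in) / known width (in)
focalLength = (248.0 * 24.0) / 11.0      # ~541.1 px (248 px is made up for illustration)

# distance for a later frame in which the paper again appears 248 px wide
inches = (11.0 * 541.1) / 248.0          # ~24.0 in, as expected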

However, this is the output I get when I try to run it:

Traceback (most recent call last):
  File "testcam.py", line 39, in <module>
    marker = find_marker(frame)
  File "testcam.py", line 10, in find_marker
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
TypeError: src is not a numpy array, neither a scalar

I am new to OpenCV, so I am unsure what this error means.

  • It means the passed frame is not a valid NumPy array. I think the bug lies on the line frame = imutils.resize(frame, width=500); perhaps imutils does not return a NumPy array. Can you use cv2.resize() instead of imutils.resize()? Commented Jan 19, 2016 at 8:34
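
For reference, the swap this comment suggests would look roughly like the sketch below (imutils.resize does return a NumPy array, so, as the answer shows, the real cause of the error is elsewhere):

# the comment's suggestion: resize with cv2 directly, keeping the aspect ratio
h, w = frame.shape[:2]
frame = cv2.resize(frame, (500, int(h * 500.0 / w)))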

1 Answer


What you are doing is

frame = cv2.VideoCapture(0)

cv2.VideoCapture(0) initializes the capture device (the camera). To fetch a frame from it you need to call cap.read(), but instead you passed the capture object itself, which is what gave the error.

It should be:

capForFocal = cv2.VideoCapture(0)
_, frame = capForFocal.read()
capForFocal.release()

2 Comments

@Arijit_Mukherjee I replaced frame = cv2.VideoCapture(0) with capForFocal = cv2.VideoCapture(0) followed by _, frame = capForFocal.read(), and the output I got was: 'libv4l2: error setting pixformat: Device or resource busy HIGHGUI ERROR: libv4l unable to ioctl S_FMT libv4l2: error setting pixformat: Device or resource busy libv4l1: error setting pixformat: Device or resource busy HIGHGUI ERROR: libv4l unable to ioctl VIDIOCSPICT'
Sorry, you should not call cv2.VideoCapture a second time. When you called cv2.VideoCapture the first time, it reserved the camera at index 0, so you should add capForFocal.release() after _, frame = capForFocal.read()
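
Putting the answer and this comment together, the calibration block at the top of the script would look something like the sketch below (variable names follow the answer; the grabbed check is my addition):

# grab a single frame just for the focal-length calibration, then release the
# camera so the main loop's VideoCapture can open it again without a
# "Device or resource busy" error
capForFocal = cv2.VideoCapture(0)
grabbed, frame = capForFocal.read()
capForFocal.release()

if not grabbed:
    raise RuntimeError("could not read a calibration frame from the camera")

marker = find_marker(frame)
focalLength = (marker[1][0] * KNOWN_DISTANCE) / KNOWN_WIDTH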
