
I'm trying to stream some images from OpenCV using GStreamer, and I'm having some issues with the pipeline. I'm new to GStreamer and OpenCV in general. I compiled OpenCV 3.2 with GStreamer support for Python 3 on a Raspberry Pi 3. I have a little bash script that I use with raspivid:

raspivid -fps 25 -h 720 -w 1080 -vf -n -t 0 -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=192.168.1.27 port=5000

I wanted to translate this pipeline so I could use it from OpenCV and feed it the images my algorithm manipulates. After some research I figured I could use VideoWriter with appsrc instead of fdsrc, but I get the following error:

GStreamer Plugin: Embedded video playback halted; module appsrc0 reported: Internal data flow error.

By the way, the Python script that I came up with is the following:

import cv2

cap = cv2.VideoCapture(0)


# Define the codec and create VideoWriter object
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
out = cv2.VideoWriter('appsrc  ! h264parse ! '
                      'rtph264pay config-interval=1 pt=96 ! '
                      'gdppay ! tcpserversink host=192.168.1.27 port=5000 ',
                      fourcc, 20.0, (640, 480))

while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        frame = cv2.flip(frame, 0)

        # write the flipped frame
        out.write(frame)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

# Release everything if job is finished
cap.release()
out.release()
cv2.destroyAllWindows()

Is there an error in the pipeline? I don't understand the error message. I already have a Python client that can read from the bash pipeline, and the results are pretty good in terms of latency and consumed resources.

1 Answer


I came across the solution, and I hope this helps other people who run into the same issue. The pipeline was arranged incorrectly: a videoconvert element was needed, and the original pipeline fed raw frames from appsrc straight into h264parse with no encoder, so an x264enc element had to be added as well. Latency was significant at first, but setting x264enc's speed-preset to ultrafast solved that; there's not much compression going on at that preset, but it was a good compromise. Here's my solution:

import cv2

cap = cv2.VideoCapture(0)

framerate = 25.0

out = cv2.VideoWriter('appsrc ! videoconvert ! '
                      'x264enc noise-reduction=10000 speed-preset=ultrafast tune=zerolatency ! '
                      'rtph264pay config-interval=1 pt=96 ! '
                      'tcpserversink host=192.168.1.27 port=5000 sync=false',
                      0, framerate, (640, 480))

while cap.isOpened():
    ret, frame = cap.read()
    if ret:

        out.write(frame)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

# Release everything if job is finished
cap.release()
out.release()

4 Comments

Thanks, but how can I get and show the output?
Same here, how do I get the output?
You need a tcpclientsrc-based pipeline, like the following: tcpclientsrc {params like host, port and so on} ! rtph264depay ! h264parse ! avdec_h264 ! ...
I used to use this: gst-launch-1.0 -v tcpclientsrc host=192.168.1.27 port=5000 ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false
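For readers who want the receiving end back in OpenCV rather than gst-launch, here is a minimal sketch that mirrors the client pipeline from the comments above, swapping autovideosink for appsink so frames land in Python. The host and port are the ones from the answer; this assumes your OpenCV build was compiled with GStreamer support.

```python
def build_client_pipeline(host, port):
    """Build the GStreamer pipeline string for the OpenCV receiving side.

    Mirrors the gst-launch client from the comments, with autovideosink
    replaced by appsink so cv2.VideoCapture can pull decoded frames.
    """
    return ('tcpclientsrc host=%s port=%d ! '
            'rtph264depay ! avdec_h264 ! videoconvert ! '
            'appsink sync=false' % (host, port))


if __name__ == '__main__':
    import cv2  # requires an OpenCV build compiled with GStreamer support

    # Host/port taken from the answer's tcpserversink; adjust to your setup.
    cap = cv2.VideoCapture(build_client_pipeline('192.168.1.27', 5000))
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        cv2.imshow('stream', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
```

If cap.isOpened() returns False right away, the usual culprit is an OpenCV build without GStreamer; cv2.getBuildInformation() reports whether it was compiled in.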
