I'm playing around with NVIDIA's Vision Programming Interface (VPI) and trying to undistort images. I came across the Lens Distortion Correction example (https://docs.nvidia.com/vpi/algo_ldc.html) and added some code so that it takes an input image and shows the undistorted output image. The following code runs fine and I'm able to view the output image.
I'd like to run it in a loop on a video input. As soon as I uncomment the cv2.VideoCapture line, I get the following error:
"Segmentation fault (core dumped)"
Anyone able to help me use this code for video input?
import vpi
import numpy as np
import cv2
import PIL
from PIL import Image
img = cv2.imread('input.jpeg')   # still-image input that works
#cap = cv2.VideoCapture(0)       # uncommenting this line causes the segfault
vpi_image = vpi.asimage(np.asarray(img))
# Dense warp grid for a 2064 x 1544 image
grid = vpi.WarpGrid((2064, 1544))

# Camera intrinsics: focal length converted from mm to pixels (~1014.6 px)
sensorWidth = 7.12   # sensor width in mm
focallength = 3.5    # focal length in mm
f = focallength * (2064 / sensorWidth)
K = [[f, 0, 2064/2],
     [0, f, 1544/2]]
X = np.eye(3, 4)   # identity extrinsics (no rotation/translation)

warp = vpi.WarpMap.fisheye_correction(grid, K=K, X=X,
                                      mapping=vpi.FisheyeMapping.EQUIDISTANT,
                                      coeffs=[-0.01, 0.22])
# Undistort on the GPU, then save the result with PIL
with vpi.Backend.CUDA:
    output = vpi_image.remap(warp, interp=vpi.Interp.CATMULL_ROM, border=vpi.Border.ZERO)

with output.rlock():
    Image.fromarray(output.cpu()).save('output.jpeg')
# Reload the saved result and convert RGB -> BGR for OpenCV
pil_image = PIL.Image.open('output.jpeg').convert('RGB')
cv2_image = np.array(pil_image)
cv2_image = cv2_image[:, :, ::-1].copy()
# Show the original and undistorted images side by side
cv2_image = cv2.resize(cv2_image, (920, 590))
img = cv2.resize(img, (920, 590))
sbs = cv2.hconcat([img, cv2_image])
cv2.imshow("sbs", sbs)
cv2.waitKey(0)
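
For reference, this is the kind of per-frame loop I'm trying to end up with. It's only a sketch built on the code above: it assumes the same sensor values and distortion coefficients, rebuilds the warp grid from the actual frame size instead of the hard-coded 2064x1544, and the names (cap, frame, undistorted) are my own. I haven't been able to get past the VideoCapture line, so it's untested beyond that.

import vpi
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if not ret or frame is None:
    raise RuntimeError("could not read a frame from the camera")

# Build the warp map once, from the actual frame size
h, w = frame.shape[:2]
sensorWidth = 7.12   # mm, same assumption as above
focallength = 3.5    # mm
f = focallength * (w / sensorWidth)
K = [[f, 0, w/2],
     [0, f, h/2]]
X = np.eye(3, 4)
grid = vpi.WarpGrid((w, h))
warp = vpi.WarpMap.fisheye_correction(grid, K=K, X=X,
                                      mapping=vpi.FisheyeMapping.EQUIDISTANT,
                                      coeffs=[-0.01, 0.22])

while True:
    ret, frame = cap.read()
    if not ret or frame is None:
        break  # don't hand an empty frame to VPI

    vpi_image = vpi.asimage(np.asarray(frame))
    with vpi.Backend.CUDA:
        output = vpi_image.remap(warp, interp=vpi.Interp.CATMULL_ROM,
                                 border=vpi.Border.ZERO)
    with output.rlock():
        undistorted = np.asarray(output.cpu()).copy()  # copy while the buffer is locked

    sbs = cv2.hconcat([cv2.resize(frame, (920, 590)),
                       cv2.resize(undistorted, (920, 590))])
    cv2.imshow("sbs", sbs)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()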