Trying to read Thermal Data from Hikvision Camera in Python

I'm looking for a way to access the thermal data from the camera. I used OpenCV but could only get the visible image; there is no additional data, such as temperature, to process. I tried the available libraries for Hikvision cameras and searched the net, but I couldn't succeed. I also tried a FLIR library with no success.
The second option I have is converting RGB to temperature, but I don't know how to approach that kind of conversion. I do know the device's temperature range, which is -20 to 150 degrees.
I'm looking for something like this:
# cam model: hikvision DS-2TD2615-10
import cv2
# import some hikvision api library, for example (wished-for; I haven't found one)

thermal = cv2.VideoCapture()
thermal.open("rtsp://user:pass@ip:port/Streaming/channels/202/")

while True:
    ret, frame = thermal.read()
    temp_data = api.read_temperature(frame)  # -> array or excel file (wished-for call)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

thermal.release()
cv2.destroyAllWindows()
My video input is similar to this picture, and I want, for example, to find out how hot the nose is just by clicking on it:

General answer for thermal images from any camera: you can't just convert a grayscale level (or color, if a palette has already been applied to the image) to temperature values. You need to know certain coefficients that relate to the IR matrix. Some software may embed that data in the image file metadata, but there is no standard for it. Also, if you re-saved your image file without knowing this, you probably lost that metadata.
Also, like a plain visible-light camera, an IR camera can adapt its range to the current image. So if you're shooting a person in a room, the minimum temperature in your picture will be around 22°C (a cold wall or floor) and the maximum around 37°C (the hottest part of the human body). In that case you get 256 gray levels covering a range of 15 degrees, so black is 22°C and white is 37°C (keep in mind the relation is not linear!). Move your camera to a cup of hot tea at around 60°C and the mapping from gray level to temperature changes. So you need the coefficients for every frame.
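As an illustration only, here is a minimal sketch of such a per-frame mapping, assuming you somehow obtained the frame's minimum and maximum temperatures (e.g. from metadata) and assuming a linear response, which, as noted above, real sensors do not have. The function name and parameters are made up:
import numpy as np

def gray_to_temperature(gray_frame, t_min, t_max):
    # Map 8-bit gray levels onto [t_min, t_max]; t_min/t_max must be known
    # for every single frame - they are not constants (illustrative only)
    scale = gray_frame.astype(np.float32) / 255.0
    return t_min + scale * (t_max - t_min)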
It is possible to "fix" the temperature range on some cameras, but that depends on the specific model.
More than that - most cheap thermal cameras don't deal with temperature values at all.
P.S. Oh, I just noticed the exact model of your camera. So the answer is an even stronger "you can't". Don't expect the capabilities of a scientific or medical thermal camera from that poorly documented Chinese surveillance hardware.

Related

What features do glitched images have that I could detect?

I'm trying to build a footage filter that sends only "good" frames to the database.
Here is my current rating function:
import cv2

def rateImg(img):
    # Work on a grayscale copy; fall back to the input if it already is grayscale
    try:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    except cv2.error:
        gray = img
    # Edge map and contour count
    edges = cv2.Canny(gray, 0, 255)
    contours, _ = cv2.findContours(
        edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    num_of_contours = len(contours)
    # Variance of the Laplacian as a sharpness measure
    lap = round(cv2.Laplacian(gray, cv2.CV_64F).var(), 2)
    return [lap, num_of_contours]
First off, I use variance of Laplacian to calculate the sharpness of an image from a particular time window.
It should technically provide me a "good" frame, but that's not always the case.
The camera I have to use isn't great and sometimes glitches out like this, and such frames have the highest variance of Laplacian.
So, my current solution is to calculate the number of contours in an image, and if an image crosses a particular threshold I classify it as "glitched". But with this approach the algorithm also rates images with a lot of objects as "glitched".
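For reference, the threshold check described above might look roughly like this (the limit is a made-up number that would need tuning per camera):
GLITCH_CONTOUR_LIMIT = 5000  # arbitrary guess, tune for your footage

def is_glitched(img):
    # rateImg() is the rating function above: [sharpness, contour count]
    _, num_of_contours = rateImg(img)
    return num_of_contours > GLITCH_CONTOUR_LIMIT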
Also, I have tried detecting squares and rectangles, but that proved to be much less effective than the contour approach.
Is there any way to detect obvious glitches in an image?
I feel like there should be, because as a human I can easily classify glitched and normal images at a glance. I just can't seem to pin-point what exactly makes them different.
Is there any way to detect obvious glitches in an image?
Yes, but probably not for complex random glitches; have a look at this similar question.
In that case, you can detect whether a large area of the image contains exactly the same color. A photo taken with a camera will practically never contain large regions with identical RGB values, even though they may look similar. However, this would be perfectly normal if the images are artwork drawn on a digital device.
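As a rough sketch of that check (the 20% threshold is an arbitrary starting point, not something from this answer):
import numpy as np

def has_large_uniform_area(img_bgr, fraction=0.2):
    # True if one exact BGR value covers more than `fraction` of the image;
    # this happens in glitched frames but almost never in a real photo
    pixels = img_bgr.reshape(-1, 3)
    _, counts = np.unique(pixels, axis=0, return_counts=True)
    return counts.max() / len(pixels) > fraction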
As a human I can easily classify glitched and normal images at a glance... What exactly makes them (me and a program) different? Is there any way to detect obvious glitches in an image?
In fact, you can't really identify a glitched image directly. You try to recognize the objects in it, and when you see something "weird" that you don't recognize, you consider it a glitched image. A machine cannot achieve this either. You could train an AI to report images with unrecognizable "parts" as glitched, but it will never be 100% accurate.
Converting your image to HSV and running the brightness channel through an edge filter in ImageJ gives me this:
As you can see, the "glitched" region appears fairly uniformly brighter than the rest of the image, and should be detectable in some form. How often do you get a picture from your camera? Depending on how much changes between two pictures, you might get away with subtracting the current one from the previous one to look only at changes.
You have not shown what an image with "a lot of objects" looks like, so you'd have to try whether this works for those cases.
OpenCV functions needed for this workflow would be (a rough sketch follows the list):
cvtColor() with COLOR_BGR2GRAY for color conversion (there might be faster ways to get a good grey value than HSV)
one of the edge detectors; Canny() or Sobel() would be the first I'd try
some grey-value statistics: threshold() and countNonZero() for a simple approach, which you could refine per image sector or the like
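A minimal sketch tying those calls together (the threshold values are assumptions, not numbers from this answer):
import cv2

def glitch_measures(img_bgr, canny_lo=50, canny_hi=150, bright_thresh=200):
    # Edge density and bright-area fraction of a single frame
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    _, bright = cv2.threshold(gray, bright_thresh, 255, cv2.THRESH_BINARY)
    edge_fraction = cv2.countNonZero(edges) / edges.size
    bright_fraction = cv2.countNonZero(bright) / bright.size
    return edge_fraction, bright_fraction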
PS:
I feel like there should be, because as a human I can easily classify
glitched and normal images at a glance.
That's a common fallacy: we humans (the sight-centric beings that we are) are fantastic at pattern recognition and interpolation and are rarely aware how much of that (including a lot of error correction) is happening every microsecond. A puny 2D camera cannot hope to live up to that. (obligatory XKCD: https://xkcd.com/1425/)

How to differentiate between two progressive images in opencv

I have a video file from the evening (6pm-9pm), and I want to detect the movement of people on the road.
While trying to find the difference between a handful of images from "10 minute" time-frame videos (10 equally spaced images within any 10-minute video clip), I'm facing these challenges:
All the images come out as different (raising an alert) because a plant is moving in the wind the whole time.
All 10 images also come out different because the sun is setting, so due to the "natural light variation" the 10 images from a 10-minute window end up different even though there is no public/human movement.
How do I restrict my algorithm to focus only on movement in a certain area of the video rather than all of it? (I couldn't find anything on Google and don't know if there's an algorithm in OpenCV for this.)
This one is rather difficult to deal with. I recommend you try blurring the frames a little to reduce the noise from the moving plants. Also, if the range of the movement is not large, try changing the difference threshold and the area threshold (if your algorithm includes contour detection as a following step). Hope this helps a little.
For detecting "movement" of people, 10 frames per 10 minutes is far too low a frame rate. The people in the frames can be entirely different, which means you cannot detect the movement of a single person, only the differences between two frames. In the case of such low-fps video, I recommend you try Background Subtraction to find people in the frames instead of people's movements between frames. For Background Subtraction, to solve
All 10 images also come out different because the sun is setting, so due to the "natural light variation" the 10 images from a 10-minute window end up different even though there is no public/human movement.
you can try using the average image of all frames as the background_img in
difference = current_img - background_img
If the time span is longer, you can use the average of the images closer in time to current_img as background_img, and keep updating background_img as the video runs.
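A minimal sketch of that running-average background subtraction (the file name, blur kernel and thresholds are assumptions, not values from this answer):
import cv2
import numpy as np

cap = cv2.VideoCapture('evening.mp4')            # hypothetical input file
background = None
alpha = 0.05                                     # background update rate, needs tuning

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)   # blur to suppress plant/wind noise
    if background is None:
        background = gray.astype(np.float32)
        continue
    cv2.accumulateWeighted(gray, background, alpha)   # keep updating background_img
    difference = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    _, mask = cv2.threshold(difference, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 500:             # area threshold, also needs tuning
        print('possible movement')

cap.release()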
If your ROI is a rectangle in the frame, use
my_ROI = cv::Rect(x, y, width, height)
cv::Mat ROI_img= frame(my_ROI)
If not, try using a mask.
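In Python/OpenCV the same rectangular crop is plain NumPy slicing, for example:
# x, y, width, height describe whatever rectangle you chose as the ROI
roi_img = frame[y:y + height, x:x + width]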
I think what you are looking for is a Pedestrian Detection. You can do this easily in Python with OpenCV package.
import cv2

# Initialize a HOG descriptor
hog = cv2.HOGDescriptor()
# Set it for pedestrian detection
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
# Then use the detector on a frame (placeholder file name; use any frame from your video)
frame = cv2.imread('frame.jpg')
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
for (x, y, w, h) in rects:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
Example: Pedestrian Detection OpenCV

Open CV to Capture Unique Objects from a Video

I was doing frame slicing with the OpenCV library in Python, and I am successfully able to create frames from the video being tested.
I am doing it on a CCTV camera installed at a parking entry gateway where the video runs 24x7, and at times a car stands still for a good number of minutes, which leads to consecutive frames of the same vehicle.
My question is how can I create a frame only when a new vehicle enters the parking lot?
Stack Overflow is for code-related queries. I suggest you try some code and share your results and your problems before posting anything here. That being said, you can start with object detection tutorials like this and then do tracking with SORT. Many pre-trained models that include the car class are available, so you won't even need to train a new model.
Do you need to detect license plates, etc., or just notice if something happens? For the latter, you could use a very simple approach: take an average of, say, the frames of the last 30 seconds and subtract that from the current frame. If the mean absolute value of the delta image is above a threshold, that could be the change you are looking for.
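A rough sketch of that idea, assuming roughly 10 fps and made-up thresholds:
import collections
import cv2
import numpy as np

cap = cv2.VideoCapture('rtsp_of_ur_cctv')      # placeholder source
fps = 10                                       # assumed frame rate
recent = collections.deque(maxlen=fps * 30)    # roughly the last 30 seconds of frames

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if len(recent) == recent.maxlen:
        background = np.mean(recent, axis=0)   # average of the last ~30 seconds
        delta = np.abs(gray - background)
        if delta.mean() > 10.0:                # mean absolute delta threshold, needs tuning
            print('something changed, keep this frame')
    recent.append(gray)

cap.release()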
You could do some simple motion detection with OpenCV; it's nicely explained in https://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/
So if you have a picture of the background as a reference, you can compare each new image to the background and only save the image if it differs enough from the background (hopefully only when a car has entered). Then make this the new background and reset for a new car when the new images again start looking like the original background.
Hopefully I stated my idea clear enough and that link provides enough information to implement it. If not just ask for a clarification!
First you need a specific XML cascade to detect only cars. You can get it from here. I have developed some code to uniquely identify and count the cars visible to the CCTV you are using. Sometimes it depends entirely on the frame rate and on the detection, so you can control the frame rate and also the total count variable.
import cv2

cascade_src = 'cars.xml'
cap = cv2.VideoCapture('rtsp_of_ur_cctv')
car_cascade = cv2.CascadeClassifier(cascade_src)
prev_count = 0
total_count = 0

while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cars = car_cascade.detectMultiScale(gray, 1.1, 1)
    if len(cars) > prev_count:
        difference = len(cars) - prev_count
        total_count = total_count + difference
        # here you can save the unique new entry and possibly avoid the recursive ones
        print(total_count)
    for (x, y, w, h) in cars:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
    prev_count = len(cars)
    cv2.imshow('video', img)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

how to set the camera in raspberry pi to take black and white image?

Are there any ways to set the camera on the Raspberry Pi to take black and white images, e.g. using some commands/code in the picamera library?
Since I need to compare the relative light intensity of a few different images, I'm worried that the camera will already make some adjustments by itself when the object is under different illumination, so even if I convert the image to black and white later on, the object's 'true' black and white image will have been lost.
thanks
edit: basically what I need to do is to capture a few images of an object while the camera position is fixed but the position of the light source changes (and so the direction of illumination changes as well). Then, for each point on the image, I will need to compare the relative light intensity across the different images. As long as the light intensity, or 'brightness', of all the images is relative to the same scale, it's OK, but I'm not sure if this is the case. I'm not sure whether the camera will automatically adjust something like contrast by itself when an image is inherently darker or brighter.
To get a black and white image (monochrome, grayscale), just configure your camera. Create a "takeashot.py" (sudo nano takeashot.py):
import picamera # import files
camera = picamera.PiCamera() # initialize camera
camera.color_effects = (128,128) # turn camera to black and white
camera.capture('image1.jpg') # take a shot
Execute: sudo python takeashot.py
That's it
You can learn more at 10. API - picamera.camera Module
Under color_effects, you read "to make the image black and white set the value to (128, 128)."
What do you mean by "black and white image," in this case? There is no "true" black and white image of anything. You have sensors that have some frequency response to light, and those give you the values in the image.
In the case of the Raspberry Pi camera, and almost all standard cameras, there are red, green and blue sensors that each have a response centered around their respective frequencies. Those sensors are also laid out in a certain pattern. If it's particularly important to you, there are cameras that only have an array of a single sensor type sensitive to a wider range of frequencies, but those are likely to be considerably more expensive.
You can get raw image data from the raspi camera with picamera. This is not the "raw" format described in the documentation and controlled by format, which is really just the processed data before encoding. The bayer option will return the actual raw data. However, that means you'll have to deal with processing by yourself. Each pixel in that data will be from a different color sensor, for example, and will need to be adjusted based on the sensor response.
The easiest thing to do is to just use the camera normally, as you're not going to get great accuracy measuring light intensity in this way. In order to get accurate results, you'd need calibration, and you'd need to be specific about what the data is for, how everything is going to be illuminated, and what data you're actually interested in.
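That said, if you do want the raw Bayer data mentioned above, picamera provides a PiBayerArray helper; a minimal sketch (you would still need to handle demosaicing and per-channel correction yourself):
import picamera
import picamera.array

with picamera.PiCamera() as camera:
    with picamera.array.PiBayerArray(camera) as stream:
        # bayer=True makes the firmware append the raw sensor data to the capture
        camera.capture(stream, 'jpeg', bayer=True)
        raw = stream.demosaic()   # crude demosaic provided by picamera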
With v4l, before starting Python:
v4l2-ctl -c color_effects=1
From:
v4l2-ctl -L
User Controls
...
color_effects (menu) : min=0 max=15 default=0 value=1
0: None
1: Black & White
2: Sepia
...
Note: I've done this successfully while my camera was running!

How to calculate Depth Information from USB 3.0 Stereo Camera?

I'm interested in using a stereo camera for calculating depth in video/images. The camera is a USB 3.0 Stereoscopic camera from Leopard Imaging https://www.leopardimaging.com/LI-USB30-V024STEREO.html. I'm using MAC OS X btw.
I was told by their customer support that it's a "UVC" camera. When connected to an Apple computer it gives a greenish image.
My end goal is to use OpenCV to grab the left and right frames from both lenses so that I can calculate depth. I'm familiar with OpenCV, but not familiar with working with stereo cameras. Any help would be much appreciated. So far I have been doing this in Python 3:
import numpy as np
import cv2
import sys
from matplotlib import pyplot as plt
import pdb; pdb.set_trace()

print("Camera 1 capture", file=sys.stderr)
cap = cv2.VideoCapture(1)
print("Entering while", file=sys.stderr)
while True:
    _ = cap.grab()
    retVal, frame = cap.retrieve()
    # gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
This works, but it only gives me a green picture/image with no depth. Any advice on how to get both left and right frames from the camera?
The Leopard Imaging people sent me a clue, but I'm not able to make progress, I guess because of some missing piece on my side. However, I thought it might help somebody, so I'm posting it as an answer.
The message was sent by one of the Leopard Imaging people I contacted by mail. It goes like this:
We have a customer who successfully separated the two videos on Linux OS one year ago. I tried to contact him to see if he can share the source code with us. Unfortunately, he already left the former company, but he still found some notes (below). Hope it helps.
The camera combines the images from the two sensors into one 16-bit pixel (the high 8 bits from one camera, the low 8 bits from the other camera).
To fix this problem in Linux OpenCV, you should skip the color transform done by OpenCV, in
modules/videoio/src/cap_v4l.cpp, static IplImage* icvRetrieveFrameCAM_V4L( CvCaptureCAM_V4L* capture, int):
case V4L2_PIX_FMT_YUYV:
#if 1
    /*
     * skip color convert. Just copy image buffer to frame.imageData
     */
    memcpy(capture->frame.imageData, capture->buffers[capture->bufferIndex].start, capture->buffers[capture->bufferIndex].length);
#else
    yuyv_to_rgb24(capture->form.fmt.pix.width, capture->form.fmt.pix.height,
                  (unsigned char*) capture->buffers[capture->bufferIndex].start,
                  (unsigned char*) capture->frame.imageData);
#endif
Hope this helps.
DISCLAIMER: I went forward with the C++ version (my project was in C++) given in libuvc and used the provided routines to obtain the left and right frames separately.
I am chasing the same issue, but with C/C++. I have contacted Leopard and am waiting for an answer. My understanding is that the two grayscale cameras are interlaced into a single image (and I think OpenCV sees this as a color image, hence the strange colors and the out-of-focus look). You need to break the bytes apart into two separate frames. I am experimenting, trying to figure out the byte placement, but have not gotten very far. If you figure this out, please let me know!
They have a C# on Windows example here:
https://www.dropbox.com/sh/49cpwx0s70fuich/2e0_mFTJY_
Unfortunately it is using their libraries (which are not source) to do the heavy lifting, so I can't figure out what they are doing.
I met the same issue as you and finally arrived at a solution. But I don't know if OpenCV can handle it directly, especially in Python.
As jordanthompson said, the two images are interlaced into one. The image you receive is in YUYV (Y is the light intensity, UV contains the color information). Each pixel is coded on 16 bits: 8 for Y, 8 for U or V depending on which pixel you are looking at.
Here, the Y bits come from the left image and the UV bits from the right image. But when OpenCV receives this image, it converts it to RGB, which then irreversibly mixes the two images. I could not find a way in Python to tell OpenCV to grab the image without converting it... Therefore we need to read the image before OpenCV does. I managed to do it with libuvc (https://github.com/ktossell/libuvc) after adding two small functions to perform the proper conversions. I guess you can use this library if you can use OpenCV in C++ instead of Python. If you really have to stick with Python, then I don't have a complete solution, but at least now you know what to look for: try to read the image directly in YUYV and then separate the bits into the left and right images (you will get two grayscale images).
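If you do manage to grab the raw YUYV buffer in Python (for instance through libuvc bindings or v4l2), splitting it might look roughly like this; an untested sketch based purely on the description above:
import numpy as np

def split_yuyv(raw_bytes, width, height):
    # Each pixel is 2 bytes: a Y byte (one sensor) and a U or V byte (the other)
    data = np.frombuffer(raw_bytes, dtype=np.uint8).reshape(height, width, 2)
    left = data[:, :, 0]     # Y plane   -> one grayscale image
    right = data[:, :, 1]    # U/V plane -> the other grayscale image
    return left, right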
Good luck!
