Here is the problem:
I have an IP camera that streams h264 video using the RTSP protocol.
All I want to do is read this stream and pass it to OpenCV to decode it, using this function:
cv2.imdecode()
How?
Update
I solved this problem. Here is the solution: convert the YUVj420p pixel format to RGB888 using GStreamer.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst
import numpy as np
import cv2

GObject.threads_init()
Gst.init(None)

def YUV_stream2RGB_frame(data):
    # frame size delivered by the camera
    w = 640
    h = 368
    size = w * h
    stream = np.frombuffer(data, np.uint8)  # convert data from raw bytes to a numpy array
    # Y bytes start at 0 and end at size-1
    y = stream[0:size].reshape(h, w)  # create the Y channel, same size as the image
    # U bytes start at size and end at size + size//4, as their size = framesize/4
    u = stream[size:size + size // 4].reshape(h // 2, w // 2)
    # up-sample the U channel to the same size as the Y channel using cv2.pyrUp
    u_upsize = cv2.pyrUp(u)
    # do the same for the V channel
    v = stream[size + size // 4:].reshape(h // 2, w // 2)
    v_upsize = cv2.pyrUp(v)
    # create the 3-channel frame using cv2.merge; watch the channel order
    yuv = cv2.merge((y, u_upsize, v_upsize))
    # convert to RGB format
    rgb = cv2.cvtColor(yuv, cv2.COLOR_YCrCb2RGB)
    # show the frame
    cv2.imshow("show", rgb)
    cv2.waitKey(5)

def on_new_buffer(appsink):
    sample = appsink.emit('pull-sample')
    # get the buffer
    buf = sample.get_buffer()
    # extract the raw data as bytes
    data = buf.extract_dup(0, buf.get_size())
    YUV_stream2RGB_frame(data)
    return Gst.FlowReturn.OK

def Init():
    CLI = "rtspsrc name=src location=rtsp://192.168.1.20:554/live/ch01_0 latency=10 ! decodebin ! appsink name=sink"
    # simplest way to create a pipeline
    pipeline = Gst.parse_launch(CLI)
    # get the sink by the name set in CLI
    appsink = pipeline.get_by_name("sink")
    # set some important appsink properties
    appsink.set_property("max-buffers", 20)       # prevent the app from consuming huge amounts of memory
    appsink.set_property('emit-signals', True)    # tell the sink to emit signals
    appsink.set_property('sync', False)           # no sync, to make decoding as fast as possible
    appsink.connect('new-sample', on_new_buffer)  # connect the signal to the callback
    return pipeline

def run(pipeline):
    pipeline.set_state(Gst.State.PLAYING)
    GObject.MainLoop().run()

run(Init())
Related
I want to filter out image data as per the text message attached to it.
Hub Code
import imagezmq
import cv2

hub = imagezmq.ImageHub()
print('Listening')
while True:
    rpi_name, image = hub.recv_jpg()
    print('Received from ' + rpi_name)
    hub.send_reply(b'OK')
Server 1
import imagezmq
import cv2
import simplejpeg

hub = imagezmq.ImageSender()
feed = cv2.VideoCapture(url)
while True:
    ret, frame = feed.read()
    image = simplejpeg.encode_jpeg(frame, quality=60, colorspace='BGR')
    hub.send_jpg('1', image)
Server 2
import imagezmq
import cv2
import simplejpeg

hub = imagezmq.ImageSender()
feed = cv2.VideoCapture(url)
while True:
    ret, frame = feed.read()
    image = simplejpeg.encode_jpeg(frame, quality=60, colorspace='BGR')
    hub.send_jpg('2', image)
In the hub I receive data from both servers simultaneously. What I want is to filter the data, i.e. to receive only the data whose message (this is used to differentiate servers) is '1'.
Or, what would be the fastest way to get data from one particular server when the hub is receiving data from many servers?
For each (text, image) pair received, the text message contained in rpi_name is '1' or '2' depending on the sending server. To filter images by sending server, use that text to decide what action is taken for each message.
Here is one example of how to do that. I have added an example function do_something() that uses an if statement to take different actions depending on the source of the image.
Hub Code (with added function to take different actions based on sending server)
# Here is example Hub code showing how to filter or take other
# action based on the source of rpi_name, image
import imagezmq
import cv2

def do_something(rpi_name, image):
    # parameter rpi_name contains the "text message", which differs by server
    # parameter image contains the image that arrived from that server
    if rpi_name == '1':
        pass  # or do something like save the image or process it somehow
    elif rpi_name == '2':
        pass  # or do something like save the image or process it somehow
    else:
        pass  # what do you want to do if the server is not one of the 2 above?

hub = imagezmq.ImageHub()
print('Listening')
while True:
    rpi_name, image = hub.recv_jpg()
    print('Received from ' + rpi_name)
    do_something(rpi_name, image)
    hub.send_reply(b'OK')
FYI, I am the author of imageZMQ.
I have a question on threading in Python.
I have 3 applications that I want to access the same resource (a camera). I would like to start the camera streaming frames and then have each of these processes take frames as needed, at different resolutions (or I can scale the images later if that is not efficient). I can get each of the three applications to run independently, but I have not found a way to thread them successfully. All three were learned from others and I am grateful to them for sharing. Credits below.
Application 1 streams the video feed to a web server.
with picamera.PiCamera(resolution='640x480', framerate=16) as camera:
    output = StreamingOutput()
    camera.start_recording(output, format='mjpeg')
    try:
        address = ('', 8000)
        server = StreamingServer(address, StreamingHandler)
        server.serve_forever()
    finally:
        camera.stop_recording()
Application 2 frequently grabs small frames into memory to detect motion, and then, if certain conditions are met, saves an image to disk. In this case I want to grab low-resolution frames to speed up the analysis and keep them in memory only. For the saving part I am interested in grabbing a higher-resolution file for disk.
def captureTestImage(settings, width, height):
    command = "raspistill %s -w %s -h %s -t 200 -e bmp -n -o -" % (settings, width, height)
    imageData = io.BytesIO()
    imageData.write(subprocess.check_output(command, shell=True))
    imageData.seek(0)
    im = Image.open(imageData)
    buffer = im.load()
    imageData.close()
    return im, buffer

while True:
    motionState = P3picam.motion()
    if motionState:
        with picamera.PiCamera() as camera:
            camera.resolution = (1920, 1080)
            camera.capture(picPath + picName)
Application 3 grabs frames from a video stream and then uses OpenCV on them.
vs = VideoStream(src=0).start()
time.sleep(2.0)
while True:
    frame = vs.read()
    frame = imutils.resize(frame, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
Application 1 credits: http://picamera.readthedocs.io/en/latest/recipes2.html#web-streaming
Application 2 credits: http://pastebin.com/raw.php?i=yH7JHz9w
Application 3 credits: https://www.pyimagesearch.com/2018/06/25/raspberry-pi-face-recognition/
I'm using a Basler camera and Python to record some video. I can successfully capture individual frames, but I don't know how to record a video.
Following is my code:
import os
import pypylon
from imageio import imwrite
import time
start=time.time()
print('Sampling rate (Hz):')
fsamp = input()
fsamp = float(fsamp)
time_exposure = 1000000*(1/fsamp)
available_cameras = pypylon.factory.find_devices()
cam = pypylon.factory.create_device(available_cameras[0])
cam.open()
#cam.properties['AcquisitionFrameRateEnable'] = True
#cam.properties['AcquisitionFrameRate'] = 1000
cam.properties['ExposureTime'] = time_exposure
buffer = tuple(cam.grab_images(2000))
for count, image in enumerate(buffer):
    filename = str('I:/Example/{}.png'.format(count))
    imwrite(filename, image)
del buffer
I haven't found a way to record a video using pypylon; it seems to be a pretty light wrapper around Pylon. However, I have found a way to save a video using imageio:
from imageio import get_writer

with get_writer('I:/output-filename.mp4', fps=fps) as writer:
    # Some stuff with the frames
The above can be used with .mov, .avi, .mpg, .mpeg, .mp4, .mkv or .wmv, so long as the FFmpeg program is available. How you will install this program depends on your operating system. See this link for details on the parameters you can use.
Then, simply replace the call to imwrite with:
writer.append_data(image)
ensuring that this occurs in the with block.
An example implementation:
import os
import pypylon
from imageio import get_writer

while True:
    try:
        fsamp = float(input('Sampling rate (Hz): '))
        break
    except ValueError:
        print('Invalid input.')

time_exposure = 1000000 / fsamp

available_cameras = pypylon.factory.find_devices()
cam = pypylon.factory.create_device(available_cameras[0])
cam.open()
cam.properties['ExposureTime'] = time_exposure

buffer = tuple(cam.grab_images(2000))

with get_writer(
    'I:/output-filename.mkv',  # mkv players often support H.264
    fps=fsamp,                 # FPS is in units Hz; should be real-time
    codec='libx264',           # when used properly, this is basically
                               # "PNG for video" (i.e. lossless)
    quality=None,              # disables variable compression
    pixelformat='rgb24',       # keep it as RGB colours
    ffmpeg_params=[            # compatibility with older library versions
        '-preset',             # set to faster, veryfast, superfast, ultrafast
        'fast',                # for higher speed but worse compression
        '-crf',                # quality; set to 0 for lossless, but keep in mind
        '11'                   # that the camera probably adds static anyway
    ]
) as writer:
    for image in buffer:
        writer.append_data(image)

del buffer
I want to stream the audio of my microphone (that is being recorded via pyaudio) via Flask to any client that connects.
This is where the audio comes from:
def getSound(self):
    # Current chunk of audio data
    data = self.stream.read(self.CHUNK)
    self.frames.append(data)
    wave = self.save(list(self.frames))
    return data
Here's my Flask code:
@app.route('/audiofeed')
def audiofeed():
    def gen(microphone):
        while True:
            sound = microphone.getSound()
            #with open('tmp.wav', 'rb') as myfile:
            #    yield myfile.read()
            yield sound

    return Response(stream_with_context(gen(Microphone())))
And this is the client:
<audio controls>
    <source src="{{ url_for('audiofeed') }}" type="audio/x-wav;codec=pcm">
    Your browser does not support the audio element.
</audio>
It does work sometimes, but most of the time I'm getting "[Errno 32] Broken pipe".
When uncommenting the with open('tmp.wav') part (self.save() optionally takes all previous frames and saves them in tmp.wav), I kind of get a stream, but all that comes out of the speakers is a "clicking" noise.
I'm open to any suggestions. How do I get the input of my microphone live-streamed (no pre-recording!) to a web browser?
Thanks!
Try this, it worked for me. The shell command "cat" works perfectly; see the code.
I am using Flask.
import subprocess
import os
import inspect
from flask import Flask
from flask import Response

app = Flask(__name__)  # Flask app object (created elsewhere in the original project)

@app.route('/playaudio')
def playaudio():
    sendFileName = ""

    def generate():
        # get_list_all_files_name returns all files inside the given folder
        filesAudios = get_list_all_files_name(currentDir + "/streamingAudios/1")
        # audioPath is the path of an audio file on the system
        for audioPath in filesAudios:
            data = subprocess.check_output(['cat', audioPath])
            yield data

    return Response(generate(), mimetype='audio/mp3')
This question was asked a long time ago, but since I spent an entire day figuring out how to implement the same thing, I want to give the answer. Maybe it will be helpful for somebody.
The "[Errno 32] Broken pipe" error comes from the fact that the client cannot play the audio and closes the stream.
The audio cannot be played due to the absence of a header in the data stream. You can easily create the header using the genHeader(sampleRate, bitsPerSample, channels, samples) function from the code here. This header has to be attached at least to the first chunk of sent data (chunk = header + data). Pay attention that audio can ONLY be played until the client reaches the file size given in the header, so you have to specify it there. A workaround is to set some big file size in the header, e.g. 2 GB.
Instead of datasize = len(samples) * channels * bitsPerSample in the header function, write datasize = 2000*10**6.
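For reference, here is a minimal sketch of such a header function. This is my own reconstruction of a standard 44-byte PCM WAV header, not the linked code, and it bakes in the oversized data length described above:
import struct

def genHeader(sampleRate, bitsPerSample, channels):
    # Standard 44-byte RIFF/WAVE header for raw PCM data.
    # The data size is deliberately set to ~2 GB (see above) so the
    # client keeps playing instead of stopping after the first chunk.
    datasize = 2000 * 10**6
    o = bytes("RIFF", 'ascii')                                          # chunk ID
    o += struct.pack('<I', datasize + 36)                               # chunk size: rest of header + data
    o += bytes("WAVE", 'ascii')                                         # format
    o += bytes("fmt ", 'ascii')                                         # sub-chunk 1 ID
    o += struct.pack('<I', 16)                                          # sub-chunk 1 size (PCM)
    o += struct.pack('<H', 1)                                           # audio format: 1 = PCM
    o += struct.pack('<H', channels)                                    # number of channels
    o += struct.pack('<I', sampleRate)                                  # sample rate
    o += struct.pack('<I', sampleRate * channels * bitsPerSample // 8)  # byte rate
    o += struct.pack('<H', channels * bitsPerSample // 8)               # block align
    o += struct.pack('<H', bitsPerSample)                               # bits per sample
    o += bytes("data", 'ascii')                                         # sub-chunk 2 ID
    o += struct.pack('<I', datasize)                                    # sub-chunk 2 size
    return o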
def gen_audio():
    CHUNK = 512
    sampleRate = 44100
    bitsPerSample = 16
    channels = 2
    wav_header = genHeader(sampleRate, bitsPerSample, channels)

    audio = AudioRead()
    data = audio.get_audio_chunck()
    chunck = wav_header + data
    while True:
        yield chunck
        data = audio.get_audio_chunck()
        chunck = data
After lots of research and tinkering I finally found the solution.
Basically it came down to serving pyaudio.paFloat32 audio data through WebSockets using Flask's SocketIO implementation and receiving/playing the data in JavaScript using HTML5's AudioContext.
As this requires quite some code, I think it would not be a good idea to post it all here. Instead, feel free to check out the project I'm using it in: simpleCam
The relevant code is in:
- noise_detector.py (recording)
- server.py (WebSocket transfer)
- static/js/player.js (receiving/playing)
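For orientation only, a very rough server-side sketch of that approach (this is NOT the simpleCam code; the event name 'audio-chunk' and all parameters are made up for illustration):
import pyaudio
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

CHUNK = 1024
pa = pyaudio.PyAudio()
# open the microphone as float32 samples, matching pyaudio.paFloat32
stream = pa.open(format=pyaudio.paFloat32, channels=1, rate=44100,
                 input=True, frames_per_buffer=CHUNK)

def stream_audio():
    # read raw float32 bytes from the microphone and broadcast them
    # to every connected Socket.IO client
    while True:
        data = stream.read(CHUNK)
        socketio.emit('audio-chunk', data)
        socketio.sleep(0)  # yield to the event loop

if __name__ == '__main__':
    socketio.start_background_task(stream_audio)
    socketio.run(app)
On the client side, an AudioContext would then decode each received chunk of float32 samples and schedule it for playback.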
Thanks everyone for the support!
I'm trying to play a video in mp4 format but it's not working.
In the console I execute this line and it works:
gst-launch playbin uri=rtmp://localhost:1935/files/video.mp4
But if I change to version 1.0, only the audio works:
gst-launch-1.0 playbin uri=rtmp://localhost:1935/files/video.mp4
In Python I have the following code:
self.player = Gst.Pipeline.new("player")
source = Gst.ElementFactory.make("filesrc", "file-source")
demuxer = Gst.ElementFactory.make("mp4mux", "demuxer")
demuxer.connect("pad-added", self.demuxer_callback)
self.video_decoder = Gst.ElementFactory.make("x264enc", "video-decoder")
self.audio_decoder = Gst.ElementFactory.make("vorbisdec", "audio-decoder")
audioconv = Gst.ElementFactory.make("audioconvert", "converter")
audiosink = Gst.ElementFactory.make("autoaudiosink", "audio-output")
videosink = Gst.ElementFactory.make("autovideosink", "video-output")
self.queuea = Gst.ElementFactory.make("queue", "queuea")
self.queuev = Gst.ElementFactory.make("queue", "queuev")
colorspace = Gst.ElementFactory.make("videoconvert", "colorspace")
self.player.add(source)
self.player.add(demuxer)
self.player.add(self.video_decoder)
self.player.add(self.audio_decoder)
self.player.add(audioconv)
self.player.add(audiosink)
self.player.add(videosink)
self.player.add(self.queuea)
self.player.add(self.queuev)
self.player.add(colorspace)
source.link(demuxer)
self.queuev.link(self.video_decoder)
self.video_decoder.link(colorspace)
colorspace.link(videosink)
self.queuea.link(self.audio_decoder)
self.audio_decoder.link(audioconv)
audioconv.link(audiosink)
but I get this error:
Error: Error in the internal data flow. gstbasesrc.c(2865): gst_base_src_loop (): /GstPipeline:player/GstFileSrc:file-source:
streaming task paused, reason not-linked (-1)
What could be happening? I think I am not decoding it properly.
You are not linking the demuxer pads to your queues. Demuxers have 'sometimes' pads, so you need to listen to their pad-added signal and link in that callback. Remember to check the pad caps once you get them and link to the appropriate branch of your pipeline.
You can read about dynamic pads here: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/chapter-pads.html#section-pads-dynamic
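As a rough sketch only (the method name demuxer_callback is taken from the question; the caps prefixes assume an ordinary video/audio mp4), the callback could look something like this:
def demuxer_callback(self, demuxer, pad):
    # a 'sometimes' pad just appeared: check its caps and link it
    # to the matching branch of the pipeline
    caps = pad.get_current_caps()
    name = caps.get_structure(0).get_name()
    if name.startswith("video/"):
        pad.link(self.queuev.get_static_pad("sink"))
    elif name.startswith("audio/"):
        pad.link(self.queuea.get_static_pad("sink"))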
You have in your code:
demuxer = Gst.ElementFactory.make("mp4mux", "demuxer")
demuxer.connect("pad-added", self.demuxer_callback)
I hope this is a cut/paste error, as demuxing with a mux will not work. I believe for an .mp4 file, the normal demuxer (if you are choosing one by hand) is qtdemux.
You could also use decodebin to decode the file for you.
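For reference, a minimal sketch of the decodebin route (standard GStreamer 1.0 elements; the file location is a placeholder), where decodebin handles the demuxing and decoding and its pads are linked for you:
pipeline = Gst.parse_launch(
    "filesrc location=video.mp4 ! decodebin name=dec "
    "dec. ! queue ! videoconvert ! autovideosink "
    "dec. ! queue ! audioconvert ! autoaudiosink"
)
pipeline.set_state(Gst.State.PLAYING)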