Streaming mp2t video from IP Camera directly in Python

I found a project on GitHub here, which reverse engineers the Tapo app for TP-Link cameras so you can control/stream the camera remotely (since TP-Link doesn't make a PC version of the app, no idea why). The code works perfectly and I'm able to view the camera from my office. The catch is that the repo downloads the stream into an mp4 file, which you then play back later.
However, I want to build a GUI based on this repo with Tkinter, and I want to stream the video directly instead of downloading it. I've tried outputting it with OpenCV, but the server seems to return the data byte by byte in video/mp2t format, not frame by frame. This is my first question, so sorry for the long speech.
Does anyone have any idea? Thanks
The code used to write the stream into mp4 file:
while True:
    boundaryLine = sslPipe.readline().decode('ascii')
    # print(boundaryLine)
    if len(boundaryLine) == 0:
        print('Reached EOL', file=sys.stderr)
        break
    if boundaryLine != '--' + replyHeaders.get_boundary() + '\r\n':
        print('Unexpected boundary: %s' % boundaryLine, file=sys.stderr)
        sys.exit(1)
    chunkHeaders = http.client.parse_headers(sslPipe)
    chunkType = chunkHeaders.get_content_type()
    chunkLength = int(chunkHeaders.get('content-length'))
    chunk = sslPipe.read(chunkLength)
    if not chunkType.startswith('video/'):
        print('Ignoring ' + chunkType, file=sys.stderr)
        print("Printing command data sized at: %s bytes" % str(len(chunk)), file=sys.stderr)
    elif chunkType.startswith('video/'):
        f.write(chunk)  # Save video here
    boundaryLine = sslPipe.readline().decode('ascii')
    if boundaryLine != '\r\n':
        print('End of chunk new line missing', file=sys.stderr)
        print(boundaryLine, file=sys.stderr)
        break
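
One way to view the stream live instead of saving it (a sketch, not part of the repo): pipe the mp2t chunks into an ffmpeg subprocess and read decoded raw BGR frames back, which OpenCV or Tkinter can then display. The 1920x1080 resolution below is an assumption, use your camera's actual frame size, and feed/read from separate threads so the two pipes don't block each other.

import subprocess
import numpy as np

WIDTH, HEIGHT = 1920, 1080  # assumed resolution -- match your camera

decoder = subprocess.Popen(
    ["ffmpeg", "-loglevel", "quiet", "-i", "pipe:0",
     "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:1"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

def feed(chunk):
    # Call this where the loop above does f.write(chunk).
    decoder.stdin.write(chunk)

def next_frame():
    # Returns one decoded frame as an HxWx3 numpy array, or None at end of stream.
    raw = decoder.stdout.read(WIDTH * HEIGHT * 3)
    if len(raw) < WIDTH * HEIGHT * 3:
        return None
    return np.frombuffer(raw, dtype=np.uint8).reshape((HEIGHT, WIDTH, 3))

Each frame returned by next_frame() can then be shown with cv2.imshow or converted to a PhotoImage for the Tkinter window.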

Related

Enable cache on ffmpeg to record streaming

Right now I'm using streamlink and ffmpeg to record streams and save them to a file, but often the saved video file has a lot of lag. I found this link https://www.reddit.com/r/Twitch/comments/62601b/laggy_stream_on_streamlinklivestreamer_but_not_on/
where they claim the lag comes from not having the cache enabled on the player.
I tried adding the options -hls_allow_cache allowcache -segment_list_flags cache, with the result that the ffmpeg process runs for roughly 8 seconds, then ends and immediately restarts without producing a video file. If I leave out those two options, the video is recorded correctly, but most of the time with some lag.
Obviously, if I watch the stream in the browser I have no lag problem.
this is the code
from subprocess import Popen
from streamlink import Streamlink, NoPluginError, PluginError

streamlink = Streamlink()
# this code is just a snippet; it is inside a while loop to restart the process
try:
    streams = streamlink.streams(m3u8_url)
    stream_url = streams['best'].url
    # note: the hls options do not seem to work
    ffmpeg_process = Popen(
        ["ffmpeg", "-hide_banner", "-loglevel", "panic", "-y",
         "-hls_allow_cache", "allowcache", "-segment_list_flags", "cache",
         "-i", stream_url, "-fs", "10M", "-c", "copy",
         "-bsf:a", "aac_adtstoasc", fileName])
    ffmpeg_process.wait()
except NoPluginError:
    print("noplugin")
except PluginError:
    print("plugin")
except Exception as e:
    print(e)
What are the best options to enable the cache and limit the lag as much as possible?
You can read the FFmpeg StreamingGuide for more details on latency. For instance, there is an option -fflags nobuffer which might help; it is usually used when receiving streams to reduce latency.
As you can read here about nobuffer:
Reduce the latency introduced by buffering during initial input streams analysis.
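For example, -fflags nobuffer is an input option, so it goes before the -i it applies to. A variation of the Popen call from the question (a sketch, not tested against this particular stream):

ffmpeg_process = Popen(
    ["ffmpeg", "-hide_banner", "-loglevel", "panic", "-y",
     "-fflags", "nobuffer",
     "-i", stream_url, "-fs", "10M", "-c", "copy",
     "-bsf:a", "aac_adtstoasc", fileName])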
I simply solved the lag problem by avoiding ffmpeg for saving the video and instead using streamlink directly to write the .mp4 file:
from streamlink import Streamlink, NoPluginError, PluginError
from streamlink.exceptions import StreamError

streamlink = Streamlink()
try:
    streams = streamlink.streams(m3u8_url)
    stream = streams['480p']          # a Stream object, not a URL
    fd = stream.open()
    out = open(fileName, "wb")
    while True:
        data = fd.read(1024)
        if not data:                  # read() returns b"" when the stream ends
            break
        out.write(data)
    fd.close()
    out.flush()
    out.close()
except NoPluginError:
    # handle exception
    pass
except PluginError:
    # handle exception
    pass
except StreamError:
    # handle exception
    pass
except Exception as e:
    # handle exception
    pass

Post beanstalk queue data to Mssql using python script

Guys, I'm currently developing an offline ALPR solution.
So far I've used the OpenALPR software running on Ubuntu. Using a Python script I found on Stack Overflow, I'm able to read the beanstalk queue data (plate number & metadata) from the ALPR, but I need to send this data from the beanstalk queue to an MSSQL database. Does anyone know how to export beanstalk queue data or JSON data to the database? The code below works against localhost; how do I modify it to automatically post the data to the MSSQL database? The data in the beanstalk queue is in JSON format [key=value].
The read & write CSV part was my addition, to see if I could save the JSON data as a CSV on the local disk.
import beanstalkc
import json
from pprint import pprint

beanstalk = beanstalkc.Connection(host='localhost', port=11300)
TUBE_NAME = 'alprd'
text_file = open('output.csv', 'w')

# For diagnostics, print out a list of all the tubes available in Beanstalk.
print beanstalk.tubes()

# For diagnostics, print the number of items on the current alprd queue.
try:
    pprint(beanstalk.stats_tube(TUBE_NAME))
except beanstalkc.CommandFailed:
    print "Tube doesn't exist"

# Watch the "alprd" tube; this is where the plate data is.
beanstalk.watch(TUBE_NAME)

while True:
    # Wait for a second to get a job. If there is a job, process it and delete it from the queue.
    # If not, return to sleep.
    job = beanstalk.reserve(timeout=5000)
    if job is None:
        print "No plates yet"
    else:
        plates_info = json.loads(job.body)
        # Do something with this data (e.g., match a list, open a gate, etc.).
        # if 'data_type' not in plates_info:
        #     print "This shouldn't be here... all OpenALPR data should have a data_type"
        # if plates_info['data_type'] == 'alpr_results':
        #     print "Found an individual plate result!"
        if plates_info['data_type'] == 'alpr_group':
            print "Found a group result!"
            print '\tBest plate: {} ({:.2f}% confidence)'.format(
                plates_info['candidates'][0]['plate'],
                plates_info['candidates'][0]['confidence'])
            make_model = plates_info['vehicle']['make_model'][0]['name']
            print '\tVehicle information: {} {} {}'.format(
                plates_info['vehicle']['year'][0]['name'],
                plates_info['vehicle']['color'][0]['name'],
                ' '.join([word.capitalize() for word in make_model.split('_')]))
        elif plates_info['data_type'] == 'heartbeat':
            print "Received a heartbeat"
        text_file.write('Best plate')
        # Delete the job from the queue when it is processed.
        job.delete()

text_file.close()
AFAIK there is no way to directly export data from beanstalkd.
What you have makes sense: stream the data out of a tube into a flat file, or perform an insert into the DB directly.
Given the IOPS beanstalkd can produce, it might still be a reasonable solution (depends on what performance you are expecting).
Try asking https://groups.google.com/forum/#!forum/beanstalk-talk as well.
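If you go the direct-insert route, a minimal sketch with pyodbc might look like the following. The connection string, the plate_reads table, and its columns are all hypothetical; adapt them to your own MSSQL schema.

import json
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=alpr;UID=user;PWD=password")
cursor = conn.cursor()

def store_plate(job_body):
    # Parse one beanstalk job and insert the best candidate into MSSQL.
    plates_info = json.loads(job_body)
    if plates_info.get('data_type') != 'alpr_group':
        return
    best = plates_info['candidates'][0]
    cursor.execute(
        "INSERT INTO plate_reads (plate, confidence, raw_json) VALUES (?, ?, ?)",
        best['plate'], best['confidence'], json.dumps(plates_info))
    conn.commit()

Calling store_plate(job.body) in place of the CSV write in the loop above would keep everything else unchanged.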

Stream audio from pyaudio with Flask to HTML5

I want to stream the audio of my microphone (that is being recorded via pyaudio) via Flask to any client that connects.
This is where the audio comes from:
def getSound(self):
    # Current chunk of audio data
    data = self.stream.read(self.CHUNK)
    self.frames.append(data)
    wave = self.save(list(self.frames))
    return data
Here's my flask-code:
@app.route('/audiofeed')
def audiofeed():
    def gen(microphone):
        while True:
            sound = microphone.getSound()
            # with open('tmp.wav', 'rb') as myfile:
            #     yield myfile.read()
            yield sound

    return Response(stream_with_context(gen(Microphone())))
And this is the client:
<audio controls>
<source src="{{ url_for('audiofeed') }}" type="audio/x-wav;codec=pcm">
Your browser does not support the audio element.
</audio>
It works sometimes, but most of the time I'm getting "[Errno 32] Broken pipe".
When I uncomment that with open('tmp.wav') part (self.save() optionally takes all previous frames and saves them in tmp.wav), I kind of get a stream, but all that comes out of the speakers is a clicking noise.
I'm open to any suggestions. How do I get the input of my microphone live-streamed (no pre-recording!) to a web browser?
Thanks!
Try this, it worked for me. The shell command "cat" works perfectly; see the code.
I am using Flask.
import subprocess
import os
import inspect
from flask import Flask
from flask import Response

@app.route('/playaudio')
def playaudio():
    sendFileName = ""

    def generate():
        # get_list_all_files_name returns the names of all files inside the folder
        filesAudios = get_list_all_files_name(currentDir + "/streamingAudios/1")
        # audioPath is the path of an audio file on the system
        for audioPath in filesAudios:
            data = subprocess.check_output(['cat', audioPath])
            yield data

    return Response(generate(), mimetype='audio/mp3')
This question was asked a long time ago, but since I spent an entire day figuring out how to implement the same thing, I want to give the answer. Maybe it will be helpful for somebody.
The "[Errno 32] Broken pipe" error comes from the fact that the client cannot play the audio and closes the stream.
The audio cannot be played because the data stream has no header. You can easily create the header using the genHeader(sampleRate, bitsPerSample, channels, samples) function from the code here. This header has to be attached at least to the first chunk of sent data (chunk = header + data). Note that the audio can be played ONLY until the client reaches the file size you specify in the header. So a workaround is to set some big file size in the header, e.g. 2 GB.
Instead of datasize = len(samples) * channels * bitsPerSample in the header function, write datasize = 2000*10**6.
def gen_audio():
    CHUNK = 512
    sampleRate = 44100
    bitsPerSample = 16
    channels = 2
    wav_header = genHeader(sampleRate, bitsPerSample, channels)

    audio = AudioRead()
    data = audio.get_audio_chunck()
    chunck = wav_header + data
    while True:
        yield (chunck)
        data = audio.get_audio_chunck()
        chunck = data
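
For reference, a minimal sketch of what such a genHeader helper can look like: a standard 44-byte RIFF/WAVE header with the data size deliberately inflated as described above. This is an illustration, not the exact code from the linked post.

import struct

def genHeader(sampleRate, bitsPerSample, channels):
    datasize = 2000 * 10 ** 6                       # pretend ~2 GB of audio follows
    o = bytes("RIFF", 'ascii')
    o += struct.pack('<I', datasize + 36)           # overall size minus the first 8 bytes
    o += bytes("WAVE", 'ascii')
    o += bytes("fmt ", 'ascii')
    o += struct.pack('<I', 16)                      # length of the fmt sub-chunk
    o += struct.pack('<H', 1)                       # audio format 1 = PCM
    o += struct.pack('<H', channels)
    o += struct.pack('<I', sampleRate)
    o += struct.pack('<I', sampleRate * channels * bitsPerSample // 8)   # byte rate
    o += struct.pack('<H', channels * bitsPerSample // 8)                # block align
    o += struct.pack('<H', bitsPerSample)
    o += bytes("data", 'ascii')
    o += struct.pack('<I', datasize)                # data sub-chunk size
    return o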
After lots of research and tinkering I finally found the solution.
Basically it came down to serving pyaudio.paFloat32 audio data through WebSockets using Flask's SocketIO implementation, and receiving/playing the data in JavaScript using HTML5's AudioContext.
As this requires quite some code, I think it would not be a good idea to post it all here. Instead, feel free to check out the project I'm using it in: simpleCam
The relevant code is in:
- noise_detector.py (recording)
- server.py (WebSocket transfer)
- static/js/player.js (receiving/playing)
Thanks everyone for the support!

Downloading Streams Simultaneously with Python 3.5

EDIT: I think I've figured out a solution using subprocess.Popen with separate .py files for each stream being monitored. It's not pretty, but it works.
I'm working on a script that monitors a streaming site for several different accounts and records them when they are online. I am using the livestreamer package to download a stream when it comes online, but the problem is that the program will only record one stream at a time. The program loops through a list and, if a stream is online, starts recording with subprocess.call(["livestreamer"... The problem is that once the program starts recording, it stops going through the loop and doesn't check or record any of the other livestreams. I've tried using Process and Thread, but none of these seem to work. Any ideas?
Code below. Asterisks are not literally part of the code.
import os, urllib.request, time, subprocess, datetime, random

status = {
    "********": False,
    "********": False,
    "********": False
}

def gen_name(tag):
    return stuff  # <<Bunch of unimportant code stuff here.

def dl(tag):
    subprocess.call(["livestreamer", "********.com/" + tag, "best",
                     "-o", ".\\tmp\\" + gen_name(tag)])

def loopCheck():
    while True:
        for tag in status:
            data = urllib.request.urlopen("http://*******.com/" + tag + "/").read().decode()
            if data.find(".m3u8") != -1:
                print(tag + " is online!")
                if status[tag] == False:
                    status[tag] = True
                    dl(tag)
            else:
                print(tag + " is offline.")
                status[tag] = False
        time.sleep(15)

loopCheck()
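
One way to keep the check loop running is to start each recording with a non-blocking subprocess.Popen instead of the blocking subprocess.call, roughly like this (a sketch that reuses the names from the snippet above; the site URLs stay as placeholders):

import subprocess

recorders = {}  # tag -> Popen handle of the running livestreamer process

def dl(tag):
    recorders[tag] = subprocess.Popen(
        ["livestreamer", "********.com/" + tag, "best",
         "-o", ".\\tmp\\" + gen_name(tag)])

def reap_finished():
    # Call this once per pass of loopCheck() so finished recordings
    # are marked offline and can be restarted later.
    for tag, proc in list(recorders.items()):
        if proc.poll() is not None:
            status[tag] = False
            del recorders[tag]

Since Popen returns immediately, loopCheck() keeps polling the other accounts while recordings run in the background.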

Speeding Up Data Transfer Using Pyserial in Python

I have created a data transfer program using Python and the pyserial module. I am currently using it to send a text file over a radio link between a Raspberry Pi and my computer. The problem is that the file I am trying to send, which contains 5000 lines of text and is 93.0 KB in size, takes quite a while to send; to be exact, about a full minute. I need this to be done within seconds. Here is the code. I am sure there are many optimizations that could be made with file reading and such that would increase the transfer speed. My radio device has a data rate of 250 kbps, which is obviously not being reached. Any help would be greatly appreciated.
Code to send (located on the Raspberry Pi):
def s_file():
    print 'start'
    readline = lambda: iter(lambda: ser.read(1), "\n")
    name = "".join(readline())
    print name
    file_loc = directory_name + name
    sleep(1)
    print('Waiting for command from client to send file...')
    while "".join(readline()) != "<<SENDFILE>>":
        pass
    with open(file_loc) as FileObj:
        for lines in FileObj:
            ser.write(lines)
    ser.write("\n<<EOF>>\n")
    print 'done'
Code to receive (on my laptop):
def r_f_bird(self):  # send command to bird to start func
    if ser_open == True:
        readline = lambda: iter(lambda: ser.read(1), "\n")
        NAME = self.tb2.get()
        ser.write('/' + NAME)
        print NAME
        sleep(0.5)
        ser.write('\n<<SENDFILE>>\n')
        start = clock()
        with open(str(NAME), "wb") as outfile:
            while True:
                line = "".join(readline())
                if line == "<<EOF>>":
                    break
                print >> outfile, line
        elapsed = clock() - start
        print elapsed
        ser.flush()
    else:
        pass
Perhaps the overhead of ser.read(1) is slowing things down. It seems like you have a \n at the end of each line, so try using pySerial's readline() method rather than rolling your own. Changing line = "".join(readline()) to line = ser.readline() ought to do it. You will also need to change your loop end condition to == "<<EOF>>\n".
You may also need to add a ser.flush() on the writing side.
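Putting that together on the receiving side, roughly (a sketch that assumes the same ser, NAME, and terminator conventions as the code above):

with open(str(NAME), "wb") as outfile:
    while True:
        line = ser.readline()        # reads up to and including the trailing "\n"
        if line == "<<EOF>>\n":      # on Python 3, compare against b"<<EOF>>\n" instead
            break
        outfile.write(line)
ser.flush()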
