I'm trying to create a server script that accepts audio packets, plays them, and saves the data in real time. The problem is that saving the data causes the streaming to stall until the save completes. This happens with both multiprocessing and threading. I'd appreciate any help!
# stream received audio
def stream_audio():
    global recording
    # instantiate the streaming service
    p = pyaudio.PyAudio()
    stream = p.open(format=p.get_format_from_width(sample_size),
                    channels=channel_num,
                    rate=sample_rate,
                    output=True,
                    frames_per_buffer=CHUNK)
    # playing and saving
    while recording:
        try:
            frame = PrePlayQue.get(True, timer)
            stream.write(frame)
            SaveQue.put(frame)
            print("streamed a frame")
        # if the queue stays empty for longer than timer
        except queue.Empty:
            print("queue is empty")
            recording = False
        # saving data - save if it exceeds 3 MB or after a prolonged lack of audio input
        finally:
            if (SaveQue.qsize() > 3*1024 or not recording) and SaveQue.qsize() > 1:
                print("saving to file")
                #save_data = threading.Thread(target=save_audio_data(), args=())
                #save_data.start()
                save_data = multiprocessing.Process(target=save_audio_data(),
                                                    args=())
                save_data.start()
    # close the streaming service
    stream.stop_stream()
    stream.close()
    p.terminate()
    return
# function to save data in a file
def save_audio_data():
    # create the target directory if it does not exist yet
    file_dir = os.path.abspath(os.sep) + "Audio recordings"
    if not os.path.exists(file_dir):
        try:
            os.makedirs(file_dir)
        # re-raise directory creation errors not caused by it already existing
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
    # retrieve audio from the save queue
    audio = SaveQue.get()
    FrameNo = SaveQue.qsize()
    for i in range(FrameNo):
        audio += SaveQue.get()
    # save the audio data
    t = datetime.datetime.now()
    file_name = 'record' + ' ' + str(t.year) + '_' + str(t.month) + '_' + str(t.day) + ' ' + str(t.hour) + '_' + str(t.minute) + '_' + str(t.second)
    data = wave.open(file_dir + '/' + file_name + '.wav', 'wb')
    data.setnchannels(channel_num)
    data.setframerate(sample_rate)
    data.setsampwidth(sample_size)
    data.writeframes(audio)
    data.close()
    # make sure memory is freed
    del audio
    print("saved!")
    return
It is important to note that saving works and the stream plays fine, but as soon as the thread/process that handles the writing is started, the streaming function halts until the save finishes.
I am developing a console application that streams mp4 videos from a local directory, one after another. The video stream is created and broadcast over HTTP using Flask's Response. Later I also needed to broadcast audio from .wav files.
I wanted to implement it so that a single route, app.route('/video_feed'), would broadcast video and audio in arbitrary order, effectively as one stream.
The stream is received in VLC.
I implemented the check in a separate function: depending on the file type (the extension), different code runs, mp4 for video and wav for audio.
But I ran into a problem: after a video, the audio does not start, and vice versa, after audio, the video does not start. Using simple prints on the console, I tracked that the script does send the audio stream after the video, but at some point it just stops.
If anyone has experienced this or has thoughts on it, please let me know.
def pere():
    for x in main.full_akt_table:  # iterate over the array of videos
        path = main.path_video + x
        full_name = os.path.basename(path)
        name = os.path.splitext(full_name)[1]
        if name == '.wav':  # for audio
            with open(path, "rb") as fwav:
                data = fwav.read(1024)
                while data:
                    yield data
        else:  # for video
            camera = cv2.VideoCapture(path)
            fps = camera.get(5)
            while True:
                cv2.waitKey(int(600 / int(fps)))
                success, frame = camera.read()
                if not success:
                    break
                else:
                    ret, buffer = cv2.imencode('.jpg', frame)
                    frame = buffer.tobytes()
                    yield (b'--frame\r\n'
                           b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/video_feed')  # route
def video_feed():
    while True:  # launch loop, returns the response
        return Response(pere())
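One likely culprit in the .wav branch above (an observation about the posted code, not a confirmed fix): `fwav.read(1024)` is called only once, before the loop, so `while data:` yields the same 1024 bytes forever and the generator never advances to the next file. A chunked-read generator has to read inside the loop. A minimal stdlib sketch, using an in-memory stand-in for the .wav file:

```python
import io

def read_chunks(fileobj, chunk_size=1024):
    """Yield successive chunks until EOF; the read happens inside
    the loop, so the generator actually advances through the file."""
    while True:
        data = fileobj.read(chunk_size)
        if not data:
            break
        yield data

# usage with an in-memory stand-in for the .wav file
fake_wav = io.BytesIO(b"x" * 2500)
chunks = list(read_chunks(fake_wav))
print([len(c) for c in chunks])  # [1024, 1024, 452]
```

A second thing worth checking: the video branch wraps each frame in multipart framing (`--frame` boundary plus a Content-Type header), while the audio branch yields raw bytes with no framing, so a client parsing the multipart stream may simply discard them.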
Hey guys, I programmatically created a video using Moviepy and Gizeh. Right now it saves the final video as an mp4 file.
Is it possible to upload that video frame by frame to YouTube (livestream) without saving it on my computer?
Hi, thanks for posting the question. Yes, it is possible! Since you specifically mentioned YouTube, I would refer to these two references:
PYTHON FFMPEG VIDEO STREAMING DOCS
Delivering Live YouTube Content via Dash
I wish you all the best. Do write if you have more questions.
You can use the VidGear library's WriteGear API with its FFmpeg backend, which can easily upload any live video frame by frame to YouTube (livestream) over the RTMP protocol in a few lines of Python code, as follows:
Without Audio
# import required libraries
from vidgear.gears import CamGear
from vidgear.gears import WriteGear
import cv2
# define and open video source
stream = CamGear(source="/home/foo/foo.mp4", logging=True).start()
# define required FFmpeg parameters for your writer
output_params = {
"-clones": ["-f", "lavfi", "-i", "anullsrc"],
"-vcodec": "libx264",
"-preset": "medium",
"-b:v": "4500k",
"-bufsize": "512k",
"-pix_fmt": "yuv420p",
"-f": "flv",
}
# [WARNING] Change your YouTube-Live Stream Key here:
YOUTUBE_STREAM_KEY = "xxxx-xxxx-xxxx-xxxx-xxxx"
# Define writer with defined parameters
writer = WriteGear(
output_filename="rtmp://a.rtmp.youtube.com/live2/{}".format(YOUTUBE_STREAM_KEY),
logging=True,
**output_params
)
# loop over frames
while True:
    # read frames from stream
    frame = stream.read()
    # break if frame is None (end of stream)
    if frame is None:
        break
    # {do something with the frame here}
    # write frame to writer
    writer.write(frame)

# safely close video stream
stream.stop()

# safely close writer
writer.close()
With Audio
# import required libraries
from vidgear.gears import CamGear
from vidgear.gears import WriteGear
import cv2
# define video source
VIDEO_SOURCE = "/home/foo/foo.mp4"
# Open stream
stream = CamGear(source=VIDEO_SOURCE, logging=True).start()
# define required FFmpeg optimizing parameters for your writer
# [NOTE]: Added VIDEO_SOURCE as audio-source
# [WARNING]: VIDEO_SOURCE must contain audio
output_params = {
"-i": VIDEO_SOURCE,
"-acodec": "aac",
"-ar": 44100,
"-b:a": 712000,
"-vcodec": "libx264",
"-preset": "medium",
"-b:v": "4500k",
"-bufsize": "512k",
"-pix_fmt": "yuv420p",
"-f": "flv",
}
# [WARNING]: Change your YouTube-Live Stream Key here:
YOUTUBE_STREAM_KEY = "xxxx-xxxx-xxxx-xxxx-xxxx"
# Define writer with defined parameters
writer = WriteGear(
output_filename="rtmp://a.rtmp.youtube.com/live2/{}".format(YOUTUBE_STREAM_KEY),
logging=True,
**output_params
)
# loop over frames
while True:
    # read frames from stream
    frame = stream.read()
    # break if frame is None (end of stream)
    if frame is None:
        break
    # {do something with the frame here}
    # write frame to writer
    writer.write(frame)

# safely close video stream
stream.stop()

# safely close writer
writer.close()
Reference: https://abhitronix.github.io/vidgear/latest/help/writegear_ex/#using-writegears-compression-mode-for-youtube-live-streaming
So, I'm logging temperature and humidity data from a DHT22 hooked up to the GPIO on a raspberry pi. It logs everything correctly - but I can only see the updated log after I stop logger.py running.
I think the problem is that I'm not closing the file after writing to it, but I'm not sure. Can I just add an f = open(xxx) and f.close() to the loop so that it 'saves' every time it logs?
import os
import time
import Adafruit_DHT

DHT_SENSOR = Adafruit_DHT.DHT22
DHT_PIN = 4

try:
    f = open('/home/pi/temphumid/log.csv', 'a+')
    if os.stat('/home/pi/temphumid/log.csv').st_size == 0:
        f.write('Date,Time,Temperature,Humidity\r\n')
except:
    pass

while True:
    humidity, temperature = Adafruit_DHT.read_retry(DHT_SENSOR, DHT_PIN)
    if humidity is not None and temperature is not None:
        f.write('{0},{1},{2:0.1f}*C,{3:0.1f}%\r\n'.format(time.strftime('%m/%d/%y'), time.strftime('%H:%M:%S'), temperature, humidity))
    else:
        print("Failed to retrieve data from humidity sensor")
    time.sleep(60)
expected:
log.csv is updated, so that if I use tail log.csv I can see the up to date data.
actual:
log.csv doesn't update until I stop logger.py from running (using sigint from htop as it is currently run as a cronjob on boot).
Every time we open a file for writing, we need to close it (or flush it) to push the buffered output to disk:
fp = open("./file.txt", "w")
fp.write("Hello, World")
fp.close()
To avoid calling the close() method explicitly every time, we can use open() as a context manager, which automatically closes the file when execution leaves the block:
with open("./file.txt", "w") as fp:
fp.write("Hello, World")
Here we do not need to call close() ourselves; the data is pushed to the file when the block ends.
Alternatively, write the data and call f.flush() followed by os.fsync(f.fileno()), which pushes the data all the way to disk; you will then be able to open the file in a different program and see the changes in real time.
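A minimal sketch of that flush-and-fsync approach applied to the logging loop above, with a temporary file and a hard-coded fake reading standing in for the DHT22 sensor (the file name and the reading are assumptions for the demo):

```python
import os
import tempfile

log_path = os.path.join(tempfile.gettempdir(), "log_demo.csv")

with open(log_path, "w") as f:
    f.write("Date,Time,Temperature,Humidity\r\n")
    # one fake sensor sample instead of Adafruit_DHT.read_retry()
    temperature, humidity = 21.3, 45.8
    f.write("{0:0.1f}*C,{1:0.1f}%\r\n".format(temperature, humidity))
    # push the row through Python's buffer and the OS cache to disk
    f.flush()
    os.fsync(f.fileno())
    # another process (e.g. `tail log_demo.csv`) would now see the row,
    # even though this handle is still open
    with open(log_path) as reader:
        visible = reader.read()

print("21.3*C" in visible)  # True
```

In the real logger, the flush/fsync pair would go right after the f.write() inside the while loop, so each sample becomes visible to `tail` as soon as it is logged.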
Currently all I have is:
from livestreamer import Livestreamer
session = Livestreamer()
stream = session.streams('http://www.twitch.tv/riotgames')
stream = stream['source']
fd = stream.open()
As I'm still a newbie to Python, I'm at a complete loss about what to do next. How do I continuously save, let's say, the last 40 seconds of the stream to a file?
Here's a start:
from livestreamer import Livestreamer
session = Livestreamer()
stream = session.streams('http://www.twitch.tv/riotgames')
stream = stream['source']
fd = stream.open()
with open("/tmp/stream.dat", 'wb') as f:
    while True:
        data = fd.read(1024)
        f.write(data)
I've tried it: you can open /tmp/stream.dat in VLC, for example. The example reads 1 kB at a time and writes it to a file.
The program will run forever, so you have to interrupt it with Ctrl-C or add some logic for that. You probably also need to handle errors and the end of the stream somehow.
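For the "last 40 seconds" part of the question, one common approach (a sketch, not part of the answer above) is a rolling buffer: keep only the most recent chunks in memory and dump them to disk on demand. How many chunks correspond to 40 seconds depends on the stream's bitrate; the numbers here are assumptions chosen to keep the demo small:

```python
from collections import deque
import io

CHUNK_SIZE = 1024
# hypothetical: at ~3 Mbit/s, ~40 s of stream would be roughly 15000
# chunks of 1 kB; we keep just 4 chunks so the demo stays tiny
MAX_CHUNKS = 4

ring = deque(maxlen=MAX_CHUNKS)  # old chunks fall off the front automatically

# stand-in for fd.read(1024) on the livestreamer file descriptor
source = io.BytesIO(bytes(range(10)) * 1024)  # 10240 bytes in total
while True:
    data = source.read(CHUNK_SIZE)
    if not data:
        break
    ring.append(data)

# on demand (e.g. on a keypress), join the window and write it to a file
snapshot = b"".join(ring)
print(len(snapshot))  # 4096: only the newest 4 chunks survive
```

With the real stream you would replace the BytesIO source with `fd` from stream.open() and write `snapshot` out whenever you want to capture the trailing window.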
I am trying to snatch a valid JPEG frame from a webcam's MJPEG stream. I successfully write it to disk and can open it in a program like IrfanView, but Python and other graphics programs don't see it as a valid JPEG file. I wrote this function that snarfs a frame and writes it to disk:
def pull_frame(image_url, filename):
    flag = 0
    # open the url of the webcam stream
    try:
        f = urllib.urlopen(image_url)
    except IOError:
        print "uh oh"
    else:
        # kill first two lines
        try:
            null = f.readline()
        except:
            print "duh"
        null = f.readline()
        pop = f.readline()
        # this pulls the length of the content
        count = pop[16:]  # length of content
        print "Size Of Image:" + count
        # read just the length of the content
        if int(count):
            s = f.read(int(count))
            flag = 1
            # write it to a file
            p = file(filename, 'wb')
            p.write(s)
            p.close()
So I'm getting broken JPEGs, and I even tried a ham-fisted workaround: using FFMPEG to make a valid PNG from the broken JPEGs. That worked from the command line but not from subprocess. Is there a way in Python to pull this JPEG from the MJPEG stream with a proper JPEG header?
Here is a link to camera1.jpg, an example of data saved by this routine that isn't recognized and has no thumbnail (although IrfanView can open it fine as a JPEG):
http://www.2shared.com/photo/OfZh2XeD/camera1.html
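A more robust way to cut a frame out of an MJPEG byte stream (a sketch, independent of the header-parsing code above) is to scan the raw bytes for the JPEG start-of-image marker 0xFFD8 and end-of-image marker 0xFFD9 and save exactly that span. This sidesteps readline()-based parsing, which can be thrown off by multipart boundaries and line endings and leave stray header bytes at the front of the saved file:

```python
def extract_jpeg(buf):
    """Return the first complete JPEG (SOI..EOI) found in buf, or None."""
    soi = buf.find(b"\xff\xd8")           # start-of-image marker
    if soi == -1:
        return None
    eoi = buf.find(b"\xff\xd9", soi + 2)  # end-of-image marker, after the SOI
    if eoi == -1:
        return None
    return buf[soi:eoi + 2]

# stand-in for bytes read from the MJPEG socket: a multipart header
# followed by a (fake) JPEG body, then the next boundary
stream = (b"--myboundary\r\nContent-Type: image/jpeg\r\n"
          b"Content-Length: 10\r\n\r\n"
          b"\xff\xd8fakebody\xff\xd9\r\n--myboundary\r\n")
frame = extract_jpeg(stream)
print(frame[:2], frame[-2:])  # b'\xff\xd8' b'\xff\xd9'
```

With a live stream you would accumulate reads into a buffer, call extract_jpeg() on it, and drop everything up to the end of each extracted frame. In baseline JPEG the 0xFFD9 pair cannot appear inside the entropy-coded data (0xFF bytes there are stuffed as 0xFF00), so scanning for the markers is safe.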