I am trying to snatch a valid JPEG frame from a webcam's MJPEG stream. I successfully write it to disk and I can open it in a program like IrfanView, but Python and other graphics programs don't see it as a valid JPEG file. I wrote this function that grabs a frame and writes it to disk:
def pull_frame(image_url, filename):
    flag = 0
    # open the url of the webcam stream
    try:
        f = urllib.urlopen(image_url)
    except IOError:
        print "uh oh"
    else:
        # kill first two lines
        try:
            null = f.readline()
        except:
            print "duh"
        null = f.readline()
        pop = f.readline()
        # this pulls the length of the content
        count = pop[16:]  # length of content.
        print "Size Of Image:" + count
        # read just the amount of the length of content
        if int(count):
            s = f.read(int(count))
            flag = 1
            # write it to a file name
            p = file(filename, 'wb')
            p.write(s)
            p.close()
So I'm getting broken JPEGs, and I even tried a ham-fisted workaround of using FFmpeg to make a valid PNG from the broken JPEGs; it worked from the command line but not from subprocess. Is there a way in Python to pull this JPEG from the MJPEG stream with a proper JPEG header?
Here is a link to a camera1.jpg that is an example of the data saved by this routine; it isn't recognized and has no thumbnail (although IrfanView can open it fine as a JPEG):
http://www.2shared.com/photo/OfZh2XeD/camera1.html
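For what it's worth, a hedged alternative that sidesteps the header parsing entirely is to scan the raw stream for the JPEG start-of-image (FF D8) and end-of-image (FF D9) markers and save exactly that span. The helper name grab_jpeg_frame below is mine, and it assumes you pass it the already-opened urllib response from the question:

def grab_jpeg_frame(f, filename, chunk_size=1024):
    # Accumulate bytes until one complete JPEG (SOI ... EOI) is in the buffer.
    buf = b''
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            return False  # stream ended before a whole frame arrived
        buf += chunk
        start = buf.find(b'\xff\xd8')            # start-of-image marker
        end = buf.find(b'\xff\xd9', start + 2)   # end-of-image marker after it
        if start != -1 and end != -1:
            with open(filename, 'wb') as out:
                out.write(buf[start:end + 2])    # include the EOI marker
            return True

Something like grab_jpeg_frame(urllib.urlopen(image_url), 'camera1.jpg') should then save a frame that starts with a proper JPEG header, whatever the multipart boundary lines look like.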
I have this method that reads a file and puts its content into a plain-text area.
def show_open_dialog():
    global file_path
    if not save_if_modified():
        return
    file_name, _ = QFileDialog.getOpenFileName(
        window_area,
        'Open file...',
        os.getcwd(),
        'Text files (*.txt *.py)'
    )
    if file_name:
        with open(file_name, 'r') as f:
            # Put the file content into the text area.
            text_area.setPlainText(f.read())
        file_path = file_name
When this method is called, it opens a window where I can choose a file and loads it like Windows Notepad, and it works fine. Now what I want to do is treat the information in that file as Markdown, that is, convert it to HTML.
I have already created the QWebEngineView element.
browser_area = QWebEngineView()
And these are the modifications I made inside the "with open" block, but they don't work:
# Print content into text area.
text_area.setPlainText(f.read())
# Raw data.
file_content = f.read()
# To HTML.
browser_area.setHtml(file_content)
# Show it.
browser_area.show()
After printing the content, it only shows an empty window.
I also tried markdown2 (markdown2.markdown(file_content)) instead of plain .setHtml(), but that doesn't work either.
For the moment I just want to show the content in a new window and show a message if the HTML cannot be loaded.
When accessing file objects, the read(size=-1) method reads size bytes from the stream and leaves the stream position just after the bytes it read.
with open('somefile', 'r') as f:
    # reads the first 10 bytes
    start = f.read(10)
    # reads the *next* 10 bytes
    more = f.read(10)

    # move the position back to the beginning
    f.seek(0)
    another = f.read(10)

    print(start == another)
    # This will print "True"
If size is -1 (the default), the whole file is read, and afterwards the position is at the end. Since there's nothing left to read at the end of the file, trying to read again will, obviously, return nothing.
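That is exactly why your browser window comes up empty; a quick illustration, reusing the same hypothetical 'somefile':

with open('somefile', 'r') as f:
    first = f.read()    # reads the whole file; position is now at the end
    second = f.read()   # nothing left, so this is an empty string
    print(second == '')
    # This will print "True"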
If you need to access the read data multiple times, you should store it in a temporary variable:
with open(file_name, 'r') as f:
    data = f.read()
    text_area.setPlainText(data)
    browser_area.setHtml(data)
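If the goal from the question is the Markdown rendering, here is a minimal sketch building on this. It assumes the markdown2 package is installed and that browser_area is the QWebEngineView you already created; connecting loadFinished is just one way to report a failed load:

import markdown2

with open(file_name, 'r') as f:
    data = f.read()

text_area.setPlainText(data)

# Convert the Markdown source to HTML before handing it to the web view.
html = markdown2.markdown(data)
browser_area.loadFinished.connect(
    lambda ok: print("HTML could not be loaded") if not ok else None)
browser_area.setHtml(html)
browser_area.show()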
Previously, I converted an mp4 video file to an mp3 audio file. Now I would like to remove the original mp4 video file using
os.remove
However, when I execute my code, it shows me an error as follows:
Win32 error : The process cannot access the file because it is being used by another process
Below is my code:
try:
    global stream
    b2.config(text="Please wait...")
    b2.config(state=DISABLED)
    stream = yt.streams.filter(progressive=True)
    path = filedialog.askdirectory()
    if path == None:
        return
    stream[0].download(path)
    for i in os.listdir(path):
        os.rename(os.path.join(path, i), os.path.join(path, i.replace(' ', '_')))
    title = yt.title.replace(' ', '_')
    video = VideoFileClip(os.path.join(path + "/" + title + ".mp4"))
    video.audio.write_audiofile(os.path.join(path + "/" + title + ".mp3"))
    l3 = Label(action, text="Download Complete", font=("Calibri", 12), fg="green").pack()
    b2.config(text="Download Audio")
    try:
        file = str(f'{title}.mp4')
        os.remove(os.path.join(path, file))
    except Exception as e:
        print(e)
except Exception as e:
    l3 = Label(action, text="Error occurred while Downloading", font=("Calibri", 12), fg="red").pack()
Does anyone know why this error occurs? Your help would be much appreciated. Thanks in advance!
I am no expert when it comes to Windows, but I think by "another process" it is actually referring to your Python script, or to a module your script is using. My guess is that you should find a way to close the video object.
Based on this you should be able to do video.close() before os.remove.
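A minimal sketch of that idea against the snippet in the question (moviepy's VideoFileClip does expose close(); the variables are the ones from your code):

video = VideoFileClip(os.path.join(path, title + ".mp4"))
video.audio.write_audiofile(os.path.join(path, title + ".mp3"))
video.close()  # release the handle moviepy keeps on the .mp4
os.remove(os.path.join(path, title + ".mp4"))

Recent moviepy releases also let you use VideoFileClip as a context manager (with VideoFileClip(...) as video:), which closes it for you.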
I have 2 microservices: A is written in Java and sends a video in the form of a byte[] to B, which is written in Python.
B does some processing on the video using OpenCV, and this call in particular:
stream = cv2.VideoCapture(video)
The call works fine when given a stream URL or an existing local video file, but when I give it the request.data that Java is sending, it says
TypeError: an integer is required (got type bytes)
So my question is: is there a way to save a video to disk from the bytes I'm receiving from Java, or can I just hand the bytes to cv2.VideoCapture?
Thank you.
Just a slight improvement to your own solution: using the with context-manager closes the file for you even if something unexpected happens:
FILE_OUTPUT = 'output.avi'
# Checks for and deletes the output file.
# You can't have an existing file or it will throw an error.
if os.path.isfile(FILE_OUTPUT):
    os.remove(FILE_OUTPUT)
# opens the file 'output.avi', which is accessible as 'out_file'
with open(FILE_OUTPUT, "wb") as out_file:  # open for [w]riting as [b]inary
    out_file.write(request.data)
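If the next step is to hand that file to OpenCV, a short sketch (this assumes the bytes coming from Java really form a container/codec that OpenCV's FFmpeg backend can decode):

import cv2

stream = cv2.VideoCapture(FILE_OUTPUT)
ok, frame = stream.read()   # ok is False if the file could not be decoded
if not ok:
    print("OpenCV could not read a frame from", FILE_OUTPUT)
stream.release()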
I solved my problem like this:
FILE_OUTPUT = 'output.avi'
# Checks for and deletes the output file.
# You can't have an existing file or it will throw an error.
if os.path.isfile(FILE_OUTPUT):
    os.remove(FILE_OUTPUT)
out_file = open(FILE_OUTPUT, "wb") # open for [w]riting as [b]inary
out_file.write(request.data)
out_file.close()
The Problem:
I have been playing around with CherryPy for the past couple of days, but I'm still having some trouble getting images to work the way I'd expect. I can save an uploaded image as a JPEG without issue, but I can't convert it to base64 properly. Here's the simple server I wrote:
server.py
#server.py
import os
import cherrypy  # Import framework

frameNumber = 1
lastFrame = ''
lastFrameBase64 = ''

class Root(object):
    def upload(self, myFile, username, password):
        global frameNumber
        global lastFrameBase64
        global lastFrame
        size = 0
        lastFrameBase64 = ''
        lastFrame = ''
        while True:
            data = myFile.file.read(8192)
            if not data:
                break
            size += len(data)
            lastFrame += data
            lastFrameBase64 += data.encode('base64').replace('\n', '')
        f = open('/Users/brian/Files/git-repos/learning-cherrypy/tmp_image/lastframe.jpg', 'w')
        f.write(lastFrame)
        f.close()
        f = open('/Users/brian/Files/git-repos/learning-cherrypy/tmp_image/lastframe.txt', 'w')
        f.write(lastFrameBase64)
        f.close()
        cherrypy.response.headers['Content-Type'] = 'application/json'
        print "Image received!"
        frameNumber = frameNumber + 1
        out = "{\"status\":\"%s\"}"
        return out % ("ok")
    upload.exposed = True

cherrypy.config.update({'server.socket_host': '192.168.1.14',
                        'server.socket_port': 8080,
                        })

if __name__ == '__main__':
    # CherryPy always starts with app.root when trying to map request URIs
    # to objects, so we need to mount a request handler root. A request
    # to '/' will be mapped to HelloWorld().index().
    cherrypy.quickstart(Root())
When I view the lastframe.jpg file, the image renders perfectly. However, when I take the text string found in lastframe.txt and prepend the proper data-uri identifier data:image/jpeg;base64, to the base64 string, I get a broken image icon in the webpage I'm trying to show the image in.
<!DOCTYPE>
<html>
<head>
<title>Title</title>
</head>
<body>
<img src="data:image/jpeg;base64,/9....." >
</body>
</html>
I have tried using another script to convert my already-saved jpg image into a data-uri, and it works. I'm not sure what I'm doing wrong in the server example, because this code gives me a string that works as a data-uri:
Working Conversion
jpgtxt = open('tmp_image/lastframe.jpg','rb').read().encode('base64').replace('\n','')
f = open("jpg1_b64.txt", "w")
f.write(jpgtxt)
f.close()
So basically it comes down to this: how is the data variable taken from myFile.file.read(8192) different from the data read from open('tmp_image/lastframe.jpg','rb')? I read that the rb mode in the open method tells Python to read the file as a binary file rather than a string. Here's where I got that.
Summary
In summary, I don't know enough about Python or the CherryPy framework to see how the data is stored when reading from the myFile variable versus when reading from the output of the open() method. Thanks for taking the time to look at this problem.
Base64 works by taking every 3 bytes of input and producing 4 characters. But what happens when the input isn't a multiple of 3 bytes? There's special processing for that, appending = signs to the end. But that's only supposed to happen at the end of the file, not in the middle. Since you're reading 8192 bytes at a time and encoding them, and 8192 is not a multiple of 3, you're generating corrupt output.
Try reading 8190 bytes instead, or read and encode the entire file at once.
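For example, a hedged rewrite of the loop from the question using a chunk size that is a multiple of 3 (base64.b64encode is used instead of the str codec; myFile is the CherryPy upload object from your handler):

import base64

CHUNK = 8190  # multiple of 3, so no '=' padding appears mid-stream

lastFrame = b''
lastFrameBase64 = ''
while True:
    data = myFile.file.read(CHUNK)
    if not data:
        break
    lastFrame += data
    lastFrameBase64 += base64.b64encode(data).decode('ascii')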
I am looking for a way to save the current frame, from a specified live Twitch.tv channel, to disk. Any programming language is welcomed.
So far I've found a possible solution using Python here but unfortunately it doesn't work.
import time, Image
import cv2
from livestreamer import Livestreamer
# change to a stream that is actually online
livestreamer = Livestreamer()
plugin = livestreamer.resolve_url("http://twitch.tv/flosd")
streams = plugin.get_streams()
stream = streams['best']
# download enough data to make sure the first frame is there
fd = stream.open()
data = ''
while len(data) < 3e5:
    data += fd.read()
    time.sleep(0.1)
fd.close()
fname = 'stream.bin'
open(fname, 'wb').write(data)
capture = cv2.VideoCapture(fname)
imgdata = capture.read()[1]
imgdata = imgdata[...,::-1] # BGR -> RGB
img = Image.fromarray(imgdata)
img.save('frame.png')
Apparently cv2.VideoCapture(fname) returns nothing usable, although the script managed to write about 300 KB of data to stream.bin.
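One hedged workaround, assuming ffmpeg is installed and on your PATH: let ffmpeg probe the container saved in stream.bin and extract the first frame, instead of going through cv2.VideoCapture:

import subprocess

# -frames:v 1 stops after the first decoded video frame; -y overwrites frame.png
subprocess.check_call(
    ['ffmpeg', '-y', '-i', 'stream.bin', '-frames:v', '1', 'frame.png'])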