I recorded an HLS stream by writing the MPEG-TS stream's contents into a GridFS filesystem.
I'm now trying to serve this content back to the browser using aiohttp's StreamResponse, which fails for different reasons:
from asyncio import CancelledError
from aiohttp.web import StreamResponse

async def get_video(request):
    stream_response = StreamResponse()
    stream_response.headers['Content-Type'] = 'video/mp2t'
    stream_response.headers['Cache-Control'] = 'no-cache'
    stream_response.headers['Connection'] = 'keep-alive'
    await stream_response.prepare(request)

    fd = GridFS()  # however the GridFS handle is obtained
    video_stream = await fd(video_id)  # open a download stream for the stored file

    while True:
        try:
            chunk = await video_stream.readchunk()
            if not chunk:
                break
            # write() is a coroutine in aiohttp 3.x and must be awaited
            await stream_response.write(chunk)
        except CancelledError:
            # fails here in Safari, or with a different Content-Type also in Chrome
            break

    await stream_response.write_eof()
    return stream_response
When trying to access the URL using Safari, I get the player UI presented but nothing plays, while the server throws a CancelledError exception trying to write to the already closed StreamResponse.
Opening the URL in Chrome results in downloading the video file. This file works when playing it back in VLC. Even playing the URL inside VLC using "Network Source" works.
I also tried serving a static m3u playlist in front of this direct URL, like the one below, but without luck (VLC also works using the playlist instead of the direct stream):
#EXT-X-MEDIA:TYPE=VIDEO,GROUP-ID="medium",NAME="Medium",AUTOSELECT=YES,DEFAULT=YES
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=992000,RESOLUTION=852x480,CODECS="avc1.66.31,mp4a.40.2",VIDEO="medium"
http://localhost:8080/videos/{video_id}
I'm not sure how to debug this any further and would appreciate any help (or ask in the comments if I'm unclear). What am I missing that keeps the files from playing back in the browser when accessed directly? Embedding my resource URL into an HTML video tag didn't help either (obviously, since browsers do the same when accessing a video directly).
Some more information about the video content and the raw HTTP responses I'm sending:
Video information (VLC)
Direct video stream HTTP response (start)
M3U playlist HTTP response
I have no hands-on experience with HLS personally, but even a quick look at the RFC draft shows that you are breaking the protocol.
HLS is not about sending all the video chunks together in a single endless response; it is about sending multiple HTTP responses over the same socket connection, reusing it via keep-alive.
The client requests new portions of data, guided by the protocol-specific EXT* tags, and the server should respond accordingly. At the very beginning the client asks for the playlist, and the server should answer with proper playlist data.
The communication protocol is complex enough that I cannot just fix a couple of lines in your snippet to make it work, sorry.
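To give an idea of the shape it expects, though, here is a rough, untested sketch of the two endpoints an HLS client needs in aiohttp, where segment_count() and load_segment() are hypothetical helpers standing in for your GridFS lookups:

from aiohttp import web

SEGMENT_DURATION = 10  # seconds per segment; must match how the stream was cut

async def get_playlist(request):
    video_id = request.match_info['video_id']
    # A *media* playlist: it lists the individual segment URIs.
    lines = ['#EXTM3U',
             '#EXT-X-VERSION:3',
             '#EXT-X-TARGETDURATION:{}'.format(SEGMENT_DURATION),
             '#EXT-X-MEDIA-SEQUENCE:0']
    for n in range(segment_count(video_id)):  # hypothetical helper
        lines.append('#EXTINF:{},'.format(SEGMENT_DURATION))
        lines.append('/videos/{}/segment{}.ts'.format(video_id, n))
    lines.append('#EXT-X-ENDLIST')  # omit this for a live stream
    return web.Response(text='\n'.join(lines) + '\n',
                        content_type='application/vnd.apple.mpegurl')

async def get_segment(request):
    video_id = request.match_info['video_id']
    n = int(request.match_info['n'])
    data = await load_segment(video_id, n)  # hypothetical GridFS lookup
    return web.Response(body=data, content_type='video/mp2t')

app = web.Application()
app.router.add_get('/videos/{video_id}/playlist.m3u8', get_playlist)
app.router.add_get('/videos/{video_id}/segment{n}.ts', get_segment)

The client fetches playlist.m3u8 first and then issues a separate GET for every segment URI listed in it; that request-per-segment pattern is what keep-alive gets reused for.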
Related
I have a Flask app that generates video stream links. It connects to a server using login credentials and grabs a one-time-use link (which expires when a new link is generated with the same credentials). Using a list of credentials, I am able to stream to as many devices as I like, as long as I have enough accounts.
The issue I am having is that one of the clients doesn't like the way the stream is returned.
@app.route("/play", methods=["GET"])
def play():
    def streamData():
        try:
            useAccount(<credentials>)
            with requests.get(link, stream=True) as r:
                for chunk in r.iter_content(chunk_size=1024):
                    yield chunk
        except:
            pass
        finally:
            freeAccount(<credentials>)
    ...
    # return redirect(link)
    return Response(streamData())
If I return a redirect then there are no playback issues at all. The problem with a redirect is that I don't have a way of marking the credentials as in use and then freeing them afterwards.
The problem client is TVHeadend. I can get it to work by enabling the additional avlib option inside TVHeadend... but I shouldn't have to do that. I don't have to when I return a redirect.
What could be the cause of this?
Is it possible to make my app respond in the same way as the links server does?
My guess is that TVHeadend is very strict about whether something complies with the relevant standards... and I am guessing my app doesn't?
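One plausible difference (an assumption, not something the question confirms): following the redirect lets TVHeadend see the origin server's Content-Type and related headers, while Response(streamData()) defaults to text/html. A sketch that forwards the upstream headers, keeping useAccount, freeAccount, link and credentials from the question as given:

import requests
from flask import Flask, Response

app = Flask(__name__)

@app.route("/play", methods=["GET"])
def play():
    useAccount(credentials)  # as in the question
    upstream = requests.get(link, stream=True)

    def streamData():
        try:
            for chunk in upstream.iter_content(chunk_size=1024):
                yield chunk
        finally:
            upstream.close()
            freeAccount(credentials)

    # Forward the headers a strict client is likely to check.
    headers = {}
    for name in ('Content-Type', 'Content-Length'):
        if name in upstream.headers:
            headers[name] = upstream.headers[name]
    return Response(streamData(), status=upstream.status_code, headers=headers)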
Long story short, I'm making a script that parses sankakucomplex in search of pictures and downloads them. Everything goes fine until this function:
import requests

def download_image_from_link(link, auth, number):
    url = f"https:{link}"
    bruh = requests.get(url)
    with open(f"images/{auth}/image{number}.png", "wb") as im:
        im.write(bruh.content)
{link} is something like //v.sankakucomplex.com/data/sample/a5/dc/sample-a5dcd9f21fc80b6bfe5acfa74c3bd936.jpg?e=1648321131&expires=1648321131&m=JqNaW8AUSE-G-jyihn0uiQ&token=hLsB-lYp7bQhgUbkLsZScpch1DBX6JQoFh4T-CglAKA
{auth} and {number} are not important.
It just downloads all the pictures from an array that I received from other functions that aren't relevant here.
Half of the links to images start with //s. and the other half with //v.
When it downloads the //v. files everything is fine, but when it downloads the //s. files it receives:
b'<html>\r\n<head><title>500 Internal Server Error</title></head>\r\n<body>\r\n<center><h1>500 Internal Server Error</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n'
Here are the safest links I could find, if you need to check:
for an //s. link:
https://chan.sankakucomplex.com/ru/post/show/30856755
for a //v. link (requests for //beta. don't work either):
https://beta.sankakucomplex.com/ru/post/show/30856755
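One thing worth ruling out (a guess on my part, not something the question establishes): image hosts frequently return errors to clients that don't look like a browser, so the //s. host may behave differently when sent browser-like headers. A sketch of the same function with a User-Agent and Referer added, where the exact header values are illustrative, not confirmed fixes:

import requests

def download_image_from_link(link, auth, number):
    url = f"https:{link}"
    headers = {
        # Browser-like headers; values are guesses for illustration.
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:98.0) Gecko/20100101 Firefox/98.0",
        "Referer": "https://chan.sankakucomplex.com/",
    }
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()  # surface the 500 instead of writing it to disk
    with open(f"images/{auth}/image{number}.png", "wb") as im:
        im.write(resp.content)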
I'm trying to send a long video to a Telegram chat using Python 3.6 with the following code:
import requests

files = {'document': open('video.avi', 'rb')}
resp = requests.post(
    'https://api.telegram.org/bot<token>/sendDocument?chat_id=<chatid>',
    files=files,
    timeout=None,
)
My problem is that even after I set the timeout to None, I keep getting an exception after about 30 seconds.
Is there any way to prevent this from happening and get a timeout long enough to upload a file of any size?
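For reference, requests applies the timeout per socket operation (connect and read), not to the whole transfer, and timeout=None already disables the client-side limit entirely, so a consistent 30-second abort likely originates elsewhere (the server or a proxy, though that's an assumption). A sketch of the explicit (connect, read) tuple form, keeping the placeholders from the question as given:

import requests

# (connect timeout, read timeout); None means "wait forever" for that phase
with open('video.avi', 'rb') as f:
    resp = requests.post(
        'https://api.telegram.org/bot<token>/sendDocument',
        params={'chat_id': '<chatid>'},
        files={'document': f},
        timeout=(10, None),
    )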
I'm learning how to use sockets to make HTTPS requests, and my problem is that I can make a successful request (status 200) but only receive part of the webpage content (I can't understand why it's split this way).
I receive my HTTP headers along with part of the HTML code. I tried it with at least 3 different websites (including GitHub), and I always get the same result.
I'm able to log in to a website with my account, get cookies for my session, load a new page with those cookies and get a status 200, and still only have part of the website... like just the site's navbars.
If someone has any clue:
import socket
import ssl

HOST = 'www.python.org'
PORT = 443

MySock = socket.socket()
MySock = ssl.wrap_socket(MySock, ssl_version=ssl.PROTOCOL_SSLv23)
MySock.connect((HOST, PORT))

# HTTP needs CRLF line endings and a blank line terminating the headers
MySock.send("GET / HTTP/1.1\r\nHost: {}\r\n\r\n".format(HOST).encode())

# Create a file to check the response content
with open('PythonOrg.html', 'w') as File:
    print(MySock.recv(50000).decode(), file=File)
1) I don't seem to be able to load the content with a single large buffer, as in MySock.recv(50000); I need to loop with a smaller buffer, like 4096, and concatenate into a variable.
2) A request takes time to receive the entire response; I used time.sleep to manage the waiting, and I'm not sure that's the best way to wait for the server with an SSL socket. If anyone has a nice way to get the entire response message when it's big, feel free.
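A minimal sketch of the loop described in (1): recv() returns whatever has arrived so far (often a single TLS record), never the whole response, so the reliable pattern is to keep reading until the peer closes the connection. This sketch assumes the request was sent with a "Connection: close" header, so the server closes the socket after the response instead of keeping it alive:

# Collect chunks until recv() returns b'', meaning the server closed
# the connection; no time.sleep needed, recv() blocks until data arrives.
chunks = []
while True:
    data = MySock.recv(4096)
    if not data:
        break
    chunks.append(data)
response = b''.join(chunks)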
I don't know if it is possible with Bottle.
My website (powered by Bottle) allows users to upload image files, but I limit their size to 100K. I use the following code on the web server to do that:
uploadLimit = 100  # 100K
uploadLimitInByte = uploadLimit * 2**10

print("before call request.headers.get('Content-Length')")
contentLen = request.headers.get('Content-Length')
if contentLen:
    contentLen = int(contentLen)
    if contentLen > uploadLimitInByte:
        return HTTPResponse('upload limit is 100K')
But when I click the upload button in a web browser to upload a file of around 2MB, it seems the server receives the whole 2MB HTTP request.
I expected the above code to read just the HTTP headers instead of the whole request; otherwise it can't prevent wasting time on receiving unnecessary bytes.
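For illustration, a sketch assuming a plain Bottle route (the route path, field name, and save destination are made up): with WSGI the headers are parsed before the handler runs, and the body is only consumed when the application touches request.body or request.files, so checking request.content_length and returning early keeps the app from reading the payload. Note the client may still transmit the bytes over the wire unless the connection is closed; this only avoids processing them:

from bottle import Bottle, HTTPResponse, request

app = Bottle()
UPLOAD_LIMIT = 100 * 2**10  # 100K

@app.post('/upload')  # hypothetical route
def upload():
    # content_length comes from the request headers alone
    if request.content_length > UPLOAD_LIMIT:
        # returning here means the handler never reads the body
        return HTTPResponse('upload limit is 100K', status=413)
    image = request.files.get('image')  # body is read only at this point
    image.save('uploads/')              # hypothetical destination
    return 'ok'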