I'm working on a project that involves streaming .OGG (or .mp3) files from my webserver. I'd prefer not to have to download the whole file and then play it; is there a way to do that in pure Python (no GStreamer, hoping to make it truly cross-platform)? Is there a way to use urllib to download the file a chunk at a time and load that into, say, PyGame to do the actual audio playing?
Thanks!
I assume your server supports Range requests. You ask the server with a Range header giving the start and end byte of the range you want:
import urllib2

req = urllib2.Request(url)
# ask for just this byte range; both ends are inclusive
req.headers['Range'] = 'bytes=%s-%s' % (startByte, endByte)
f = urllib2.urlopen(req)
data = f.read()  # contains only the requested range
You can implement a file-like object that always downloads just the needed chunk of the file from the server. Almost every library accepts a file-like object as input.
It will probably be slow because of network latency. You would need to download bigger chunks of the file, preload the file in a separate thread, and so on. In other words, you would need to implement the streaming client logic yourself.
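A minimal sketch of such a file-like object, assuming the server honors Range requests (the class name, default chunk size, and absolute-only seek are illustrative simplifications):

import urllib2

class HTTPRangeFile(object):
    """Read-only file-like object that fetches byte ranges on demand."""
    def __init__(self, url):
        self.url = url
        self.pos = 0

    def tell(self):
        return self.pos

    def seek(self, offset, whence=0):
        # only absolute seeking is supported in this sketch
        self.pos = offset

    def read(self, size=64 * 1024):
        req = urllib2.Request(self.url)
        # the Range header is inclusive on both ends
        req.headers['Range'] = 'bytes=%d-%d' % (self.pos, self.pos + size - 1)
        data = urllib2.urlopen(req).read()
        self.pos += len(data)
        return data

In principle, anything that accepts a file object (PyGame's mixer, for example) could then read from it, though a real client would add the read-ahead buffering described above.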
What is the best way to send a lot of POST requests to a REST endpoint via Python?
E.g. I want to upload ~500k files to a database.
What I've done so far is a loop that creates a new request for each file using the requests package.
from os import listdir
import os.path

# get list of files
files = [f for f in listdir(folder_name)]

# loop through the list
for file_name in files:
    try:
        # open file and get content
        with open(os.path.join(folder_name, file_name), "r") as file:
            f = file.read()
        # create request (make_request wraps a POST via the requests package)
        req = make_request(url, f)
    except Exception:
        # error handling, logging, ...
        pass
But this is quite slow: what is the best practice for doing this? Thank you.
First approach:
I don't know if it is the best practice, but you could split the files into batches of 1000, zip each batch, and send it as a POST request using threads (set the number of threads to the number of processor cores). The REST endpoint can extract the zipped contents and then process them.
Second approach:
Zip the files in batches and transfer them batch by batch. After the transfer is completed, validate on the server side, then start the database upload in one go.
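A minimal sketch of the batching idea in Python, assuming the endpoint accepts a zip archive as the raw POST body (the batch size, URL handling, and content type are illustrative assumptions):

import io
import os.path
import zipfile
from os import listdir

import requests

def send_batches(folder_name, url, batch_size=1000):
    names = listdir(folder_name)
    for start in range(0, len(names), batch_size):
        buf = io.BytesIO()
        # pack one batch of files into an in-memory zip archive
        with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zf:
            for name in names[start:start + batch_size]:
                zf.write(os.path.join(folder_name, name), arcname=name)
        requests.post(url, data=buf.getvalue(),
                      headers={'Content-Type': 'application/zip'})

Threading the batches (one worker per core, as suggested) can be layered on top with concurrent.futures.ThreadPoolExecutor.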
The first thing you want to do is determine exactly which part of your script is the bottleneck. You have both disk and network I/O here (reading files and sending HTTP requests, respectively).
Assuming that the HTTP requests are the actual bottleneck (highly likely), consider using aiohttp instead of requests. The docs have some good examples to get you started, and there are plenty of "Quick Start" articles out there. This would allow your network requests to be cooperative, meaning that other Python code can run while one of your network requests is waiting. Just be careful not to overwhelm whatever server is receiving the requests.
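A minimal sketch of concurrent uploads with aiohttp, assuming the endpoint accepts each file's contents as a raw POST body (the URL, concurrency limit, and payload format are illustrative assumptions):

import asyncio
import os.path
from os import listdir

import aiohttp

async def upload_one(session, semaphore, url, path):
    # the semaphore caps concurrency so the server isn't overwhelmed
    async with semaphore:
        with open(path, 'rb') as fh:
            payload = fh.read()
        async with session.post(url, data=payload) as resp:
            return resp.status

async def upload_all(url, folder_name, limit=50):
    semaphore = asyncio.Semaphore(limit)
    async with aiohttp.ClientSession() as session:
        tasks = [upload_one(session, semaphore, url,
                            os.path.join(folder_name, name))
                 for name in listdir(folder_name)]
        return await asyncio.gather(*tasks)

# statuses = asyncio.run(upload_all('https://example.com/endpoint', folder_name))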
I want to read data from a GZIP dataset file directly from the internet without downloading the complete file. Considering the size of the dataset, is it possible in Python to stream the data directly from the server over HTTP and read it? I took a look at the zlib and gzip packages. I'm a beginner in Python; I want to know if this is possible in Python or any other language, and if possible, any references to such code. Thanks in advance!
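This is possible in Python. A minimal sketch, assuming the dataset is line-oriented text served over plain HTTP (the URL, chunk size, and generator name are illustrative):

import zlib
from urllib.request import urlopen

def stream_gzip_lines(url, chunk_size=64 * 1024):
    # 32 + MAX_WBITS tells zlib to auto-detect the gzip header
    decomp = zlib.decompressobj(32 + zlib.MAX_WBITS)
    buf = b''
    resp = urlopen(url)
    while True:
        chunk = resp.read(chunk_size)
        if not chunk:
            break
        buf += decomp.decompress(chunk)
        # yield complete lines as soon as they are available
        while b'\n' in buf:
            line, buf = buf.split(b'\n', 1)
            yield line
    if buf:
        yield buf

for line in stream_gzip_lines('http://example.com/dataset.csv.gz'):
    pass  # process each decompressed line without ever holding the whole file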
The webApp I'm currently developing requires large JSON files to be requested by the client, built on the server using Python, and sent back to the client. The solution is implemented via CGI and is working correctly in every way.
At this stage I'm just employing various techniques to minimize the size of the resulting JSON objects sent back to the client, which are around 5-10 MB (without going into detail, this is more or less fixed and cannot be lazy-loaded in any way).
The host we're using doesn't support mod_deflate or mod_gzip, so while we can't configure Apache to automatically create gzipped content on the server with .htaccess, I figure we'll still be able to receive it and decode it on the client side as long as the Content-Encoding header is set correctly.
What I was wondering is: what is the best way to achieve this? Gzipping something in Python is trivial; I already know how to do that. The problem is:
How do I compress the data in such a way that printing it to the output stream to send via CGI will be both compressed and readable to the client?
The files have to be created on the fly, based upon input data, so storing premade and prezipped files is not an option, and they have to be received via xhr in the webApp.
My initial experiments with compressing the JSON string with gzip and io.StringIO, then printing it to the output stream, caused it to be printed in Python's normal byte-literal format, e.g. b'\n\x91\x8c\xbc\xd4\xc6\xd2\x19\x98\x14x\x0f1q!\xdc|C\xae\xe0' and such, which bloated the request to twice its normal size...
I was wondering if someone could point me in the right direction here with how I could accomplish this, if it is indeed possible.
I hope I've articulated my problem correctly.
Thank you.
I guess you use print() (which first converts its argument to a string before sending it to stdout) or sys.stdout.write() (which only accepts str objects).
To write directly on stdout, you can use sys.stdout.buffer, a file-like object that supports bytes objects:
import sys
import gzip

s = 'foo' * 100
# sys.stdout.buffer is the underlying binary stream, so it accepts bytes
sys.stdout.buffer.write(gzip.compress(s.encode()))
Which gives valid gzip data:
$ python3 foo.py | gunzip
foofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoofoo
Thanks for the answers Valentin and Phillip!
I managed to solve the issue, both of you contributed to the final answer. Turns out it was a combination of things.
Here's the final code that works:
import sys
import gzip
import json

response = json.JSONEncoder().encode(loadData)  # loadData holds the data to send
sys.stdout.write('Content-Type: application/octet-stream\n')
sys.stdout.write('Content-Encoding: gzip\n\n')
sys.stdout.flush()  # flush the text-mode headers before touching the binary buffer
sys.stdout.buffer.write(gzip.compress(response.encode()))
After switching over to sys.stdout.write() instead of print for the headers, and flushing the stream, it managed to read correctly. Which is pretty curious... Always something more to learn.
Thanks again!
Is anyone here experienced with Requests and HTTP streaming with chunked transfer encoding?
I'm wondering if Requests inherently knows the chunk size provided by the server and uses it in Response.iter_lines() as the chunk size. I'm finding that if I reduce the default chunk size, it processes faster, but is there any correlation to what the server sends back, and should I not be monkeying around with setting it? Note: I'm eating social data feeds from DataSift in real time and ultimately shooting them to standard out.
The code is:
#!/usr/bin/env python
import requests

headers = {'Auth': 'username:api_key'}
# stream=True defers downloading the body so it can be iterated as it arrives
r = requests.get('http://stream.datasift.com/988098098sd09fsd89fsd0f7',
                 headers=headers, stream=True)
for line in r.iter_lines(chunk_size=128):
    if line:
        print line
Looking at the source code (models.py, lines 531 and 31), the preconfigured value of 512 is simply a "sane default". As far as I can tell, chunk_size only controls how many bytes are read from the socket at a time; it has no relation to the chunk sizes the server uses in its chunked transfer encoding, so tuning it for your throughput is reasonable.
I have been trying, in vain, to make a program that reads text out loud using the web application found here (http://www.ispeech.org/text.to.speech.demo.php). It is a demo text-to-speech program that works very well and is relatively fast. What I am trying to do is make a Python program that would input text to the application, then output the result. The result, in this case, would be sound. Is there any way in Python to do this, like, say, a library? And if not, is it possible to do this through any other means? I have looked into the iSpeech API (found here), but the only problem with it is that there is a limited number of free uses (I believe it is 200). While this program is only meant to be used a couple of times, I would rather it be able to use the service more than 200 times. Also, if this solution is impractical, could anyone direct me towards an alternative?
@AKX I am currently using eSpeak, and it works well. It just, well, doesn't sound too good, and it is hard to tell at times what is being said.
If using iSpeech is not required, there's a decent open-source text-to-speech solution available called eSpeak (it's surely not as beautifully articulated as many commercial solutions).
It's usable from the command line (via subprocess from Python), or as a shared library. There also seems to be a Python wrapper (python-espeak) for it.
Hope this helps.
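A minimal sketch of the command-line route, assuming the espeak binary is installed and on the PATH:

import subprocess

def speak(text):
    # -v selects the voice/language; audio plays on the default output device
    subprocess.call(['espeak', '-v', 'en', text])

speak('Hello from eSpeak.')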
OK, I found a way to do it, and it seems to work fine. Thanks to everyone who helped! Here is the code I'm using:
from urllib import quote_plus

def speak(text):
    import pydshow  # pydshow is my own DirectShow wrapper (see the note below)
    words = text.split()
    temp = []
    stuff = []
    # split the text into 24-word chunks and play each one in turn
    while words:
        temp.append(words.pop(0))
        if len(temp) == 24:
            stuff.append(' '.join(temp))
            temp = []
    stuff.append(' '.join(temp))
    for i in stuff:
        pydshow.PlayFileWait('http://api.ispeech.org/api/rest?apikey=8d1e2e5d3909929860aede288d6b974e&format=mp3&action=convert&voice=ukenglishmale&text=' + quote_plus(i))

if __name__ == '__main__':
    speak('Hello. This is a text-to-speech test.')
I find this ideal because it DOES use the API, but it uses the API key that is used for the demo program. Therefore, it never runs out. The key is 8d1e2e5d3909929860aede288d6b974e.
You can actually see this at work without the program by typing the following into your address bar:
http://api.ispeech.org/api/rest?apikey=8d1e2e5d3909929860aede288d6b974e&format=mp3&action=convert&voice=ukenglishmale&text=
Followed by the text you want spoken. You can also adjust the language by changing, in this case, ukenglishmale to something else that iSpeech offers, for example ukenglishfemale. This will speak the same text, but in a feminine voice.
NOTE: Pydshow is my wrapper around DirectShow. You can use yours instead.
The flow of your application would be like this:
Client-side: User inputs text into a form, and the form submits a request to the server.
Server: May be Python or whatever language/framework you want. Receives the HTTP request with the text.
Server: Runs text-to-speech, either with a pure Python library or by running a subprocess to a utility that can generate speech as a wav/mp3/aiff/etc.
Server: Sends the HTTP response back to the client by streaming the file with an audio MIME type.
Client: Receives the HTTP response and plays the content.
Specifically about step 3...
I don't have any particular advice on the most articulate open-source speech-synthesis software available, but I can say that it does not necessarily have to be pure Python, or even Python at all for that matter. Most of these packages have some form of command-line utility that takes stdin or a file and produces an audio file as output. You would simply launch this utility as a subprocess to generate the file, and then stream the file back in your HTTP response.
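A minimal sketch of steps 3 and 4 on the server, using eSpeak as the subprocess and a bare WSGI app (the app structure, query parameter, and WAV output are illustrative assumptions):

import subprocess
import tempfile
from urllib.parse import parse_qs

def synthesize(text):
    # render the text to a temporary WAV file via the espeak CLI (-w writes a WAV)
    tmp = tempfile.NamedTemporaryFile(suffix='.wav', delete=False)
    tmp.close()
    subprocess.call(['espeak', '-w', tmp.name, text])
    return tmp.name

def application(environ, start_response):
    # step 3: generate the audio; step 4: stream it back with an audio MIME type
    text = parse_qs(environ.get('QUERY_STRING', '')).get('text', ['hello'])[0]
    path = synthesize(text)
    with open(path, 'rb') as fh:
        body = fh.read()
    start_response('200 OK', [('Content-Type', 'audio/wav'),
                              ('Content-Length', str(len(body)))])
    return [body]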
If you decide to make use of an existing web service that provides text-to-speech via an API (iSpeech), then step 3 would be replaced with making your own server-side HTTP request out to iSpeech, receiving the response, and pretty much forwarding that response back to the original client request, like a proxy. The benefit is not having to maintain your own speech-synthesis solution and getting better quality than you could from an open-source one... but the downside is that you will probably have a bit more latency in your response time, since your server has to make its own external HTTP request and download the data first.
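A sketch of that proxy variant, reusing the demo endpoint from the earlier answer in this thread (error handling and caching omitted):

import requests
from urllib.parse import quote_plus

def fetch_speech(text):
    # forward the text to iSpeech and return the MP3 bytes to relay to the client
    url = ('http://api.ispeech.org/api/rest?apikey=8d1e2e5d3909929860aede288d6b974e'
           '&format=mp3&action=convert&voice=ukenglishmale&text=' + quote_plus(text))
    return requests.get(url).content  # serve with Content-Type: audio/mpeg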