I have followed the tutorial posted here in order to get AJAX file uploads on my Django app. The thing is that it doesn't work, and the closest I could get to the issue is finding out that the save_upload() method raises the following exception: 'WSGIRequest' object has no attribute 'read'. Any ideas on what I am doing wrong?
EDIT: I figured out that this only works in Django 1.3. Any ideas on how to make it work in Django 1.2?
I think I have gotten to the bottom of your problem.
1) You are calling .read() on the request object, which is not supported. Instead, you need to read from request.raw_post_data.
2) Since request.raw_post_data is a str, you first need to wrap it in a file-like object before you can call .read() on it.
Try this:
import StringIO
output = StringIO.StringIO()
output.write(request.raw_post_data)
output.seek(0)  # rewind to the start so read() returns the data
...now you'll be able to run output.read() and get the data you want.
# loop through, writing more of the file each time
file_so_far = output.read(1024)  # Get ready....
while file_so_far:  # ..get set...
    dest.write(file_so_far)  # Go.
    file_so_far = output.read(1024)
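For reference, on Python 3 (where Django 1.4+ renamed raw_post_data to request.body, and StringIO moved into the io module) the same chunked-copy idea can be sketched like this; the payload and destination below are stand-ins for the real request body and output file:

```python
import io

# Stand-in for request.body (request.raw_post_data on Django 1.2/1.3)
raw = b"...uploaded file payload..."
source = io.BytesIO(raw)

# Stand-in for the real destination, e.g. open(upload_path, "wb")
dest = io.BytesIO()

# Loop through, writing 1 KiB of the file each time
chunk = source.read(1024)
while chunk:
    dest.write(chunk)
    chunk = source.read(1024)

print(dest.getvalue() == raw)  # True
```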
I've downloaded a small .mp3 file from the net using Python's requests module, like this:
file_on_disk = open(filename, 'wb')
file_on_disk.write(downloaded_file.content)
file_on_disk.close()
Now it works fine on the disk as an MP3 file should. However, I'd like to play it using python-vlc's MediaPlayer while it's still not written to disk, i.e. from memory. However, this doesn't work:
vlc.MediaPlayer(downloaded_file.content).play()
I get a TypeError: a bytes-like object is required, not 'str' message. I checked the type of the content and type(downloaded_file.content) identifies it as <class 'bytes'>, not str. The same MediaPlayer nicely plays the file once it's saved to the disk. What am I missing here, and how can I convert the file in-memory to a format that vlc.MediaPlayer is able to play?
(I'm a beginner so the answer might be obvious, but it keeps eluding me.)
edit: my full code is actually:
import requests, vlc
downloaded_file = requests.get('https://cdn.duden.de/_media_/audio/ID4111733_460321137.mp3')
from vlc import MediaPlayer
MediaPlayer(downloaded_file.content).play()
Check out this possibly related question: you need to create a vlc Instance first.
I have tried everything I can find and think of, but cannot seem to get this code right.
I'm using Airflow, trying to run a SQL select statement, return the results, and upload them directly to s3 using a PythonCallable task.
I am unable to save the DataFrame as a csv locally, so that is not an option.
Ultimately, I keep looping back to this ERROR - Fileobj must implement read. The only "successful" attempts have produced empty results in my s3 file. I tried using the .seek(0) method that I found in another post, but then I got ERROR - Unicode-objects must be encoded before hashing. Anyway, below is my code. Any direction is enormously appreciated.
snow_hook = SnowflakeHook(
    snowflake_conn_id='Snowflake_ETL_vault'
)
df = snow_hook.get_pandas_df(sql=sql)

with io.StringIO() as stream:
    df.to_csv(stream)
    stream.seek(0)
    f = stream.getvalue()
    s3_hook = S3Hook(aws_conn_id='s3_analytics')
    s3_hook.load_file_obj(
        file_obj=f,
        bucket_name=bkt,
        key=key,
        replace=True
    )
Edit: I have also tried f = stream.read() and still somehow get Fileobj must implement read.
Thanks!
I also faced the same issue and spent some time understanding its nature.
The reason you are getting ERROR - Fileobj must implement read is that file_obj expects the stream object itself, not stream.getvalue().
Also, pandas.to_csv has some encoding issues; you can find the details here:
https://github.com/pandas-dev/pandas/issues/23854
The workaround is to write bytes using the load_bytes function from S3Hook:
with io.BytesIO() as buffer:
    buffer.write(
        bytes(
            df.to_csv(None, sep="|", quotechar='"'),
            encoding="utf-8"
        )
    )
    hook.load_bytes(
        buffer.getvalue(),
        bucket_name="bucket_name",
        key="keyname.csv",
        replace=True
    )
I am still looking for a better solution though
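As a side note, the "Fileobj must implement read" error can be reproduced outside Airflow: load_file_obj wants an object with a read() method, while getvalue() returns a plain str. A quick check:

```python
import io

stream = io.StringIO()
stream.write("col_a,col_b\n1,2\n")
stream.seek(0)  # rewind so read() would start at the beginning

# The stream itself is file-like: it has a read() method
print(hasattr(stream, "read"))             # True
# getvalue() returns a plain str, which does not
print(hasattr(stream.getvalue(), "read"))  # False
```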
You can also do it with the load_string method:
df = snow_hook.get_pandas_df(sql=sql)
csv_data_as_str = df.to_csv(index=False)
s3_hook.load_string(string_data=csv_data_as_str, key=s3_key, bucket_name=s3_bucket, replace=True)
I am trying to acquire a file from a url on the web and then open that file for use in an application I’m making in python on AWS Lambda. There doesn’t seem to be a way for me to acquire the file in the form I need it, which I believe to be an os.Pathlike object.
Here is what I am trying now, which doesn’t work since requests.get returns a response not path. I’m posting from a phone right now so I cannot use code tags. Apologies.
filename = requests.get("url.com/file.txt")
f = open(filename, 'rb')
I have also tried a urlparse and a urllib urlretrieve on the url but that does not return a pathlike object either. Note that I don’t believe I can just use wget or something on the shell level as I am using AWS lambda.
import requests
url = 'http://url.com/file.txt'
r = requests.get(url, allow_redirects=True)
f = open(r, 'rb')
When doing this kind of operation, it's always good to inspect the entire response of the request. I usually use the __dict__ attribute; it works quite often:
print(response.__dict__)
On the ones I have done, there was a _content field in the response object holding the file bytes (response.content is the public accessor for the same data). Then you can simply use the io module to read this file:
file = io.BytesIO(response._content)
This can then be used as a file, just like an object returned by the open() function.
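For instance (the bytes below just stand in for a real download; a genuine response.content would hold the full file):

```python
import io

# Stand-in for response.content / response._content from requests:
# the first bytes of a typical MP3 file with an ID3 tag
payload = b"ID3\x03\x00\x00\x00\x00\x00\x00"

file_like = io.BytesIO(payload)
print(file_like.read(3))  # b'ID3'
file_like.seek(0)
print(file_like.read() == payload)  # True
```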
I am working with the cTrader trading platform.
My project is written in Python 3 on Tornado.
I have an issue decoding the protobuf messages from the report API events.
Below I will list everything I have achieved and where the problem is.
1. First, cTrader has a REST API for reports,
so I got the .proto file and generated the bindings for Python 3.
The generated file is called cTraderReportingMessages5_9_pb2.
From the REST report API I get the protobuf message and am able to decode it the following way, because I know which descriptor to pass for decoding:
from models import cTraderReportingMessages5_9_pb2
from protobuf_to_dict import protobuf_to_dict
raw_response = yield async_client.fetch(base_url, method=method, body=form_data, headers=headers)
decoded_response = cTraderReportingMessages5_9_pb2._reflection.ParseMessage(descriptors[endpoint]['decode'], raw_response.body)
descriptors[endpoint]['decode'] is my descriptor; I know exactly which one to pass to decode the message.
The content of cTraderReportingMessages5_9_pb2 (the .proto file generated for Python 3) is too big to paste here:
https://ufile.io/2p2d6
So up to this point, using the REST API and knowing exactly which descriptor to pass, I am able to decode the protobuf message and move forward.
2. Now the issue I face.
Connecting with Python 3 to the tunnel on 127.0.0.:5672,
I am listening for events and receiving this kind of data back:
b'\x08\x00\x12\x88\x01\x08\xda\xc9\x06\x10\xb6\xc9\x03\x18\xa1\x8b\xb8\x01 \x00*\x00:\x00B\x00J\x00R\x00Z\x00b\x00j\x00r\x00z\x00\x80\x01\xe9\x9b\x8c\xb5\x99-\x90\x01d\x98\x01\xea\x9b\x8c\xb5\x99-\xa2\x01\x00\xaa\x01\x00\xb0\x01\x00\xb8\x01\x01\xc0\x01\x00\xd1\x01\x00\x00\x00\x00\x00\x00\x00\x00\xd9\x01\x00\x00\x00\x00\x00\x00\x00\x00\xe1\x01\x00\x00\x00\x00\x00\x00\x00\x00\xea\x01\x00\xf0\x01\x01\xf8\x01\x00\x80\x02\x00\x88\x02\x00\x90\x02\x00\x98\x02\x00\xa8\x02\x00\xb0\x02\x00\xb8\x02\x90N\xc0\x02\x00\xc8\x02\x00'
As per the recommendation I got, I need to use the same .proto file generated for Python as in step 1 and decode the message, but I have had no success, because I don't know which descriptor needs to be passed.
So in step 1 this was working perfectly:
decoded_response = cTraderReportingMessages5_9_pb2._reflection.ParseMessage(descriptors[endpoint]['decode'], raw_response.body)
But in the second step I cannot decode the message the same way. What am I missing, and how can I decode the message using the same .proto file?
I finally found a workaround by myself; it may be a primitive way, but it is the only thing that worked for me.
According to the answer I got from the providers, I need to use the same .proto file for both situations.
SOLUTION:
1. Build a list of all the descriptors from the .proto file (the file generated for Python 3 is too big to paste here: https://ufile.io/2p2d6):
descriptors = [cTraderReportingMessages5_9_pb2.descriptor_1, cTraderReportingMessages5_9_pb2.descriptor_2]
2. Loop through the list and pass the descriptors one by one:
for d in descriptors:
    decoded_response = cTraderReportingMessages5_9_pb2._reflection.ParseMessage(d, raw_response.body)
3. Check whether decoded_response is not blank:
if decoded_response:
    # descriptor was found; the response is decoded
else:
    # no matching descriptor
4. After decoding the response, parse it into a dict:
from protobuf_to_dict import protobuf_to_dict
decoded_response_to_dict = protobuf_to_dict(decoded_response)
This solution, which I spent weeks on, finally worked.
Apologies in advance since I'm new to Django (and I've also had to freshen up my Python skills). I'm trying to make a simple example of uploading a file through a form and then printing the rows in my terminal (as a test before doing some actual processing). My views.py contains the following:
def upload_csv(request):
    if "GET" == request.method:
        return render(request, "portal/csvupload.html")
    csv_file = request.FILES["csv_file"]
    handle_files(csv_file)
    return HttpResponseRedirect(reverse("portal:csvupload"))

def handle_files(csvfile):
    csv_reader = csv.reader(csvfile)
    for line in csv_reader:
        print(line)
Now this returns an error message saying "expected str, bytes or os.PathLike object, not InMemoryUploadedFile", and I'm unsure what's wrong with the code based on the error message. From a Python perspective it looks fine, I think, but perhaps it's something to do with the redirect? Appreciate all answers.
request.FILES["csv_file"] is returning an InMemoryUploadedFile object and csv.reader does not know how to handle such an object. I believe you need to call the object's read method: handle_files(csv_file.read()). Note the warning in the documentation: "Be careful with this method: if the uploaded file is huge it can overwhelm your system if you try to read it into memory. You’ll probably want to use chunks() instead; see below."
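As a concrete sketch of that approach: decode the uploaded bytes and wrap them in a text stream so csv.reader can iterate over lines (the raw bytes below simulate request.FILES["csv_file"].read(), assuming UTF-8 content):

```python
import csv
import io

# Simulated upload; in the view this would be request.FILES["csv_file"].read()
raw = b"name,age\nalice,30\nbob,25\n"

rows = list(csv.reader(io.StringIO(raw.decode("utf-8"))))
print(rows)  # [['name', 'age'], ['alice', '30'], ['bob', '25']]
```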