I have an upload form that posts directly to S3:
<form action="https://test.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
<input type="file" name="file" id="id_file" />
<input type="submit" value="Upload to Amazon S3" name="upload">
</form>
When the form is submitted to S3, I also need to create an object in my db and get the id of the object. How would I do this (without redirecting to my view)? Thank you.
You could submit the form into a hidden iframe and use JavaScript to search the returned page for the id you are looking for.
You could also perform the POST from a server-side script, e.g. PHP with the cURL library. You could grab all the data returned there, extract what you are looking for, and add what you need to your database.
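For instance, here is a minimal sketch of that server-side approach in Python (using the requests library instead of PHP/cURL). The bucket URL and field name come from the form above; save_to_db is a hypothetical helper standing in for your own database layer, and a real bucket will usually also require policy/signature fields:
import requests

def upload_and_record(file_path):
    with open(file_path, "rb") as f:
        resp = requests.post(
            "https://test.s3.amazonaws.com/",  # bucket URL from the form
            files={"file": f},                 # same field name as the form
        )
    resp.raise_for_status()
    # Inspect resp.headers / resp.text for anything S3 returns, then
    # create the row in your own database and hand back its id.
    return save_to_db(file_path)               # hypothetical DB helper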
I have an HTML page with a form to enter your email:
<!DOCTYPE html>
<html>
<body>
<h1>The form novalidate attribute</h1>
<p>The novalidate attribute specifies that the form data should not be validated when submitted.</p>
<form action="/action_page.php" novalidate>
<label for="email">Enter your email:</label>
<input type="email" id="email" name="email" required><br><br>
<input type="submit" value="Submit">
</form>
<p><strong>Note:</strong> The novalidate attribute of the form tag is not supported in Safari 10 (or earlier).</p>
</body>
</html>
Is there a way to run a Python script when the user enters their email address, taking the input and running it through this function:
def send_email(form_input):
    email_address = form_input
    print(email_address)
So basically, when a user enters an email in the form, it takes the value and runs it through the send_email function. I am new to using Python with HTML, so the syntax is confusing me. Any ideas or suggestions as to how to put it in the HTML file?
No, it is not possible in HTML alone.
If you use PHP as the backend script, read this: php form handling
If you want to use Python as the backend script, you need a server-side request, e.g. via AJAX (see: ajax intro) or a regular form POST. In either case you need to run a web server that takes the request and responds to it.
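For instance, here is a minimal sketch using Flask (an assumption - any Python web framework works); the /action_page route replaces /action_page.php in the form's action, and note the form would also need method="post":
from flask import Flask, request

app = Flask(__name__)

def send_email(form_input):
    email_address = form_input
    print(email_address)

@app.route("/action_page", methods=["POST"])
def handle_form():
    # "email" matches the name attribute of the input field
    send_email(request.form["email"])
    return "Thanks!"

if __name__ == "__main__":
    app.run()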
Say I wish to scrape products on this page (http://shop.coles.com.au/online/national/bread-bakery/fresh/bread#pageNumber=2&currentPageSize=20).
But the products are loaded by a POST request. A lot of posts here suggest simulating the request to fetch dynamic content, but in my case the form data is unknown to me, i.e. catalogId, categoryId.
I'm wondering, is it possible to get the response after the AJAX call has finished?
You can get the catalogId and other parameter values needed to make the POST request from the form with id="search":
<form id="search" name="search" action="http://shop.coles.com.au/online/SearchDisplay?pageView=image&catalogId=10576&beginIndex=0&langId=-1&storeId=10601" method="get" role="search">
<input type="hidden" name="storeId" value="10601" id="WC_CachedHeaderDisplay_FormInput_storeId_In_CatalogSearchForm_1">
<input type="hidden" name="catalogId" value="10576" id="WC_CachedHeaderDisplay_FormInput_catalogId_In_CatalogSearchForm_1">
<input type="hidden" name="langId" value="-1" id="WC_CachedHeaderDisplay_FormInput_langId_In_CatalogSearchForm_1">
<input type="hidden" name="beginIndex" value="0" id="WC_CachedHeaderDisplay_FormInput_beginIndex_In_CatalogSearchForm_1">
<input type="hidden" name="browseView" value="false" id="WC_CachedHeaderDisplay_FormInput_browseView_In_CatalogSearchForm_1">
<input type="hidden" name="searchSource" value="Q" id="WC_CachedHeaderDisplay_FormInput_searchSource_In_CatalogSearchForm_1">
...
</form>
Use Scrapy's FormRequest to submit this form.
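For example, here is a hedged sketch of a spider doing this; the spider name, callback, and start URL details are assumptions:
import scrapy

class ColesSpider(scrapy.Spider):
    name = "coles"
    start_urls = [
        "http://shop.coles.com.au/online/national/bread-bakery/fresh/bread"
    ]

    def parse(self, response):
        # from_response() picks up the hidden inputs (storeId, catalogId,
        # langId, beginIndex, ...) from the form named "search".
        yield scrapy.FormRequest.from_response(
            response,
            formname="search",
            callback=self.parse_products,
        )

    def parse_products(self, response):
        # extract product data from the search results here
        pass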
I'm wondering, is it possible to get the response after the AJAX call has finished?
Scrapy is not a browser - it does not make additional AJAX requests to load the page, and there is nothing built in to execute JavaScript. You may look into using a real browser and solving it at a higher level - look into the selenium package. There is also the related scrapy-splash project.
See also:
selenium with scrapy for dynamic page
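Here is a hedged sketch of the selenium route; the driver choice and the CSS selector for a product tile are assumptions:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
driver.get("http://shop.coles.com.au/online/national/bread-bakery/fresh/bread")

# Block until the AJAX-loaded product tiles appear in the DOM;
# ".product" is a guessed selector - inspect the page for the real one.
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, ".product"))
)
html = driver.page_source  # now includes the AJAX-rendered content
driver.quit()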
Using the Tornado library in Python, I have come across a very unusual error. It seems that when I decorate my file upload handler with @tornado.web.stream_request_body, the webserver throws the error:
WARNING:tornado.general:403 POST /upload (ip-address): '_xsrf' argument missing from POST
WARNING:tornado.access:403 POST /upload (ip-address) 1.44ms
The code governing the upload is as follows:
@tornado.web.stream_request_body
class Upload(BaseHandler):
    def prepare(self):
        print self.request.headers

    def data_received(self, chunk):
        print chunk

    @tornado.web.authenticated
    def post(self):
        self.redirect("/")
where my BaseHandler is a web.RequestHandler subclass with various helper functions (retrieving user info from cookies and whatnot).
Within my HTML template, I have the appropriate xsrf function call as seen here:
<form enctype="multipart/form-data" action="/upload" method="post" id="upload_form" class="form-upload">
{% raw xsrf_form_html() %}
<input type="file" name="upFile" required/>
<button class="btn btn-lg btn-primary btn-block-submit" type="submit">Submit</button>
</form>
and is generating the proper xsrf input within the browser:
<form enctype="multipart/form-data" action="/upload" method="post" id="upload_form" class="form-upload">
<input type="hidden" name="_xsrf" value="2|787b7c6e|4a82eabcd1c253fcabc9cac1e374e913|1430160367"/>
<input type="file" name="upFile" required/>
<button class="btn btn-lg btn-primary btn-block-submit" type="submit">Submit</button>
</form>
When I turn off xsrf_cookies in the webserver settings, all is well and everything functions as normal. However, I feel that this is not ideal.
With xsrf_cookies set to False, given a text file called "stuff.txt" with a body of "testfile", the output is:
------WebKitFormBoundary4iHkIqUNgfqVErRB
Content-Disposition: form-data; name="_xsrf"
2|787b7c6e|4a82eabcd1c253fcabc9cac1e374e913|1430160367
------WebKitFormBoundary4iHkIqUNgfqVErRB
Content-Disposition: form-data; name="upFile"; filename="stuff.txt"
Content-Type: text/plain
testfile
------WebKitFormBoundary4iHkIqUNgfqVErRB--
From that output, my guess is that the xsrf value is being captured by the stream_request_body and not passed to the appropriate xsrf validation class.
Any help on this would be greatly appreciated. Thank you in advance!
Tornado does not currently (as of version 4.1) support streaming multi-part uploads. This means that uploads you wish to stream must be simple PUTs, instead of a POST that mixes the uploaded data with other form fields like _xsrf. To use XSRF protection in this scenario you must pass the XSRF token via an HTTP header (X-Xsrf-Token) instead of via a form field. Unfortunately this is incompatible with non-javascript web form uploads; you must have a client capable of setting arbitrary HTTP headers.
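A hedged sketch of what the PUT-based alternative might look like, reusing the question's BaseHandler; the destination path and handler name are assumptions. The raw body streams through data_received() with no multipart parsing, so nothing swallows the token:
import tornado.web

@tornado.web.stream_request_body
class StreamUpload(BaseHandler):
    def prepare(self):
        self.out = open("/tmp/upload.bin", "wb")  # placeholder destination

    def data_received(self, chunk):
        self.out.write(chunk)  # raw bytes, no multipart parsing

    @tornado.web.authenticated
    def put(self):
        self.out.close()
        self.finish("upload complete")
On the client side the token goes into the request headers (Tornado's check_xsrf_token also accepts it as X-Xsrftoken) rather than the body, which is why a plain HTML form cannot do this.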
I need to allow users to upload content directly to Amazon S3. This form works:
<form action="https://me.s3.amazonaws.com/" method="post" enctype='multipart/form-data' class="upload-form">{% csrf_token %}
<input type="hidden" name="key" value="videos/test.jpg">
<input type="hidden" name="AWSAccessKeyId" value="<access_key>">
<input type="hidden" name="acl" value="public-read">
<input type="hidden" name="policy" value="{{policy}}">
<input type="hidden" name="signature" value="{{signature}}">
<input type="hidden" name="Content-Type" value="image/jpeg">
<input type="submit" value="Upload" name="upload">
</form>
And in the view function, I define policy and signature. However, I need to pass two variables to the form -- Content-Type and Key -- which will only be known when the user presses the upload button. Thus, I need to pass these two variables to the template after the POST request but before the redirection to Amazon.
It was suggested that I use urllib to do this. I have tried doing so in the following way, but I keep getting an inscrutable HTTPError. This is what I currently have:
import urllib
import urllib2

if request.method == 'POST':
    # define the variables
    urllib2.urlopen("https://me.amazonaws.com/",
                    urllib.urlencode([('key', 'videos/test3.jpg'),
                                      ('AWSAccessKeyId', '<access_key>'),
                                      ('acl', 'public-read'),
                                      ('policy', policy),
                                      ('signature', signature),
                                      ('Content-Type', content_type),
                                      ('file', file)]))
I have also tried hardcoding all the values instead of using variables but still get the same error. What am I doing incorrectly and what do I need to change to be able to redirect the form to Amazon, so the content can be uploaded directly to Amazon?
I recommend watching the form do its work with Firebug enabled and set to the Net tab.
After completing the POST, click its [+] icon to expand, study the Headers, POST, Response tabs to see what you are missing and/or doing wrong.
Next separate this script from Django and put into a standalone file. Add one thing at a time to it and retest until it works. The lines below should increase visibility into your script.
import httplib

# print request/response details to stdout for every HTTP connection
httplib.HTTPConnection.debuglevel = 1
I tried poking around with urllib myself, but as I don't have an account on AWS, I didn't get farther than a 400 Bad Request response. That seems like a good sign; probably I just need valid host and key params etc.
I'm trying to upload multiple files in a form to the BlobStore.
Form:
<form action="{{upload_url}}" method="POST" enctype="multipart/form-data">
<label>Key Name</label><input type="text" name="key_name" size="50"><br/>
<label>name</label><input type="text" name="name" size="50"><br/>
<label>image</label><input type="file" name="image" size="50"><br/>
<label>thumb</label><input type="file" name="thumb" size="50"><br/>
<input type="submit" name="submit" value="Submit">
</form>
I'm then trying to fetch the BlobInfo objects for each of those files uploaded:
def post(self):
    image_upload_files = self.get_uploads('image')
    thumb_upload_files = self.get_uploads('thumb')
    image_blob_info = image_upload_files[0]
    thumb_blob_info = thumb_upload_files[0]
I'm seeing some weird behavior. Both files are making it into the BlobStore, but I cannot figure out how to get the Keys so that I can store those on another Entity. The code above manages to get the key for image_blob_info, but not thumb_blob_info. I don't understand how to use get_uploads. I want to pass multiple files through the form and then fetch them by name so I can store them in the appropriate BlobReferenceProperties on another Entity.
Each file needs its own unique upload url, so my guess is that something wacky is happening when multiple files are posted to the same url.
The best solution for supporting multiple file uploads is described in Nick Johnson's blog post:
http://blog.notdot.net/2010/04/Implementing-a-dropbox-service-with-the-Blobstore-API-part-3-Multiple-upload-support
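As a rough illustration of the one-URL-per-upload pattern, here is a hedged sketch using the (Python 2) webapp Blobstore helpers; paths and handler names are assumptions:
from google.appengine.ext import blobstore, webapp
from google.appengine.ext.webapp import blobstore_handlers

class UploadFormHandler(webapp.RequestHandler):
    def get(self):
        # each generated URL is good for exactly one upload
        upload_url = blobstore.create_upload_url('/upload')
        self.response.out.write(
            '<form action="%s" method="POST" enctype="multipart/form-data">'
            '<input type="file" name="image">'
            '<input type="submit" value="Submit"></form>' % upload_url)

class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        blob_info = self.get_uploads('image')[0]  # BlobInfo for the file
        # store blob_info.key() on your entity here
        self.redirect('/')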
You could post the files to the same name, followed by [], which will post an array:
<form action="{{upload_url}}" method="POST" enctype="multipart/form-data">
<label>Key Name</label><input type="text" name="key_name" size="50"><br/>
<label>name</label><input type="text" name="name" size="50"><br/>
<label>image</label><input type="file" name="files[]" size="50"><br/>
<label>thumb</label><input type="file" name="files[]" size="50"><br/>
<input type="submit" name="submit" value="Submit">
</form>
Then in your form handler, you can do something like this (depending on your web framework):
for uploaded_file in request.FILES.getlist('files'):
    # do something with uploaded_file
Using the latest version of Plupload, I was able to get the UploadQueue to work with GAE with this bit of code. Note: it is CoffeeScript, but it should be easy to convert back to JavaScript if you really need to. It assumes you get a bit of JSON back from your server in the form {url: "gae generated url"}.
$("#fileUploader").pluploadQueue
runtimes : 'html5,html4'
use_query_string : false
max_file_size : '3mb'
multipart: true
unique_names : true
multiple_queues : true
filters : [{title : "Image files", extensions : "jpg,gif,png"}]
preinit:
UploadFile: (up, file) ->
$.ajax
url: '/api/upload/url'
async: false
success: (data) ->
up.settings.url = data.url
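Note the design choice in the preinit UploadFile hook: because each Blobstore upload URL is single-use, the synchronous AJAX call (async: false) fetches a fresh URL from the server and swaps it into up.settings.url right before each individual file is sent.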