How to communicate between 2 separate Django projects? - python

I have two separate Django projects and want to host them on 2 different servers. Is there a way for them to communicate with each other? In one project I need to be able to upload a file and send it to the second project for processing.

It's called service discovery. Read more about it here: https://microservices.io/patterns/server-side-discovery.html

Basically, you can just use the Python requests library:
import requests
url = 'http://second-project.example.com/upload/'  # placeholder: the upload endpoint on the second project
files = {'upload_file': open('file.txt', 'rb')}
values = {'DB': 'photcat', 'OUT': 'csv', 'SHORT': 'short'}
r = requests.post(url, files=files, data=values)
The response from the other server can be, for example, JSON with URLs to the files, or something similar.
Check python requests file upload.
But be careful: you should send some hashes/keys or something so other people can't imitate the behavior and send you false data. That of course depends on what you need it for.
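On the receiving project, a hypothetical Django view for accepting the upload might look roughly like this (the view name, field name, and shared-secret header are illustrative assumptions, not something from the question):
# views.py on the second (processing) project -- rough sketch, names are made up
from django.http import JsonResponse, HttpResponseForbidden
from django.views.decorators.csrf import csrf_exempt

SHARED_SECRET = "change-me"  # both projects would need to agree on this value

@csrf_exempt
def receive_upload(request):
    # Reject callers that don't present the agreed-upon key.
    if request.headers.get("X-Api-Key") != SHARED_SECRET:
        return HttpResponseForbidden()
    uploaded = request.FILES.get("upload_file")
    if uploaded is None:
        return JsonResponse({"error": "no file provided"}, status=400)
    # Process or store the file here, then tell the sender where to find the result.
    return JsonResponse({"status": "ok", "name": uploaded.name})
On the sending side you would then pass the same key, e.g. requests.post(url, files=files, data=values, headers={'X-Api-Key': SHARED_SECRET}).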

Related

Python sending POST requests/ multipart/form-data

I'm working on an API connection at my work. I've already made some GET and PUT requests, but now I have a problem with POST. The API documentation is here. Here is the code I tested, but I get a 400 Bad Request:
import requests
files = {'files': ('fv.pdf', open(r"C:\python\API\fv.pdf", 'rb'))}
data = {"order_documents":[{'file_name':"fv.pdf", 'type_code':'CUSTOMER_INVOICE' }]}
header = {
'Authorization': '###########################',
}
response = requests.post("https://######.com/api/orders/40100476277994-A/documents", headers=header, files = files, data = data)
print(response.status_code)
print(response.url)
Does anyone have any idea how I can handle this?
It looks like you are missing the order_documents parameter; it needs to be an array and also needs to be called order_documents.
Try changing your data variable into:
data = {"order_documents": [ {'file_name':"fv.pdf", 'type_code':'CUSTOMER_INVOICE' } ] }
The API expects files as the parameter name and your dictionary sends file to the server. The parameter name files that you give to session.post is just for requests library and not the actual parameter sent to the server.
The API also expects multiple files in an array, so you need to change your files object.
files = [
    ('files', ('fv.pdf', open(r"C:\python\API\fv.pdf", 'rb'))),
]
Also, I don't think you need to use requests.Session(), just use requests.post(), unless you're planning on using the session object multiple times for subsequent requests.
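Putting those suggestions together, a sketch of the corrected call might look like the following (the endpoint and Authorization token are the placeholders from the question; whether the API wants order_documents sent as plain form fields or as a JSON-encoded string is an assumption you would have to verify against its documentation):
import requests

# files as a list of ('files', (filename, fileobj)) tuples,
# so the server-side parameter is named "files"
files = [
    ('files', ('fv.pdf', open(r"C:\python\API\fv.pdf", 'rb'))),
]

# order_documents as a list of dictionaries, as suggested above.
# If the API still returns 400, it may expect this field JSON-encoded
# (json.dumps(...)) -- check the API documentation.
data = {"order_documents": [{'file_name': "fv.pdf", 'type_code': 'CUSTOMER_INVOICE'}]}

headers = {'Authorization': '###########################'}

response = requests.post(
    "https://######.com/api/orders/40100476277994-A/documents",
    headers=headers, files=files, data=data,
)
print(response.status_code, response.text)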

Creating a CKAN package/dataset with resources using ckanapi and Python

CKAN provides the ckanapi package for accessing the CKAN API via Python or the command line.
I can use it to download metadata, create resources, etc. But I can't create a package and upload resources to it in a single API call. (A package is also referred to as a dataset.)
Internally, ckanapi scans all keys, moving any file-like parameters into a separate dict that it passes to requests' session.post(files=...) parameter.
This is the closest I can get, but CKAN returns an HTTP 500 error (copied from this guide to requests):
with ckanapi.RemoteCKAN('http://myckan.example.com', apikey='real-key', user_agent=ua, username='joe', password='pwd') as ckan:
    ckan.action.package_create(name='joe_data',
                               resources=('report.xls',
                                          open('/path/to/file.xlsx', 'rb'),
                                          'application/vnd.ms-excel',
                                          {'Expires': '0'}))
I've also tried resources=open('path/file'), files=open('file'), shorter or longer tuples, but get the same 500 error.
The requests documentation says:
:param files: (optional) Dictionary of ``'filename': file-like-objects``
for multipart encoding upload.
I can't pass ckanapi resources={'filename': open('file')} as ckanapi doesn't detect the file, attempts to pass it to requests as a normal parameter, and fails ("BufferedReader is not JSON serializable" as it attempts to make the file a POST parameter). I get the same if I try to pass a list of files. But the API is able to create a package and add a number of resources in a single call.
So how do I create a package and multiple resources with a single ckanapi call?
I was curious about this and thought I'd put something together to test it. Unfortunately I haven't played with the CLI you mentioned. But I hope this will help you and others stumbling across this.
I am not positive, but I'm guessing your resource dict isn't formatted properly. The resources parameter needs to be a list of dictionaries.
Here's a Ruby script to do the single API call insert (my preferred language at this time):
# Ruby script to create a package and resource in one api call.
# You can run this in https://repl.it/languages/ruby
# Don't forget to update URLs and API key.
require 'csv'
require 'json'
require 'net/http'
hash_to_json = {
  "title" => 'test1',
  "name" => 'test1',
  "owner_org" => 'bbb9682e-b58c-4826-bf4b-b161581056be',
  "resources" => [
    {
      "url" => 'http://www.resource_domain.com/doc.kml'
    }
  ]
}.to_json
uri = URI('http://ckan_app_domain.com:5000/api/3/action/package_create')
Net::HTTP.start(uri.host, uri.port) do |http|
  request = Net::HTTP::Post.new uri
  request['Authorization'] = 'user-api-key'
  request.body = hash_to_json
  response = http.request request
  puts response.body
end
And here's a plain Python script (Python 2, using urllib2) to do the same thing (thank you, CKAN docs, for the template I modified):
#!/usr/bin/env python
import urllib2
import urllib
import json
import pprint
# Put the details of the dataset we're going to create into a dict.
dataset_dict = {
    'name': 'my_dataset_name',
    'notes': 'A long description of my dataset',
    'owner_org': 'bbb9682e-b58c-4826-bf4b-b161581056be',
    'resources': [
        {
            'url': 'example.com'
        }
    ]
}
# Use the json module to dump the dictionary to a string for posting.
data_string = urllib.quote(json.dumps(dataset_dict))
# We'll use the package_create function to create a new dataset.
request = urllib2.Request(
    'http://ckan_app_domain.com:5000/api/3/action/package_create')
# Creating a dataset requires an authorization header.
# Replace *** with your API key, from your user account on the CKAN site
# that you're creating the dataset on.
request.add_header('Authorization', 'user-api-key')
# Make the HTTP request.
response = urllib2.urlopen(request, data_string)
assert response.code == 200
# Use the json module to load CKAN's response into a dictionary.
response_dict = json.loads(response.read())
assert response_dict['success'] is True
# package_create returns the created package as its result.
created_package = response_dict['result']
pprint.pprint(created_package)
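Translating the same idea back into ckanapi, a sketch like the one below may work when the resources point at URLs rather than uploaded files, which is what the raw API calls above do (the host, API key and owner_org are placeholders):
import ckanapi

# Sketch only: host, API key and owner_org are placeholders.
with ckanapi.RemoteCKAN('http://ckan_app_domain.com:5000', apikey='user-api-key') as ckan:
    created = ckan.action.package_create(
        name='my_dataset_name',
        notes='A long description of my dataset',
        owner_org='bbb9682e-b58c-4826-bf4b-b161581056be',
        # resources as a list of dictionaries, mirroring the raw API calls above
        resources=[{'url': 'http://www.resource_domain.com/doc.kml'}],
    )
    print(created['name'])
Uploading actual file contents into those resources would, as far as I know, still require separate resource_create calls with an upload= file parameter.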

How to upload a binary/video file using Python http.client PUT method?

I am communicating with an API using http.client in Python 3.6.2.
In order to upload a file, it requires a three-stage process.
I have managed to talk to the server successfully using POST methods, and it returns data as I expect.
However, the stage that requires the actual file to be uploaded is a PUT method, and I cannot figure out how to write the code so that it includes a pointer to the actual file on my storage; the file is an mp4 video file.
Here is a snippet of the code with my noob annotations :)
#define connection as HTTPS and define URL
uploadstep2 = http.client.HTTPSConnection("grabyo-prod.s3-accelerate.amazonaws.com")
#define headers
headers = {
    'accept': "application/json",
    'content-type': "application/x-www-form-urlencoded"
}
#define the structure of the request and send it.
#Here it is a PUT request to the unique URL as defined above with the correct file and headers.
uploadstep2.request("PUT", myUniqueUploadUrl, body="C:\Test.mp4", headers=headers)
#get the response from the server
uploadstep2response = uploadstep2.getresponse()
#read the data from the response and put to a usable variable
step2responsedata = uploadstep2response.read()
The response I am getting back at this stage is an
"Error 400 Bad Request - Could not obtain the file information."
I am certain this relates to the body="C:\Test.mp4" section of the code.
Can you please advise how I can correctly reference a file within the PUT method?
Thanks in advance
uploadstep2.request("PUT", myUniqueUploadUrl, body="C:\Test.mp4", headers=headers)
will put the actual string "C:\Test.mp4" in the body of your request, not the content of the file named "C:\Test.mp4" as you expect.
You need to open the file and read its content, then pass that as the body. Or stream it, but AFAIK http.client does not support that, and since your file seems to be a video, it is potentially huge and would use plenty of RAM for no good reason.
My suggestion would be to use requests, which is a much better library for this kind of thing:
import requests
with open(r'C:\Test.mp4', 'rb') as finput:
    response = requests.put('https://grabyo-prod.s3-accelerate.amazonaws.com/youruploadpath', data=finput)
print(response.json())
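If you would rather stay with http.client and the file is small enough to read into memory, a minimal sketch of the same idea could look like this (the upload path and content type are placeholder assumptions):
import http.client

conn = http.client.HTTPSConnection("grabyo-prod.s3-accelerate.amazonaws.com")
# Read the file's bytes and send them as the request body,
# instead of sending the path string itself. Note this loads the
# whole video into memory, which is the drawback mentioned above.
with open(r'C:\Test.mp4', 'rb') as f:
    conn.request("PUT", "/your/upload/path", body=f.read(),
                 headers={'content-type': 'application/octet-stream'})
response = conn.getresponse()
print(response.status, response.read())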
I do not know if it is useful for you, but you can try to send a POST request with the requests module:
import requests
url = ""
data = {'title':'metadata','timeDuration':120}
mp3_f = open('/path/your_file.mp3', 'rb')
files = {'messageFile': mp3_f}
req = requests.post(url, files=files, json=data)
print (req.status_code)
print (req.content)
Hope it helps.

Python file upload from url using requests library

I want to upload a file to a URL. The file I want to upload is not on my computer, but I have the URL of the file. I want to upload it using the requests library. So, I want to do something like this:
url = 'http://httpbin.org/post'
files = {'file': open('report.xls', 'rb')}
r = requests.post(url, files=files)
But the only difference is that the file report.xls comes from some URL and is not on my computer.
The only way to do this is to download the body of the URL so you can upload it.
The problem is that a form that takes a file is expecting the body of the file in the HTTP POST. Someone could write a form that takes a URL instead, and does the fetching on its own… but that would be a different form and request than the one that takes a file (or, maybe, the same form, with an optional file and an optional URL).
You don't have to download it and save it to a file, of course. You can just download it into memory:
urlsrc = 'http://example.com/source'
rsrc = requests.get(urlsrc)
urldst = 'http://example.com/dest'
rdst = requests.post(urldst, files={'file': rsrc.content})
Of course, in some cases you might also want to forward along the filename, or some other headers, like the Content-Type. Or, for huge files, you might want to stream from one server to the other without downloading and then uploading the whole file at once. You'll have to do any such things manually, but almost everything is easy with requests, and explained well in the docs.*
* Well, that last example isn't quite easy… you have to get the raw socket-wrappers off the requests and read and write, and make sure you don't deadlock, and so on…
There is an example in the documentation that may suit you. A file-like object can be used as a stream input for a POST request. Combine this with a stream response for your GET (passing stream=True), or one of the other options documented here.
This allows you to do a POST from another GET without buffering the entire payload locally. In the worst case, you may have to write a file-like class as "glue code", allowing you to pass your glue object to the POST that in turn reads from the GET response.
(This is similar to a documented technique using the Node.js request module.)
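For illustration, a minimal sketch of that streaming approach with requests (both URLs are placeholders; note the upload here is sent as a raw request body, not as a multipart form field):
import requests

src_url = "http://example.com/source-file"  # placeholder: where the file lives
dst_url = "http://example.com/upload"       # placeholder: where it should go

# Stream the download so the body is not read into memory up front.
rsrc = requests.get(src_url, stream=True)
rsrc.raise_for_status()

# rsrc.raw is a file-like object; passing it as `data` makes requests
# stream the upload instead of buffering the whole payload locally.
rdst = requests.post(dst_url, data=rsrc.raw)
print(rdst.status_code)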
import requests
img_url = "http://...."  # source image URL
url = "http://...."      # destination upload endpoint
res_src = requests.get(img_url)
payload = {}
files = [
    ('files', ('image_name.jpg', res_src.content, 'image/jpeg'))
]
headers = {"token": "******-*****-****-***-******"}
response = requests.request("POST", url, headers=headers, data=payload, files=files)
print(response.text)
The above code is working for me.

Python 3 script to upload a file to a REST URL (multipart request)

I am fairly new to Python and am using Python 3.2. I am trying to write a Python script that will pick a file from the user's machine (such as an image file) and submit it to a server using a REST-based invocation. The Python script should invoke a REST URL and submit the file when the script is called.
This is similar to the multipart POST that is done by a browser when uploading a file; but here I want to do it through a Python script.
If possible, I do not want to add any external libraries to Python and would like to keep it a fairly simple script using the core Python install.
Can someone guide me, or share a script example that achieves what I want?
The requests library is what you need. You can install it with pip install requests.
http://docs.python-requests.org/en/latest/user/quickstart/#post-a-multipart-encoded-file
>>> url = 'http://httpbin.org/post'
>>> files = {'file': open('report.xls', 'rb')}
>>> r = requests.post(url, files=files)
A RESTful way to upload an image would be to use a PUT request if you know what the image URL is:
#!/usr/bin/env python3
import http.client
h = http.client.HTTPConnection('example.com')
h.request('PUT', '/file/pic.jpg', open('pic.jpg', 'rb'))
print(h.getresponse().read())
upload_docs.py contains an example of how to upload a file as multipart/form-data with basic HTTP authentication. It supports both Python 2.x and Python 3.
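Since the question asks to avoid external libraries, here is a rough sketch of a multipart/form-data upload built by hand with only the standard library (the URL, field name and content type are placeholder assumptions):
#!/usr/bin/env python3
import uuid
import urllib.request

url = 'http://httpbin.org/post'  # placeholder endpoint
field_name = 'file'              # placeholder form field name
filename = 'report.xls'
boundary = uuid.uuid4().hex

with open(filename, 'rb') as f:
    file_bytes = f.read()

# Assemble the multipart/form-data body manually.
body = b''.join([
    b'--' + boundary.encode() + b'\r\n',
    ('Content-Disposition: form-data; name="%s"; filename="%s"\r\n'
     % (field_name, filename)).encode(),
    b'Content-Type: application/octet-stream\r\n\r\n',
    file_bytes,
    b'\r\n--' + boundary.encode() + b'--\r\n',
])

request = urllib.request.Request(url, data=body)  # passing data= makes this a POST
request.add_header('Content-Type', 'multipart/form-data; boundary=%s' % boundary)

with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode())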
You could also use requests to post files as multipart/form-data:
#!/usr/bin/env python3
import requests
response = requests.post('http://httpbin.org/post',
files={'file': open('filename','rb')})
print(response.content)
You can also use unirest. Sample code:
import unirest
# consume async post request
def consumePOSTRequestSync():
    params = {'test1': 'param1', 'test2': 'param2'}
    # we need to pass a dummy variable which is open method
    # actually unirest does not provide variable to shift between
    # application-x-www-form-urlencoded and
    # multipart/form-data
    params['dummy'] = open('dummy.txt', 'r')
    url = 'http://httpbin.org/post'
    headers = {"Accept": "application/json"}
    # call post service with headers and params
    response = unirest.post(url, headers=headers, params=params)
    print "code:" + str(response.code)
    print "******************"
    print "headers:" + str(response.headers)
    print "******************"
    print "body:" + str(response.body)
    print "******************"
    print "raw_body:" + str(response.raw_body)
# post sync request multipart/form-data
consumePOSTRequestSync()
You can check out this post http://stackandqueue.com/?p=57 for more details.
