I am trying to convert this curl command to Python requests, but I am unsure how I should pass the data:
curl -X POST -u "apikey:{apikey}" \
--header "Content-Type: text/plain" \
--data "some plain text data" \
"{url}"
I have tried passing the string directly and encoding it with str.encode('utf-8'), but I get a 415 Unsupported Media Type error.
This is my code:
text = "some random text"
resp = requests.post(url, data=text, headers={'Content-Type': 'text/plain'}, auth=('apikey', self.apikey))
When using the requests library, it is usually a good idea not to set the Content-Type header manually via the headers= keyword.
requests will set this header for you when it is needed (for example, posting with the json= keyword always results in a Content-Type: application/json header).
Another reason not to set this kind of header manually is encoding, because sometimes you should specify something like Content-Type: text/plain; charset=utf-8.
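For example (a minimal sketch; httpbin.org is used here purely as a test endpoint):
import requests

# json= makes requests serialize the dict and set Content-Type: application/json.
r = requests.post('https://httpbin.org/post', json={'key': 'value'})

# If you do set a text Content-Type yourself, include the charset and encode
# the body to match.
text = 'some plain text data'
r = requests.post('https://httpbin.org/post',
                  data=text.encode('utf-8'),
                  headers={'Content-Type': 'text/plain; charset=utf-8'})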
One more important thing about Content-Type is that this header is not required for making POST requests. RFC 2616:
Any HTTP/1.1 message containing an entity-body SHOULD include a
Content-Type header field defining the media type of that body. If
and only if the media type is not given by a Content-Type field, the
recipient MAY attempt to guess the media type via inspection of its
content and/or the name extension(s) of the URI used to identify the
resource. If the media type remains unknown, the recipient SHOULD
treat it as type "application/octet-stream".
So depending on the server you're making the request to, this header may be left out entirely.
Sorry this explanation is a bit vague; I cannot give you an exact explanation of why this approach worked for you unless you provide the target URL.
Related
Following the instructions at this link, using Lambda and API Gateway: https://sookocheff.com/post/api/uploading-large-payloads-through-api-gateway/, I have a setup that allows me to get a pre-signed URL and upload files. I've tested it using curl and it has worked.
But when I try to send a raw string (CSV or JSON format), it fails!
Example of what works
curl --request PUT --upload-file Testing.csv "**pre signed upload url**"
Example of what doesn't work
curl --request PUT -H "Content-Type: text/plain" --data "this is raw data" "**pre signed upload url**"
curl --request PUT --data "this is raw data" "**pre signed upload url**"
Am I making the call incorrectly? Should I be switching to POST, and what would the call look like then?
It is not because of the pre-signed URL; it is because of the content type, with the API Gateway set to:
consumes:
- application/json
produces:
- application/json
If you add additional content types, it should make it through.
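For example, the API definition could be extended along these lines (a hypothetical snippet; use whatever content types you actually send):
consumes:
- application/json
- text/plain
- text/csv
produces:
- application/json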
Hope it helps.
So the solution was specifying the content type during the pre-signed URL generation and then using the same one in the curl PUT command. Figured out thanks to the answer here: S3 PUT doesn't work with pre-signed URL in javascript, and a pointer from @Kannaiyan in the right direction regarding content types.
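A minimal sketch of that approach (assuming boto3; the bucket, key, and content type below are placeholders, and the ContentType used at generation time must match the Content-Type header sent with the PUT):
import boto3
import requests

s3 = boto3.client('s3')
# Hypothetical bucket/key names for illustration only.
url = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'my-bucket', 'Key': 'Testing.csv', 'ContentType': 'text/csv'},
    ExpiresIn=3600)

resp = requests.put(url, data='this is raw data',
                    headers={'Content-Type': 'text/csv'})
print(resp.status_code)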
I'm having trouble converting curl code to python in order to access a token to an API.
The given code is:
curl -k -d "grant_type=client_credentials&scope=PRODUCTION" -H "Authorization :Basic <long base64 value>, Content-Type: application/x-www-form-urlencoded" https://api-km.it.umich.edu/token
I know that -H indicates a header; however, I'm not sure what to do with -d. So far I have:
authorizationcode = 'username:password'
authorizationcode = base64.standard_b64encode(authorizationcode)
header = {'Authorization ': 'Basic ' + authorizationcode, 'Content-Type': 'application/x-www-form-' + authorizationcode}
r = requests.post('https://api-km.it.umich.edu/token',
data = 'grant_type=client_credentials&scope=PRODUCTION',
headers = header)
Also, these are the instructions:
Obtain your consumer key and consumer secret from the API Directory. These are generated on the Subscriptions page after an application is successfully subscribed to an API.
Combine the consumer key and consumer secret in the format consumer-key:consumer-secret. Encode the combined string using base64. Most programming languages have a method to base64-encode a string; for an example, visit the base64encode site for more information.
Execute a POST call to the token API to get an access token.
Our data is correct; however, we are getting a 415 error from the server.
Assistance would be greatly appreciated.
A 415 error is described in http://www.checkupdown.com/status/E415.html as "Unsupported Media Type".
As @krock mentioned, the Content-Type is not specified as application/x-www-form-urlencoded; rather, it is being set to application/x-www-form- followed by your auth code.
You are setting an incorrect Content-Type header:
'Content-Type': 'application/x-www-form-' + authorizationcode
That should be 'application/x-www-form-urlencoded'. You do not, however, have to set it at all, as requests does this for you automatically if you pass a dictionary to the data argument.
requests will also handle the Authorization header for you; pass in the username and password to the auth argument as a tuple:
auth = ('username', 'password')
params = {'grant_type': 'client_credentials', 'scope': 'PRODUCTION'}
r = requests.post('https://api-km.it.umich.edu/token', data=params, auth=auth)
where username and password are the consumer key and consumer secret (the parts before and after the colon). requests will produce the correct base64-encoded Basic Authorization header for you from those two strings.
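As a rough illustration of what requests builds from that tuple (the key and secret here are placeholders):
import base64

credentials = 'consumer-key:consumer-secret'  # placeholder key:secret pair
encoded = base64.b64encode(credentials.encode('utf-8')).decode('ascii')
print('Authorization: Basic ' + encoded)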
I hope I can explain myself without making an arse of myself.
I am trying to use Python 3.4 to send a URL to a Spark Core API.
I have managed to use curl directly from the Windows command line:
curl https://api.spark.io/v1/devices/xxxxxxxxxxxxxxx/led -d access_token=yyyyyyyyyyyyyyyy -d params=l1,HIGH
All works fine. There is a space between the led and -d, but that is not a problem.
I have read that trying to do this within Python using libcurl is a big pain, and I saw lots of messages about using Requests, so I thought I would give it a go.
So I wrote a small routine:
import requests
r = requests.get('https://api.spark.io/v1/devices/xxxxxxxxxxxxxxxxxx/led -d access_token=yyyyyyyyyyyyyyyyy -d params=l1,HIGH')
print(r.url)
print(r)
In return I get:
<Response [400]>
When I examine the URL which actually got sent out, the spaces have been replaced with %20. This seems to be my actual problem, because the %20s added by requests confuse the server, which fails with:
"code": 400,
"error": "invalid_request",
"error_description": "The access token was not found"
I have tried reading up on how, in practice, to keep the spaces without a %20 being added by the encoding, but I really could do with a pointer in the right direction.
Thanks
Liam
URLs cannot have spaces. The curl command you are using is actually making a request to the URL https://api.spark.io/v1/devices/xxxxxxxxxxxxxxx/led with some command-line arguments (using -d).
The curl man (manual) page says this about the -d command-line argument:
-d, --data
(HTTP) Sends the specified data in a POST request to the HTTP server, in the same way that a browser does when a user has filled in an HTML form and presses the submit button. This will cause curl to pass the data to the server using the content-type application/x-www-form-urlencoded. Compare to -F, --form.
-d, --data is the same as --data-ascii. To post data purely binary, you should instead use the --data-binary option. To URL-encode the value of a form field you may use --data-urlencode.
If any of these options is used more than once on the same command line, the data pieces specified will be merged together with a separating &-symbol. Thus, using '-d name=daniel -d skill=lousy' would generate a post chunk that looks like 'name=daniel&skill=lousy'.
If you start the data with the letter #, the rest should be a file name to read the data from, or - if you want curl to read the data from stdin. Multiple files can also be specified. Posting data from a file named 'foobar' would thus be done with --data #foobar. When --data is told to read from a file like that, carriage returns and newlines will be stripped out.
So that says -d is for sending data to the URL in a POST request using the content type application/x-www-form-urlencoded.
The requests documentation has a good example of how to do that using the requests library: http://docs.python-requests.org/en/latest/user/quickstart/#more-complicated-post-requests
So for your curl command, I think this should work:
import requests
payload = {'access_token': 'yyyyyyyyyyyyyyyy', 'params': 'l1,HIGH'}
r = requests.post("https://api.spark.io/v1/devices/xxxxxxxxxxxxxxx/led", data=payload)
print(r.text)
I'm working on an API wrapper. The spec I'm trying to build to has the following request in it:
curl -H "Content-type:application/json" -X POST -d data='{"name":"Partner13", "email":"example#example.com"}' http://localhost:5000/
This request produces the following response from a little test server I set up to see exactly what headers/params etc. are sent as. This little script produces:
uri: http://localhost:5000/,
method: POST,
api_key: None,
content_type: application/json,
params: None,
data: data={"name":"Partner13", "email":"example#example.com"}
So the above is the result I want my Python script to create when it hits the little test script.
I'm using the Python requests module, which is the most beautiful HTTP lib I have ever used. So here is my Python code:
uri = "http://localhost:5000/"
headers = {'content-type': 'application/json' }
params = {}
data = {"name":"Partner13", "email":"example#exmaple.com"}
params["data"] = json.dumps(data)
r = requests.post(uri, data=params, headers=headers)
Simple enough stuff: set the headers, and create a dictionary for the POST parameters. That dictionary has one entry called "data", which is the JSON string of the data I want to send to the server. Then I call post. However, the result my little test script gives back is:
uri: http://localhost:5000/,
method: POST,
api_key: None,
content_type: application/json,
params: None,
data: data=%7B%22name%22%3A+%22Partner13%22%2C+%22email%22%3A+%22example%40example.com%22%7D
So essentially the JSON data I wanted to send under the data parameter has been urlencoded.
Does anyone know how to fix this? I have looked through the requests documentation and cannot seem to find a way to stop it from auto-urlencoding the data it sends.
Thanks very much,
Kevin
When creating the object for the data keyword, simply assign the result of json.dumps(data) to a variable.
Also, because an HTTP POST can accept both URL parameters and data in the body of the request, and because the requests.post function has a keyword argument named "params", it might be better to use a different variable name for readability. The requests docs use the variable name "payload", so that's what I use.
data = {"name":"Partner13", "email":"example#exmaple.com"}
payload = json.dumps(data)
r = requests.post(uri, data=payload, headers=headers)
Requests automatically URL-encodes dictionaries passed as data here. John_GG's solution works because, rather than posting a dictionary containing the JSON-encoded string in the 'data' field, it simply passes the JSON-encoded string directly: strings are not automatically encoded. I can't say I understand the reason for this behaviour in Requests, but regardless, it is what it is. There is no way to toggle this behaviour off that I can find.
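A small sketch of the difference (using the same test server URL as above):
import json
import requests

payload = {"name": "Partner13", "email": "example@example.com"}

# A dict passed to data= is form-encoded (application/x-www-form-urlencoded).
r1 = requests.post("http://localhost:5000/", data=payload)

# A string passed to data= is sent verbatim in the body, with no encoding applied.
r2 = requests.post("http://localhost:5000/",
                   data=json.dumps(payload),
                   headers={"content-type": "application/json"})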
Best of luck with it, Kevin.
I cannot for the life of me figure out how to perform an HTTP PUT request with verbatim binary data in Python 2.7 with the standard Python libraries.
I thought I could do it with urllib2, but that fails because urllib2.Request expects its data in application/x-www-form-urlencoded format. I do not want to encode the binary data, I just want to transmit it verbatim, after the headers that include
Content-Type: application/octet-stream
Content-Length: (whatever my binary data length is)
This seems so simple, but I keep going round in circles and can't seem to figure out how.
How can I do this (aside from opening a raw binary socket and writing to it)?
I found out my problem. It seems there is some obscure behavior in urllib2.Request / urllib2.urlopen() (at least in Python 2.7)
The urllib2.Request(url, data, headers) constructor seems to expect the same type of string in its url and data parameters.
I was giving the data parameter raw data from a file read() call (which in Python 2.7 returns it in the form of a 'plain' string), but my url was accidentally Unicode because I concatenated a portion of the URL from the result of another function which returned Unicode strings.
Rather than trying to "downcast" url from Unicode -> plain strings, it tried to "upcast" the data parameter to Unicode, and it gave me a codec error. (oddly enough, this happens on the urllib2.urlopen() function call, not the urllib2.Request constructor)
When I changed my function call to
# headers contains `{'Content-Type': 'application/octet-stream'}`
r = urllib2.Request(url.encode('utf-8'), data, headers)
it worked fine.
You're misreading the documentation: urllib2.Request expects the data already encoded, and for POST that usually means the application/x-www-form-urlencoded format. You are free to attach any other data, including binary data, like this:
import urllib2
data = b'binary-data'
r = urllib2.Request('http://example.net/put', data,
                    {'Content-Type': 'application/octet-stream'})
r.get_method = lambda: 'PUT'
urllib2.urlopen(r)
This will produce the request you want:
PUT /put HTTP/1.1
Accept-Encoding: identity
Content-Length: 11
Host: example.net
Content-Type: application/octet-stream
Connection: close
User-Agent: Python-urllib/2.7
binary-data
Have you considered/tried using httplib?
HTTPConnection.request(method, url[, body[, headers]])
This will send a request to the server using the HTTP request method
method and the selector url. If the body argument is present, it
should be a string of data to send after the headers are finished.
Alternatively, it may be an open file object, in which case the
contents of the file is sent; this file object should support fileno()
and read() methods. The header Content-Length is automatically set to
the correct value. The headers argument should be a mapping of extra
HTTP headers to send with the request.
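A minimal sketch of the PUT described above using httplib (the host and path here are placeholders):
import httplib

data = b'binary-data'
conn = httplib.HTTPConnection('example.net')
# Body and headers are passed directly; Content-Length is set automatically.
conn.request('PUT', '/put', data,
             {'Content-Type': 'application/octet-stream'})
response = conn.getresponse()
print(response.status)
result = response.read()
conn.close()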
This snippet worked for me to PUT an image on an HTTPS site. If you don't need HTTPS, use httplib.HTTPConnection(URL) instead.
import httplib
import ssl

API_URL = "api-mysight.com"
TOKEN = "myDummyToken"
IMAGE_FILE = "myimage.jpg"
imageID = "myImageID"
URL_PATH_2_USE = "/My/image/" + imageID + "?objectId=AAA"
headers = {"Content-Type": "application/octet-stream", "X-Access-Token": TOKEN}
# Pass an open file object as the body; Content-Length is set automatically.
imgData = open(IMAGE_FILE, "rb")
REQUEST = "PUT"
conn = httplib.HTTPSConnection(API_URL, context=ssl.SSLContext(ssl.PROTOCOL_TLSv1))
conn.request(REQUEST, URL_PATH_2_USE, imgData, headers)
response = conn.getresponse()
result = response.read()