I have a query string that returns data successfully if I run it in Postman, but if I pull the data via Python 3 (http.client.HTTPSConnection), the target returns 'bad request'.
The problem:
/v4_6_release/apis/3.0/service/tickets?customFieldConditions=caption="Escalated To" AND value ="Cloud Operations"
I have tried putting +AND+ or %20and%20 or %20AND%20 instead of AND but no success. For example:
/v4_6_release/apis/3.0/service/tickets?customFieldConditions=caption="Escalated To"%20AND%20value="Cloud Operations"
^ This works fine in Postman, but via Python 3 it returns: <hr><p>HTTP Error 400. The request is badly formed.</p>
Is there anything else wrong with the url encoding here?
You have to encode all whitespace in your URL. So the URL should be /v4_6_release/apis/3.0/service/tickets?customFieldConditions=caption="Escalated%20To"%20AND%20value="Cloud%20Operations"
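A minimal sketch of building that URL with the standard library (the path and field values are taken from the question; safe='"=' keeps the double quotes and equals signs the API expects to see literally):

```python
from urllib.parse import quote

base = "/v4_6_release/apis/3.0/service/tickets"
condition = 'caption="Escalated To" AND value="Cloud Operations"'
# Percent-encode the spaces (and anything else unsafe) while leaving
# the double quotes and equals signs intact for the API to parse.
url = base + "?customFieldConditions=" + quote(condition, safe='"=')
```

This produces the same string that works in Postman, so http.client.HTTPSConnection sends a request line with no raw spaces in it.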
Related
I am trying to use requests with the POST method, and one of my parameters contains a "/" (slash), so the response turns into an error. When I look at the posted URL, the "/" appears to have been written as "%2F". The code and results are as follows:
import time
import requests

link = "https://api.someexchange.com"
sub_url = "/open/v1/orders"
stamp = str(int(time.time()) * 1000)  # millisecond timestamp
headers = {"Content-Type": "application/x-www-form-urlencoded"}
parameters = {"symbol": "BTC/TRY",
              "side": 1,
              "type": 1,
              "quantity": 0.001,
              "price": 321000,
              "timestamp": stamp,
              "api_key": "api_key"}
r = requests.post(link + sub_url, params=parameters, headers=headers)
print(r.url)
'https://api.someexchange.com/open/v1/orders?symbol=BTC%2FTRY&side=1&type=1&quantity=0.001&price=321000&timestamp=1669994305000&api_key=api_key'
As you can see, the URL has "BTC%2FTRY" instead of "BTC/TRY". When I try it manually from the URL bar, it works fine.
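For what it's worth, %2F is the standard percent-encoding of / inside a query value, and a compliant server decodes it back before use. A quick check with the standard library (URL taken from the printed output above):

```python
from urllib.parse import parse_qs, urlsplit

url = ("https://api.someexchange.com/open/v1/orders"
       "?symbol=BTC%2FTRY&side=1")
# Decoding the query string turns %2F back into a literal slash.
query = parse_qs(urlsplit(url).query)
```

If the exchange really insists on a raw / in the query, one workaround is to build the query string yourself and pass it to requests as a single pre-encoded string rather than a dict.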
I am using the Python requests package to get results from an API, and the URL contains a + sign. When I use requests.get, the request fails because the API cannot interpret the + sign; however, if I replace the + sign with %2B (URI encoding), the request succeeds.
Is there a way to encode these characters so that the URL is encoded before passing it to the requests package?
Error: test user@gmail.com does not exist
API : https://example.com/test+user@gmail.com
You can use requests.utils.quote (which is just an alias for urllib.parse.quote) to convert your text to URL-encoded format.
>>> import requests
>>> requests.utils.quote('test+user@gmail.com')
'test%2Buser%40gmail.com'
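For example, building the full request URL from the question (example.com is the placeholder host used above; safe="" means even / would be escaped if it appeared in the value):

```python
from urllib.parse import quote

email = "test+user@gmail.com"
# Escape every reserved character so + and @ survive the round trip.
url = "https://example.com/" + quote(email, safe="")
```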
Examples:
url_1 = "http://yinwang.org/blog-cn/2013/04/21/ydiff-%E7%BB%93%E6%9E%84%E5%8C%96%E7%9A%84%E7%A8%8B%E5%BA%8F%E6%AF%94%E8%BE%83/"
url_2 = "http://yinwang.org/blog-cn/2013/04/21/ydiff-%E7%BB%93%E6%9E%84%E5%8C%96%E7%9A%84%E7%A8%8B%E5%BA%8F%E6%AF%94%E8%BE%83"
As you can see, if I don't add a trailing / to the URL, urllib2.urlopen(url_2) returns a 400 error, because the effective URL should be url_1. If the URL doesn't include any Chinese characters, urllib2.urlopen and urllib.urlopen both add the / automatically.
The question is that urllib.urlopen works well in all of these situations, but urllib2.urlopen only works well when the URL contains no Chinese characters.
So I wonder whether this is a small bug in urllib2.urlopen, or whether there is another explanation for it?
What actually happens here is a couple of redirections initiated by the server before the actual error:
Request: http://yinwang.org/blog-cn/2013/04/21/ydiff-%E7%BB%93%E6%9E%84%E5%8C%96%E7%9A%84%E7%A8%8B%E5%BA%8F%E6%AF%94%E8%BE%83
Response: Redirect to http://www.yinwang.org/blog-cn/2013/04/21/ydiff-%E7%BB%93%E6%9E%84%E5%8C%96%E7%9A%84%E7%A8%8B%E5%BA%8F%E6%AF%94%E8%BE%83
Request: http://www.yinwang.org/blog-cn/2013/04/21/ydiff-%E7%BB%93%E6%9E%84%E5%8C%96%E7%9A%84%E7%A8%8B%E5%BA%8F%E6%AF%94%E8%BE%83
Response: Redirect to http://www.yinwang.org/blog-cn/2013/04/21/ydiff-结构化的程序比较/ (actually 'http://www.yinwang.org/blog-cn/2013/04/21/ydiff-\xe7\xbb\x93\xe6\x9e\x84\xe5\x8c\x96\xe7\x9a\x84\xe7\xa8\x8b\xe5\xba\x8f\xe6\xaf\x94\xe8\xbe\x83/', to be precise)
AFAIK, that last redirect is invalid: the address should be plain ASCII (non-ASCII characters should be percent-encoded). The correct encoded address would be: http://www.yinwang.org/blog-cn/2013/04/21/ydiff-%E7%BB%93%E6%9E%84%E5%8C%96%E7%9A%84%E7%A8%8B%E5%BA%8F%E6%AF%94%E8%BE%83/
Now, it seems that urllib is playing nice and doing the conversion itself, before requesting the final address, whereas urllib2 simply uses the address it receives.
You can see that if you try to open the final address manually:
urllib
>>> print urllib.urlopen('http://www.yinwang.org/blog-cn/2013/04/21/ydiff-\xe7\xbb\x93\xe6\x9e\x84\xe5\x8c\x96\xe7\x9a\x84\xe7\xa8\x8b\xe5\xba\x8f\xe6\xaf\x94\xe8\xbe\x83/').geturl()
http://www.yinwang.org/blog-cn/2013/04/21/ydiff-%E7%BB%93%E6%9E%84%E5%8C%96%E7%9A%84%E7%A8%8B%E5%BA%8F%E6%AF%94%E8%BE%83/
urllib2
>>> try:
... urllib2.urlopen('http://www.yinwang.org/blog-cn/2013/04/21/ydiff-\xe7\xbb\x93\xe6\x9e\x84\xe5\x8c\x96\xe7\x9a\x84\xe7\xa8\x8b\xe5\xba\x8f\xe6\xaf\x94\xe8\xbe\x83/')
... except Exception as e:
... print e.geturl()
...
http://www.yinwang.org/blog-cn/2013/04/21/ydiff-š╗ôŠ×äňîľšÜäšĘőň║ĆŠ»öŔżâ/
Solution
If it is your server, you should fix the problem there. Otherwise, I guess it should be possible to write a urllib2.HTTPRedirectHandler which would encode the redirection URLs in urllib2.
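In Python 3 terms (urllib.request is the successor of urllib2), such a handler could look roughly like this; the safe-character set below is an assumption and may need tuning for other URLs:

```python
import urllib.request
from urllib.parse import quote

def encode_redirect_url(url):
    # Percent-encode non-ASCII bytes while leaving URL syntax
    # characters and existing %XX escapes untouched.
    return quote(url, safe="%/:?=&@#+!$,;'()*~")

class EncodingRedirectHandler(urllib.request.HTTPRedirectHandler):
    # Re-encode the Location URL before following the redirect,
    # mimicking what old urllib did automatically.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return super().redirect_request(
            req, fp, code, msg, headers, encode_redirect_url(newurl))

opener = urllib.request.build_opener(EncodingRedirectHandler)
```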
I'm doing a simple HTTP requests authentication against our internal server, getting the cookie back, then hitting a Cassandra RESTful server to get data. requests.get() chokes when I pass along the returned cookie.
I have a curl script that extracts the data successfully, but I'd rather work with the response JSON data in pure Python.
Any clues as to what I'm doing wrong below? I dump the cookie and it looks fine, very similar to my curl cookie.
Craig
import requests
import rtim
# this makes the auth and gets the cookie returned, save the cookie
myAuth = requests.get(rtim.rcas_auth_url, auth=(rtim.username, rtim.password),verify=False)
print myAuth.status_code
authCookie=myAuth.headers['set-cookie']
IXhost='xInternalHostName.com:9990'
mylink='http://%s/v1/BONDISSUE?format=JSONARRAY&issue.isin=%s' % (IXhost, 'US3133XK4V44')
# chokes on next line .... doesn't like the Cookie format
r = requests.get(mylink, cookies=authCookie)
(Pdb) next
TypeError: 'string indices must be integers, not str'
I think the problem is on the last line:
r = requests.get(mylink, cookies=authCookie)
requests assumes that the cookies parameter is a dictionary, but you are passing a string object authCookie to it.
The exception is raised when requests tries to treat the string authCookie as a dictionary.
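One way to fix it is to parse the raw Set-Cookie header into a dict with the standard library's http.cookies (a sketch; the header value here is a made-up example):

```python
from http.cookies import SimpleCookie

def cookie_header_to_dict(set_cookie_header):
    # Parse a raw Set-Cookie header string into the plain
    # name -> value dict that requests expects for cookies=.
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    return {name: morsel.value for name, morsel in jar.items()}
```

Then r = requests.get(mylink, cookies=cookie_header_to_dict(authCookie)) should no longer choke.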
I'm making a request in Python using pycurl to a URL which returns a reasonably large JSON-formatted response. When I go to the URL in a browser I see the entire contents, but if I use pycurl and print the received data, I only see about half of what the browser shows, and I get an error parsing the data with the json library:
ValueError: Unterminated string starting at: line 1 column 16078 (char 16078)
The pycurl request is this :
conn = pycurl.Curl()
conn.setopt(pycurl.URL, myUrl)
conn.setopt(pycurl.WRITEFUNCTION, on_receive)
conn.setopt(pycurl.CONNECTTIMEOUT, 30)
conn.setopt(pycurl.TIMEOUT, 30)
conn.setopt(pycurl.NOSIGNAL, 10)
conn.perform()
with the on_receive function currently just printing the data.
Does anybody know why I am only getting part of the response? I have used massive timeouts just to rule that out; I initially didn't specify any timeouts at all but still got this error.
In pycurl, you can check how much data was actually transferred against what the server advertised. These are getinfo values, read after perform():
import pycurl
conn.getinfo(pycurl.CONTENT_LENGTH_DOWNLOAD)  # length the server reported
conn.getinfo(pycurl.SIZE_DOWNLOAD)            # bytes actually received
conn.getinfo(pycurl.REQUEST_SIZE)             # size of the request sent
If the first two differ, the transfer really was cut short; if they match, the data arrived and the problem is in how it is being consumed.
You could also try to fetch the JSON data with the command-line curl tool first. Once you can get the data that way, just translate the curl options into their pycurl equivalents:
curl --help | less
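A common cause of this symptom is consuming the body per chunk: libcurl calls the WRITEFUNCTION once per network chunk, so on_receive should accumulate the pieces and the JSON should be parsed only after perform() returns. A sketch of that pattern (only the buffering logic; wire it up with conn.setopt(pycurl.WRITEFUNCTION, on_receive) as in the question):

```python
import json

chunks = []

def on_receive(data):
    # libcurl may deliver the body in many pieces; just collect them.
    chunks.append(data)

def parse_body():
    # Reassemble and decode only once the transfer has finished.
    body = b"".join(chunks)
    return json.loads(body.decode("utf-8"))
```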