I am new to Python. I have a script that tries to POST something to a site. How do I disable SSL certificate verification in the script?
In Python 2 with requests you can use
requests.get('https://kennethreitz.com', verify=False)
but I don't know how to do the same thing in Python 3 with urllib.
import urllib.parse
import urllib.request

url = 'https://something.com'
headers = {'APILOGIN': "user",
           'APITOKEN': "passwd"}
values = {"dba": "Test API Merchant", "web": "", "mids.mid": "ACH"}

data = urllib.parse.urlencode(values)
data = data.encode('utf-8')  # data should be bytes
req = urllib.request.Request(url, data, headers)
with urllib.request.urlopen(req) as response:
    the_page = response.read()
See "Verifying HTTPS certificates" in the urllib.request documentation. On older Python 3 versions, not specifying either cafile or capath in your call to urlopen meant the HTTPS connection was simply not verified. Since Python 3.4.3 (PEP 476), however, urlopen verifies certificates by default, so you have to opt out explicitly.
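If you really need to skip verification on a current Python 3, a minimal sketch (assuming Python 3.4.3+, where verification is on by default, and reusing the placeholder URL and headers from your snippet) is to pass an unverified SSL context to urlopen; note that requests.get(url, verify=False) from your Python 2 example also works unchanged in Python 3:

import ssl
import urllib.parse
import urllib.request

url = 'https://something.com'  # same placeholder URL as above
headers = {'APILOGIN': "user", 'APITOKEN': "passwd"}
values = {"dba": "Test API Merchant", "web": "", "mids.mid": "ACH"}
data = urllib.parse.urlencode(values).encode('utf-8')
req = urllib.request.Request(url, data, headers)

# WARNING: this context skips certificate and hostname checks, which removes
# protection against man-in-the-middle attacks.
context = ssl._create_unverified_context()

with urllib.request.urlopen(req, context=context) as response:
    the_page = response.read()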
How can I load a Netscape cookie file to authenticate to a website's REST API with python requests [session], pycurl, or something else?
Similar to curl -b ${home}.cookie
-b, --cookie (HTTP) Pass the data to the HTTP server as a cookie. It is supposedly the data previously received from the server in a "Set-Cookie:" line. The data should be in the format "NAME1=VALUE1; NAME2=VALUE2".
import requests
from requests.auth import HTTPDigestAuth

proxi = {'http': 'http://proxy',
         'https': 'http://proxy'}

url = 'http://192.196.1.98:8080/a/changes/?q=status:new'

r = requests.get(url, proxies=proxi)  # cookies=cookie
print r.status_code
print r.json()
print r.headers
print r.request.headers
print r.text
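If the cookie file is in the Netscape/Mozilla format that curl -b reads, one option is to load it with cookielib.MozillaCookieJar and hand the jar to requests via the cookies= argument; a minimal sketch, assuming a hypothetical file named gerrit.cookie in that format:

import cookielib
import requests

# Load a Netscape/Mozilla-format cookie file (the format curl -b accepts).
cookie_jar = cookielib.MozillaCookieJar('gerrit.cookie')  # hypothetical filename
cookie_jar.load(ignore_discard=True, ignore_expires=True)

proxi = {'http': 'http://proxy',
         'https': 'http://proxy'}
url = 'http://192.196.1.98:8080/a/changes/?q=status:new'

# requests accepts any cookielib CookieJar through the cookies= argument.
r = requests.get(url, proxies=proxi, cookies=cookie_jar)
print r.status_code

With a requests.Session you can instead assign the jar to s.cookies so it is sent and updated across all calls made on that session.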
I'm having an issue converting a working cURL call to an internal API to a python requests call.
Here's the working cURL call:
curl -k -H 'Authorization:Token token=12345' 'https://server.domain.com/api?query=query'
I then attempted to convert that call into a working python requests script here:
#!/usr/bin/env python
import requests
url = 'https://server.domain.com/api?query=query'
headers = {'Authorization': 'Token token=12345'}
r = requests.get(url, headers=headers, verify=False)
print r
I get an HTTP 401 or 500 error depending on how I change the headers variable around. What I do not understand is how my Python request is any different from the cURL request. They are both being run from the same server, as the same user.
Any help would be appreciated.
Hard to say without knowing your API, but you may have a redirect that curl is honoring that requests is not (or at least requests isn't sending the headers on the redirect).
Try using a session object to ensure all requests (and redirects) have your header.
#!/usr/bin/env python
import requests
url = 'https://server.domain.com/api?query=query'
headers = {'Authorization': 'Token token=12345'}
#start a session
s = requests.Session()
#add headers to session
s.headers.update(headers)
#use session to perform a GET request.
r = s.get(url)
print r
I figured it out: I had to specify the "Accept" header value. The working script looks like this:
#!/usr/bin/env python
import requests
url = 'https://server.domain.com/api?query=query'
headers = {'Accept': 'application/app.app.v2+json', 'Authorization': 'Token token=12345'}
r = requests.get(url, headers=headers, verify=False)
print r.json()
Below is the code that I used. I am using the latest python requests with Python 2.7, and I am getting a 407 response from the request below.
The strange thing is that I get a 503 response when I use https instead of http in the proxies dict.
response = requests.get(query, proxies={'https': "https://username:password#104.247.XX.XX:80"}, headers=headers, timeout=30, allow_redirects=True)
print response
Output: <Response [503]>

response = requests.get(query, proxies={'http': "http://username:password#104.247.XX.XX:80"}, headers=headers, timeout=30, allow_redirects=True)
print response
Output: <Response [407]>
But the same code works on my Amazon EC2 instance; the failures happen when I run it on my local machine.
import urllib2
import urllib
import portalocker
import cookielib
import requests
query = 'http://google.com/search?q=wtf&num=100&tbs=cdr:1,cd_min:2000,cd_max:2015&start=0&filter=0'
headers = {'user-agent': 'Mozilla/5.0 (X11; Linux; rv:2.0.1) Gecko/20100101 Firefox/4.0.1 Midori/0.4'}
response = requests.get(query, proxies={'http': "http://username:password#104.247.XX.XX:80"}, headers=headers, timeout=30, allow_redirects=True)
print response
from requests.auth import HTTPProxyAuth

proxyDict = {
    'http': '77.75.105.165',
    'https': '77.75.105.165'
}
auth = HTTPProxyAuth('username', 'mypassword')
r = requests.get("http://www.google.com", proxies=proxyDict, auth=auth)
The status codes might give a clue:
407 Proxy Authentication Required
503 Service Unavailable
These suggest that your proxy isn't serving https and that the username/password combination is wrong for the proxy you are using. Note that it is very unlikely that your local machine needs the same proxy as your EC2 instance.
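For what it's worth, the form requests documents for authenticated proxies embeds the credentials in the proxy URL as user:password@host:port; a minimal sketch reusing the placeholder host and credentials from the question (assuming the proxy speaks plain HTTP and expects Basic auth):

import requests

# Credentials embedded in the proxy URL (user:password@host:port).
# Host, username and password are the placeholders from the question.
proxies = {
    'http':  'http://username:password@104.247.XX.XX:80',
    'https': 'http://username:password@104.247.XX.XX:80',
}

r = requests.get('http://www.google.com', proxies=proxies, timeout=30)
print r.status_code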
I'm performing a simple POST request with urllib2 to an HTTPS URL. I have one parameter and a JSESSIONID from a logged-in user. However, when I POST, I get a 'your browser does not support iframes' error with HTTP status 200.
import cookielib
import urllib
import urllib2
url = 'https://.../template/run.do?id=4'

http_header = {
    "JSESSIONID": "A4604B1CFA8D2B5A8296AAB3B5EADC0C;",
}
params = {
    'id': 4
}

# setup cookie handler
cookie_jar = cookielib.LWPCookieJar()
cookie = urllib2.HTTPCookieProcessor(cookie_jar)
opener = urllib2.build_opener(cookie)

req = urllib2.Request(url, urllib.urlencode(params), http_header)
res = urllib2.urlopen(req)
print res.read()
I can trigger this method using cURL with no problem, but somehow not via urllib2. I DID try using all the request headers that the browser sends, but to no avail.
I fear this might be a stupid misconception, but I've already been wondering about it for hours!
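Not a definitive fix, but two things stand out in the posted code: the cookie-handling opener is built and then never used (the request goes through urllib2.urlopen instead of opener.open), and the JSESSIONID is sent as a header literally named JSESSIONID rather than inside a standard Cookie header, which is where a servlet container would look for it. A minimal sketch of the same POST with those two points changed (URL and session id kept as the placeholders from the question):

import urllib
import urllib2
import cookielib

url = 'https://.../template/run.do?id=4'  # placeholder URL from the question

# Send the session id the way a browser would: in the Cookie header.
http_header = {
    "Cookie": "JSESSIONID=A4604B1CFA8D2B5A8296AAB3B5EADC0C",
}
params = {'id': 4}

# Keep a cookie jar so any cookies set by the response are retained.
cookie_jar = cookielib.LWPCookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_jar))

req = urllib2.Request(url, urllib.urlencode(params), http_header)
res = opener.open(req)  # use the opener, not urllib2.urlopen
print res.read()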
I am trying to fetch an HTML page using Python's urlopen.
I am getting this error:
HTTPError: HTTP Error 302: The HTTP server returned a redirect error that would lead to an infinite loop
The code:
from urllib2 import Request, urlopen

request = Request(url)
response = urlopen(request)
I understand that the server redirects to another URL and that it is looking for a cookie.
How do I set the cookie it is looking for so I can read the html?
Here's an example from the Python documentation, adjusted to your code:
import cookielib, urllib2

# Create a cookie jar and an opener that stores and re-sends cookies automatically.
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

request = urllib2.Request(url)
response = opener.open(request)
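With the cookie processor installed, the opener re-sends the cookie on the redirect, which normally breaks the loop; the page body is then available with read():

html = response.read()
print html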