407 response from proxy using python requests

Below is the code I used. I am using the latest python requests on Python 2.7, and I am getting a 407 response from the request below.
The strange thing is that I get a 503 response when using https instead of http in the request.
response = requests.get(query, proxies={'https': "https://username:password@104.247.XX.XX:80"}, headers=headers, timeout=30, allow_redirects=True)
print response
Output: <Response [503]>
response = requests.get(query, proxies={'http': "http://username:password@104.247.XX.XX:80"}, headers=headers, timeout=30, allow_redirects=True)
print response
Output: <Response [407]>
But the same code works on my Amazon EC2 instance; it only fails when I run it on my local machine.
import urllib2
import urllib
import portalocker
import cookielib
import requests
query = 'http://google.com/search?q=wtf&num=100&tbs=cdr:1,cd_min:2000,cd_max:2015&start=0&filter=0'
headers = {'user-agent': 'Mozilla/5.0 (X11; Linux; rv:2.0.1) Gecko/20100101 Firefox/4.0.1 Midori/0.4'}
response = requests.get(query, proxies={'http': "http://username:password@104.247.XX.XX:80"}, headers=headers, timeout=30, allow_redirects=True)
print response

I also tried authenticating with HTTPProxyAuth:
from requests.auth import HTTPProxyAuth
proxyDict = {
    'http': '77.75.105.165',
    'https': '77.75.105.165'
}
auth = HTTPProxyAuth('username', 'mypassword')
r = requests.get("http://www.google.com", proxies=proxyDict, auth=auth)

The status codes give a clue:
407 Proxy Authentication Required
503 Service Unavailable
These suggest that your proxy isn't running for https and that the username/password combination is wrong for the proxy you are using. Note that it is very unlikely that your local machine needs the same proxy as your EC2 instance.
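For reference, here is a minimal sketch of the proxy-auth URL form that requests expects, user:pass@host:port, with a hypothetical proxy address and credentials standing in for real ones:

import requests

# credentials go before an @, followed by host:port;
# the same proxy URL can serve both the http and https keys
proxy = "http://username:password@203.0.113.10:80"  # hypothetical proxy
proxies = {'http': proxy, 'https': proxy}

response = requests.get('http://www.google.com', proxies=proxies, timeout=30)
print(response.status_code)  # a 407 here means the proxy rejected the credentials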

Related

python http client module error / inconsistent

I'm getting the following output:
301 Moved Permanently --- when using http.client
200 --- when using requests
The URL "http://i.imgur.com/fyxDric.jpg" is passed as an argument on the command line.
What I expect is a 200 OK response in both cases.
This is the body
if scheme == 'http':
print('Ruuning in the http')
conn = http.client.HTTPConnection("www.i.imgur.com")
conn.request("GET", urlparse(url).path)
conn_resp = conn.getresponse()
body = conn_resp.read()
print(conn_resp.status, conn_resp.reason, body)
When using requests:
headers = {'User-Agent': 'Mozilla/5.0 Chrome/54.0.2840.71 Safari/537.36'}
response = requests.get(url, headers=headers, allow_redirects=False)
print(response.status_code)
You are trying to hit imgur over http, but imgur redirects all such requests to https.
This redirect is what produces the inconsistency.
The http.client module doesn't handle redirects for you; you have to follow them yourself, whereas requests follows redirects automatically.
The documentation for http.client says in its first sentence that "it is normally not used directly." Unlike requests, it doesn't act on the 301 response and follow the Location header; it simply returns the 301, which you would have to process yourself.
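A minimal sketch of following the redirect by hand with http.client (the URL comes from the question; the loop cap is an arbitrary safety limit, and the Location header is assumed to be an absolute URL):

import http.client
from urllib.parse import urlparse

url = 'http://i.imgur.com/fyxDric.jpg'
for _ in range(5):  # cap the number of hops to avoid redirect loops
    parts = urlparse(url)
    conn_cls = http.client.HTTPSConnection if parts.scheme == 'https' else http.client.HTTPConnection
    conn = conn_cls(parts.netloc)
    conn.request('GET', parts.path)
    resp = conn.getresponse()
    if resp.status in (301, 302, 303, 307, 308):
        url = resp.getheader('Location')  # the redirect target
        conn.close()
        continue
    print(resp.status, resp.reason)
    break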

Python HTTP request through proxy working, but HTTPS request not working

I have some code to request a URL external to my organisation's network:
import requests
from requests.auth import HTTPProxyAuth
from lxml import html

proxy_string = 'http://proxyIP:proxyport'
s = requests.Session()
s.trust_env = False
s.proxies = {"http": proxy_string, "https": proxy_string}
s.auth = HTTPProxyAuth(r"ad\user", "password")  # raw string so the backslash survives
url = "http://canoeracing.org.uk/marathon/results/results2017.html"
r = s.get(url)
r.status_code
tree = html.fromstring(r.content)
r.content
This works fine if the URL is an HTTP one (like the one above), but not if it's an HTTPS URL (for example, changing url to https://stackoverflow.com/questions/894168/programmatically-make-http-requests-through-proxies-with-python). Things I've tried:
Changing the HTTPS_PROXY environment variable on the command line
Changing proxy_string to https://...
I know there are some similar questions to this, but I couldn't find any that had this specific problem of HTTP requests working through a proxy while HTTPS requests do not.
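One detail worth checking, consistent with the 407 discussion above rather than taken from this post: session-level auth is sent to the target server, not to the proxy's CONNECT tunnel that HTTPS URLs require, so proxy credentials are often better embedded in the proxy URL itself. A sketch, with the placeholder host from the question and a hypothetical login:

import requests
from urllib.parse import quote

# percent-encode the login so the backslash in a domain account survives in a URL
user = quote(r'ad\user', safe='')  # hypothetical AD-style login
proxy_string = 'http://%s:password@proxyIP:proxyport' % user  # placeholder host/port
s = requests.Session()
s.trust_env = False
s.proxies = {'http': proxy_string, 'https': proxy_string}
r = s.get('https://stackoverflow.com')
print(r.status_code)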

Python HEAD request with urllib and proxy doesn't work

I'm failing to do a HEAD request through my local Tor proxy:
import httplib
host = 'www.heise.de'
inputfilename="/newsticker/classic/"
conn = httplib.HTTPSConnection("127.0.0.1", 9151)
conn.set_tunnel(host, 443)
conn.request("HEAD", inputfilename)
res = conn.getresponse()
print res
I get a lot of error messages. What would be the correct syntax?
Your Tor proxy is a SOCKS proxy, which isn't supported by httplib.
You can use a recent version of requests instead (the httplib documentation itself recommends requests for a higher-level interface, anyway).
Install requests and PySocks (for example, pip install requests[socks]).
Then, you can do:
import requests

proxies = {
    'http': 'socks5://127.0.0.1:9050',
    'https': 'socks5://127.0.0.1:9050'
}
# You need to use the full URL, not just the host name
url = 'http://www.heise.de'
response = requests.head(url, proxies=proxies)
print(response.headers)
# {'Vary': 'X-Forwarded-Proto, ... 'Last-Modified': 'Sun, 26 Feb 2017 09:27:45 GMT'}
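Two details to adjust for your setup: 9050 is the default SOCKS port of a standalone Tor daemon, while the Tor Browser's bundled proxy listens on 9151 (the port used in the question). And if you want DNS resolution to happen through Tor as well, use the socks5h:// scheme instead of socks5://.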

how to disable SSL certificate verification in python 3

I am new to Python. I have a script that tries to post something to a site. How do I disable SSL certificate verification in the script?
In Python 2 with requests you can use
requests.get('https://kennethreitz.com', verify=False)
but I don't know how to do it in Python 3.
import urllib.parse
import urllib.request

url = 'https://something.com'
headers = {'APILOGIN': "user",
           'APITOKEN': "passwd"}
values = {"dba": "Test API Merchant", "web": "", "mids.mid": "ACH"}
data = urllib.parse.urlencode(values)
data = data.encode('utf-8')  # data should be bytes
req = urllib.request.Request(url, data, headers)
with urllib.request.urlopen(req) as response:
    the_page = response.read()
See Verifying HTTPS certificates with urllib.request. On Python 3 versions before 3.4.3, not specifying either cafile or capath in your call to urlopen meant the HTTPS connection was not verified; since PEP 476, certificates are verified by default, and you must pass an explicitly unverified SSL context to turn verification off.
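A minimal sketch of disabling verification on a current Python 3, using the placeholder URL from the question (only do this if you really must skip verification):

import ssl
import urllib.request

# build an SSL context that skips hostname checks and certificate validation
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen('https://something.com', context=ctx) as response:
    the_page = response.read()

Note that requests accepts verify=False on Python 3 exactly as on Python 2.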

How do I set cookies using Python urlopen?

I am trying to fetch an HTML page using Python's urlopen.
I am getting this error:
HTTPError: HTTP Error 302: The HTTP server returned a redirect error that would lead to an infinite loop
The code:
from urllib2 import Request, urlopen

request = Request(url)
response = urlopen(request)
I understand that the server redirects to another URL and that it is looking for a cookie.
How do I set the cookie it is looking for so I can read the html?
Here's an example from the Python documentation, adjusted to your code:
import cookielib, urllib2
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
request = urllib2.Request(url)
response = opener.open(request)
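For reference, the Python 3 equivalent is a straightforward rename (a sketch; cookielib became http.cookiejar and urllib2 became urllib.request):

import http.cookiejar
import urllib.request

cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
request = urllib.request.Request(url)
response = opener.open(request)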
