According to the documentation, this is the format:
Request.set_proxy(host, type)
So what I have is this for example:
Request.set_proxy("localhost:8888","http")
What is the format if the proxy requires a username and password?
There are two ways to do this if you must use a urllib2.Request object:
Set environment variables http_proxy and/or https_proxy either outside of the Python interpreter, or internally using os.environ['http_proxy'] before you import urllib2.
import os
os.environ['http_proxy'] = 'http://user:password@localhost:8888'
import urllib2
req = urllib2.Request('http://www.blah.com')
f = urllib2.urlopen(req)
Or manually set HTTP request header:
import urllib2
from base64 import b64encode
PROXY_USERNAME = 'user'
PROXY_PASSWORD = 'password'
req = urllib2.Request('http://www.blah.com')
req.set_proxy('localhost:8888', 'http')
proxy_auth = b64encode('%s:%s' % (PROXY_USERNAME, PROXY_PASSWORD))
req.add_header('Proxy-Authorization', 'Basic %s' % proxy_auth)
f = urllib2.urlopen(req)
Alright, so I'm a little outside of my league on this one I think.
I'm attempting to construct the custom HTTP headers noted here:
API-Key = API key
API-Sign = Message signature using HMAC-SHA512 of (URI path + SHA256(nonce + POST data)) and base64 decoded secret API key
from https://www.kraken.com/help/api
I'm trying to work solely out of urllib if at all possible.
Below is one of many attempts to get it encoded like required:
import os
import sys
import time
import datetime
import urllib.request
import urllib.parse
import json
import hashlib
import hmac
import base64
APIKey = 'ThisKey'
secret = 'ThisSecret'
data = {}
data['nonce'] = int(time.time()*1000)
data['asset'] = 'ZUSD'
uripath = '/0/private/TradeBalance'
postdata = urllib.parse.urlencode(data)
encoded = (str(data['nonce']) + postdata).encode()
message = uripath.encode() + hashlib.sha256(encoded).digest()
signature = hmac.new(base64.b64decode(secret),
                     message, hashlib.sha512)
sigdigest = base64.b64encode(signature.digest())
#this is purely for checking how things are coming along.
print(sigdigest.decode())
headers = {
    'API-Key': APIKey,
    'API-Sign': sigdigest.decode()
}
The above may be working just fine; where I'm struggling now is getting it to the site correctly.
This is my most recent attempt:
myBalance = urllib.urlopen('https://api.kraken.com/0/private/TradeBalance', urllib.parse.urlencode({'asset': 'ZUSD'}).encode("utf-8"), headers)
Any help is greatly appreciated.
Thanks!
urlopen doesn't support adding headers, so you need to create a Request object and pass it to urlopen:
url = 'https://api.kraken.com/0/private/TradeBalance'
body = urllib.parse.urlencode({'asset': 'ZUSD'}).encode("utf-8")
headers = {
    'API-Key': APIKey,
    'API-Sign': sigdigest.decode()
}
request = urllib.request.Request(url, data=body, headers=headers)
response = urllib.request.urlopen(request)
How can I set a proxy for the latest urllib in Python 3?
I am doing the following:
from urllib import request as urlrequest
ask = urlrequest.Request(url)  # note that Request is capitalized here, unlike in previous versions
open = urlrequest.urlopen(ask)
open.read()
I tried adding a proxy as follows:
ask=urlrequest.Request.set_proxy(ask,proxies,'http')
However, I don't know how correct this is, since I am getting the following error:
336 def set_proxy(self, host, type):
--> 337 if self.type == 'https' and not self._tunnel_host:
338 self._tunnel_host = self.host
339 else:
AttributeError: 'NoneType' object has no attribute 'type'
You should be calling set_proxy() on an instance of class Request, not on the class itself:
from urllib import request as urlrequest
proxy_host = 'localhost:1234' # host and port of your proxy
url = 'http://www.httpbin.org/ip'
req = urlrequest.Request(url)
req.set_proxy(proxy_host, 'http')
response = urlrequest.urlopen(req)
print(response.read().decode('utf8'))
I needed to disable the proxy in our company environment, because I wanted to access a server on localhost. I could not disable the proxy server with the approach from @mhawke (I tried passing {}, None and [] as proxies).
This worked for me (it can also be used for setting a specific proxy, see the comment in the code).
import urllib.request as request
# disable the proxy by passing an empty dictionary
proxy_handler = request.ProxyHandler({})
# alternatively you could set a proxy for http with
# proxy_handler = request.ProxyHandler({'http': 'http://www.example.com:3128/'})
opener = request.build_opener(proxy_handler)
url = 'http://www.example.org'
# open the website with the opener
req = opener.open(url)
data = req.read().decode('utf8')
print(data)
Urllib will automatically detect proxies set up in the environment, so one can just set the HTTP_PROXY variable either in your shell, e.g. for Bash:
export HTTP_PROXY=http://proxy_url:proxy_port
or from Python, e.g.:
import os
os.environ['HTTP_PROXY'] = 'http://proxy_url:proxy_port'
Note from the urllib docs: "HTTP_PROXY [environment variable] will be ignored if a variable REQUEST_METHOD is set; see the documentation on getproxies()".
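For example, a quick way to check what urllib has picked up from the environment (Python 3; the proxy address is just a placeholder):
import os
import urllib.request
# placeholder proxy address
os.environ['HTTP_PROXY'] = 'http://proxy_url:proxy_port'
# getproxies() returns the proxy settings urllib detects in the environment
print(urllib.request.getproxies())
# e.g. {'http': 'http://proxy_url:proxy_port'}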
import urllib.request
def set_http_proxy(proxy):
    if proxy is None:  # Use system default setting
        proxy_support = urllib.request.ProxyHandler()
    elif proxy == '':  # Don't use any proxy
        proxy_support = urllib.request.ProxyHandler({})
    else:  # Use the given proxy for both http and https
        proxy_support = urllib.request.ProxyHandler({'http': '%s' % proxy, 'https': '%s' % proxy})
    opener = urllib.request.build_opener(proxy_support)
    urllib.request.install_opener(opener)
proxy = 'user:pass@ip:port'
set_http_proxy(proxy)
url = 'https://www.httpbin.org/ip'
request = urllib.request.Request(url)
response = urllib.request.urlopen(request)
html = response.read()
print(html)
I can't use the python requests package (long story, let's just assume I've exhausted all possibilities for using that package)
Is there an alternative package that would provide the exact functionality of the following code?
import requests
requests.post(URL, data=DATA, auth=(USERNAME, PASSWORD), headers=HEADER)
One alternative is to use httplib (Python 2):
import httplib
import urllib
from base64 import b64encode
# your form
form_data = {'a': 1, 'b': 2}
params = urllib.urlencode(form_data)
# build authorization
user_and_pass = b64encode(b"username:password").decode("ascii")
# headers
headers = {'Authorization': 'Basic %s' % user_and_pass}
# connection
conn = httplib.HTTPConnection("example.com")
conn.request('POST', '/v3/call_api', params, headers)
# get result
response = conn.getresponse()
print response.status, response.reason
data = response.read()
conn.close()
If your only problem is importing because of the module name, you can always import by full path or alter the module search path and reset it after importing. To avoid conflicts you can use import requests as requests2 or the like.
See the following question for the first option or the documentation about the search path.
How to import a module given the full path?
https://docs.python.org/2/tutorial/modules.html#the-module-search-path
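For the full-path option, a minimal sketch on Python 3 using importlib (the module file path and names are only placeholders; a full package such as requests needs a bit more care than a single-file module):
import importlib.util
# hypothetical single-file module loaded under an alternative name
spec = importlib.util.spec_from_file_location('mymodule2', '/path/to/mymodule.py')
mymodule2 = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mymodule2)
# mymodule2 is now usable like a normally imported module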
From the urllib2.urlopen documentation:
"the HTTP request will be a POST instead of a GET when the data parameter is provided."
So something like this (I'll use an image example just because it was so obscure to figure out):
import urllib
import urllib2
import numpy as np
import cv2
# a blank 512x512 grayscale image to send
image_to_send = np.zeros((512, 512), np.uint8)
buffer_image = np.array(cv2.imencode('.png', image_to_send)[1]).tostring()
post_data = urllib.urlencode((('img_type', '.png'), ('img_data', buffer_image)))
req = urllib2.urlopen('http://www.yourdestination.com/imageupload/', data=post_data)
If you want authentication, you'll need to subclass urllib.FancyURLopener and override prompt_user_passwd.
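A rough sketch of that approach (Python 2, reusing post_data from above; the credentials are placeholders):
import urllib

class AuthOpener(urllib.FancyURLopener):
    # called by FancyURLopener when the server answers 401/407
    def prompt_user_passwd(self, host, realm):
        return ('myuser', 'mypassword')  # placeholder credentials

opener = AuthOpener()
response = opener.open('http://www.yourdestination.com/imageupload/', post_data)
print response.read()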
There is an alternative to Python requests you can try:
https://github.com/juancarlospaco/faster-than-requests
What would be the easiest way to use MediaWiki cookies in some Python CGI scripts (on the same domain, of course) for authentication (especially including MW's OpenID)?
Access from Python to the MediaWiki database is possible, too.
A very easy way to use Cookies with mediawiki is as follows:
from cookielib import CookieJar
import urllib2
import urllib
import json
cj = CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
Now, requests can be made using opener. For example:
login_data = {
    'action': 'login',
    'lgname': 'Example',
    'lgpassword': 'Foobar',
    'format': 'json'
}
data = urllib.urlencode(login_data)
request = opener.open('http://en.wikipedia.org/w/api.php',data)
content = json.load(request)
login_data['token'] = content['login']['token']
data_2 = urllib.urlencode(login_data)
request_2 = opener.open('http://en.wikipedia.org/w/api.php',data_2)
content_2 = json.load(request_2)
print content_2['login']['result']
In the example above, if the CookieJar was not created, the login wouldn't fully work and would keep asking for another token. That said, it's recommended to use an existing MediaWiki wrapper such as pywikipedia, mwhair, pytybot, simplemediawiki or wikitools; there are hundreds of other MediaWiki wrappers in Python.
You could connect to and modify the SQL database directly, without HTTP and cookies, using the MySQLdb module, but this is often the wrong way to do MediaWiki maintenance. Read-only access, though, should not be a problem.
The best way to access MediaWiki with a script is to use the api.php.
The best-known Python-based MediaWiki API bot is Pywikibot (formerly Pywikipediabot).
The easiest way to save cookies in Python might be to use the http.cookiejar module.
Its documentation contains some simple examples.
I extracted functional example code out of my own MediaWiki-bot:
#!/usr/bin/python3
import http.cookiejar
import urllib.request
import urllib.parse
import json
s_login_name = 'example'
s_login_password = 'secret'
s_api_url = 'http://en.wikipedia.org/w/api.php'
s_user_agent = 'StackOverflowExample/0.0.1.2012.09.26.1'
def api_request(d_post_params):
    d_post_params['format'] = 'json'
    r_post_params = urllib.parse.urlencode(d_post_params).encode('utf-8')
    o_url_request = urllib.request.Request(s_api_url, r_post_params)
    o_url_request.add_header('User-Agent', s_user_agent)
    o_http_response = o_url_opener.open(o_url_request)
    s_reply = o_http_response.read().decode('utf-8')
    d_reply = json.loads(s_reply)
    return (o_http_response.code, d_reply)
o_cookie_jar = http.cookiejar.CookieJar()
o_http_cookie_processor = urllib.request.HTTPCookieProcessor(o_cookie_jar)
o_url_opener = urllib.request.build_opener(o_http_cookie_processor)
d_post_params = {'action': 'login', 'lgname': s_login_name}
i_code, d_reply = api_request(d_post_params)
print('http code: %d' % (i_code))
print('api reply: %s' % (d_reply))
s_login_token = d_reply['login']['token']
d_post_params = {
    'action': 'login',
    'lgname': s_login_name,
    'lgpassword': s_login_password,
    'lgtoken': s_login_token
}
i_code, d_reply = api_request(d_post_params)
print('http code: %d' % (i_code))
print('api reply: %s' % (d_reply))
Classes, error handling and sub-functions have been removed to increase readability.
The cookies saved in o_url_opener can also be used for requests to index.php.
You could also log in via index.php (faking a browser request), but that would involve parsing HTML output.
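For example, reusing the opener from above for an index.php request (the page title is only an illustration):
# cookies stored in o_cookie_jar are sent automatically by o_url_opener
o_http_response = o_url_opener.open('http://en.wikipedia.org/w/index.php?title=Special:Watchlist')
s_html = o_http_response.read().decode('utf-8')
print('html length: %d' % (len(s_html)))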
Variable name legend:
# Unicode string
s_* = 'a'
# Bytes (raw string)
r_* = b'a'
# Dictionary
d_* = {'a':1}
# Integer number
i_* = 4711
# Other objects
o_* = SomeClass()
What's the best way to specify a proxy with username and password for an http connection in python?
This works for me:
import urllib2
proxy = urllib2.ProxyHandler({'http': 'http://username:password@proxyurl:proxyport'})
auth = urllib2.HTTPBasicAuthHandler()
opener = urllib2.build_opener(proxy, auth, urllib2.HTTPHandler)
urllib2.install_opener(opener)
conn = urllib2.urlopen('http://python.org')
return_str = conn.read()
Use this:
import requests
proxies = {"http":"http://username:password#proxy_ip:proxy_port"}
r = requests.get("http://www.example.com/", proxies=proxies)
print(r.content)
I think it's much simpler than using urllib. I don't understand why people love using urllib so much.
Set an environment variable named http_proxy like this: http://username:password@proxy_url:port
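For example (placeholder credentials; note the password then sits in your environment in plain text):
import os
os.environ['http_proxy'] = 'http://username:password@proxy_url:port'
import urllib2
print urllib2.urlopen('http://www.example.com/').read()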
The best way of going through a proxy that requires authentication is to use urllib2 to build a custom URL opener, then use that to make all the requests you want to go through the proxy. Note in particular that you probably don't want to embed the proxy password in the URL or the Python source code (unless it's just a quick hack).
import urllib2
def get_proxy_opener(proxyurl, proxyuser, proxypass, proxyscheme="http"):
    password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, proxyurl, proxyuser, proxypass)
    proxy_handler = urllib2.ProxyHandler({proxyscheme: proxyurl})
    proxy_auth_handler = urllib2.ProxyBasicAuthHandler(password_mgr)
    return urllib2.build_opener(proxy_handler, proxy_auth_handler)

if __name__ == "__main__":
    import sys
    if len(sys.argv) > 4:
        url_opener = get_proxy_opener(*sys.argv[1:4])
        for url in sys.argv[4:]:
            print url_opener.open(url).headers
    else:
        print "Usage:", sys.argv[0], "proxy user pass fetchurls..."
In a more complex program, you can separate these components out as appropriate (for instance, only using one password manager for the lifetime of the application). The Python documentation has more examples on how to do complex things with urllib2 that you might also find useful.
Or if you want to install it, so that it is always used with urllib2.urlopen (so you don't need to keep a reference to the opener around):
import urllib2
url = 'www.proxyurl.com'
username = 'user'
password = 'pass'
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
# None, with the "WithDefaultRealm" password manager means
# that the user/pass will be used for any realm (where
# there isn't a more specific match).
password_mgr.add_password(None, url, username, password)
auth_handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
print urllib2.urlopen("http://www.example.com/folder/page.html").read()
Here is the method using urllib:
import urllib.request
# set up authentication info
authinfo = urllib.request.HTTPBasicAuthHandler()
proxy_support = urllib.request.ProxyHandler({"http" : "http://ahad-haam:3128"})
# build a new opener that adds authentication and caching FTP handlers
opener = urllib.request.build_opener(proxy_support, authinfo,
                                     urllib.request.CacheFTPHandler)
# install it
urllib.request.install_opener(opener)
f = urllib.request.urlopen('http://www.python.org/')
"""