I have this cURL call that works perfectly:
curl -H 'X-Requested-With: SO demo' -d 'parameter=value' https://username:password@api.domain.com/api/work/
My conversion to Python does not work:
import urllib2
# Create a password manager.
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
# Add the username and password.
top_level_url = 'https://api.server.com'
password_mgr.add_password(None, top_level_url, 'username', 'password')
handler = urllib2.HTTPBasicAuthHandler(password_mgr)
# Create "opener" (OpenerDirector instance).
opener = urllib2.build_opener(handler)
# Install the opener so all calls to urllib2.urlopen use our opener.
urllib2.install_opener(opener)
# Create request.
headers = {'X-Requested-With':'SO demo.'}
uri = 'https://api.domain.com/api/work/'
data='parameter=value'
req = urllib2.Request(uri,data,headers)
# Make request to fetch url.
result = urllib2.urlopen(req)
This fails with:
urllib2.HTTPError: HTTP Error 401: Unauthorized
Here's what I don't get: the same server has a separate API on which similar code does work, and the only things that change are the parameter and the URI. Note that the cURL call works against both APIs.
Second API cURL call (that works):
curl -H 'X-Requested-With: SO demo' -d 'parameter=value' https://username:password@api.domain.com/api2/call.php
Equivalent code that works:
import urllib2
# Create a password manager.
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
# Add the username and password.
top_level_url = 'https://api.server.com'
password_mgr.add_password(None, top_level_url, 'username', 'password')
handler = urllib2.HTTPBasicAuthHandler(password_mgr)
# Create "opener" (OpenerDirector instance).
opener = urllib2.build_opener(handler)
# Install the opener.
# Now all calls to urllib2.urlopen use our opener.
urllib2.install_opener(opener)
# Create request.
headers = {'X-Requested-With':'SO demo.'}
uri = 'https://api.server.com/api2/call.php'
data='parameter=value'
req = urllib2.Request(uri,data,headers)
# Make request to fetch url.
result = urllib2.urlopen(req)
# Read results.
result.read()
Why does urllib2 work when the uri ends with a '.php', but not work when the uri ends with a '/'?
In the first request you are setting:
uri = 'https://api.domain.com/api/work/'
But to match the working second example, you presumably meant to use the same host:
uri = 'https://api.server.com/api/work/'
From Python urllib2 Basic Auth Problem
The problem [is] that the Python libraries, per HTTP-Standard, first send an unauthenticated request, and then only if it's answered with a 401 retry, are the correct credentials sent. If the ... servers don't do "totally standard authentication" then the libraries won't work.
This particular API does not respond with a 401 to the first, unauthenticated attempt; it responds with an XML body saying that credentials were not sent, so urllib2 never retries with the credentials.
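Because this API never issues a 401 challenge, one workaround is to attach the Basic credentials to the request yourself instead of relying on HTTPBasicAuthHandler. A minimal sketch, assuming the same placeholder username, password and URI as above:

import base64
import urllib2

uri = 'https://api.domain.com/api/work/'
data = 'parameter=value'
headers = {'X-Requested-With': 'SO demo'}

req = urllib2.Request(uri, data, headers)
# Build the Authorization header ourselves so the credentials go out on the
# very first request, without waiting for a 401 challenge that never comes.
req.add_header('Authorization', 'Basic ' + base64.b64encode('username:password'))

result = urllib2.urlopen(req)
print result.read()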
Related
I have an Authlib OAuth2Session and I am trying to authenticate with username and password. Here's what I am doing:
self.session = OAuth2Session(
    token_endpoint=f"{API_URL}/oauth/token",
)
self.session.fetch_token(
    url=f"{API_URL}/oauth/token",
    username=username,
    password=password,
    grant_type="password",
)
After looking at the debug logs, it seems that fetch_token sends the username and password as form-encoded parameters in the request body. This is what that method sends:
send: b'grant_type=password&username=<email>&password=<password>&client_id=None&client_secret='
versus a manual requests call with a properly formatted JSON body, which works:
send: b'{"password": "<PW>", "username": "<EMAIL>", "grant_type": "password"}'
Is there a way to make fetch_token send a JSON-formatted request body? I have tried adding headers={"Content-Type": "application/json"} and also passing the username and password via fetch_token's body parameter, and neither worked.
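For reference, a minimal sketch of the manual requests call described above, which posts the same fields as a JSON body to the token endpoint (API_URL, the e-mail address and the password below are placeholders):

import requests

API_URL = "https://example.com"   # placeholder
username = "user@example.com"     # placeholder
password = "secret"               # placeholder

# Send the credentials as a JSON body instead of form-encoded data.
response = requests.post(
    f"{API_URL}/oauth/token",
    json={"grant_type": "password", "username": username, "password": password},
)
response.raise_for_status()
token = response.json()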
Below is my Python code using the urllib2 library; it keeps failing with an Unauthorized error although I am using the correct API key. If I use curl, the POST/GET works just fine. Anyone got ideas? Thanks.
The curl command below works just fine.
Create Credential
curl -X POST 'https://myurl.com' \
-H 'Content-Type: application/json' \
-u 'XXXXXXXXXX:' \
-d @- << EOF
{
"vendorAccountId": "1234567",
"type": "my_role"
}
EOF
Below is the Python code which doesn't work.
The line where it fails is: response = opener.open(request)
import boto3
import json
import logging
import signal
import requests
from urllib2 import build_opener, HTTPHandler, Request
import urllib2
LOGGER = logging.getLogger()
LOGGER.setLevel(logging.INFO)
def main():
    auth_token = "XXXXXXXXXX"
    account_id = "1234567"
    request_type = "CreateCredentials"
    content_type = ""
    request_body = json.dumps({})
    if request_type == "CreateCredentials":
        target_url = 'https://myurl.com'
        request_method = "POST"
        content_type = "application/json"
        request_body = json.dumps({
            "vendorAccountId": account_id,
            "type": "my_role"
        })
    handler = urllib2.HTTPHandler()
    opener = urllib2.build_opener(handler)
    request = urllib2.Request(target_url, data=request_body)
    request.add_header("Content-Type", content_type)
    request.add_header("Content-Length", len(request_body))
    request.add_header("Authorization", auth_token)
    request.get_method = lambda: request_method
    response = opener.open(request)  # *****Fails here******

if __name__ == "__main__":
    main()
Finally, I figured out what the issue was: my lack of patience in reading the vendor manuals. The HTTP request I was sending was missing some parameters that were required, and I also needed to send the key in an encrypted format.
I am trying to delete a git branch from gitlab, using the gitlab API with a personal access token.
If I use curl like this:
curl --request DELETE --header "PRIVATE-TOKEN: somesecrettoken" "deleteurl"
then it works and the branch is deleted.
But if I use requests like this:
token_data = {'private_token': "somesecrettoken"}
requests.Request("DELETE", url, data=token_data)
it doesn't work; the branch is not deleted.
Your requests code is indeed not doing the same thing. You are setting data=token_data, which puts the token in the request body. The curl command line uses an HTTP header instead, and leaves the body empty.
Do the same in Python:
token_data = {'Private-Token': "somesecrettoken"}
requests.Request("DELETE", url, headers=token_data)
You can also put the token in the URL parameters, via the params argument:
token_data = {'private_token': "somesecrettoken"}
requests.Request("DELETE", url, params=token_data)
This adds ?private_token=somesecrettoken to the URL sent to gitlab.
However, GitLab does accept the private_token value in the request body as well, either as form data or as JSON, which means the real problem is that you are using the requests API wrong.
A requests.Request() instance is not going to be sent without additional work. It is normally only needed if you want to access the prepared data before sending.
If you don't need to use this more advanced feature, use the requests.delete() method:
response = requests.delete(url, headers=token_data)
If you do need that feature, use a requests.Session() object: first prepare the request object, then send it:
with requests.Session() as session:
    request = requests.Request("DELETE", url, params=token_data)
    prepped = request.prepare()
    response = session.send(prepped)
Even without needing to use prepared requests, a session is very helpful when using an API. You can set the token once, on a session:
with requests.Session() as session:
    session.headers['Private-Token'] = 'somesecrettoken'
    # now all requests via the session will use this header
    response = session.get(url1)
    response = session.post(url2, json=...)
    response = session.delete(url3)
    # etc.
I'm using urllib.request in Python to try to download some build information from TeamCity. This request used to work without a username and password; however, a recent security change means I must now supply them. So I have tried each of the two approaches below.
Attempt 1:
url = 'http://<domain>/httpAuth/app/rest/buildTypes/<buildlabel>/builds/running:false?count=1&start=0'
# create a password manager
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
# Add the username and password.
top_level_url = "http://<domain>/httpAuth/app/rest/buildTypes/id:<buildlabel>/builds/running:false?count=1&start=0"
password_mgr.add_password(None, top_level_url, username, password)
handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
# create "opener" (OpenerDirector instance)
opener = urllib.request.build_opener(handler)
# use the opener to fetch a URL
opener.open(url)
Attempt 2:
url = 'http://<username>:<password>#<domain>/httpAuth/app/rest/buildTypes/id:buildlabel/builds/running:false?count=1&start=0'
rest_api = urllib.request.urlopen(url)
Both of these return "HTTP Error 401: Unauthorized". However, if I print url and paste the output into a browser, the link works perfectly; through Python I get the above error.
I use something very similar in another Perl script and this works perfectly also.
* SOLVED BELOW *
Solved this using:
credentials(url, username, password)
rest_api = urllib2.urlopen(url)
latest_build_info = rest_api.read()
latest_build_info = latest_build_info.decode("UTF-8")
# Then parse this xml for the information I want.
def credentials(url, username, password):
    p = urllib2.HTTPPasswordMgrWithDefaultRealm()
    p.add_password(None, url, username, password)
    handler = urllib2.HTTPBasicAuthHandler(p)
    opener = urllib2.build_opener(handler)
    urllib2.install_opener(opener)
As a side note, I then want to download a file:
credentials(url, username, password)
urllib.urlretrieve(url, downloaded_file)  # urlretrieve is in urllib, not urllib2
where url is:
http://<teamcityServer>/repository/download/<build Label>/<BuildID>:id/Filename.zip
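Note that urlretrieve lives in urllib, not urllib2, and it does not go through the opener installed by urllib2.install_opener, so the credentials set up in credentials() are not sent with it. A minimal sketch that downloads the file through the authenticated urllib2 opener instead (url and downloaded_file are the placeholders above):

credentials(url, username, password)
# Fetch through the installed, authenticated opener and write the body to disk.
response = urllib2.urlopen(url)
with open(downloaded_file, 'wb') as local_file:
    local_file.write(response.read())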
What's the best way to specify a proxy with username and password for an http connection in python?
This works for me:
import urllib2
proxy = urllib2.ProxyHandler({'http': 'http://username:password@proxyurl:proxyport'})
auth = urllib2.HTTPBasicAuthHandler()
opener = urllib2.build_opener(proxy, auth, urllib2.HTTPHandler)
urllib2.install_opener(opener)
conn = urllib2.urlopen('http://python.org')
return_str = conn.read()
Use this:
import requests
proxies = {"http":"http://username:password#proxy_ip:proxy_port"}
r = requests.get("http://www.example.com/", proxies=proxies)
print(r.content)
I think it's much simpler than using urllib. I don't understand why people love using urllib so much.
Set an environment variable named http_proxy to something like this: http://username:password@proxy_url:port
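For example, a minimal sketch (the proxy credentials, host and port are placeholders); requests picks up the http_proxy variable from the environment automatically:

import os
import requests

# Placeholder proxy URL: substitute your own credentials, host and port.
os.environ['http_proxy'] = 'http://username:password@proxy_url:8080'

r = requests.get('http://www.example.com/')
print(r.content)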
The best way of going through a proxy that requires authentication is using urllib2 to build a custom url opener, then using that to make all the requests you want to go through the proxy. Note in particular, you probably don't want to embed the proxy password in the url or the python source code (unless it's just a quick hack).
import urllib2
def get_proxy_opener(proxyurl, proxyuser, proxypass, proxyscheme="http"):
    password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, proxyurl, proxyuser, proxypass)
    proxy_handler = urllib2.ProxyHandler({proxyscheme: proxyurl})
    proxy_auth_handler = urllib2.ProxyBasicAuthHandler(password_mgr)
    return urllib2.build_opener(proxy_handler, proxy_auth_handler)

if __name__ == "__main__":
    import sys
    if len(sys.argv) > 4:
        url_opener = get_proxy_opener(*sys.argv[1:4])
        for url in sys.argv[4:]:
            print url_opener.open(url).headers
    else:
        print "Usage:", sys.argv[0], "proxy user pass fetchurls..."
In a more complex program, you can separate these components out as appropriate (for instance, only using one password manager for the lifetime of the application). The Python documentation has more examples on how to do complex things with urllib2 that you might also find useful.
Or if you want to install it, so that it is always used with urllib2.urlopen (so you don't need to keep a reference to the opener around):
import urllib2
url = 'www.proxyurl.com'
username = 'user'
password = 'pass'
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
# None, with the "WithDefaultRealm" password manager means
# that the user/pass will be used for any realm (where
# there isn't a more specific match).
password_mgr.add_password(None, url, username, password)
auth_handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
print urllib2.urlopen("http://www.example.com/folder/page.html").read()
Here is the method using urllib.request:
import urllib.request
# set up authentication info
authinfo = urllib.request.HTTPBasicAuthHandler()
proxy_support = urllib.request.ProxyHandler({"http" : "http://ahad-haam:3128"})
# build a new opener that adds authentication and caching FTP handlers
opener = urllib.request.build_opener(proxy_support, authinfo,
urllib.request.CacheFTPHandler)
# install it
urllib.request.install_opener(opener)
f = urllib.request.urlopen('http://www.python.org/')
"""