python requests proxies not working for me

My code for reference:
header = {"Content-Type": "application/json"}
proxyDict = {
    "all_proxy": "http://proxy.com:8080",
    "http_proxy": "http://proxy.com:8080",
    "https_proxy": "http://proxy.com:8080",
    "ftp_proxy": "http://proxy.com:8080",
    "ALL_PROXY": "http://proxy.com:8080",
    "HTTP_PROXY": "http://proxy.com:8080",
    "HTTPS_PROXY": "http://proxy.com:8080",
    "FTP_PROXY": "http://proxy.com:8080"
}
try:
    res = requests.post('slack_url', json={"text": "text"}, headers=header, proxies=proxyDict, verify=False)
    print('Success!')
except:
    print("unable to send a slack message")
When I run this code it simply runs as if the proxies were never read and times out. However, when I manually set my environment variables it works perfectly fine.
The issue is that I need this part to run as an airflow service and therefore need the proxy to be set when it is run.
The only thing I can think of is that the requests library requires an actual IP address and can't use proxy.com (a stand-in for my company's proxy URL, not the actual URL I am using). In that case I would need a workaround that doesn't use the IP.
Any ideas?

You're setting up your proxyDict the wrong way: the keys of the proxies dict must be URL schemes ('http', 'https'), not environment-variable names. For an HTTPS request such as the Slack webhook, you only need the 'https' entry in that dict.
proxy = "https://proxy.com:8080"
proxies = {'https': proxy}
Then set this proxy in the request:
res = requests.post('slack_url', json={"text": "text"}, headers=header, proxies=proxies, verify=False)  # you should never use verify=False as it's not secure
If you want to select a random proxy from a proxy list, just put all of your proxies in a list and randomly select one:
import random
myProxies = ["http://proxy.com:8080","http://proxy.com:8080","http://proxy.com:8080"]
proxies = {'https': random.choice(myProxies)}
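Since the question notes that manually setting environment variables works, a minimal sketch of an alternative for the Airflow case is to set them from inside the task itself before the request runs; requests reads HTTP_PROXY/HTTPS_PROXY from the environment by default (the proxy URL below is the question's placeholder):
import os
import requests

# hypothetical stand-in proxy URL, same placeholder as in the question
os.environ["HTTP_PROXY"] = "http://proxy.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.com:8080"

# requests picks these up automatically because trust_env is True by default
res = requests.post('slack_url', json={"text": "text"})
This avoids depending on whatever shell environment the Airflow worker happens to inherit.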

Related

how to add authorization headers to bravado-created API client

I am able to create a simple API interface using the requests module that authenticates correctly and receives a response from an API. However, when I attempt to use bravado to create the client from a swagger file and manually add an authorization token to the header, it fails with:
bravado.exception.HTTPUnauthorized: 401 Unauthorized: Error(code=u'invalid_credentials', message=u'Missing authorization header',
I believe I am adding the authorization headers correctly.
The code I'm using to create the client is below. As shown, I've tried to add an Authorization token two ways:
- in the http_client setup via set_api_key
- in the SwaggerClient.from_url(...) step by adding request_headers
However, both options fail.
from bravado.requests_client import RequestsClient
from bravado.client import SwaggerClient
http_client = RequestsClient()
http_client.set_api_key(
    'https://api.optimizely.com/v2', 'Bearer <TOKEN>',
    param_name='Authorization', param_in='header'
)
headers = {
    'Authorization': 'Bearer <TOKEN>',
}
client = SwaggerClient.from_url(
    'https://api.optimizely.com/v2/swagger.json',
    http_client=http_client,
    request_headers=headers
)
My question is, how do I properly add authorization headers to a bravado SwaggerClient?
For reference, a possible solution is to add the _request_options with each request:
from bravado.client import SwaggerClient
headers = {
    'Authorization': 'Bearer <YOUR_TOKEN>'
}
requestOptions = {
    # === bravado config ===
    'headers': headers,
}
client = SwaggerClient.from_url("<SWAGGER_JSON_URL>")
result = client.<ENTITY>.<ACTION>(_request_options=requestOptions).response().result
print(result)
However, a better solution, which I still am unable to get to work, is to have it automatically authenticate with each request.
Try again, fixing the host of the set_api_key line.
from bravado.requests_client import RequestsClient
from bravado.client import SwaggerClient
http_client = RequestsClient()
http_client.set_api_key(
    'api.optimizely.com', 'Bearer <TOKEN>',
    param_name='api_key', param_in='header'
)
client = SwaggerClient.from_url(
    'https://api.optimizely.com/v2/swagger.json',
    http_client=http_client,
)
Documentation for the method is here: https://github.com/Yelp/bravado/blob/master/README.rst#example-with-header-authentication
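Combining this answer with the question's original intent, a hedged sketch of what may work for the Optimizely case is the corrected host together with the Authorization parameter name (an untested assumption, since the swagger spec decides which parameter name is honored):
from bravado.requests_client import RequestsClient
from bravado.client import SwaggerClient

http_client = RequestsClient()
# corrected host, but keeping the question's Authorization header name
http_client.set_api_key(
    'api.optimizely.com', 'Bearer <TOKEN>',
    param_name='Authorization', param_in='header'
)
client = SwaggerClient.from_url(
    'https://api.optimizely.com/v2/swagger.json',
    http_client=http_client,
)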

requests.Request to delete a gitlab branch does not work but works using curl DELETE

I am trying to delete a git branch from gitlab, using the gitlab API with a personal access token.
If I use curl like this:
curl --request DELETE --header "PRIVATE_TOKEN: somesecrettoken" "deleteurl"
then it works and the branch is deleted.
But if I use requests like this:
token_data = {'private_token': "somesecrettoken"}
requests.Request("DELETE", url, data= token_data)
it doesn't work; the branch is not deleted.
Your requests code is indeed not doing the same thing. You are setting data=token_data, which puts the token in the request body. The curl command line uses an HTTP header instead and leaves the body empty.
Do the same in Python:
token_data = {'Private-Token': "somesecrettoken"}
requests.Request("DELETE", url, headers=token_data)
You can also put the token in the URL parameters, via the params argument:
token_data = {'private_token': "somesecrettoken"}
requests.Request("DELETE", url, params=token_data)
This adds ?private_token=somesecrettoken to the URL sent to gitlab.
However, GitLab does accept the private_token value in the request body as well, either as form data or as JSON, which means the actual problem is that you are using the requests API wrong.
A requests.Request() instance is not going to be sent without additional work. It is normally only needed if you want to access the prepared data before sending.
If you don't need to use this more advanced feature, use the requests.delete() method:
response = requests.delete(url, headers=token_data)
If you do need the feature, use a requests.Session() object, then first prepare the request object, then send it:
with requests.Session() as session:
    request = requests.Request("DELETE", url, params=token_data)
    prepped = request.prepare()
    response = session.send(prepped)
Even without needing to use prepared requests, a session is very helpful when using an API. You can set the token once, on a session:
with requests.Session() as session:
    session.headers['Private-Token'] = 'somesecrettoken'
    # now all requests via the session will use this header
    response = session.get(url1)
    response = session.post(url2, json=....)
    response = session.delete(url3)
    # etc.
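Putting this together for the branch-deletion case, a minimal sketch (the host, project ID, and branch name are hypothetical placeholders; the URL assumes GitLab's v4 branches endpoint):
import requests

project_id = 123           # hypothetical project ID
branch = "feature-branch"  # hypothetical branch name
url = ("https://gitlab.example.com/api/v4/projects/"
       "%d/repository/branches/%s" % (project_id, branch))

with requests.Session() as session:
    session.headers['Private-Token'] = 'somesecrettoken'
    response = session.delete(url)
    response.raise_for_status()  # raises if GitLab rejected the request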

Symantec End Point Manager API - Authentication issues

The following is my code:
import requests, json

proxyDict = {
    "http": "<proxy>",
}
base_url = "https://<host_IP>/sepm/api/v1/identity/authenticate"
# if output is required in JSON format
json_format = True
payload = {
    "username": "<username_here>",
    "password": "<password_here>",
    "domain": ""
}
headers = {
    "Content-Type": "application/json"
}
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS += 'HIGH:!DH:!aNULL'  # necessary
r = requests.Session()  # Start session in order to store the SessionID Cookie
print json.dumps(payload)
r = requests.post(base_url, proxies=proxyDict, verify=False, headers=headers, data=json.dumps(payload))
strings = r.text
print strings
The SSL certificate has some errors, and I am hence using verify=False and DEFAULT_CIPHERS += 'HIGH:!DH:!aNULL'
The above code is in line with the documentation provided, but the server is refusing my auth request with the following error:
{"errorCode":"400","errorMessage":"Invalid Username or Password"}
Before you jump the gun: I checked and rechecked my credentials, and then checked a couple more times. My credentials are correct.
Is there an error in what I am doing?
Additional info: SEPM Version: {"version":"12.1.6168.6000"}
Late to this party, but I was having the same issue with almost the same setup, although going about it a bit differently. What my issue ended up being was the special characters in my password. The REST method that handles authentication on the Symantec manager side of things doesn't know how to handle certain special characters, so it returns a 400 syntax error. Try pulling special characters from your password, and keep it under 15 characters.
Good luck.
You might want to try putting the lines below near the top of your code to suppress the SSL warnings that verify=False produces:
import urllib3
urllib3.disable_warnings()
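Separately, note that the question's code creates a Session but then calls requests.post directly, so the SessionID cookie the comment mentions is never stored. A minimal sketch of the likely intent, reusing the question's own variables:
session = requests.Session()  # the session object stores cookies across calls
r = session.post(base_url, proxies=proxyDict, verify=False,
                 headers=headers, data=json.dumps(payload))
print(r.text)  # later session.get/session.post calls reuse the session cookie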

Proxy settings in requests library Python

I have a Python script used to connect to Parse.com (remote server) and upload a file. The script runs off a server that sits behind a corporate firewall.
import env
import json
import requests
from requests.auth import HTTPProxyAuth

def uploadFile(fileFullPath):
    print "Attempting to upload file: " + fileFullPath
    proxies = {
        "http": "http://10.128.198.14",
        "https": "http://10.128.198.14"
    }
    auth = HTTPProxyAuth('MyDomain\MyUsername', 'MyPassword')
    headers = {
        "X-Parse-Application-Id": env.X_Parse_APP_ID,
        "X-Parse-REST-API-Key": env.X_Parse_REST_API_Key,
        "Content-Type": "application/pdf"
    }
    f = open(fileFullPath, 'r')
    files = {'file': f}
    r = requests.post(env.PARSE_HOSTNAME + env.PARSE_FILES_ENDPOINT + "/" + env.PARSE_FILE_NAME, files=files, headers=headers, timeout=10, verify=False, proxies=proxies)
    print r.text
When I used this module from the command prompt, I got the following message:
ConnectionError thrown. Details: Cannot connect to proxy. Socket error: Tunnel connection failed: 407 Proxy Authentication Required.
I am pretty sure the username and password are both correct.
Any solution? Thanks!
The reason for the 407 error is that the proxy itself needs to be authenticated. So for your proxies dict, do the following:
proxies = {
    "http": "http://user:pass@10.128.198.14",
    "https": "http://user:pass@10.128.198.14"
}
Fill in the user and pass values in the proxy URLs. The requests documentation on proxies covers how to build proxy dicts and have them authenticated.
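One hedged detail for the original question: the username 'MyDomain\MyUsername' contains a backslash, and any special characters in embedded credentials should be percent-encoded before being placed in the proxy URL. A minimal sketch using the standard library (Python 3 urllib.parse; the credentials are the question's placeholders):
from urllib.parse import quote

user = quote('MyDomain\\MyUsername', safe='')  # the backslash becomes %5C
password = quote('MyPassword', safe='')
proxies = {
    "http": "http://%s:%s@10.128.198.14" % (user, password),
    "https": "http://%s:%s@10.128.198.14" % (user, password)
}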

How can I open a website with urllib via proxy in Python?

I have this program that checks a website, and I want to know how I can check it via a proxy in Python...
This is the code, just as an example:
while True:
    try:
        h = urllib.urlopen(website)
        break
    except:
        print '['+time.strftime('%Y/%m/%d %H:%M:%S')+'] '+'ERROR. Trying again in a few seconds...'
        time.sleep(5)
By default, urlopen uses the environment variable http_proxy to determine which HTTP proxy to use:
$ export http_proxy='http://myproxy.example.com:1234'
$ python myscript.py # Using http://myproxy.example.com:1234 as a proxy
If you instead want to specify a proxy inside your application, you can give a proxies argument to urlopen:
proxies = {'http': 'http://myproxy.example.com:1234'}
print("Using HTTP proxy %s" % proxies['http'])
urllib.urlopen("http://www.google.com", proxies=proxies)
Edit: If I understand your comments correctly, you want to try several proxies and print each proxy as you try it. How about something like this?
candidate_proxies = ['http://proxy1.example.com:1234',
                     'http://proxy2.example.com:1234',
                     'http://proxy3.example.com:1234']
for proxy in candidate_proxies:
    print("Trying HTTP proxy %s" % proxy)
    try:
        result = urllib.urlopen("http://www.google.com", proxies={'http': proxy})
        print("Got URL using proxy %s" % proxy)
        break
    except:
        print("Trying next proxy in 5 seconds")
        time.sleep(5)
Python 3 is slightly different here. It will try to auto-detect proxy settings, but if you need specific or manual proxy settings, consider this kind of code:
#!/usr/bin/env python3
import urllib.request

proxy_support = urllib.request.ProxyHandler({'http' : 'http://user:pass@server:port',
                                             'https': 'https://...'})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)

with urllib.request.urlopen(url) as response:
    html = response.read()  # ... or whatever processing you need
Refer also to the relevant section in the Python 3 docs
Here is example code showing how to use urllib to connect via a proxy:
import urllib.request

authinfo = urllib.request.HTTPBasicAuthHandler()
proxy_support = urllib.request.ProxyHandler({"http": "http://ahad-haam:3128"})

# build a new opener that adds authentication and caching FTP handlers
opener = urllib.request.build_opener(proxy_support, authinfo,
                                     urllib.request.CacheFTPHandler)
# install it
urllib.request.install_opener(opener)

f = urllib.request.urlopen('http://www.google.com/')
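If the proxy itself requires credentials, a minimal sketch using urllib's ProxyBasicAuthHandler (the host, port, and user/password values are hypothetical placeholders):
import urllib.request

# register the proxy credentials; None means any realm
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, 'http://proxyhost:3128', 'user', 'password')

proxy_support = urllib.request.ProxyHandler({'http': 'http://proxyhost:3128'})
proxy_auth = urllib.request.ProxyBasicAuthHandler(password_mgr)

# an opener built with both handlers authenticates to the proxy automatically
opener = urllib.request.build_opener(proxy_support, proxy_auth)
urllib.request.install_opener(opener)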
"""
For http and https use:
proxies = {'http': 'http://proxy-source-ip:proxy-port',
           'https': 'https://proxy-source-ip:proxy-port'}
More schemes can be added similarly; note that dict keys must be unique, so there can be only one proxy per scheme:
proxies = {'http': 'http://proxy1-source-ip:proxy-port',
           'ftp': 'http://proxy2-source-ip:proxy-port'}
Usage:
filehandle = urllib.urlopen(external_url, proxies=proxies)
To not use any proxies (e.g. for links within the network):
filehandle = urllib.urlopen(external_url, proxies={})
For proxy authentication via username and password:
proxies = {'http': 'http://username:password@proxy-source-ip:proxy-port',
           'https': 'https://username:password@proxy-source-ip:proxy-port'}
Note: avoid special characters such as : and @ in usernames and passwords, or percent-encode them first.
