The program I am developing posts documents to a bank's DMS server. The bank provided a server certificate in .cer format, which I pass via the `verify` parameter in my code. They also provided a client ID and secret that I have to embed in the request headers. I generated a self-signed client certificate and private key, gave them the client certificate in .cer format along with the public key, and in my code I pass the paths to the client certificate and private key as the `cert` tuple.
Upon executing the code, I am getting this error:
HTTPSConnectionPool(host='apimuat.xxxbank.com', port=9095): Max retries exceeded with url: /doc-mgmt/v1/uploadDoc (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fb01bd8a160>: Failed to establish a new connection: [Errno 60] Operation timed out'))
  File "/Users/fpl_mayank/Documents/FPL/python-virtual-env/uploadDocApi/server.py", line 164, in main
    result = requests.post(url,
  File "/Users/fpl_mayank/Documents/FPL/python-virtual-env/uploadDocApi/server.py", line 189, in <module>
    main()
I have tested it against 'https://postman-echo.com/post', without specifying cert or verify, just to check whether my request goes through at all; it works fine there.
This is the code snippet where I use the requests functions:
url = 'https://apimuat.xxxbank.com:9095/doc-mgmt/v1/uploadDoc'
headers = {
    "Content-Type": "application/json",
    "client_id": "af197b22539647fba4db8b971b43e38",
    "client_secret": "c1AA406e24074d8887954472C78a924",
}
data = req
result = requests.post(
    url,
    data=data,
    headers=headers,
    cert=('/Users/fpl_mayank/Documents/FPL/python-virtual-env/uploadDocApi/keystore/dms_csr_certificate_self.cer',
          '/Users/fpl_mayank/Documents/FPL/python-virtual-env/uploadDocApi/keystore/dms_private_key.key'),
    verify='/Users/fpl_mayank/Documents/FPL/python-virtual-env/uploadDocApi/truststore/APIM-UAT.cer',
)
res = result.json()
In the API doc it was mentioned that 2-way SSL authentication would be implemented between client and server. I have also created a virtual env for this program. Please help; I am the first one in my company to write an API integration in Python, so the only way to get my issue resolved is through good ol' Stack Overflow.
So I solved this. I don't know exactly what fixed it, but when working on APIs, make sure the endpoint's IP is whitelisted on your network as required, and the same goes for their side. Also, I was sending a formatted JSON request with indentation and spaces, so make sure to keep the JSON on one line.
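To illustrate the single-line JSON point, a minimal sketch (the field names are made up for illustration):

```python
import json

payload = {"docId": "123", "docType": "KYC"}  # illustrative fields only

# Pretty-printed JSON contains newlines and indentation, which this
# particular server rejected:
pretty = json.dumps(payload, indent=4)

# Compact single-line JSON, safe to send as the request body:
compact = json.dumps(payload, separators=(",", ":"))

print("\n" in pretty)   # True
print("\n" in compact)  # False
```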
I am using the Python Requests module (v2.19.1) with Python 3.4.3, calling a function on a remote server that generates a .csv file for download. In general it works perfectly, but there is one particular file that takes more than 6 minutes to complete, and no matter what I set the timeout parameter to, I get an error after exactly 5 minutes trying to generate that file.
import requests

s = requests.Session()

authPayload = {'UserName': 'myloginname', 'Password': 'password'}
loginURL = 'https://myremoteserver.com/login/authenticate'
login = s.post(loginURL, data=authPayload)

backupURL = 'https://myremoteserver.com/directory/jsp/Backup.jsp'
payload = {'command': fileCommand}
headers = {'Connection': 'keep-alive'}
post = s.post(backupURL, data=payload, headers=headers, timeout=None)
This times out after exactly 5 minutes with the error:
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 330, in send
    timeout=timeout
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 612, in urlopen
    raise MaxRetryError(self, url, e)
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='myremoteserver.com', port=443): Max retries exceeded with url: /directory/jsp/Backup.jsp (Caused by <class 'http.client.BadStatusLine'>: '')
If I set timeout to something much smaller, say 5 seconds, I get an error that makes perfect sense:
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='myremoteserver.com', port=443): Read timed out. (read timeout=5)
If I run the process from a browser it works fine, so it doesn't seem to be the remote server closing the connection, or a firewall or something in between dropping it.
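One thing worth pinning down first: requests' timeout does not cap the total response time; it bounds the connect step and each individual socket read separately, so timeout=None cannot itself be what cuts the connection at the 5-minute mark. A self-contained sketch against a throwaway local server (all names here are illustrative):

```python
import http.server
import threading

import requests


class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet


# Port 0 lets the OS pick a free port.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# timeout=(connect, read): each value bounds a single socket step, not
# the whole transfer, so a slow-but-steady download never trips it.
r = requests.get("http://127.0.0.1:%d/" % server.server_port,
                 timeout=(3.05, 27))
print(r.text)  # ok
server.shutdown()
```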
Posted at the request of the OP; my comments on the original question pointed to a related SO problem.
The clue to the problem lies in the http.client.BadStatusLine error.
Take a look at the following related SO Q&A, which discusses the impact of proxy servers on HTTP requests and responses.
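A quick diagnostic in the same spirit: inspect the proxy settings the client would pick up from the environment, since requests honours these variables by default (this only detects a configured proxy, it does not fix anything):

```python
import os
import urllib.request

# getproxies() reads HTTP_PROXY/HTTPS_PROXY etc., the same settings
# requests uses unless told otherwise.
proxies = urllib.request.getproxies()
print(proxies)

# A proxy here may buffer the response and drop idle connections after
# its own limit (5 minutes is a common default), yielding BadStatusLine.
if "https" in proxies or "HTTPS_PROXY" in os.environ:
    print("HTTPS traffic is likely going through a proxy")
```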
I'm scraping some internal pages using Python and requests. I've turned off SSL verification and warnings.
requests.packages.urllib3.disable_warnings()
page = requests.get(url, verify=False)
On certain servers I receive an SSL error I can't get past.
Traceback (most recent call last):
  File "scraper.py", line 6, in <module>
    page = requests.get(url, verify=False)
  File "/cygdrive/c/Users/jfeocco/VirtualEnv/scraping/lib/python3.4/site-packages/requests/api.py", line 71, in get
    return request('get', url, params=params, **kwargs)
  File "/cygdrive/c/Users/jfeocco/VirtualEnv/scraping/lib/python3.4/site-packages/requests/api.py", line 57, in request
    return session.request(method=method, url=url, **kwargs)
  File "/cygdrive/c/Users/jfeocco/VirtualEnv/scraping/lib/python3.4/site-packages/requests/sessions.py", line 475, in request
    resp = self.send(prep, **send_kwargs)
  File "/cygdrive/c/Users/jfeocco/VirtualEnv/scraping/lib/python3.4/site-packages/requests/sessions.py", line 585, in send
    r = adapter.send(request, **kwargs)
  File "/cygdrive/c/Users/jfeocco/VirtualEnv/scraping/lib/python3.4/site-packages/requests/adapters.py", line 477, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: [SSL: SSL_NEGATIVE_LENGTH] dh key too small (_ssl.c:600)
This happens both in and out of Cygwin, on Windows and OS X. My research hinted at outdated OpenSSL on the server. I'm looking for a client-side fix, ideally.
Edit:
I was able to resolve this by using a cipher set:
import requests

requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS += ':HIGH:!DH:!aNULL'
try:
    requests.packages.urllib3.contrib.pyopenssl.DEFAULT_SSL_CIPHER_LIST += ':HIGH:!DH:!aNULL'
except AttributeError:
    # no pyopenssl support used / needed / available
    pass

page = requests.get(url, verify=False)
This is not an extra answer; it just combines the solution code from the question with some extra information, so others can copy it directly without extra trial and error.
It is not only a DH key issue on the server side; mismatched libraries across Python modules can also be involved.
The code segment below ignores those security issues, because they may not be solvable on the server side; for example, if it is an internal legacy server, no one wants to update it.
Besides the hacked cipher string 'HIGH:!DH:!aNULL', the urllib3 module can be imported to disable the warning, if available:
import requests
import urllib3

requests.packages.urllib3.disable_warnings()
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS += ':HIGH:!DH:!aNULL'
try:
    requests.packages.urllib3.contrib.pyopenssl.util.ssl_.DEFAULT_CIPHERS += ':HIGH:!DH:!aNULL'
except AttributeError:
    # no pyopenssl support used / needed / available
    pass

page = requests.get(url, verify=False)
This also worked for me:
import requests
import urllib3
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS = 'ALL:@SECLEVEL=1'
OpenSSL security levels documentation:
https://www.openssl.org/docs/manmaster/man3/SSL_CTX_set_security_level.html
SECLEVEL=2 is the OpenSSL default nowadays (at least on my setup: Ubuntu 20.04, OpenSSL 1.1.1f); SECLEVEL=1 lowers the bar.
Security levels are intended to avoid the complexity of tinkering with individual ciphers.
I believe most of us mere mortals don't have in-depth knowledge of the security strengths and weaknesses of individual ciphers; I certainly don't.
Security levels seem a nice way to keep some control over how far you are opening the security door.
Note: I got a different SSL error, WRONG_SIGNATURE_TYPE instead of SSL_NEGATIVE_LENGTH, but the underlying issue is the same.
Error:
Traceback (most recent call last):
[...]
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 581, in post
    return self.request('POST', url, data=data, json=json, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 514, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='somehost.com', port=443): Max retries exceeded with url: myurl (Caused by SSLError(SSLError(1, '[SSL: WRONG_SIGNATURE_TYPE] wrong signature type (_ssl.c:1108)')))
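The same security-level idea can be expressed with the standard library's ssl module; a sketch, assuming an OpenSSL build (1.1.0+) that understands the @SECLEVEL syntax:

```python
import ssl

# Appending @SECLEVEL=1 to the cipher string relaxes the minimum key
# sizes and algorithms that level 2 (the modern default) enforces.
ctx = ssl.create_default_context()
ctx.set_ciphers("DEFAULT:@SECLEVEL=1")

# The context negotiates normally; only the acceptance bar is lowered.
print(len(ctx.get_ciphers()) > 0)  # True
```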
Disabling warnings or certificate validation will not help. The underlying problem is a weak DH key used by the server which can be misused in the Logjam Attack.
To work around this you need to choose a cipher which does not use Diffie-Hellman key exchange and thus is not affected by the weak DH key. That cipher must also be supported by the server. It is unknown what the server supports, but you might try the cipher AES128-SHA or the cipher set HIGH:!DH:!aNULL.
Using requests with your own cipher set is tricky. See Why does Python requests ignore the verify parameter? for an example.
I had the same issue.
It was fixed by commenting out the
CipherString = DEFAULT@SECLEVEL=2
line in /etc/ssl/openssl.cnf.
Someone from the requests library's core development team has documented a recipe to keep the changes limited to one or a few servers:
https://lukasa.co.uk/2017/02/Configuring_TLS_With_Requests/
If your code interacts with multiple servers, it makes sense not to lower the security requirements of all connections because one server has a problematic configuration.
The code worked for me out of the box.
That is, using my own value for CIPHERS, 'ALL:@SECLEVEL=1'.
I will summarize my solution here. I had to modify the Python SSL library, which was possible because I was running my code inside a Docker container, but it's something you probably don't want to do in general.
Get the ciphers supported by your server. In my case it was a third-party e-mail server, and I used the script described in "list SSL/TLS cipher suites":
check_supported_ciphers.sh
#!/usr/bin/env bash

# OpenSSL requires the port number.
SERVER=$1
DELAY=1
ciphers=$(openssl ciphers 'ALL:eNULL' | sed -e 's/:/ /g')

echo Obtaining cipher list from $(openssl version).

for cipher in ${ciphers[@]}
do
    echo -n Testing $cipher...
    result=$(echo -n | openssl s_client -cipher "$cipher" -connect $SERVER 2>&1)
    if [[ "$result" =~ ":error:" ]] ; then
        error=$(echo -n $result | cut -d':' -f6)
        echo NO \($error\)
    else
        if [[ "$result" =~ "Cipher is ${cipher}" || "$result" =~ "Cipher :" ]] ; then
            echo YES
        else
            echo UNKNOWN RESPONSE
            echo $result
        fi
    fi
    sleep $DELAY
done
Give it execute permissions:
chmod +x check_supported_ciphers.sh
And execute it:
./check_supported_ciphers.sh myremoteserver.example.com | grep YES
After a few seconds you will see output similar to:
Testing AES128-SHA...YES (AES128-SHA_set_cipher_list)
So I will use "AES128-SHA" as the SSL cipher.
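Before patching ssl.py, it may also be worth confirming client-side that the local OpenSSL accepts the chosen cipher string at all. `cipher_supported_locally` is a hypothetical helper (shown in Python 3 for brevity, while this answer targets Python 2.7); the server-side check is still the shell script above:

```python
import ssl


def cipher_supported_locally(cipher_string: str) -> bool:
    """Return True if the local OpenSSL accepts this cipher string."""
    ctx = ssl.create_default_context()
    try:
        ctx.set_ciphers(cipher_string)
    except ssl.SSLError:  # raised when no cipher matches the string
        return False
    return len(ctx.get_ciphers()) > 0


print(cipher_supported_locally("NO-SUCH-CIPHER"))  # False
```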
Force the error in your code; the traceback shows which SSL library file is being used:
Traceback (most recent call last):
  File "my_custom_script.py", line 52, in <module>
    imap = IMAP4_SSL(imap_host)
  File "/usr/lib/python2.7/imaplib.py", line 1169, in __init__
    IMAP4.__init__(self, host, port)
  File "/usr/lib/python2.7/imaplib.py", line 174, in __init__
    self.open(host, port)
  File "/usr/lib/python2.7/imaplib.py", line 1181, in open
    self.sslobj = ssl.wrap_socket(self.sock, self.keyfile, self.certfile)
  File "/usr/lib/python2.7/ssl.py", line 931, in wrap_socket
    ciphers=ciphers)
  File "/usr/lib/python2.7/ssl.py", line 599, in __init__
    self.do_handshake()
  File "/usr/lib/python2.7/ssl.py", line 828, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: DH_KEY_TOO_SMALL] dh key too small (_ssl.c:727)
Get the python SSL library path used, in this case:
/usr/lib/python2.7/ssl.py
Edit it:
cp /usr/lib/python2.7/ssl.py /usr/lib/python2.7/ssl.py.bak
vim /usr/lib/python2.7/ssl.py
And replace:
_DEFAULT_CIPHERS = (
    'ECDH+AESGCM:ECDH+CHACHA20:DH+AESGCM:DH+CHACHA20:ECDH+AES256:DH+AES256:'
    'ECDH+AES128:DH+AES:ECDH+HIGH:DH+HIGH:RSA+AESGCM:RSA+AES:RSA+HIGH:'
    '!aNULL:!eNULL:!MD5:!3DES'
)
with:
_DEFAULT_CIPHERS = (
    'AES128-SHA'
)
I encountered this problem after upgrading from Ubuntu 18.04 to 20.04; the following command worked for me:
pip install --ignore-installed pyOpenSSL --upgrade
It may be safer not to override the default global ciphers, but instead to create a custom HTTPAdapter with the required ciphers in a specific session:
import ssl
from typing import Any

import requests


class ContextAdapter(requests.adapters.HTTPAdapter):
    """Allows overriding the default SSL context."""

    def __init__(self, *args: Any, **kwargs: Any) -> None:
        self.ssl_context: ssl.SSLContext | None = kwargs.pop("ssl_context", None)
        super().__init__(*args, **kwargs)

    def init_poolmanager(self, *args: Any, **kwargs: Any) -> Any:
        # See available keys in urllib3.poolmanager.SSL_KEYWORDS
        kwargs.setdefault("ssl_context", self.ssl_context)
        return super().init_poolmanager(*args, **kwargs)
Then you need to create a custom context, for example (defaults are provided so the function can be called with only `ciphers`):
import ssl


def create_context(
    ciphers: str,
    minimum_version: ssl.TLSVersion = ssl.TLSVersion.TLSv1_2,
    verify: bool = True,
) -> ssl.SSLContext:
    """See https://peps.python.org/pep-0543/."""
    ctx = ssl.create_default_context()
    # Allow using untrusted certificates.
    if not verify:
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    # Just for example.
    if minimum_version == ssl.TLSVersion.TLSv1:
        ctx.options &= (
            ~getattr(ssl, "OP_NO_TLSv1_3", 0)
            & ~ssl.OP_NO_TLSv1_2
            & ~ssl.OP_NO_TLSv1_1
        )
    ctx.minimum_version = minimum_version
    ctx.set_ciphers(ciphers)
    return ctx
and then you need to configure each website with custom context rules:
session = requests.Session()

session.mount(
    "https://dh.affected-website.com",
    ContextAdapter(
        ssl_context=create_context(ciphers="HIGH:!DH:!aNULL"),
    ),
)
session.mount(
    "https://only-elliptic.modern-website.com",
    ContextAdapter(
        ssl_context=create_context(ciphers="ECDHE+AESGCM"),
    ),
)
session.mount(
    "https://only-tls-v1.old-website.com",
    ContextAdapter(
        ssl_context=create_context(
            ciphers="DEFAULT:@SECLEVEL=1",
            minimum_version=ssl.TLSVersion.TLSv1,
        ),
    ),
)

result = session.get("https://only-tls-v1.old-website.com/object")
After reading all the answers, I can say that @bgoeman's answer is close to mine; you can follow their link to learn more.
On CentOS 7, search for the following content in /etc/pki/tls/openssl.cnf:
[ crypto_policy ]
.include /etc/crypto-policies/back-ends/opensslcnf.config
[ new_oids ]
Set 'ALL:@SECLEVEL=1' in /etc/crypto-policies/back-ends/opensslcnf.config.
In a Docker image, you can add the following command to your Dockerfile to get rid of this issue:
RUN sed -i '/CipherString = DEFAULT/s/^#\?/#/' /etc/ssl/openssl.cnf
This automatically comments out the problematic CipherString line.
If you are using the httpx library, this lets you skip the warning:
import httpx
httpx._config.DEFAULT_CIPHERS += ":HIGH:!DH:!aNULL"
I had the following error:
SSLError: [SSL: DH_KEY_TOO_SMALL] dh key too small (_ssl.c:727)
I solved it (Fedora) by reinstalling the packages in the right order:
python2.7 -m pip uninstall requests
python2.7 -m pip uninstall pyopenssl
python2.7 -m pip install pyopenssl==yourversion
python2.7 -m pip install requests==yourversion
The module install order caused the problem: accessing
requests.packages.urllib3.contrib.pyopenssl.util.ssl_.DEFAULT_CIPHERS
raised AttributeError: "pyopenssl" in "requests.packages.urllib3.contrib", even though the module existed.
Based on the answer given by the user bgoeman, the following code, which keeps the default ciphers and only adds the security level, works:
import requests
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS += ':@SECLEVEL=1'
There are two attempts to get a response from a "working" Django server. The working version uses a hardcoded URL; the other does not work while unit testing.
# working
# a = requests.post('http://localhost:8000/ImportKeys/',
#                   data=json.dumps({'user_id': key_obj.email,
#                                    'key': self.restore_pubkey(key_obj.fingerprint)}))

# not working
a = requests.post('http://' + request.get_host() + reverse('import_keys'),
                  data=json.dumps({'user_id': key_obj.email,
                                   'key': self.restore_pubkey(key_obj.fingerprint)}))
On the version that I want to get working, I get this (end of the stack trace):
  File "/home/PycharmProjects/lib/python3.4/site-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/home/PycharmProjects/lib/python3.4/site-packages/requests/adapters.py", line 437, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='testserver', port=80): Max retries exceeded with url: /ImportKeys/ (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -2] Name or service not known',))
And yes, I see that it's trying to connect to port 80, which is wrong.
To test your views in the TestCase classes, use django.test.Client, which is designed specifically for that purpose. If you inherit your test cases from django.test.TestCase, it's already available via the self.client attribute.
class YourTestCase(TestCase):
    def test_import_keys_posting(self):
        data = {
            'user_id': key_obj.email,
            'key': self.restore_pubkey(key_obj.fingerprint),
        }
        response = self.client.post(reverse('import_keys'), data)
        self.assertEqual(response.status_code, 200)
        self.assertEqual(response.json(), {'result': 'ok'})
And if you use Django Rest Framework, consider using its wonderful APIClient, which simplifies API testing even more.
If you need to send requests to the server during tests (in which case the requests will probably come not from the test code itself but from some mock or from JS code):
Extend LiveServerTestCase instead of TestCase. This launches an actual server during the tests.
If you are using request.build_absolute_uri() in the regular code being tested, you need to change the test code to update the HTTP request headers accordingly, like this:
from urllib import parse

checkout_url = '{}{}'.format(self.live_server_url,
                             reverse('checkout', kwargs={'pk': article.id}))
parsed_url = parse.urlparse(self.live_server_url)
# add the info on host and port to the http header to make subsequent
# request.build_absolute_uri() calls work
response = self.client.get(checkout_url,
                           SERVER_NAME=parsed_url.hostname,
                           SERVER_PORT=parsed_url.port)
I keep getting this error every time I try to run my code through a proxy. I have gone through every single link available on how to get my code running behind a proxy and simply cannot get it working.
import twython
import requests

TWITTER_APP_KEY = 'key'  # supply the appropriate value
TWITTER_APP_KEY_SECRET = 'key-secret'
TWITTER_ACCESS_TOKEN = 'token'
TWITTER_ACCESS_TOKEN_SECRET = 'secret'

t = twython.Twython(app_key=TWITTER_APP_KEY,
                    app_secret=TWITTER_APP_KEY_SECRET,
                    oauth_token=TWITTER_ACCESS_TOKEN,
                    oauth_token_secret=TWITTER_ACCESS_TOKEN_SECRET,
                    client_args={'proxies': {'http': 'proxy.company.com:10080'}})
Now if I run:

t = twython.Twython(app_key=TWITTER_APP_KEY,
                    app_secret=TWITTER_APP_KEY_SECRET,
                    oauth_token=TWITTER_ACCESS_TOKEN,
                    oauth_token_secret=TWITTER_ACCESS_TOKEN_SECRET,
                    client_args=client_args)
print t.client_args

I get only a {}.
and when I try running
t.update_status(status='See how easy this was?')
I get this problem :
Traceback (most recent call last):
  File "<pyshell#40>", line 1, in <module>
    t.update_status(status='See how easy this was?')
  File "build\bdist.win32\egg\twython\endpoints.py", line 86, in update_status
    return self.post('statuses/update', params=params)
  File "build\bdist.win32\egg\twython\api.py", line 223, in post
    return self.request(endpoint, 'POST', params=params, version=version)
  File "build\bdist.win32\egg\twython\api.py", line 213, in request
    content = self._request(url, method=method, params=params, api_call=url)
  File "build\bdist.win32\egg\twython\api.py", line 134, in _request
    response = func(url, **requests_args)
  File "C:\Python27\lib\site-packages\requests-1.2.3-py2.7.egg\requests\sessions.py", line 377, in post
    return self.request('POST', url, data=data, **kwargs)
  File "C:\Python27\lib\site-packages\requests-1.2.3-py2.7.egg\requests\sessions.py", line 335, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Python27\lib\site-packages\requests-1.2.3-py2.7.egg\requests\sessions.py", line 438, in send
    r = adapter.send(request, **kwargs)
  File "C:\Python27\lib\site-packages\requests-1.2.3-py2.7.egg\requests\adapters.py", line 327, in send
    raise ConnectionError(e)
ConnectionError: HTTPSConnectionPool(host='api.twitter.com', port=443): Max retries exceeded with url: /1.1/statuses/update.json (Caused by <class 'socket.gaierror'>: [Errno 11004] getaddrinfo failed)
I have searched everywhere and tried everything I possibly could. The only resources I found were:
https://twython.readthedocs.org/en/latest/usage/advanced_usage.html#manipulate-the-request-headers-proxies-etc
https://groups.google.com/forum/#!topic/twython-talk/GLjjVRHqHng
https://github.com/fumieval/twython/commit/7caa68814631203cb63231918e42e54eee4d2273
https://groups.google.com/forum/#!topic/twython-talk/mXVL7XU4jWw
I could not find any relevant topics here on Stack Overflow either.
Please help. I hope someone replies. If you have already done this, please share a code example.
Your code isn't using your proxy. As the example shows, you specified a proxy for plain HTTP, but your stack trace shows an HTTPSConnectionPool, and your local machine probably can't resolve external domains directly.
Try setting your proxy like this:
client_args = {'proxies': {'https': 'http://proxy.company.com:10080'}}
In combination with @t-8ch's answer (which is that you must use a proxy as he has defined it), you should also realize that, as of this moment, requests (the underlying library of Twython) does not support proxying over HTTPS. This is a problem with requests' underlying library urllib3; it's a long-running issue as far as I'm aware.
On top of that, reading a bit of Twython's source explains why t.client_args returns an empty dictionary. In short, if you were to instead print t.client.proxies, you'd see that indeed your proxies are being processed as they very well should be.
Finally, complaining about your workplace while on StackOverflow and linking to GitHub commits that have your GitHub username (and real name) associated with them in the comments is not the best idea. StackOverflow is indexed quite thoroughly by Google and there is little doubt that someone else might find this and associate it with you as easily as I have. On top of that, that commit has absolutely no effect on Twython's current behaviour. You're running down a rabbit hole with no end by chasing the author of that commit.
It looks like a domain name lookup failed. Assuming your configured DNS server can resolve Twitter's domain name (and surely it can), I would presume your DNS lookup for proxy.company.com failed. Try specifying the proxy by IP address instead of by hostname.
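The gaierror can be reproduced in isolation: socket.getaddrinfo performs the same lookup that fails in the traceback, so probing it directly shows whether name resolution is the culprit. `can_resolve` is a hypothetical helper, and the `.invalid` host below is just a guaranteed-unresolvable example:

```python
import socket


def can_resolve(hostname: str) -> bool:
    """Return True if a DNS lookup for hostname succeeds."""
    try:
        socket.getaddrinfo(hostname, 80)
    except socket.gaierror:  # [Errno 11004] / [Errno -2]: lookup failed
        return False
    return True


print(can_resolve("localhost"))  # True
# Running can_resolve("proxy.company.com") on the affected machine would
# show whether the proxy hostname resolves; if not, use its IP address.
```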