Error in SSL wrapper while cloning with Mercurial - python

I've recently started working with Bitbucket and Mercurial. I can work with Git repositories just fine, but when I try to clone a Mercurial one, it crashes in /Users/foobar/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/mercurial-3.4.2-py2.7-macosx-10.6-x86_64.egg/mercurial/sslutil.py on line 36, with the error:
ValueError: can't clear options before OpenSSL 0.9.8m
I am running OpenSSL 1.0.2c.
As is apparent from the file path, I downloaded Python (and Mercurial) through Enthought Canopy.
I searched around and found someone with a similar problem: https://bitbucket.org/durin42/hgsubversion/issues/439/unknown-exception-in-dispatchpy-value. His partial solution (commenting out the offending line) was enough to make my clone work, but I confess I am out of my depth in trying to determine whether this will cause any security issues when using Mercurial. Are there any potential security issues with this? Is there any additional information you'd need to begin to answer this?
The file I edited is below (search for "==========" to find the line I removed).
# sslutil.py - SSL handling for mercurial
#
# Copyright 2005, 2006, 2007, 2008 Matt Mackall <mpm@selenic.com>
# Copyright 2006, 2007 Alexis S. L. Carvalho <alexis@cecm.usp.br>
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
import os, sys

from mercurial import util
from mercurial.i18n import _

_canloaddefaultcerts = False
try:
    # avoid using deprecated/broken FakeSocket in python 2.6
    import ssl
    CERT_REQUIRED = ssl.CERT_REQUIRED
    try:
        ssl_context = ssl.SSLContext
        _canloaddefaultcerts = util.safehasattr(ssl_context,
                                                'load_default_certs')

        def ssl_wrap_socket(sock, keyfile, certfile, cert_reqs=ssl.CERT_NONE,
                            ca_certs=None, serverhostname=None):
            # Allow any version of SSL starting with TLSv1 and
            # up. Note that specifying TLSv1 here prohibits use of
            # newer standards (like TLSv1_2), so this is the right way
            # to do this. Note that in the future it'd be better to
            # support using ssl.create_default_context(), which sets
            # up a bunch of things in smart ways (strong ciphers,
            # protocol versions, etc) and is upgraded by Python
            # maintainers for us, but that breaks too many things to
            # do it in a hurry.
            sslcontext = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
            # ================================================
            # LINE I REMOVED
            sslcontext.options &= ssl.OP_NO_SSLv2 & ssl.OP_NO_SSLv3
            # ================================================
            if certfile is not None:
                sslcontext.load_cert_chain(certfile, keyfile)
            sslcontext.verify_mode = cert_reqs
            if ca_certs is not None:
                sslcontext.load_verify_locations(cafile=ca_certs)
            elif _canloaddefaultcerts:
                sslcontext.load_default_certs()

            sslsocket = sslcontext.wrap_socket(sock,
                                               server_hostname=serverhostname)
            # check if wrap_socket failed silently because socket had been
            # closed
            # - see http://bugs.python.org/issue13721
            if not sslsocket.cipher():
                raise util.Abort(_('ssl connection failed'))
            return sslsocket
    except AttributeError:
        def ssl_wrap_socket(sock, keyfile, certfile, cert_reqs=ssl.CERT_NONE,
                            ca_certs=None, serverhostname=None):
            sslsocket = ssl.wrap_socket(sock, keyfile, certfile,
                                        cert_reqs=cert_reqs, ca_certs=ca_certs,
                                        ssl_version=ssl.PROTOCOL_TLSv1)
            # check if wrap_socket failed silently because socket had been
            # closed
            # - see http://bugs.python.org/issue13721
            if not sslsocket.cipher():
                raise util.Abort(_('ssl connection failed'))
            return sslsocket
except ImportError:
    CERT_REQUIRED = 2

    import socket, httplib

    def ssl_wrap_socket(sock, keyfile, certfile, cert_reqs=CERT_REQUIRED,
                        ca_certs=None, serverhostname=None):
        if not util.safehasattr(socket, 'ssl'):
            raise util.Abort(_('Python SSL support not found'))
        if ca_certs:
            raise util.Abort(_(
                'certificate checking requires Python 2.6'))

        ssl = socket.ssl(sock, keyfile, certfile)
        return httplib.FakeSocket(sock, ssl)

def _verifycert(cert, hostname):
    '''Verify that cert (in socket.getpeercert() format) matches hostname.
    CRLs is not handled.

    Returns error message if any problems are found and None on success.
    '''
    if not cert:
        return _('no certificate received')
    dnsname = hostname.lower()
    def matchdnsname(certname):
        return (certname == dnsname or
                '.' in dnsname and certname == '*.' + dnsname.split('.', 1)[1])

    san = cert.get('subjectAltName', [])
    if san:
        certnames = [value.lower() for key, value in san if key == 'DNS']
        for name in certnames:
            if matchdnsname(name):
                return None
        if certnames:
            return _('certificate is for %s') % ', '.join(certnames)

    # subject is only checked when subjectAltName is empty
    for s in cert.get('subject', []):
        key, value = s[0]
        if key == 'commonName':
            try:
                # 'subject' entries are unicode
                certname = value.lower().encode('ascii')
            except UnicodeEncodeError:
                return _('IDN in certificate not supported')
            if matchdnsname(certname):
                return None
            return _('certificate is for %s') % certname
    return _('no commonName or subjectAltName found in certificate')

# CERT_REQUIRED means fetch the cert from the server all the time AND
# validate it against the CA store provided in web.cacerts.
#
# We COMPLETELY ignore CERT_REQUIRED on Python <= 2.5, as it's totally
# busted on those versions.

def _plainapplepython():
    """return true if this seems to be a pure Apple Python that
    * is unfrozen and presumably has the whole mercurial module in the file
      system
    * presumably is an Apple Python that uses Apple OpenSSL which has patches
      for using system certificate store CAs in addition to the provided
      cacerts file
    """
    if sys.platform != 'darwin' or util.mainfrozen() or not sys.executable:
        return False
    exe = os.path.realpath(sys.executable).lower()
    return (exe.startswith('/usr/bin/python') or
            exe.startswith('/system/library/frameworks/python.framework/'))

def _defaultcacerts():
    """return path to CA certificates; None for system's store; ! to disable"""
    if _plainapplepython():
        dummycert = os.path.join(os.path.dirname(__file__), 'dummycert.pem')
        if os.path.exists(dummycert):
            return dummycert
    if _canloaddefaultcerts:
        return None
    return '!'

def sslkwargs(ui, host):
    kws = {}
    hostfingerprint = ui.config('hostfingerprints', host)
    if hostfingerprint:
        return kws
    cacerts = ui.config('web', 'cacerts')
    if cacerts == '!':
        pass
    elif cacerts:
        cacerts = util.expandpath(cacerts)
        if not os.path.exists(cacerts):
            raise util.Abort(_('could not find web.cacerts: %s') % cacerts)
    else:
        cacerts = _defaultcacerts()
        if cacerts and cacerts != '!':
            ui.debug('using %s to enable OS X system CA\n' % cacerts)
        ui.setconfig('web', 'cacerts', cacerts, 'defaultcacerts')
    if cacerts != '!':
        kws.update({'ca_certs': cacerts,
                    'cert_reqs': CERT_REQUIRED,
                    })
    return kws

class validator(object):
    def __init__(self, ui, host):
        self.ui = ui
        self.host = host

    def __call__(self, sock, strict=False):
        host = self.host
        cacerts = self.ui.config('web', 'cacerts')
        hostfingerprint = self.ui.config('hostfingerprints', host)

        if not getattr(sock, 'getpeercert', False): # python 2.5 ?
            if hostfingerprint:
                raise util.Abort(_("host fingerprint for %s can't be "
                                   "verified (Python too old)") % host)
            if strict:
                raise util.Abort(_("certificate for %s can't be verified "
                                   "(Python too old)") % host)
            if self.ui.configbool('ui', 'reportoldssl', True):
                self.ui.warn(_("warning: certificate for %s can't be verified "
                               "(Python too old)\n") % host)
            return

        if not sock.cipher(): # work around http://bugs.python.org/issue13721
            raise util.Abort(_('%s ssl connection error') % host)
        try:
            peercert = sock.getpeercert(True)
            peercert2 = sock.getpeercert()
        except AttributeError:
            raise util.Abort(_('%s ssl connection error') % host)

        if not peercert:
            raise util.Abort(_('%s certificate error: '
                               'no certificate received') % host)
        peerfingerprint = util.sha1(peercert).hexdigest()
        nicefingerprint = ":".join([peerfingerprint[x:x + 2]
            for x in xrange(0, len(peerfingerprint), 2)])
        if hostfingerprint:
            if peerfingerprint.lower() != \
                    hostfingerprint.replace(':', '').lower():
                raise util.Abort(_('certificate for %s has unexpected '
                                   'fingerprint %s') % (host, nicefingerprint),
                                 hint=_('check hostfingerprint configuration'))
            self.ui.debug('%s certificate matched fingerprint %s\n' %
                          (host, nicefingerprint))
        elif cacerts != '!':
            msg = _verifycert(peercert2, host)
            if msg:
                raise util.Abort(_('%s certificate error: %s') % (host, msg),
                                 hint=_('configure hostfingerprint %s or use '
                                        '--insecure to connect insecurely') %
                                      nicefingerprint)
            self.ui.debug('%s certificate successfully verified\n' % host)
        elif strict:
            raise util.Abort(_('%s certificate with fingerprint %s not '
                               'verified') % (host, nicefingerprint),
                             hint=_('check hostfingerprints or web.cacerts '
                                    'config setting'))
        else:
            self.ui.warn(_('warning: %s certificate with fingerprint %s not '
                           'verified (check hostfingerprints or web.cacerts '
                           'config setting)\n') %
                         (host, nicefingerprint))

ValueError: can't clear options before OpenSSL 0.9.8m
I am running OpenSSL 1.0.2c.
Just a guess, but it sounds like the script is using Apple's version of OpenSSL, which is 0.9.8 (and not the OpenSSL 1.0.2 that you built things against).
To ensure you use your OpenSSL 1.0.2, use DYLD_LIBRARY_PATH. It's similar to LD_LIBRARY_PATH on Linux.
The other option is to omit shared from the OpenSSL Configure, so that your build can only link statically. Apple's linker always uses the *.dylib if available (even on iOS, where it's not allowed), so you have to be careful with it.
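To confirm which OpenSSL the Canopy interpreter is actually linked against, a quick check from that same Python (the ssl module reports the library it was built with):

import ssl
# Prints the OpenSSL the interpreter is linked against, e.g. an Apple 0.9.8
# build even if a newer OpenSSL is installed elsewhere on the system.
print(ssl.OPENSSL_VERSION)
print(ssl.OPENSSL_VERSION_INFO)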

Assuming OS X.
Try checking what python you're using:
which python
/usr/local/bin/python
Then see where that's pointing:
ls -al /usr/local/bin/python
/usr/local/bin/python -> /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python
Make sure you're using the /System/Library/Frameworks Python; if not, delete the /usr/local/bin/python symlink and recreate it:
ln -s /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python /usr/local/bin/python

The line commented out was one that disabled use of some older and weaker encryption protocols; commenting it out allows you to use those older protocols. Possibly the server doesn't support the more recent protocols. Unless you are worried about the NSA getting their hands on your code, this is probably not that big a deal.
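For what it's worth, the ValueError appears to be raised because the pasted line uses &=, which tries to clear option bits, and clearing options is only supported from OpenSSL 0.9.8m onward. A minimal sketch of a set-only alternative, using only the standard ssl module (an illustration, not the stock Mercurial code):

import ssl

# Only *set* the "no SSLv2 / no SSLv3" bits; |= never clears existing
# options, so it keeps the protection against the old protocols without
# tripping the "can't clear options before OpenSSL 0.9.8m" check.
sslcontext = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
sslcontext.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3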

Related

Getting different ssl certificate from the same host [duplicate]

I've been working with pyOpenSSL lately; however, I came across some URLs that use SNI to present multiple certificates for the same IP address. Here's my code:
from OpenSSL import SSL
from socket import socket
from sys import argv, stdout
import re
from urlparse import urlparse

def callback(conn, cert, errno, depth, result):
    if depth == 0 and (errno == 9 or errno == 10):
        return False  # or raise Exception("Certificate not yet valid or expired")
    return True

def main():
    if len(argv) < 2:
        print 'Usage: %s <hostname>' % (argv[0],)
        return 1

    o = urlparse(argv[1])
    host_name = o.netloc

    context = SSL.Context(SSL.TLSv1_METHOD)  # Use TLS Method
    context.set_options(SSL.OP_NO_SSLv2)     # Don't accept SSLv2
    context.set_verify(SSL.VERIFY_PEER | SSL.VERIFY_FAIL_IF_NO_PEER_CERT,
                       callback)
    # context.load_verify_locations(ca_file, ca_path)

    sock = socket()
    ssl_sock = SSL.Connection(context, sock)
    ssl_sock.connect((host_name, 443))
    ssl_sock.do_handshake()

    cert = ssl_sock.get_peer_certificate()
    common_name = cert.get_subject().commonName.decode()

    print "Common Name: ", common_name
    print "Cert number: ", cert.get_serial_number()

    regex = common_name.replace('.', r'\.').replace('*', r'.*') + '$'
    if re.match(regex, host_name):
        print "matches"
    else:
        print "invalid"

if __name__ == "__main__":
    main()
For example, let's say I have the following URL:
https://example.com
I get the following output:
python sni.py https://example.com/
Common Name: *.example.com
Cert number: 63694395280496902491340707875731768741
invalid
which is the same certificate for https://another.example.com:
python sni.py https://another.example.com/
Common Name: *.example.com
Cert number: 63694395280496902491340707875731768741
matches
However, suppose the certificate for https://another.example.com is expired. The connection will be accepted anyway, since it's using the *.example.com certificate, which is valid. I want to connect to https://another.example.com/ and, if its certificate is not valid, reject the connection outright. How can I accomplish that?
You need to use set_tlsext_host_name. From the documentation:
Connection.set_tlsext_host_name(name)
Specify the byte string to send as the server name in the client hello message.
New in version 0.13.
Apart from that, your hostname validation is wrong, since it only compares against the CN and not the subject alternative names. It also allows wildcards in any position, which is against the rule that wildcards should only be allowed in the leftmost label: *.example.com is fine, while www.*.com or even *.*.* is not allowed but is accepted by your code.
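A minimal sketch of that fix, reusing the names from the question's code (pyOpenSSL 0.13+; the hostname below is just a placeholder):

from OpenSSL import SSL
from socket import socket

host_name = "another.example.com"  # placeholder hostname
context = SSL.Context(SSL.TLSv1_METHOD)
sock = socket()
ssl_sock = SSL.Connection(context, sock)
# Send the SNI hostname (must be bytes) before the handshake so the server
# can choose the matching certificate.
ssl_sock.set_tlsext_host_name(host_name.encode("ascii"))
ssl_sock.connect((host_name, 443))
ssl_sock.do_handshake()
cert = ssl_sock.get_peer_certificate()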

python ssl (equivalent of openssl s_client -showcerts): How to get the list of CAs for client certs from a server

I have a group of nginx servers that accept client certificates.
They have the ssl_client_certificate option set with a file containing one or more CAs.
If I use a web browser, the browser seems to receive a list of valid CAs for client certs and shows only client certs signed by one of these CAs.
The following openssl command gives me a list of CA certs:
openssl s_client -showcerts -servername myserver.com -connect myserver.com:443 </dev/null
The lines I am interested in look like this:
---
Acceptable client certificate CA names
/C=XX/O=XX XXXX
/C=YY/O=Y/OU=YY YYYYYL
...
Client Certificate Types: RSA sign, DSA sign, ECDSA sign
How can I get the same information with Python?
I have the following code snippet, which obtains a server's certificate, but it does not return the list of CAs for client certs.
import socket
import ssl

def get_server_cert(hostname, port):
    conn = socket.create_connection((hostname, port))
    context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    sock = context.wrap_socket(conn, server_hostname=hostname)
    cert = sock.getpeercert(True)
    cert = ssl.DER_cert_to_PEM_cert(cert)
    return cert
I expected to find a functional equivalent of getpeercert(), something like getpeercas() but didn't find anything.
Current workaround:
import os
import subprocess

def get_client_cert_cas(hostname, port):
    """
    returns a list of CAs, for which client certs are accepted
    """
    cmd = [
        "openssl",
        "s_client",
        "-showcerts",
        "-servername", hostname,
        "-connect", hostname + ":" + str(port),
    ]
    stdin = open(os.devnull, "r")
    stderr = open(os.devnull, "w")
    output = subprocess.check_output(cmd, stdin=stdin, stderr=stderr)
    ca_signatures = []
    state = 0
    for line in output.decode().split("\n"):
        print(state, line)
        if state == 0:
            if line == "Acceptable client certificate CA names":
                state = 1
        elif state == 1:
            if line.startswith("Client Certificate Types:"):
                break
            ca_signatures.append(line)
    return ca_signatures
Update: solution with pyopenssl (thanks Steffen Ullrich)
@Steffen Ullrich suggested using pyopenssl, which has a method get_client_ca_list(), and this helped me write a small code snippet.
The code below seems to work. I am not sure whether it can be improved or whether there are any pitfalls.
If nobody answers within the next few days, I will post this as an answer.
import socket
from OpenSSL import SSL

def get_client_cert_cas(hostname, port):
    ctx = SSL.Context(SSL.SSLv23_METHOD)
    # If we don't force to NOT use TLSv1.3 get_client_ca_list() returns
    # an empty result
    ctx.set_options(SSL.OP_NO_TLSv1_3)
    sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
    # next line for SNI
    sock.set_tlsext_host_name(hostname.encode("utf-8"))
    sock.connect((hostname, port))
    # without handshake get_client_ca_list will be empty
    sock.do_handshake()
    return sock.get_client_ca_list()
Update 2021-03-31:
The solution suggested above using pyopenssl works in most cases.
However, sock.get_client_ca_list() cannot be called immediately after performing sock.connect((hostname, port)).
Some action seems to be required between these two calls.
Initially I used sock.send(b"G"), but now I use sock.do_handshake(), which seems a little cleaner.
Even stranger, the solution doesn't work with TLSv1.3, so I had to exclude it.
As a generic example in Python, first you need to contact the server to learn which issuer CA subjects it accepts:
from socket import socket, AF_INET, SOCK_STREAM

from OpenSSL import SSL
from OpenSSL.crypto import X509Name
from certifi import where
import idna

def get_server_expected_client_subjects(host :str, port :int = 443) -> list[X509Name]:
    expected_subjects = []
    ctx = SSL.Context(method=SSL.SSLv23_METHOD)
    ctx.verify_mode = SSL.VERIFY_NONE
    ctx.check_hostname = False
    conn = SSL.Connection(ctx, socket(AF_INET, SOCK_STREAM))
    conn.connect((host, port))
    conn.settimeout(3)
    conn.set_tlsext_host_name(idna.encode(host))
    conn.setblocking(1)
    conn.set_connect_state()
    try:
        conn.do_handshake()
        expected_subjects :list[X509Name] = conn.get_client_ca_list()
    except SSL.Error as err:
        raise SSL.Error from err
    finally:
        conn.close()
    return expected_subjects
This connection does not present the client certificate, so the TLS handshake would fail. There are a lot of bad practices here, but unfortunately they are necessary: it is the only way to gather the message from the server before we actually attempt client authentication using the correct certificate.
Next you load the cert based on the server:
from pathlib import Path

from OpenSSL.crypto import load_certificate, FILETYPE_PEM

def check_client_cert_issuer(client_pem :str, expected_subjects :list) -> str:
    client_cert = None
    if len(expected_subjects) > 0:
        client_cert_path = Path(client_pem)
        cert = load_certificate(FILETYPE_PEM, client_cert_path.read_bytes())
        issuer_subject = cert.get_issuer()
        for check in expected_subjects:
            if issuer_subject.commonName == check.commonName:
                client_cert = client_pem
                break
    if client_cert is None or not isinstance(client_cert, str):
        raise Exception('X509_V_ERR_SUBJECT_ISSUER_MISMATCH')  # OpenSSL error code 29
    return client_cert
In a real app (not an example snippet) you would have a database of some sort to take the server subject and look up the location of the cert to load; this example does it in reverse for demonstration only.
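For illustration only, such a lookup could be as simple as a mapping from the issuer commonName a server advertises to a local certificate path (the names below are hypothetical):

# Hypothetical table mapping an advertised issuer commonName to the client
# certificate we hold for it; a real app would likely keep this in a database.
CLIENT_CERTS_BY_ISSUER_CN = {
    "Example Client CA": "/path/to/client.pem",
}

def pick_client_cert(expected_subjects):
    # expected_subjects: list of X509Name objects from get_client_ca_list()
    for subject in expected_subjects:
        path = CLIENT_CERTS_BY_ISSUER_CN.get(subject.commonName)
        if path is not None:
            return path
    return None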
Make the TLS connection, and capture any OpenSSL errors:
from socket import socket, AF_INET, SOCK_STREAM

from OpenSSL import SSL
from OpenSSL.crypto import X509, FILETYPE_PEM
from certifi import where
import idna

def openssl_verifier(conn :SSL.Connection, server_cert :X509, errno :int, depth :int, preverify_ok :int):
    ok = 1
    verifier_errors = conn.get_app_data()
    if not isinstance(verifier_errors, list):
        verifier_errors = []
    if errno in OPENSSL_CODES.keys():
        ok = 0
        verifier_errors.append((server_cert, OPENSSL_CODES[errno]))
    conn.set_app_data(verifier_errors)
    return ok

client_pem = '/path/to/client.pem'
client_issuer_ca = '/path/to/ca.pem'
host = 'example.com'
port = 443

ctx = SSL.Context(method=SSL.SSLv23_METHOD)  # negotiates TLSv1.3 or a lower protocol, whatever is the highest possible during negotiation
ctx.load_verify_locations(cafile=where())
if client_pem is not None:
    ctx.use_certificate_file(certfile=client_pem, filetype=FILETYPE_PEM)
if client_issuer_ca is not None:
    ctx.load_client_ca(cafile=client_issuer_ca)
ctx.set_verify(SSL.VERIFY_NONE, openssl_verifier)
ctx.check_hostname = False

conn = SSL.Connection(ctx, socket(AF_INET, SOCK_STREAM))
conn.connect((host, port))
conn.settimeout(3)
conn.set_tlsext_host_name(idna.encode(host))
conn.setblocking(1)
conn.set_connect_state()
try:
    conn.do_handshake()
    verifier_errors = conn.get_app_data()
except SSL.Error as err:
    raise SSL.Error from err
finally:
    conn.close()

# handle your errors in your main app
print(verifier_errors)
Just make sure you handle those OPENSSL_CODES errors if any are encountered; the lookup dictionary is here.
Many validations occur as pre-verification inside OpenSSL itself, and all pyOpenSSL does is basic validation, so we need access to these codes if we want to do client authentication properly, i.e. throw away the response from an untrusted server if it fails any authentication checks on the client side, as client authorisation (or rather mutual TLS) dictates.
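The OPENSSL_CODES table referenced above is not reproduced in the answer; an illustrative subset, using standard OpenSSL X.509 verify error codes, might look like this:

# Illustrative subset of OpenSSL's X509 verify error codes (numeric values
# from x509_vfy.h); the real lookup table is much longer.
OPENSSL_CODES = {
    9: 'X509_V_ERR_CERT_NOT_YET_VALID',
    10: 'X509_V_ERR_CERT_HAS_EXPIRED',
    18: 'X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT',
    19: 'X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN',
    20: 'X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY',
    62: 'X509_V_ERR_HOSTNAME_MISMATCH',
}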
@Stof's solution is more complete than this one, so I selected his answer as the accepted answer.
This answer predates his, but might still be of some interest.
With @Steffen Ullrich's help I found the following solution, which works for all the servers (nginx with an ssl_client_certificate setting) that I tested against.
It requires installing an external package:
pip install pyopenssl
Then the following will work:
import socket
from OpenSSL import SSL

def get_client_cert_cas(hostname, port):
    ctx = SSL.Context(SSL.SSLv23_METHOD)
    # If we don't force to NOT use TLSv1.3 get_client_ca_list() returns
    # an empty result
    ctx.set_options(SSL.OP_NO_TLSv1_3)
    sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
    # next line for SNI
    sock.set_tlsext_host_name(hostname.encode("utf-8"))
    sock.connect((hostname, port))
    # without handshake get_client_ca_list will be empty
    sock.do_handshake()
    return sock.get_client_ca_list()
The line sock.do_handshake() is required to trigger enough of the SSL protocol; otherwise the client CA list doesn't seem to be populated.
At least for the servers that I tested, I had to make sure TLSv1.3 is not used. I don't know whether this is a bug, a feature, or whether with TLSv1.3 another function has to be called prior to calling get_client_ca_list().
I am no pyopenssl expert, but I could imagine that there is a more elegant or more explicit way to get the same behaviour; so far this works for me with all the servers that I have encountered.
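A hypothetical usage of the helper above (myserver.com is just a placeholder), printing the subjects of the CAs the server will accept client certificates from:

# Each entry returned by get_client_ca_list() is an OpenSSL.crypto.X509Name;
# attributes such as commonName are None when absent from the name.
for ca_name in get_client_cert_cas("myserver.com", 443):
    print(ca_name.commonName, ca_name.organizationName)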

Python SSL certificate check hostname according to common name

I want to check whether a hostname and a port match an SSL certificate. I created this function:
@staticmethod
def common_name_check(hostname, port):
    try:
        ctx = ssl.create_default_context()
        s = ctx.wrap_socket(socket.socket(), server_hostname=hostname)
        s.connect((hostname, int(port)))
        cert = s.getpeercert()
        ssl.match_hostname(cert, hostname)
    except Exception as e:
        exc_type, exc_obj, exc_tb = sys.exc_info()
        fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
        print(exc_type, fname, exc_tb.tb_lineno)
        return False
    else:
        return True
My problem is: when a certificate is expired, the verification fails, but the exception is ambiguous:
<class 'ssl.SSLError'>
I can't tell whether the error is due to an expired certificate or a bad common name.
How can I check only whether the hostname/port is valid for the certificate?
Exactly. SSLError is too generic. I struggled with this too, so you might want to check out this link:
Verifying SSL certificates in Python
Here is a code only answer.
from OpenSSL import SSL
from socket import socket

def callback(conn, certificate, err, depth, valid):
    # err here is the error code (see OpenSSL's list of X509 verify error
    # codes and what they mean).
    if err == 62:
        print("HOSTNAME MISMATCH!!!")
    return valid

context = SSL.Context(SSL.TLSv1_METHOD)  # Use TLS Method
context.set_options(SSL.OP_NO_SSLv2)     # Don't accept SSLv2
context.set_verify(SSL.VERIFY_NONE, callback)
context.load_verify_locations(ca_file, ca_path)  # ca_file / ca_path: your CA bundle

host = "google.com"
port = 443  # ssl

sock = socket()
ssl_sock = SSL.Connection(context, sock)
ssl_sock.connect((host, port))
ssl_sock.do_handshake()
I'm using something like this; the cert file is in PEM format.
It's not quite the same, as I'm comparing against a certificate file.
# Check if the certificate commonName is a match to MYSTRING
certpath = 'c:\cert.cer'

from cryptography import x509
from cryptography.hazmat.backends import default_backend

cfile = open(certpath, "r")
cert_file = cfile.read()
cert = x509.load_pem_x509_certificate(cert_file, default_backend())

for fields in cert.subject:
    current = str(fields.oid)
    if "commonName" in current:
        if fields.value == MYSTRING:
            print 'certificate (' + fields.value + ') matches'
        else:
            print 'certificate (' + fields.value + ') does NOT match'


Python + Twisted + FtpClient + SOCKS

I just started using Twisted. I want to connect to an FTP server and perform some basic operations (using threading if possible). I am using this example, which does the job quite well. The question is how to add SOCKS4/5 proxy usage to the code. Can somebody please provide a working example? I have tried this link too, but without success. Here is what I have so far:
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.

"""
An example of using the FTP client
"""

# Twisted imports
from twisted.protocols.ftp import FTPClient, FTPFileListProtocol
from twisted.internet.protocol import Protocol, ClientCreator
from twisted.python import usage
from twisted.internet import reactor, endpoints

# Socks support test
from socksclient import SOCKSv4ClientProtocol, SOCKSWrapper
from twisted.web import client

# Standard library imports
import string
import sys
try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO


class BufferingProtocol(Protocol):
    """Simple utility class that holds all data written to it in a buffer."""
    def __init__(self):
        self.buffer = StringIO()

    def dataReceived(self, data):
        self.buffer.write(data)


# Define some callbacks

def success(response):
    print 'Success! Got response:'
    print '---'
    if response is None:
        print None
    else:
        print string.join(response, '\n')
    print '---'

def fail(error):
    print 'Failed. Error was:'
    print error

def showFiles(result, fileListProtocol):
    print 'Processed file listing:'
    for file in fileListProtocol.files:
        print '    %s: %d bytes, %s' \
              % (file['filename'], file['size'], file['date'])
    print 'Total: %d files' % (len(fileListProtocol.files))

def showBuffer(result, bufferProtocol):
    print 'Got data:'
    print bufferProtocol.buffer.getvalue()


class Options(usage.Options):
    optParameters = [['host', 'h', 'example.com'],
                     ['port', 'p', 21],
                     ['username', 'u', 'webmaster'],
                     ['password', None, 'justapass'],
                     ['passive', None, 0],
                     ['debug', 'd', 1],
                    ]

# Socks support
def wrappercb(proxy):
    print "connected to proxy", proxy
    pass

def run():
    def sockswrapper(proxy, url):
        dest = client._parse(url)  # scheme, host, port, path
        endpoint = endpoints.TCP4ClientEndpoint(reactor, dest[1], dest[2])
        return SOCKSWrapper(reactor, proxy[1], proxy[2], endpoint)

    # Get config
    config = Options()
    config.parseOptions()
    config.opts['port'] = int(config.opts['port'])
    config.opts['passive'] = int(config.opts['passive'])
    config.opts['debug'] = int(config.opts['debug'])

    # Create the client
    FTPClient.debug = config.opts['debug']
    creator = ClientCreator(reactor, FTPClient, config.opts['username'],
                            config.opts['password'], passive=config.opts['passive'])

    #creator.connectTCP(config.opts['host'], config.opts['port']).addCallback(connectionMade).addErrback(connectionFailed)

    # Socks support
    proxy = (None, '1.1.1.1', 1111, True, None, None)
    sw = sockswrapper(proxy, "ftp://example.com")
    d = sw.connect(creator)
    d.addCallback(wrappercb)

    reactor.run()

def connectionFailed(f):
    print "Connection Failed:", f
    reactor.stop()

def connectionMade(ftpClient):
    # Get the current working directory
    ftpClient.pwd().addCallbacks(success, fail)

    # Get a detailed listing of the current directory
    fileList = FTPFileListProtocol()
    d = ftpClient.list('.', fileList)
    d.addCallbacks(showFiles, fail, callbackArgs=(fileList,))

    # Change to the parent directory
    ftpClient.cdup().addCallbacks(success, fail)

    # Create a buffer
    proto = BufferingProtocol()

    # Get short listing of current directory, and quit when done
    d = ftpClient.nlst('.', proto)
    d.addCallbacks(showBuffer, fail, callbackArgs=(proto,))
    d.addCallback(lambda result: reactor.stop())


# this only runs if the module was *not* imported
if __name__ == '__main__':
    run()
I know the code is wrong. I need a solution.
Okay, so here's a solution (gist) that uses Python's built-in ftplib, as well as the open source SocksiPy module.
It doesn't use Twisted, and it doesn't explicitly use threads, but using and communicating between threads is easily done with threading.Thread and threading.Queue in Python's standard threading module.
Basically, we need to subclass ftplib.FTP to support substituting our own create_connection method and to add proxy configuration semantics.
The "main" logic just configures an FTP client that connects via a localhost SOCKS proxy, such as one created by ssh -D localhost:1080 socksproxy.example.com, and downloads a source snapshot for GNU autoconf to the local disk.
import ftplib
import socket
import socks  # socksipy (https://github.com/mikedougherty/SocksiPy)


class FTP(ftplib.FTP):

    def __init__(self, host='', user='', passwd='', acct='',
                 timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
                 proxyconfig=None):
        """Like ftplib.FTP constructor, but with an added `proxyconfig` kwarg

        `proxyconfig` should be a dictionary that may contain the following
        keys:

        proxytype - The type of the proxy to be used. Three types
                    are supported: PROXY_TYPE_SOCKS4 (including socks4a),
                    PROXY_TYPE_SOCKS5 and PROXY_TYPE_HTTP
        addr -      The address of the server (IP or DNS).
        port -      The port of the server. Defaults to 1080 for SOCKS
                    servers and 8080 for HTTP proxy servers.
        rdns -      Should DNS queries be preformed on the remote side
                    (rather than the local side). The default is True.
                    Note: This has no effect with SOCKS4 servers.
        username -  Username to authenticate with to the server.
                    The default is no authentication.
        password -  Password to authenticate with to the server.
                    Only relevant when username is also provided.
        """
        self.proxyconfig = proxyconfig or {}
        ftplib.FTP.__init__(self, host, user, passwd, acct, timeout)

    def connect(self, host='', port=0, timeout=-999):
        '''Connect to host. Arguments are:
        - host: hostname to connect to (string, default previous host)
        - port: port to connect to (integer, default previous port)
        '''
        if host != '':
            self.host = host
        if port > 0:
            self.port = port
        if timeout != -999:
            self.timeout = timeout
        self.sock = self.create_connection(self.host, self.port)
        self.af = self.sock.family
        self.file = self.sock.makefile('rb')
        self.welcome = self.getresp()
        return self.welcome

    def create_connection(self, host=None, port=None):
        host, port = host or self.host, port or self.port
        if self.proxyconfig:
            phost, pport = self.proxyconfig['addr'], self.proxyconfig['port']
            err = None
            for res in socket.getaddrinfo(phost, pport, 0, socket.SOCK_STREAM):
                af, socktype, proto, canonname, sa = res
                sock = None
                try:
                    sock = socks.socksocket(af, socktype, proto)
                    sock.setproxy(**self.proxyconfig)
                    if self.timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
                        sock.settimeout(self.timeout)
                    sock.connect((host, port))
                    return sock
                except socket.error as _:
                    err = _
                    if sock is not None:
                        sock.close()
            if err is not None:
                raise err
            else:
                raise socket.error("getaddrinfo returns an empty list")
        else:
            sock = socket.create_connection((host, port), self.timeout)
        return sock

    def ntransfercmd(self, cmd, rest=None):
        size = None
        if self.passiveserver:
            host, port = self.makepasv()
            conn = self.create_connection(host, port)
            try:
                if rest is not None:
                    self.sendcmd("REST %s" % rest)
                resp = self.sendcmd(cmd)
                # Some servers apparently send a 200 reply to
                # a LIST or STOR command, before the 150 reply
                # (and way before the 226 reply). This seems to
                # be in violation of the protocol (which only allows
                # 1xx or error messages for LIST), so we just discard
                # this response.
                if resp[0] == '2':
                    resp = self.getresp()
                if resp[0] != '1':
                    raise ftplib.error_reply, resp
            except:
                conn.close()
                raise
        else:
            raise Exception("Active transfers not supported")
        if resp[:3] == '150':
            # this is conditional in case we received a 125
            size = ftplib.parse150(resp)
        return conn, size


if __name__ == '__main__':
    ftp = FTP(host='ftp.gnu.org', user='anonymous', passwd='guest',
              proxyconfig=dict(proxytype=socks.PROXY_TYPE_SOCKS5, rdns=False,
                               addr='localhost', port=1080))

    with open('autoconf-2.69.tar.xz', mode='w') as f:
        ftp.retrbinary("RETR /gnu/autoconf/autoconf-2.69.tar.xz", f.write)
To elaborate why I asked some of my original questions:
1) Do you need to support active transfers or will PASV transfers be sufficient?
Active transfers are much harder to do via a socks proxy because they require the use of the PORT command. With the PORT command, your ftp client tells the FTP server to connect to you on a specific port (e.g., on your PC) in order to send the data. This is likely to not work for users behind a firewall or NAT/router. If your SOCKS proxy server is not behind a firewall, or has a public IP, it is possible to support active transfers, but it is complicated: It requires your SOCKS server (ssh -D does support this) and client library (socksipy does not) to support remote port binding. It also requires the appropriate hooks in the application (my example throws an exception if passiveserver = False) to do a remote BIND instead of a local one.
2) Does it have to use twisted?
Twisted is great, but I'm not the best at it, and I haven't found a really great SOCKS client implementation. Ideally there would be a library out there that allowed you to define and/or chain proxies together, returning an object that implements the IReactorTCP interface, but I have not yet found anything like this just yet.
3) Is your socks proxy behind a VIP or just a single host directly connected to the Internet?
This matters because of the way PASV transfer security works. In a PASV transfer, the client asks the server to provide a port to connect in order to start a data transfer. When the server accepts a connection on that port, it SHOULD verify the client is connected from the same source IP as the connection that requested the transfer. If your SOCKS server is behind a VIP, it is less likely that the outbound IP of the connection made for the PASV transfers will match the outbound IP of the primary communication connection.
