How to replicate Python's `ssl.create_default_context()` in Windows C++?

My final goal is to port over a simple mqtt-paho-python script to C++ for integration within a large application.
The python example using paho is quite simple:
import ssl
import paho.mqtt.client as mqtt

client = mqtt.Client(transport="websockets")
client.username_pw_set(settings['username'], password=settings['password'])
client.tls_set_context(context=ssl.create_default_context())
They set up the default TLS context, authenticate with a username and password, and then connect. This works great!
However, now I want to try to get the same secure configuration using paho-mqtt-cpp. The basic example, borrowing from their async examples, goes like this:
mqtt::connect_options connOpts;
connOpts.set_keep_alive_interval(20);
connOpts.set_clean_session(true);
connOpts.set_user_name("username");
connOpts.set_password("password123");
mqtt::ssl_options sslOpts;
connOpts.set_ssl(sslOpts);
mqtt::async_client client("wss://test.mosquitto.org:8081", "myClient");
callback cb(client, connOpts);
client.set_callback(cb);
However, ssl.create_default_context() in Python's ssl library seems to do quite a bit of setup for me that isn't replicated in C++; from Python's own documentation:
"For client use, if you don’t have any special requirements for your security policy, it is highly recommended that you use the create_default_context() function to create your SSL context. It will load the system’s trusted CA certificates, enable certificate validation and hostname checking, and try to choose reasonably secure protocol and cipher settings."
Most WSS connections I've tried require a certificate, and create_default_context() seems to be able to provide the proper certificates without me generating any myself.
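For reference, the client-side defaults that create_default_context() sets up can be spelled out explicitly in Python; this is only a rough sketch based on the documented behaviour, not the exact CPython source:
import ssl

# roughly what ssl.create_default_context() gives a client
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)    # negotiate the best TLS version both sides support
ctx.check_hostname = True                        # hostname checking
ctx.verify_mode = ssl.CERT_REQUIRED              # certificate validation
ctx.load_default_certs(ssl.Purpose.SERVER_AUTH)  # on Windows this reads the "CA" and "ROOT" system stores
So on the C++ side the equivalent work would seem to be: point the TLS options at a trust store (or export the Windows system stores to a CA bundle) and enable server-certificate and hostname verification.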
So my questions:
(1) Where are Windows' System Default Certificates that I can use for secure connections? and
(2) What other settings do I need to manually configure that create_default_context() might be setting up for me under the hood?
I've tried looking at the source, but it's not easily discernible where the OS-specific options are.

Related

How to safely authenticate a user using LDAP?

For context: I am developing a web application where users need to authenticate to view internal documents. I neither need any detailed info on users nor special permission management, two states are sufficient: Either a session belongs to an authenticated user (→ documents can be accessed) or it does not (→ documents cannot be accessed). A user authenticates by providing a username and a password, which I want to check against an LDAP server.
I am using Python 3.10 and the ldap3 Python library.
The code
I am currently using the following code to authenticate a user:
#!/usr/bin/env python3
import ssl
from ldap3 import Tls, Server, Connection
from ldap3.core.exceptions import LDAPBindError, LDAPPasswordIsMandatoryError

def is_valid(username: str, password: str) -> bool:
    tls_configuration = Tls(validate=ssl.CERT_REQUIRED)
    server = Server("ldaps://ldap.example.com", tls=tls_configuration)
    user_dn = f"cn={username},ou=ops,dc=ldap,dc=example,dc=com"
    try:
        with Connection(server, user=user_dn, password=password):
            return True
    except (LDAPBindError, LDAPPasswordIsMandatoryError):
        return False
Demo instance
If you want to run this code, you could try using the FreeIPA project's demo LDAP server.
Replace CERT_REQUIRED with CERT_NONE because the server only provides a self-signed cert (this obviously is a security flaw, but required to use this particular demo – the server I want to use uses a Let's Encrypt certificate).
Replace "ldaps://ldap.example.com" with ldaps://ipa.demo1.freeipa.org
Replace the user_dn with f"uid={username},cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org"
After doing so, you could try running the following commands:
>>> is_valid("admin", "Secret123")
True
>>> is_valid("admin", "Secret1234")
False
>>> is_valid("admin", "")
False
>>> is_valid("admin", None)
False
>>> is_valid("nonexistent", "Secret123")
False
My question(s)
Does the code above safely determine if a user has provided valid credentials?
Notably, I am concerned about the following particular aspects:
Is attempting to bind to the LDAP server enough to verify credentials?
The body of the with statement should only be executed if binding was successful, and therefore returns True without further ado. Is this safe? Or could it be that binding succeeds, but the provided password would still be considered wrong and insufficient to authenticate the user against the web app?
Am I opening myself up to injection attacks? If so, how to properly mitigate them?
user_dn = f"cn={username},ou=ops,dc=ldap,dc=example,dc=com" uses the untrusted username (that came directly from the web form) to build a string. That basically screams LDAP injection.
Is TLS properly configured?
The connection should use modern TLS encryption and verify the certificate presented by the server, just like a normal browser would do.
Also, of course, if there is anything else unsafe about my code, I'd be happy to know what it is.
Resources I've already found
I've already searched for answers to the particular aspects. Sadly, I have found nothing definite (i.e. no one definitely saying something I do here is bad or good), but I wanted to provide them as a starting point for a potential answer:
On whether binding is enough to verify credentials: probably yes.
“How to bind (authenticate) a user with ldap3 in python3” uses a similar code snippet to bind, and no one explicitly says that that's bad.
Auth0 uses this method in their blog post “Using LDAP and Active Directory with C# 101” and they probably know what they're doing.
On injection attacks: probably not, so no mitigation is needed.
There are a few questions on LDAP injection (like “How to prevent LDAP-injection in ldap3 for python3”) but they always only mention filtering and search, not binding.
The OWASP Cheat Sheet on LDAP Injection mentions enabling bind authentication as a way to mitigate LDAP injection when filtering, but says nothing about sanitization needed for the bind DN.
I suppose you could even argue that this scenario is not susceptible to injection attacks, because we are indeed processing untrusted input, but only where untrusted input is expected. Anyone can type anything into a login form, but they can also put anything into a request to bind to an LDAP server (without even bothering with the web app). As long as I don't put untrusted input somewhere where trusted input is expected (e.g. using a username in a filter query after binding with an LDAP admin account), I should be safe.
However, the ldap3 documentation of the Connection object does mention that one should use escape_rdn when binding with an untrusted username. This is at odds with my suppositions; who's right?
On TLS configuration: probably yes.
At least an error was thrown when I tried to use this code with a server that only presented a self-signed certificate, so I suppose I should be safe.
Is attempting to bind to the LDAP server enough to verify credentials?
From the LDAP protocol side, yes, and many systems already rely on this behavior (e.g. pam_ldap for Linux OS-level authentication against an LDAP server). I've never heard of any server where the bind result would be deferred until another operation.
From the ldap3 module side I'd be more worried, as in my experience initializing a Connection did not attempt to connect – much less bind – to the server until I explicitly called .bind() (or unless I specified auto_bind=True), but if your example works then I assume that using a with block does this correctly.
In old code (which holds a persistent connection, no 'with') I've used this, but it may be outdated:
conn = ldap3.Connection(server, raise_exceptions=True)
conn.bind()
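A non-exception-based variant of the same check would be roughly this (a sketch; with raise_exceptions left at its default of False, bind() simply returns a boolean):
conn = ldap3.Connection(server, user=user_dn, password=password)
if not conn.bind():   # False means the server rejected the credentials
    return False
conn.unbind()
return True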
(For some apps I use Apache as a reverse proxy and its mod_auth_ldap handles LDAP authentication for me, especially when "is authenticated" is sufficient.)
Am I opening myself up to injection attacks? If so, how to properly mitigate them?
Well, kind of, but not in a way that would be easily exploitable. The bind DN is not a free-form query – it's only a weird-looking "user name" field and it must exactly match an existing entry; you can't put wildcards in it.
(It's in the LDAP server's best interests to be strict about what the "bind" operation accepts, because it's literally the user-facing operation for logging into an LDAP server before anything else is done – it's not just a "password check" function.)
For example, if you have some users at OU=Ops and some at OU=Superops,OU=Ops, then someone could specify Foo,OU=Superops as their username, resulting in UID=Foo,OU=Superops,OU=Ops as the DN – but they'd still have to provide the correct password for that account anyway; they cannot trick the server into using one account's privileges while checking another account's password.
However, it's easy to avoid injection regardless. DN component values can be escaped using:
ldap3: ldap3.utils.dn.escape_rdn(string)
python-ldap: ldap.dn.escape_dn_chars(string)
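For example, applying it to the DN template from the question would look roughly like this (a sketch reusing the question's hypothetical ou=ops tree):
from ldap3.utils.dn import escape_rdn

user_dn = f"cn={escape_rdn(username)},ou=ops,dc=ldap,dc=example,dc=com"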
That said, I dislike the "DN template" approach for a completely different reason – its rather limited usefulness; it only works when all of your accounts are under the same OU (flat hierarchy) and only when they're named after the uid attribute.
That may be the case for a purpose-built LDAP directory, but on a typical Microsoft Active Directory server (or, I believe, on some FreeIPA servers as well) the user account entries are named after their full name (the cn attribute) and can be scattered across many OUs. A two-step approach is more common:
Bind using your app's service credentials, then search the directory for any "user" entries that have the username in their uid attribute, or similar, and verify that you found exactly one entry;
Unbind (optional?), then bind again with the user's found DN and the provided password.
When searching, you do have to worry about LDAP filter injection attacks a bit more, as a username like foo)(uid=* might give undesirable results. (But requiring the results to match exactly 1 entry – not "at least 1" – helps with mitigating this as well.)
Filter values can be escaped using:
ldap3: ldap3.utils.conv.escape_filter_chars(string)
python-ldap: ldap.filter.escape_filter_chars(string)
(python-ldap also has a convenient wrapper ldap.filter.filter_format around this, but it's basically just the_filter % tuple(map(escape_filter_chars, args)).)
The escaping rules for filter values are different from those for RDN values, so you need to use the correct one for the specific context. But at least unlike SQL, they are exactly the same everywhere, so the functions that come with your LDAP client module will work with any server.
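To make the two-step approach concrete, here is a sketch with ldap3 (the service-account DN and password, the base DN, and the uid attribute are placeholder assumptions):
from ldap3 import Server, Connection
from ldap3.core.exceptions import LDAPBindError, LDAPPasswordIsMandatoryError
from ldap3.utils.conv import escape_filter_chars

def is_valid(username: str, password: str) -> bool:
    server = Server("ldaps://ldap.example.com")
    # step 1: bind with the app's own service credentials and look the user up
    with Connection(server, user=SERVICE_DN, password=SERVICE_PASSWORD) as conn:
        conn.search("dc=ldap,dc=example,dc=com",
                    f"(uid={escape_filter_chars(username)})")
        if len(conn.entries) != 1:   # require exactly one match
            return False
        user_dn = conn.entries[0].entry_dn
    # step 2: bind again with the found DN and the supplied password
    try:
        with Connection(server, user=user_dn, password=password):
            return True
    except (LDAPBindError, LDAPPasswordIsMandatoryError):
        return False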
Is TLS properly configured?
ldap3/core/tls.py looks good to me – it uses ssl.create_default_context() when supported and loads the system default CA certificates, so no extra configuration should be needed. It does implement custom hostname checking instead of relying on the ssl module's check_hostname, though, which is a bit weird. (Perhaps the LDAP-over-TLS spec defines wildcard matching rules that are slightly incompatible with the usual HTTP-over-TLS ones.)
An alternative approach instead of manually escaping DN templates:
dn = build_dn({"CN": f"{last}, {first} ({username})"},
              {"OU": "Faculty of Foo and Bar (XYZF)"},
              {"OU": "Staff"},
              ad.BASE_DN)

def build_dn(*args):
    components = []
    for rdn in args:
        if isinstance(rdn, dict):
            rdn = [(a, ldap.dn.escape_dn_chars(v))
                   for a, v in rdn.items()]
            rdn.sort()
            rdn = "+".join(["%s=%s" % av for av in rdn])
            components.append(rdn)
        elif isinstance(rdn, str):
            components.append(rdn)
        else:
            raise ValueError("Unacceptable RDN type for %r" % (rdn,))
    return ",".join(components)

Access a page that requires a Safenet USB Token from urllib2 or httplib

When I have a software certificate, I do it like this:
import httplib
CLIENT_CERT_FILE = '/path/to/certificate.pem'
connection = httplib.HTTPSConnection('url-to-open', cert_file=CLIENT_CERT_FILE)
connection.request('GET', '/')
response = connection.getresponse()
print response.status
data = response.read()
print data
How can I do the same with a Safenet USB Token?
TL;DR there are significant caveats and security issues with doing this in Python.
A working "solution" involves using a PKCS#11 library to read the certificate from the key, then somehow persisting the certificate on the disk, and finally passing the resulting file path to the request object.
There will also be differences with each security stick's particularities. Some sticks do not offer to store a certificate along with its private key (i.e. a .pfx or .p12 file), which essentially makes this solution unworkable. I didn't have access to a Safenet stick, so I used my own; please bear this in mind.
A solution for this requires quite a bit of work. Your use of a security dongle means that your client certificates are located onto the dongle itself. So, in order to achieve the same level of functionality, you need to write code to extract the certificate from there and feed it to your request object.
1. HTTPS-capable libraries in Python
Your requirement of using httplib (http.client for Python 3.x) or urllib introduces a big caveat: the certificate used in the request has to be a file on disk (and the same can be said of all libraries building on top of them, e.g. requests). See cnelson's answer to How to open ssl socket using certificate stored in string variables in python for the reason (in short: Python's ssl library makes use of a native C library which does not offer passing in-memory objects as the certificate). Also see the next answer from Dima Tisnek detailing possible workarounds with varying degrees of hackiness.
If writing your certificate (even temporarily) on the disk is a non-starter for you, as it may very well be since you use a security stick, then it's not starting off looking good.
2. Getting the certificate from the security stick
Your biggest challenge is to get your hands on the certificate, which is currently nestled inside the security stick. Safenet sticks, like many others, are at their core PKCS#11-capable SmartCards. I suggest you familiarise yourself with the concepts, but in essence, SmartCard is a standardised chip design, and PKCS#11 is a standardised protocol for interfacing with it. "Standardised" comes with caveats of course, since many vendors come up with their own implementations, but it is probably standardised enough for your purpose. The trick here will be to use the available PKCS#11 interfaces on the stick to extract the certificate's attributes. This is essentially what web browsers do when using the stick to authenticate on websites with the stored certificate, so you need your Python program to do a similar thing.
2.1 Selecting a PKCS#11 library
Unfortunately, there are only a few libraries that come up when searching for "python pkcs11". I have no vested interest in either of them, and there may exist other less prominent ones.
python-pkcs11 (pypi, github, reference) offers a "high level, pythonic implementation of PKCS#11". It may be easier to use overall, but may lack compatibility and/or features depending on what you want to do, however I suspect simply retrieving certificates may be alright.
PyKCS11 (pypi, github, reference) on the other hand is a wrapper around a native PKCS#11 library, to which it will defer the calls. This one is lower-level, but looks more complete, plus may have the advantage to offer using your particular vendor's implementation if relevant.
2.2 Example code
For the example, I'll be using the more user-friendly API of python-pkcs11. Please bear in mind that this code is not thoroughly tested (and has been simplified in parts); it serves to illustrate the general idea.
import pkcs11
import asn1crypto.pem
import urllib.request
import tempfile
import ssl
import os

# this is OpenSC's implementation of PKCS#11
# other security sticks may come with another implementation.
# choose the most appropriate one
lib = pkcs11.lib('/usr/lib/pkcs11/opensc-pkcs11.so')

# tokens may be identified with various names, ids...
# it's probably rare that more than one at a time would be plugged in
token = lib.get_token(token_serial='<token_serial_value>')

pem_armored_certificate = None
with token.open() as sess:
    # get_objects() returns an iterator, so materialise it to count the matches
    pkcs11_certificates = list(sess.get_objects({
        pkcs11.Attribute.CLASS: pkcs11.ObjectClass.CERTIFICATE,
        pkcs11.Attribute.LABEL: "Cardholder certificate",
    }))
    # hopefully the selector above is sufficient
    assert len(pkcs11_certificates) == 1
    pkcs11_cert = pkcs11_certificates[0]
    der_encoded_certificate = pkcs11_cert[pkcs11.Attribute.VALUE]
    # the ssl library expects to be given PEM armored certificates
    pem_armored_certificate = asn1crypto.pem.armor("CERTIFICATE",
                                                   der_encoded_certificate)

# this is the ugly part: persisting the certificate on disk
# i deliberately did not go with a sophisticated solution here since it's
# such a big caveat to have to do this...
certfile = tempfile.mkstemp()
with open(certfile[1], 'w') as certfile_handle:
    certfile_handle.write(pem_armored_certificate.decode("utf-8"))

# this will instruct the ssl library to provide the certificate
# if asked by the server.
sslctx = ssl.create_default_context()
sslctx.load_cert_chain(certfile=certfile[1])
# if your certificate does not contain the private key, find it elsewhere
# sslctx.load_cert_chain(certfile=certfile[1],
#                        keyfile="/path/to/privatekey.pem",
#                        password="<private_key_password_if_applicable>")

response = urllib.request.urlopen("https://ssl_website", context=sslctx)

# Cleanup and delete the "temporary" certificate from disk
os.remove(certfile[1])
3. Conclusion
I'd say that Python is not going to be the best bet for doing SSL client authentication using security sticks. The fact that most SSL libraries require the certificate to be present on disk works directly against the benefits (and sometimes, requirements) of using a security stick in the first place. I'm well aware that this answer does not provide a full solution to this problem, but hopefully it exposes the challenges in enough detail to make an educated decision on whether to pursue this further or to find another way.
In any case, good luck.

Kerberos Delegation (Double-Hop) with pymssql

The pymssql module claims to support Kerberos Authentication (and delegation) and yet I can't seem to enable it.
The client I am running is on Windows. I need to connect with a double-hop through a reverse database proxy. The client, the proxy, and the database are all part of the domain. And when I try to connect with SQL Server Manager I am successful. But when I try to connect with the pymssql module in Python it doesn't work. If I connect directly from the client to the database I am able to get the Kerberos Authentication to work. But again, when I try to go through the proxy it fails.
This leads me to believe that the Kerberos Authentication works, but that the Delegation (double-hop) does not.
According to the section on FreeTDS, I should be able to create a file at C:/freetds.conf and it should read connection information from there. I don't seem to be able to verify this in any meaningful way. Additionally, according to the freetds schema I should be able to add a parameter, enable gssapi delegation, which when enabled (off by default) allows Kerberos delegation.
Bottom Line:
I am looking to enable Kerberos Delegation (so that the double-hop will work) for pymssql on windows.
At the moment I have created a file at C:/freetds.conf and have tried a few ways to configure it.
[global]
enable gssapi delegation = on
and
[global]
enable gssapi delegation = true
This is pretty easy to answer and is rooted in a shortcoming in FreeTDS. You did nothing wrong.
If we take a look at the GSS-API C code of FreeTDS, we see in lines 307 to 308
if (tds->login->gssapi_use_delegation)
gssapi_flags |= GSS_C_DELEG_FLAG;
that your config parameter is read and the delegation flag is set.
Since you are on Windows and Windows uses its own flavor of GSS-API, namely SSPI, we have a look at that C code: lines 273 to 278 do
status = sec_fn->InitializeSecurityContext(&auth->cred, NULL, auth->sname,
        ISC_REQ_CONFIDENTIALITY | ISC_REQ_REPLAY_DETECT
        | ISC_REQ_CONNECTION | ISC_REQ_ALLOCATE_MEMORY,
        0, SECURITY_NETWORK_DREP, NULL, 0, &auth->cred_ctx, &desc, &attrs, &ts);
As you can see, the context flags are not taken from a variable but passed directly; the config parameter is never evaluated and ISC_REQ_DELEGATE is never passed.
This is the problem you are seeing. You have two options now:
Raise a bug and wait for a fix.
Clone from GitHub, fix yourself and issue a pull request.
Side note: there are several things I do not like about both code parts:
The SSPI path does not request mutual auth as the GSS-API path does, but it should.
Context flags are requested pointlessly although the corresponding features are never used in that C file, e.g. ISC_REQ_CONFIDENTIALITY | ISC_REQ_REPLAY_DETECT and GSS_C_REPLAY_FLAG | GSS_C_INTEG_FLAG. These are only necessary if you require further transport security, which is not employed here.
There is probably more to fix, but I did not review the code thoroughly.
I highly recommend raising some issues about these too.

Python FTP-SSL / FTP-TLS: Verifying Public Certificate?

I'm using Python 2.7.5 (not 3.x) and I need to verify an FTPS (FTP-TLS) public certificate. That is, I want to verify it against the standard certificate authorities, not a custom key. (Similar to HTTPS.)
I see some options but I cannot get them to work:
The FTP_TLS() class doesn't seem to offer the ability to verify certificates, unless I'm mistaken:
class ftplib.FTP_TLS([host[, user[, passwd[, acct[, keyfile[, certfile[, timeout]]]]]]])
I've looked into certifi and also M2Crypto, but while I can connect and transfer using FTP/TLS, I can't seem to find a way to verify the certificate.
Also, I don't think I will be able to use the CURL libraries in this case :( Just a note.
Let's try to make it into a possible answer: http://heikkitoivonen.net/blog/2008/10/14/ssl-in-python-26
The resource referenced by mcepl is no longer available over http, but only using https.
https://heikkitoivonen.net/blog/2008/10/14/ssl-in-python-26
So much for 301 redirects.
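(For anyone on Python 3 rather than 2.7: ftplib.FTP_TLS accepts an SSLContext there, so verifying against the system CAs is straightforward. A minimal sketch with a placeholder host:)
import ftplib
import ssl

ctx = ssl.create_default_context()   # system CAs, certificate validation, hostname checking

ftps = ftplib.FTP_TLS("ftp.example.com", context=ctx)   # placeholder host
ftps.login("user", "password")
ftps.prot_p()                        # protect the data channel too
print(ftps.nlst())
ftps.quit()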

How to add authentication to a (Python) twisted xmlrpc server

I am trying to add authentication to an XML-RPC server (which will be running on nodes of a P2P network) without using user:password@host, as this would reveal the password to all attackers. The point of the authentication is basically to create a private network, preventing unauthorised users from accessing it.
My solution to this was to create a challenge response system very similar to this but I have no clue how to add this to the xmlrpc server code.
I found a similar question (Where custom authentication was needed) here.
So I tried creating a module that would be called whenever a client connected to the server. This would connect to a challenge-response server running on the client and, if the client responded correctly, would return True. The only problem was that I could only call the module once; after that I got a "reactor cannot be restarted" error. So is there some way of having a class that, whenever its check() function is called, connects and does this?
Would the simplest thing to do be to connect using SSL? Would that protect the password? Although this solution would not be optimal as I am trying to avoid having to generate SSL certificates for all the nodes.
Don't invent your own authentication scheme. There are plenty of great schemes already, and you don't want to become responsible for doing the security research into what vulnerabilities exist in your invention.
There are two very widely supported authentication mechanisms for HTTP (over which XML-RPC runs, therefore they apply to XML-RPC). One is "Basic" and the other is "Digest". "Basic" is fine if you decide to run over SSL. Digest is more appropriate if you really can't use SSL.
Both are supported by Twisted Web via twisted.web.guard.HTTPAuthSessionWrapper, with copious documentation.
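For illustration, a minimal sketch of wrapping an XML-RPC resource in Basic auth with guard (the realm name, port, and credentials here are placeholders; swapping in DigestCredentialFactory is the main change for Digest):
from zope.interface import implementer
from twisted.cred.checkers import InMemoryUsernamePasswordDatabaseDontUse
from twisted.cred.portal import IRealm, Portal
from twisted.internet import reactor
from twisted.web import guard, resource, server
from twisted.web.xmlrpc import XMLRPC

class Echo(XMLRPC):
    def xmlrpc_echo(self, value):
        return value

@implementer(IRealm)
class XMLRPCRealm(object):
    # hand out the XML-RPC resource once a user has authenticated
    def requestAvatar(self, avatarId, mind, *interfaces):
        if resource.IResource in interfaces:
            return resource.IResource, Echo(), lambda: None
        raise NotImplementedError()

checker = InMemoryUsernamePasswordDatabaseDontUse(node=b"secret")  # placeholder credentials
portal = Portal(XMLRPCRealm(), [checker])
wrapper = guard.HTTPAuthSessionWrapper(portal, [guard.BasicCredentialFactory(b"p2p-nodes")])
reactor.listenTCP(7080, server.Site(wrapper))
reactor.run()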
Based on your problem description, it sounds like the Secure Remote Password Protocol might be what you're looking for. It's a password-based mechanism that provides strong, mutual authentication without the complexity of SSL certificate management. It may not be quite as flexible as SSL certificates but it's easy to use and understand (the full protocol description fits on a single page). I've often found it a useful tool for situations where a trusted third party (aka Kerberos/CA authorities) isn't appropriate.
For anyone who was looking for a full example, below is mine (thanks to Rakis for pointing me in the right direction). In this, the user and password are stored in a file called 'passwd' (see the first useful link for more details and how to change it).
Server:
#!/usr/bin/env python
import bjsonrpc
from SRPSocket import SRPSocket
import SocketServer
from bjsonrpc.handlers import BaseHandler
import time

class handler(BaseHandler):
    def time(self):
        return time.time()

class SecureServer(SRPSocket.SRPHost):
    def auth_socket(self, socket):
        server = bjsonrpc.server.Server(socket, handler_factory=handler)
        server.serve()

s = SocketServer.ForkingTCPServer(('', 1337), SecureServer)
s.serve_forever()
Client:
#! /usr/bin/env python
import bjsonrpc
from bjsonrpc.handlers import BaseHandler
from SRPSocket import SRPSocket
import time

class handler(BaseHandler):
    def time(self):
        return time.time()

socket, key = SRPSocket.SRPSocket('localhost', 1337, 'dht', 'testpass')
connection = bjsonrpc.connection.Connection(socket, handler_factory=handler)
test = connection.call.time()
print test
time.sleep(1)
Some useful links:
http://members.tripod.com/professor_tom/archives/srpsocket.html
http://packages.python.org/bjsonrpc/tutorial1/index.html
