The pymssql module claims to support Kerberos Authentication (and delegation) and yet I can't seem to enable it.
The client I am running is on Windows. I need to connect with a double-hop through a reverse database proxy. The client, the proxy, and the database are all part of the domain. When I try to connect with SQL Server Management Studio I am successful, but when I try to connect with the pymssql module in Python it doesn't work. If I connect directly from the client to the database I am able to get the Kerberos authentication to work. But again, when I try to go through the proxy it fails.
This leads me to believe that the Kerberos Authentication works, but that the Delegation (double-hop) does not.
According to the section on FreeTDS, I should be able to create a file at C:/freetds.conf and connection information should be read from there. I don't seem to be able to verify this in any meaningful way. Additionally, according to the FreeTDS schema, I should be able to add a parameter, enable gssapi delegation, which when enabled (it is off by default) allows Kerberos delegation.
Bottom Line:
I am looking to enable Kerberos delegation (so that the double-hop will work) for pymssql on Windows.
At the moment I have created a file at C:/freetds.conf and have tried a few ways to configure it:
[global]
enable gssapi delegation = on
and
[global]
enable gssapi delegation = true
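For reference, here is roughly how I am connecting (the proxy host and database name below are placeholders; as I understand it, pymssql falls back to Windows/Kerberos credentials when no user or password is passed):
import pymssql

# Placeholder host and database names; the real target is the reverse proxy in front of SQL Server.
# With no user/password given, pymssql (FreeTDS/SSPI underneath) should use the current Windows credentials.
conn = pymssql.connect(server='dbproxy.example.local', database='MyDatabase')
cursor = conn.cursor()
cursor.execute('SELECT SYSTEM_USER')
print(cursor.fetchone())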
This is pretty easy to answer and is rooted in a shortcoming in FreeTDS. You did nothing wrong.
If we take a look at the GSS-API C code of FreeTDS, we see in lines 307 to 308
if (tds->login->gssapi_use_delegation)
    gssapi_flags |= GSS_C_DELEG_FLAG;
that your config parameter is read in and the delegation flag is set.
Since you are on Windows and Windows uses its own flavor of GSS-API, namely SSPI, we have a look at that C code: lines 273 to 278 do
status = sec_fn->InitializeSecurityContext(&auth->cred, NULL, auth->sname,
        ISC_REQ_CONFIDENTIALITY | ISC_REQ_REPLAY_DETECT
        | ISC_REQ_CONNECTION | ISC_REQ_ALLOCATE_MEMORY,
        0, SECURITY_NETWORK_DREP, NULL, 0, &auth->cred_ctx, &desc, &attrs, &ts);
As you can see, the context flags are not built up in a variable but passed directly: the config parameter is never evaluated, and ISC_REQ_DELEGATE is never passed.
This is the problem you are seeing. You have two options now:
Raise a bug and wait for a fix.
Clone the repository from GitHub, fix it yourself, and open a pull request.
Side note: there are several things I do not like about both code paths:
The SSPI path does not request mutual authentication, while the GSS-API path does; it should.
Some context flags are passed pointlessly; the corresponding features are never used in those C files, e.g., ISC_REQ_CONFIDENTIALITY | ISC_REQ_REPLAY_DETECT and GSS_C_REPLAY_FLAG | GSS_C_INTEG_FLAG. They are only necessary if you require additional transport security, which is not employed here.
There is probably more to fix, but I did not do a full code review.
I highly recommend raising issues for these as well.
For context: I am developing a web application where users need to authenticate to view internal documents. I neither need any detailed info on users nor special permission management, two states are sufficient: Either a session belongs to an authenticated user (→ documents can be accessed) or it does not (→ documents cannot be accessed). A user authenticates by providing a username and a password, which I want to check against an LDAP server.
I am using Python 3.10 and the ldap3 Python library.
The code
I am currently using the following code to authenticate a user:
#!/usr/bin/env python3
import ssl
from ldap3 import Tls, Server, Connection
from ldap3.core.exceptions import LDAPBindError, LDAPPasswordIsMandatoryError

def is_valid(username: str, password: str) -> bool:
    tls_configuration = Tls(validate=ssl.CERT_REQUIRED)
    server = Server("ldaps://ldap.example.com", tls=tls_configuration)
    user_dn = f"cn={username},ou=ops,dc=ldap,dc=example,dc=com"
    try:
        with Connection(server, user=user_dn, password=password):
            return True
    except (LDAPBindError, LDAPPasswordIsMandatoryError):
        return False
Demo instance
If you want to run this code, you could try using the FreeIPA project's demo LDAP server.
Replace CERT_REQUIRED with CERT_NONE because the server only provides a self-signed cert (this is obviously a security flaw, but it is required for this particular demo – the server I actually want to use has a Let's Encrypt certificate).
Replace "ldaps://ldap.example.com" with "ldaps://ipa.demo1.freeipa.org"
Replace the user_dn with f"uid={username},cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org"
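Putting those changes together, the demo variant of the function looks roughly like this (CERT_NONE only because of the demo server's self-signed certificate):
def is_valid(username: str, password: str) -> bool:
    tls_configuration = Tls(validate=ssl.CERT_NONE)  # demo server only has a self-signed cert
    server = Server("ldaps://ipa.demo1.freeipa.org", tls=tls_configuration)
    user_dn = f"uid={username},cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org"
    try:
        with Connection(server, user=user_dn, password=password):
            return True
    except (LDAPBindError, LDAPPasswordIsMandatoryError):
        return False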
After doing so, you could try running the following commands:
>>> is_valid("admin", "Secret123")
True
>>> is_valid("admin", "Secret1234")
False
>>> is_valid("admin", "")
False
>>> is_valid("admin", None)
False
>>> is_valid("nonexistent", "Secret123")
False
My question(s)
Does the code above safely determine if a user has provided valid credentials?
Notably, I am concerned about the following particular aspects:
Is attempting to bind to the LDAP server enough to verify credentials?
The body of the with statement should only be executed if binding was successful and therefore returns True without further ado. Is this safe? Or could it be that binding succeeds, but the provided password would still be considered wrong and insufficient to authenticate the user against the web app?
Am I opening myself up to injection attacks? If so, how to properly mitigate them?
user_dn = f"cn={username},ou=ops,dc=ldap,dc=example,dc=com" uses the untrusted username (that came directly from the web form) to build a string. That basically screams LDAP injection.
Is TLS properly configured?
The connection should use modern TLS encryption and verify the certificate presented by the server, just like a normal browser would do.
Also, of course, if there is anything else unsafe about my code, I'd be happy to know what it is.
Resources I've already found
I've already searched for answers to these particular aspects. Sadly, I have found nothing definitive (i.e. no one saying definitively that something I do here is good or bad), but I wanted to provide them as a starting point for a potential answer:
Probably yes.
“How to bind (authenticate) a user with ldap3 in python3” uses a similar code snippet to bind, and no one explicitly says that that's bad.
Auth0 uses this method in their blog post “Using LDAP and Active Directory with C# 101” and they probably know what they're doing.
Probably not, so no mitigation is needed.
There are a few questions on LDAP injection (like “How to prevent LDAP-injection in ldap3 for python3”) but they always only mention filtering and search, not binding.
The OWASP Cheat Sheet on LDAP Injection mentions enabling bind authentication as a way to mitigate LDAP injection when filtering, but says nothing about sanitization needed for the bind DN.
I suppose you could even argue that this scenario is not susceptible to injection attacks, because we are indeed processing untrusted input, but only where untrusted input is expected. Anyone can type anything into a login form, but they can also put anything into a request to bind to an LDAP server (without even bothering with the web app). As long as I don't put untrusted input somewhere where trusted input is expected (e.g. using a username in a filter query after binding with an LDAP admin account), I should be safe.
However, the ldap3 documentation of the Connection object does mention that one should use escape_rdn when binding with an untrusted username. This is at odds with my suppositions – who's right?
Probably yes.
At least an error was thrown when I tried to use this code with a server that only presented a self-signed certificate, so I suppose I should be safe.
Is attempting to bind to the LDAP server enough to verify credentials?
From the LDAP protocol side, yes, and many systems already rely on this behavior (e.g. pam_ldap for Linux OS-level authentication against an LDAP server). I've never heard of any server where the bind result would be deferred until another operation.
From the ldap3 module side I'd be more worried, as in my experience initializing a Connection did not attempt to connect – much less bind – to the server until I explicitly called .bind() (or unless I specified auto_bind=True), but if your example works then I assume that using a with block does this correctly.
In old code (which holds a persistent connection, no 'with') I've used this, but it may be outdated:
conn = ldap3.Connection(server, raise_exceptions=True)
conn.bind()
(For some apps I use Apache as a reverse proxy and its mod_auth_ldap handles LDAP authentication for me, especially when "is authenticated" is sufficient.)
Am I opening myself up to injection attacks? If so, how to properly mitigate them?
Well, kind of, but not in a way that would be easily exploitable. The bind DN is not a free-form query – it's only a weird-looking "user name" field and it must exactly match an existing entry; you can't put wildcards in it.
(It's in the LDAP server's best interests to be strict about what the "bind" operation accepts, because it's literally the user-facing operation for logging into an LDAP server before anything else is done – it's not just a "password check" function.)
For example, if you have some users at OU=Ops and some at OU=Superops,OU=Ops, then someone could specify Foo,OU=Superops as their username, resulting in UID=Foo,OU=Superops,OU=Ops as the DN – but they'd still have to provide the correct password for that account anyway; they cannot trick the server into using one account's privileges while checking another account's password.
However, it's easy to avoid injection regardless. DN component values can be escaped using:
ldap3: ldap3.utils.dn.escape_rdn(string)
python-ldap: ldap.dn.escape_dn_chars(string)
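For example, with ldap3 the DN template from the question could be built like this (a small sketch):
from ldap3.utils.dn import escape_rdn

# Escape the untrusted username before splicing it into the bind DN.
user_dn = f"cn={escape_rdn(username)},ou=ops,dc=ldap,dc=example,dc=com"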
That being said, I dislike the "DN template" approach for a completely different reason – its rather limited usefulness; it only works when all of your accounts are under the same OU (flat hierarchy) and only when they're named after the uid attribute.
That may be the case for a purpose-built LDAP directory, but on a typical Microsoft Active Directory server (or, I believe, on some FreeIPA servers as well) the user account entries are named after their full name (the cn attribute) and can be scattered across many OUs. A two-step approach is more common:
Bind using your app's service credentials, then search the directory for any "user" entries that have the username in their uid attribute, or similar, and verify that you found exactly one entry;
Unbind (optional?), then bind again with the user's found DN and the provided password.
When searching, you do have to worry about LDAP filter injection attacks a bit more, as a username like foo)(uid=* might give undesirable results. (But requiring the results to match exactly 1 entry – not "at least 1" – helps with mitigating this as well.)
Filter values can be escaped using:
ldap3: ldap3.utils.conv.escape_filter_chars(string)
python-ldap: ldap.filter.escape_filter_chars(string)
(python-ldap also has a convenient wrapper ldap.filter.filter_format around this, but it's basically just the_filter % tuple(map(escape_filter_chars, args)).)
The escaping rules for filter values are different from those for RDN values, so you need to use the correct one for the specific context. But at least unlike SQL, they are exactly the same everywhere, so the functions that come with your LDAP client module will work with any server.
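Putting the two-step approach together with ldap3, a minimal sketch (the service account DN and password, the search base, and the uid attribute are assumptions you would adapt to your directory):
from ldap3 import Server, Connection
from ldap3.core.exceptions import LDAPBindError, LDAPPasswordIsMandatoryError
from ldap3.utils.conv import escape_filter_chars

def is_valid(username: str, password: str) -> bool:
    server = Server("ldaps://ldap.example.com")
    # Step 1: bind with the app's own service account and search for the user.
    with Connection(server,
                    user="cn=webapp,ou=services,dc=ldap,dc=example,dc=com",
                    password="service-secret") as conn:
        conn.search("dc=ldap,dc=example,dc=com",
                    f"(uid={escape_filter_chars(username)})")
        if len(conn.entries) != 1:  # require exactly one matching entry
            return False
        user_dn = conn.entries[0].entry_dn
    # Step 2: bind again with the found DN and the password the user supplied.
    try:
        with Connection(server, user=user_dn, password=password):
            return True
    except (LDAPBindError, LDAPPasswordIsMandatoryError):
        return False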
Is TLS properly configured?
ldap3/core/tls.py looks good to me – it uses ssl.create_default_context() when supported and loads the system default CA certificates, so no extra configuration should be needed. Although it does implement custom hostname checking instead of relying on the ssl module's check_hostname, which is a bit weird. (Perhaps the LDAP-over-TLS spec defines wildcard matching rules that are slightly incompatible with the usual HTTP-over-TLS ones.)
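If you do want to be explicit instead of relying on those defaults, the Tls object accepts the usual knobs; for example (the CA bundle path is an assumption for a typical Linux system):
import ssl
from ldap3 import Tls

# Pin the protocol version and CA bundle explicitly (path and version are examples).
tls_configuration = Tls(validate=ssl.CERT_REQUIRED,
                        version=ssl.PROTOCOL_TLSv1_2,
                        ca_certs_file="/etc/ssl/certs/ca-certificates.crt")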
As an alternative to manually filling in and escaping DN templates, here is a helper that builds the DN from escaped components:
dn = build_dn({"CN": f"{last}, {first} ({username})"},
              {"OU": "Faculty of Foo and Bar (XYZF)"},
              {"OU": "Staff"},
              ad.BASE_DN)

def build_dn(*args):
    components = []
    for rdn in args:
        if isinstance(rdn, dict):
            rdn = [(a, ldap.dn.escape_dn_chars(v))
                   for a, v in rdn.items()]
            rdn.sort()
            rdn = "+".join(["%s=%s" % av for av in rdn])
            components.append(rdn)
        elif isinstance(rdn, str):
            components.append(rdn)
        else:
            raise ValueError("Unacceptable RDN type for %r" % (rdn,))
    return ",".join(components)
My final goal is to port a simple paho-mqtt Python script to C++ for integration into a larger application.
The Python example using paho is quite simple:
client = mqtt.Client(transport="websockets")
client.username_pw_set(settings['username'], password=settings['password'])
client.tls_set_context(context=ssl.create_default_context())
This sets up the default TLS context, authenticates with a username and password, and then connects. This works great!
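For completeness, a minimal self-contained version of that script (the broker host and port are the public Mosquitto test broker from the C++ example below; the credentials are placeholders):
import ssl
import paho.mqtt.client as mqtt

client = mqtt.Client(transport="websockets")
client.username_pw_set("username", password="password123")  # placeholder credentials
client.tls_set_context(context=ssl.create_default_context())
client.connect("test.mosquitto.org", 8081)  # MQTT over secure WebSockets
client.loop_forever()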
However, now I want to try to get the same secure configuration using paho-mqtt-cpp. The basic example, borrowing from their async examples, goes like this:
mqtt::connect_options connOpts;
connOpts.set_keep_alive_interval(20);
connOpts.set_clean_session(true);
connOpts.set_user_name("username");
connOpts.set_password("password123");
mqtt::ssl_options sslOpts;
connOpts.set_ssl(sslOpts);
mqtt::async_client client("wss://test.mosquitto.org:8081", "myClient");
callback cb(client, connOpts);
client.set_callback(cb);
However, ssl.create_default_context() in Python's ssl library seems to do quite a bit of setup for me that isn't replicated in C++; from Python's own documentation:
"For client use, if you don’t have any special requirements for your security policy, it is highly recommended that you use the create_default_context() function to create your SSL context. It will load the system’s trusted CA certificates, enable certificate validation and hostname checking, and try to choose reasonably secure protocol and cipher settings."
Most WSS connections I've tried require certificate verification, and create_default_context() seems to be able to provide the proper CA certificates without me generating any myself.
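For reference, based on the Python documentation, create_default_context() for client use amounts to roughly the following (a sketch, not the exact CPython implementation):
import ssl

# Roughly what ssl.create_default_context() gives you for client-side use.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)    # negotiate the highest TLS version both sides support
context.check_hostname = True                        # verify the server hostname against its certificate
context.verify_mode = ssl.CERT_REQUIRED              # require and validate the server certificate
context.load_default_certs(ssl.Purpose.SERVER_AUTH)  # load the OS trust store
As far as I can tell from the docs, on Windows that last call pulls trusted roots from the system certificate stores ("CA" and "ROOT"), which is presumably why no certificate files have to be generated by hand – but I don't know what the C++ equivalent is.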
So my questions:
(1) Where are Windows' System Default Certificates that I can use for secure connections? and
(2) What other settings do I need to manually configure that create_default_context() might be setting up for me under the hood?
I've tried looking at the source, but it's not easily discernible where the OS-specific options are.
I have an existing website deployed on Google App Engine for Python. Now I have set up the local development server on my system, but I don't know how to get the updated database from the live server. There is no export option in Google's developer console.
Also, I don't want to read the data from the production Datastore on each request; I want to set it up locally once. The Google manual says that the local datastore is stored in an SQLite file.
Any hint would be appreciated.
First, make sure your app.yaml enables the "remote" built-in, with a stanza such as:
builtins:
- remote_api: on
This app.yaml of course must be the one deployed to your appspot.com (or whatever) "production" GAE app.
Then, it's a job for /usr/local/google_appengine/bulkloader.py or wherever you may have installed the bulkloader component. Run it with -h to get a list of the many, many options you can pass.
You may need to generate an application-specific password for this use on your google accounts page. Then, the general use will be something like:
/usr/local/google_appengine/bulkloader.py --dump --url=http://your_app.appspot.com/_ah/remote_api --filename=allkinds.sq3
You may not (yet) be able to use this "all kinds" query -- the server only generates the needed statistics for the all-kinds query "periodically", so you may get an error message including info such as:
[ERROR ] Unable to download kind stats for all-kinds download.
[ERROR ] Kind stats are generated periodically by the appserver
[ERROR ] Kind stats are not available on dev_appserver.
If that's the case, then you can still get things "one kind at a time" by adding the option --kind=EntityKind and running the bulkloader repeatedly (with separate sqlite3 result files) for each kind of entity.
Once you've dumped (kind by kind if you have to, all at once if you can) the production datastore, you can use the bulkloader again, this time with --restore and addressing your localhost dev_appserver instance, to rebuild the latter's datastore.
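For example, the restore step would look something along these lines (the local port is the dev_appserver default; the --app_id override may or may not be needed depending on your SDK version, since the dumped keys carry the production app id):
/usr/local/google_appengine/bulkloader.py --restore --url=http://localhost:8080/_ah/remote_api --filename=allkinds.sq3 --app_id=dev~your_app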
It should be possible to explicitly list kinds in the --kind flag (by separating them with commas and putting them all in parentheses) but unfortunately I think I've found a bug stopping that from working -- I'll try to get it fixed but don't hold your breath. In any case, this feature is not documented (I just found it by studying the open-source release of bulkloader.py) so it may be best not to rely on it!-)
More info about the then-new bulkloader can be found in a blog post by Nick Johnson at http://blog.notdot.net/2010/04/Using-the-new-bulkloader (though it doesn't cover newer functionalities such as the sqlite3 format of results in the "zero configuration" approach I outlined above). There's also a demo, with plenty of links, at http://bulkloadersample.appspot.com/ (also a bit outdated, alas).
Check out the remote API. This will tunnel your database calls over HTTP to the production database.
I am attempting to run a script written in Python 2.7.5 (not using Django). When it tries to connect to a remote MySQL server with the MySQLdb.connect() method, it throws the following error:
_mysql_exceptions.OperationalError: (2049, "Connection using old (pre-4.1.1) authentication protocol refused (client option 'secure_auth' enabled)")
I have done reading about this issue:
Django/MySQL-python - Connection using old (pre-4.1.1) authentication protocol refused (client option 'secure_auth' enabled)
mysql error 2049 connection using old (pre-4-1-1) authentication from mac
Is there a way to set a parameter in the MySQLdb.connect() method to set secure_auth to false, without having to change any passwords or run anything from the command line? I have looked at the official docs and there does not appear to be anything in there.
I have tried adding secure_auth=False to the parameters but throws an error (shown in the code below).
Python:
def get_cursor():
    global _cursor
    if _cursor is None:
        try:
            db = MySQLdb.connect(user=_usr, passwd=_pw, host='external.website.com', port=3306, db=_usr, charset="utf8")
            # tried this but it doesn't work (as expected, but tried anyway); it throws this error:
            # TypeError: 'secure_auth' is an invalid keyword argument for this function
            # db = MySQLdb.connect(user=_usr, passwd=_pw, host='external.website.com', port=3306, db=_usr, charset="utf8", secure_auth=False)
            _cursor = db.cursor()
        except MySQLdb.OperationalError:
            print "error connecting"
            raise
    return _cursor
I spent an inordinate amount of time working through the MySQLdb source code and determined that this simply cannot be done without patching MySQLdb's C wrapper code. Theoretically, you should be able to pass the SECURE_CONNECTION flag to specify that you do not want to use the insecure old passwords:
MySQLdb.connect(..., client_flag=MySQLdb.constants.CLIENT.SECURE_CONNECTION)
But the MySQLdb code never actually checks that flag, and never configures the secure_connection option when calling the MySQL connection code, so it always defaults to requiring new-style passwords.
Possible fixes include:
Patch the MySQLdb code
Use an old version of the MySQL client libraries
Update the passwords on the MySQL server
Create a single new user with a new-style password
Sorry I don't have a better answer. I just ran into this problem myself!
I know Moses' answer has been accepted, but I wanted to offer my workaround based on what he suggested.
I had previously installed MySQL-python for my Python and had the Homebrew version of MySQL installed.
I deleted all of that.
I looked for a way to install MySQLdb from source, using its last stable version.
I compiled it (following the instructions here), installed it, and then looked for a stable version of the MySQL client (the MySQL website is the best place for that) and installed the 5.5 version, which fit my requirements perfectly.
I set MySQL to launch automatically and then restarted my computer (but you can just restart Apache), and checked that all paths were correct and the right includes were in the right places (you can check that against the link above).
And now it all works fine!
Hope it helps.
SSL is a separate parameter that you can set in the connection parameters. Here is a note from the source code; try checking the mysql_ssl_set() documentation.
ssl
dictionary or mapping, contains SSL connection parameters;
see the MySQL documentation for more details
(mysql_ssl_set()). If this is set, and the client does not
support SSL, NotSupportedError will be raised.
This document talks about all the secure parameters - http://dev.mysql.com/doc/refman/5.0/en/mysql-ssl-set.html...
At a glance, I don't see anything to disable secure auth.
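For illustration, the ssl argument is passed as a dictionary whose keys mirror the mysql_ssl_set() arguments (the certificate paths are placeholders; note this configures SSL, it does not by itself disable secure_auth):
db = MySQLdb.connect(user=_usr, passwd=_pw, host='external.website.com', port=3306, db=_usr,
                     charset="utf8",
                     ssl={'ca': '/path/to/ca-cert.pem',
                          'cert': '/path/to/client-cert.pem',
                          'key': '/path/to/client-key.pem'})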
I'm in a situation where I need to pass some text to a prompt generated by an API (which seems like pretty weird behavior for an API; this is the first time I've run into it), like below:
kvm_cli = libvirt.open("qemu+ssh://han@10.0.10.8/system")
Then a prompt shows up asking for the SSH password (password for 10.0.10.8 is:), and I have to type it in manually in order to move on and get the kvm_cli object I need.
I tried to use the pexpect module to deal with this, but it is meant for OS command lines rather than API calls.
It's also possible to work around this by using SSH key files, but that is not a favored authentication approach in our scenario.
Since our wrapper around the 'open' method is not interactive, we cannot ask the user to input the password. Do you have any thoughts on how I could address this?
I am not a libvirt user, but I believe that the problem is not in the library, but in the connection method. You seem to be connecting via ssh, so you need to authenticate yourself.
I've been reading the libvirt page on ArchWiki, and I think that you could try:
setting up the simple (TCP/IP socket) connection method (a sketch of authenticating over it with openAuth follows below), or
setting up key-based, password-less SSH login for your virtual host.
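If you go with the first option, libvirt's Python bindings let you supply the credentials programmatically through openAuth instead of an interactive prompt. A minimal sketch (URI, username, and password are placeholders; the libvirtd TCP listener with SASL authentication must be enabled on the host):
import libvirt

def request_cred(credentials, user_data):
    # Fill in each credential libvirt asks for instead of prompting interactively.
    for credential in credentials:
        if credential[0] == libvirt.VIR_CRED_AUTHNAME:
            credential[4] = "han"        # placeholder username
        elif credential[0] == libvirt.VIR_CRED_PASSPHRASE:
            credential[4] = "secret"     # placeholder password
    return 0

auth = [[libvirt.VIR_CRED_AUTHNAME, libvirt.VIR_CRED_PASSPHRASE], request_cred, None]
kvm_cli = libvirt.openAuth("qemu+tcp://10.0.10.8/system", auth, 0)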