Protect Password in Python Script - python

I have a Python script that I'm running locally which has a password for another application embedded in the os.system call. I obfuscated my password by storing it in a DB that only I have access to and then using Windows auth to connect to the DB (because the script needs to be automated, I can't have a prompt for the PW).
With the above said, it occurred to me: couldn't someone just modify my script and print the 'pw' var to obtain my password? I'm working in a shared cloud environment where other developers would have access to my script. Is there any way to abstract it further so someone couldn't just modify my script and get the pw?
import os
import sqlalchemy as sa
import urllib
import pandas as pd
#Specify the databases and servers used for reading and writing data.
read_server = '~~SERVER_HERE~~'
read_database = '~~DATABASE_HERE~~'
#Establish DB Connection
read_params = urllib.quote_plus("DRIVER={SQL Server};SERVER="+read_server+";DATABASE="+read_database+";TRUSTED_CONNECTION=Yes")
read_engine = sa.create_engine("mssql+pyodbc:///?odbc_connect=%s" % read_params)
#Read PW from DB and set to variable
pw_query = """ SELECT DISTINCT PW FROM ~~TABLENAME_HERE~~ """
pw = pd.read_sql_query(pw_query,con=read_engine,index_col=None)
pw = pw.values.tolist()
pw = str(pw[0])
pw = pw.lstrip("['").rstrip("]'")
#Establish connection to server
os.chdir(r"C:\tabcmd\Command Line Utility")
os.system(r'tabcmd login -s https://~~myURL~~ -u tabadmin -p {mypw}'.format(mypw = str(pw)))
#Make sure you update the below workbook, site names, and destination directory.
os.system(r'tabcmd get "~~FILE_Location~~" -f "~~Destination_DIR~~"')
I'm using standard Python (CPython) and MS SQL Server.

There's no real way to protect your password if someone can modify the script.
However, if the shared cloud environment has separate users (i.e. logging in via SSH where each person has their own user on the server), then you can change the permissions to restrict access to your code. If not, then I don't think this is possible.

Given you are also hardcoding your database address and access code, nothing prevents others from just connecting to your database, for example.
There are ways of obfuscating your code, but in the end there is no secure way to store your password, only ways that require more effort to extract it.
Also see https://crypto.stackexchange.com/questions/19959/is-python-a-secure-programming-language-for-cryptography
TL;DR: As long as somebody has access to your program or even its source code, the hardcoded password can be extracted, so in your case it would make sense to restrict access to that program.
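If the environment does give each developer a separate OS account, a minimal sketch of locking the script down to your own user could look like the following (assuming a Unix-like host; the path is just an example):
import os
import stat

# Example path to the automated script; adjust to wherever it actually lives.
script_path = "/home/myuser/jobs/refresh_tableau.py"

# Owner may read/write/execute; group and others get nothing (mode 700).
os.chmod(script_path, stat.S_IRWXU)
On Windows, the equivalent would be tightening the file's ACL (for example with icacls) so that only your account can read it.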

Related

Learning Python fast: how can I protect some private connections from being exposed

Hi, I'm new to the community and new to Python (experienced but rusty in other high-level languages), so my question is simple.
I made a simple script to connect to a private FTP server and retrieve daily information from it.
from ftplib import FTP

#Connect to server to retrieve inventory
def FTPconnection(file_name):
    #Open ftp connection
    ftp = FTP('ftp.serveriuse.com')
    ftp.login('mylogin', 'password')
    #List the files in the current directory
    print("Current File List:")
    file = ftp.dir()
    print(file)
    # # #Get the latest csv file from server
    # ftp.cwd("/pub")
    gfile = open(file_name, "wb")
    ftp.retrbinary('RETR ' + file_name, gfile.write)
    gfile.close()
    ftp.quit()

FTPconnection('test1.csv')
FTPconnection('test2.csv')
That's the whole script: it passes my credentials and then calls the function FTPconnection on two different files I'm retrieving.
Then my other script that processes them has an import statement, as I tried to call this script as a module; all my import does is connect to the FTP server and fetch the information.
import ftpconnect as ftpc
This is in the other Python script, the one that does the processing.
It works, but I want to improve it, so I need some guidance on best practices about how to do this, because in Spyder 4.1.5 I get a 'Module ftpconnect called but unused' warning... so I'm probably missing something here. I'm developing on macOS using Anaconda and Python 3.8.5.
I'm trying to build an app to automate some tasks, but I couldn't find anything about modules that guided me toward better code; it simply says you have to import whatever .py file name you used and that will be considered a module...
And my final question: how can you normally protect private information (FTP credentials) from being exposed? This is not about protecting my code, only the credentials.
There are a few options for storing passwords and other secrets that a Python program needs to use, particularly a program that needs to run in the background where it can't just ask the user to type in the password.
Problems to avoid:
Checking the password in to source control where other developers or even the public can see it.
Other users on the same server reading the password from a configuration file or source code.
Having the password in a source file where others can see it over your shoulder while you are editing it.
Option 1: SSH
This isn't always an option, but it's probably the best. Your private key is never transmitted over the network; SSH just runs mathematical calculations to prove that you have the right key (a rough sketch follows the steps below).
In order to make it work, you need the following:
The database or whatever you are accessing needs to be accessible by SSH. Try searching for "SSH" plus whatever service you are accessing. For example, "ssh postgresql". If this isn't a feature on your database, move on to the next option.
Create an account to run the service that will make calls to the database, and generate an SSH key.
Either add the public key to the service you're going to call, or create a local account on that server, and install the public key there.
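As a rough illustration of that setup (not something the steps above spell out), a script can reach the database through an SSH tunnel using a key instead of a stored password. This assumes the third-party sshtunnel package, and the host names, account, key path, and port are all placeholders:
from sshtunnel import SSHTunnelForwarder

# Placeholder gateway, account, and key path; the SSH key authenticates
# instead of a password stored in the script.
tunnel = SSHTunnelForwarder(
    ('db-gateway.example.com', 22),
    ssh_username='svc_reports',
    ssh_pkey='/home/svc_reports/.ssh/id_rsa',
    remote_bind_address=('127.0.0.1', 5432),
)
tunnel.start()
print("Database is now reachable on 127.0.0.1:%d" % tunnel.local_bind_port)
# ...connect your database client to that local port, then:
tunnel.stop()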
Option 2: Environment Variables
This one is the simplest, so it might be a good place to start. It's described well in the Twelve Factor App. The basic idea is that your source code just pulls the password or other secrets from environment variables, and then you configure those environment variables on each system where you run the program. It might also be a nice touch if you use default values that will work for most developers. You have to balance that against making your software "secure by default".
Here's an example that pulls the server, user name, and password from environment variables.
import os
server = os.getenv('MY_APP_DB_SERVER', 'localhost')
user = os.getenv('MY_APP_DB_USER', 'myapp')
password = os.getenv('MY_APP_DB_PASSWORD', '')
db_connect(server, user, password)
Look up how to set environment variables in your operating system, and consider running the service under its own account. That way you don't have sensitive data in environment variables when you run programs in your own account. When you do set up those environment variables, take extra care that other users can't read them. Check file permissions, for example. Of course any users with root permission will be able to read them, but that can't be helped. If you're using systemd, look at the service unit, and be careful to use EnvironmentFile instead of Environment for any secrets. Environment values can be viewed by any user with systemctl show.
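For instance, a hedged sketch of the systemd side might look like this (the unit name, account, and file paths are hypothetical):
# /etc/systemd/system/my_app.service (excerpt)
[Service]
User=my_app
# Secrets live in a separate, permission-restricted file rather than in the
# unit itself, so `systemctl show` does not reveal them.
EnvironmentFile=/etc/my_app/secrets.env
The secrets.env file would contain lines such as MY_APP_DB_PASSWORD=..., and its permissions should be restricted to the service account.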
Option 3: Configuration Files
This is very similar to the environment variables, but you read the secrets from a text file. I still find the environment variables more flexible for things like deployment tools and continuous integration servers. If you decide to use a configuration file, Python supports several formats in the standard library, like JSON, INI, netrc, and XML. You can also find external packages like PyYAML and TOML. Personally, I find JSON and YAML the simplest to use, and YAML allows comments.
Three things to consider with configuration files:
Where is the file? Maybe a default location like ~/.my_app, and a command-line option to use a different location.
Make sure other users can't read the file.
Obviously, don't commit the configuration file to source code. You might want to commit a template that users can copy to their home directory.
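For example, a minimal sketch of reading a JSON configuration file from a default location (the file name and keys are only illustrative, and db_connect is the same placeholder as above):
import json
import os

# Hypothetical default location; a real tool might also accept a --config option.
config_path = os.path.expanduser('~/.my_app')

with open(config_path) as f:
    config = json.load(f)

db_connect(config['server'], config['user'], config['password'])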
Option 4: Python Module
Some projects just put their secrets right into a Python module.
# settings.py
db_server = 'dbhost1'
db_user = 'my_app'
db_password = 'correcthorsebatterystaple'
Then import that module to get the values.
# my_app.py
from settings import db_server, db_user, db_password
db_connect(db_server, db_user, db_password)
One project that uses this technique is Django. Obviously, you shouldn't commit settings.py to source control, although you might want to commit a file called settings_template.py that users can copy and modify.
I see a few problems with this technique:
Developers might accidentally commit the file to source control. Adding it to .gitignore reduces that risk.
Some of your code is not under source control. If you're disciplined and only put strings and numbers in here, that won't be a problem. If you start writing logging filter classes in here, stop!
If your project already uses this technique, it's easy to transition to environment variables. Just move all the setting values to environment variables, and change the Python module to read from those environment variables.
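A hedged sketch of that transition, reusing the names from the two earlier snippets:
# settings.py
import os

# Same variable names as before, but the values now come from the environment.
db_server = os.getenv('MY_APP_DB_SERVER', 'localhost')
db_user = os.getenv('MY_APP_DB_USER', 'my_app')
db_password = os.getenv('MY_APP_DB_PASSWORD', '')
The importing code in my_app.py stays exactly the same.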

Where to store the TNS, username, and password for an Oracle DB connection in Unix to use with Python and R

I'm quite new to using Unix and I'm stuck on this problem.
I have a Linux machine with R/RStudio and Python/Anaconda installed.
I have given this machine access to the hostname, port, and service of my DB.
Now I have to create some sort of configuration file where I can store the username and password of the schema I want this machine to use to access the DB and query it through Python or R.
This configuration file must secure the password so no one outside the creator of the config file will know it; other users will use R and Python to connect to the DB via some library using the credentials in this config file.
How can I achieve this? Sorry if I have said something wrong.
If you know other methods to achieve this level of security, please explain.

How to avoid storing the username and password as part of the connection string

Situation
My team works with an NFS. I've written a Python script that connects to a database. People execute my Python script, and based on some values associated with their userid in our backend MongoDB database, the script performs certain actions for them.
My question
I have a MongoDB connection string like so:
database_client = MongoClient("mongodb://<myusername>:<mypassword>@serverip:port/DBName")
The problem is that anyone can simply read the py script and figure out what my username and password are.
What is not an option
Making the Python script an opaque executable is not allowed. Not to mention it doesn't protect my credentials from leaking to other members of the team who work on the same codebase. Nor is making the DB world-editable/readable. All requests to edit the DB must go through an authorized account and only through the API provided by the script.
Is there a way to do this/what should I be doing instead?
Edit: I am NOT using Heroku. I'm behind a company firewall, and the MongoDB server is a machine with an IP on the company network.
One idea is to read the username and password from a configuration file [1] such as:
[topsecret]
user = foo
password = bar
This way you can share the code with anyone and they will be able to execute it only if they have the valid configuration file.
[1] You can use the configparser module (https://docs.python.org/3.4/library/configparser.html) to parse configuration files such as the one above.
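A minimal sketch of reading that file with configparser (the file name is just an example; keep the real file out of source control):
import configparser

config = configparser.ConfigParser()
config.read('secrets.ini')  # example file name

user = config['topsecret']['user']
password = config['topsecret']['password']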

How to securely store database credentials in windows?

I use Python and SQL Server to manage a database, but I do not know the "good practices" of database management and know little about information security.
Is it secure to save database credentials in Windows as environment variables and use them in scripts with os.environ? Like this:
import os
DB_HOST = os.environ['DBHOST']
DB_USER = os.environ['DBUSER']
...
What is the proper way to store credentials for automated database use?
If you are asking whether you should permanently set environment variables on your laptop, I'd avoid that, because any process could quite easily list all environment variables on the PC and their stored values.
Instead, I'd recommend checking out Keyring. This will use the Windows Credential Locker (or other OS-specific keyring services).
Secure credentials are often stored in a .env file that relates to your current environment and are then read from within your code, e.g. DB_HOST = env('DBHOST').
This is basically what you're doing right now, but stored in a file (as secure as you need it, possibly encrypted) rather than directly as environment variables, which are accessible from the entire machine.
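One common way to do this (an assumption on my part; the answer doesn't name a library) is the third-party python-dotenv package:
import os
from dotenv import load_dotenv

# Reads KEY=value pairs from a .env file into the process environment.
load_dotenv()

DB_HOST = os.getenv('DBHOST')
DB_USER = os.getenv('DBUSER')
The .env file itself should be excluded from source control, for example via .gitignore.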
You can use the EncryptByPassPhrase('key', 'Your_Password') function in SQL Server.
Example:
create table #temp(id int identity(1,1),Password varbinary(max))
insert into #temp(Password) values(encryptbypassphrase('12','Passw0rd'))
select * from #temp
In that code we provide the original password, but it is stored in the database table as an encrypted value.

Using fabric to manipulate database connection

I just started using Fabric to better control the specific settings for test and deployment environments, and I'm trying to get an idea of the best approach to swapping configurations.
Let's say I have a module in my application that defines a simple database connection and some constants for authentication by default:
host = 'db.host.com'
user = 'someuser'
passw = 'somepass'
db = 'somedb'
class DB():
    def __init__(self, host=host, user=user, passw=passw, db=db, cursor='DictCursor'):
        #make a database connection here and all that jazz
        pass
Before I found fabric, I would use the getfqdn() function from the socket library to check the domain name of the host the system was being pushed to and then conditionalize the authentication credentials.
if getfqdn() == 'test.somedomain.com':
    host = 'db.host.com'
    user = 'someuser'
    passw = 'somepass'
    db = 'somedb'
elif getfqdn() == 'test.someotherdomain.com':
    host = 'db.other.com'
    user = 'otherguy'
    passw = 'otherpass'
    db = 'somedb'
This, for obvious reasons, is really not that great. What I would like to know is what's the smartest way of adapting something like this in Fabric, so that when the project gets pushed to a certain test/deployment server, these values are changed at post-push.
I can think of a few approaches just from looking through the docs. Should I have a file that just defines the constants, which Fabric could write to using shell commands based on what the deployment was, and then the file defining the database handler could import them? Does it make sense to run open and write from within the fabfile like this? I assumed I'd also have to .gitignore these kinds of files so they don't get committed into the repo, and just rely on Fabric to deploy them.
I plan on adapting whatever approach is the best suggested to all the configuration settings that I currently either swap using getfqdn or adjust manually. Thanks!
You can key all of that off of env.host and then use something like the contrib template function to render the conf file and push it up. Templates are the best fit in these instances (see Puppet and other config managers as well).
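A rough sketch of what that might look like with Fabric 1.x (the host names, template, and remote path are placeholders; upload_template is the contrib helper alluded to above):
from fabric.api import env, task
from fabric.contrib.files import upload_template

# Per-host settings live in the fabfile, not in the application module.
DB_SETTINGS = {
    'test.somedomain.com': {'host': 'db.host.com', 'user': 'someuser', 'passw': 'somepass'},
    'test.someotherdomain.com': {'host': 'db.other.com', 'user': 'otherguy', 'passw': 'otherpass'},
}

@task
def deploy_db_conf():
    # Render a local template containing %(host)s-style placeholders with this
    # host's values and push the result up; .gitignore the rendered file.
    upload_template(
        'dbconf.py.template',
        '/srv/myapp/dbconf.py',
        context=DB_SETTINGS[env.host],
    )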
