Folks,
I have two questions: one Python-specific and the other about NFS.
The basic point is that my program gets the username, uid, NFS server IP, and exported path as input from the user. It then has to verify that the NFS exported path is readable/writable by this user/uid.
My program is running as root on the local machine. The straightforward approach is to 'useradd' a user with the given username and uid, mount the NFS exported path (as root, for the mount) on some temporary mount point, and then execute su username -c 'touch /mnt_pt/tempfile'. If the username and uid input were correct (and the NFS server was set up correctly), this touch will succeed, creating tempfile in the remote NFS directory. That is the goal.
Now the two questions are:
(i) Is there a simpler way to do this than creating a new Unix user, mounting, and touching a file to verify the NFS permissions?
(ii) If this is what needs to be done, are there any Python modules/packages that will help me execute 'useradd', 'userdel', and related commands? I currently intend to call the respective binaries (/usr/sbin/useradd etc.) via subprocess.Popen and collect the output.
Thank you for any insight.
i) You could do something more arcane, but short of actually touching the file you probably aren't going to be testing exactly what you need to test, so I think I'd probably do it the way you suggest.
ii) You might want to check out the Python pwd module if you want to verify user existence or the like, but you'll probably need to leverage the useradd/userdel programs themselves to do the dirty work.
You might want to consider leveraging sudo for your program so the entire thing doesn't have to run as root; running it all as root seems like a pretty risky proposition.
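To make both answers concrete, here is a rough sketch under those assumptions (the mount point is a placeholder, error handling is minimal, and the useradd/mount/su invocations are the stock Linux ones):

import os
import pwd
import subprocess

def verify_nfs_access(username, uid, server_ip, export_path):
    # Best-effort check that username/uid can write to the NFS export.
    mount_point = '/tmp/nfs_check'  # placeholder
    try:
        pwd.getpwnam(username)  # does the user already exist locally?
    except KeyError:
        subprocess.check_call(['/usr/sbin/useradd', '-u', str(uid), username])
    os.makedirs(mount_point, exist_ok=True)
    subprocess.check_call(['mount', '-t', 'nfs',
                           '%s:%s' % (server_ip, export_path), mount_point])
    try:
        # Touch a file as the target user; non-zero exit means no write access.
        return subprocess.call(['su', username, '-c',
                                'touch %s/tempfile' % mount_point]) == 0
    finally:
        subprocess.call(['umount', mount_point])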
There is a Python suite to test NFS server functionality.
git://git.linux-nfs.org/projects/bfields/pynfs.git
While it's written for NFSv4, you can adapt it for v3 as well.
Related
Hi, I'm new to the community and new to Python (experienced but rusty in other high-level languages), so my question is simple.
I made a simple script to connect to a private FTP server and retrieve daily information from it.
from ftplib import FTP

# Connect to the server to retrieve the inventory
def FTPconnection(file_name):
    # Open ftp connection
    ftp = FTP('ftp.serveriuse.com')
    ftp.login('mylogin', 'password')
    # List the files in the current directory
    print("Current File List:")
    ftp.dir()  # dir() prints the listing itself and returns None
    # Get the latest csv file from the server
    # ftp.cwd("/pub")
    gfile = open(file_name, "wb")
    ftp.retrbinary('RETR ' + file_name, gfile.write)
    gfile.close()
    ftp.quit()

FTPconnection('test1.csv')
FTPconnection('test2.csv')
That's the whole script: it passes my credentials and then calls the function FTPconnection on the two files I'm retrieving.
My other script, which processes the files, has an import statement, since I tried to call this script as a module; all the import does is connect to the FTP server and fetch the information.
import ftpconnect as ftpc
This is in the other Python script, the one that does the processing.
It works, but I want to improve it, so I need some guidance on best practices, because in Spyder 4.1.5 I get a 'Module ftpconnect called but unused' warning ... so I am probably missing something here. I'm developing on macOS using Anaconda and Python 3.8.5.
I'm trying to build an app to automate some tasks, but I couldn't find anything about modules that guided me toward better code; the documentation simply says you import whatever .py file name you used and it will be treated as a module ...
And my final question: how can you normally protect private information (FTP credentials) from being exposed? This is not about protecting my code, only the credentials.
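One pattern worth sketching for the module question: top-level statements run on import, so the two FTPconnection calls at the bottom of ftpconnect.py execute as a side effect of import ftpconnect. Guarding them keeps the module importable without touching the network:

# ftpconnect.py
def FTPconnection(file_name):
    ...  # the function body shown above

if __name__ == '__main__':
    # These run only when the file is executed directly,
    # not when it is imported as a module.
    FTPconnection('test1.csv')
    FTPconnection('test2.csv')

With that guard in place, the processing script calls ftpc.FTPconnection(...) explicitly, which also makes the 'unused' style warning go away, since the imported name is then actually referenced.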
There are a few options for storing passwords and other secrets that a Python program needs to use, particularly a program that needs to run in the background where it can't just ask the user to type in the password.
Problems to avoid:
Checking the password in to source control where other developers or even the public can see it.
Other users on the same server reading the password from a configuration file or source code.
Having the password in a source file where others can see it over your shoulder while you are editing it.
Option 1: SSH
This isn't always an option, but it's probably the best. Your private key is never transmitted over the network, SSH just runs mathematical calculations to prove that you have the right key.
In order to make it work, you need the following:
The database or whatever you are accessing needs to be accessible by SSH. Try searching for "SSH" plus whatever service you are accessing. For example, "ssh postgresql". If this isn't a feature on your database, move on to the next option.
Create an account to run the service that will make calls to the database, and generate an SSH key.
Either add the public key to the service you're going to call, or create a local account on that server, and install the public key there.
Option 2: Environment Variables
This one is the simplest, so it might be a good place to start. It's described well in the Twelve Factor App. The basic idea is that your source code just pulls the password or other secrets from environment variables, and then you configure those environment variables on each system where you run the program. It might also be a nice touch if you use default values that will work for most developers. You have to balance that against making your software "secure by default".
Here's an example that pulls the server, user name, and password from environment variables.
import os
server = os.getenv('MY_APP_DB_SERVER', 'localhost')
user = os.getenv('MY_APP_DB_USER', 'myapp')
password = os.getenv('MY_APP_DB_PASSWORD', '')
db_connect(server, user, password)
Look up how to set environment variables in your operating system, and consider running the service under its own account. That way you don't have sensitive data in environment variables when you run programs in your own account. When you do set up those environment variables, take extra care that other users can't read them. Check file permissions, for example. Of course any users with root permission will be able to read them, but that can't be helped. If you're using systemd, look at the service unit, and be careful to use EnvironmentFile instead of Environment for any secrets. Environment values can be viewed by any user with systemctl show.
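As an illustration of that last point, a hypothetical unit might look like this (the unit name, paths, and account are made up for the example):

# /etc/systemd/system/my_app.service (hypothetical)
[Service]
User=my_app
# The secrets file should be root-owned with mode 600; unlike Environment=,
# only its path (not its contents) shows up in systemctl show.
EnvironmentFile=/etc/my_app/secrets.env
ExecStart=/usr/local/bin/my_app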
Option 3: Configuration Files
This is very similar to the environment variables, but you read the secrets from a text file. I still find the environment variables more flexible for things like deployment tools and continuous integration servers. If you decide to use a configuration file, Python supports several formats in the standard library, like JSON, INI, netrc, and XML. You can also find external packages for other formats, like PyYAML for YAML and toml for TOML. Personally, I find JSON and YAML the simplest to use, and YAML allows comments.
Three things to consider with configuration files:
Where is the file? Maybe a default location like ~/.my_app, and a command-line option to use a different location.
Make sure other users can't read the file.
Obviously, don't commit the configuration file to source code. You might want to commit a template that users can copy to their home directory.
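A rough sketch covering those points, reusing the db_connect placeholder from the environment-variable example (the ~/.my_app location and key names are illustrative):

import json
import os

config_path = os.path.expanduser('~/.my_app')

# Refuse to run if other users can read the file (mode should be 600).
if os.stat(config_path).st_mode & 0o077:
    raise SystemExit('%s must not be group/world readable' % config_path)

with open(config_path) as f:
    config = json.load(f)

db_connect(config['server'], config['user'], config['password'])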
Option 4: Python Module
Some projects just put their secrets right into a Python module.
# settings.py
db_server = 'dbhost1'
db_user = 'my_app'
db_password = 'correcthorsebatterystaple'
Then import that module to get the values.
# my_app.py
from settings import db_server, db_user, db_password
db_connect(db_server, db_user, db_password)
One project that uses this technique is Django. Obviously, you shouldn't commit settings.py to source control, although you might want to commit a file called settings_template.py that users can copy and modify.
I see a few problems with this technique:
Developers might accidentally commit the file to source control. Adding it to .gitignore reduces that risk.
Some of your code is not under source control. If you're disciplined and only put strings and numbers in here, that won't be a problem. If you start writing logging filter classes in here, stop!
If your project already uses this technique, it's easy to transition to environment variables. Just move all the setting values to environment variables, and change the Python module to read from those environment variables.
Problem:
Customer would like to make sure that the script I've developed in Python, running under CentOS7, can sufficiently obscure the credentials required to access a protected web service (that only supports Basic Auth) such that someone with access to the CentOS login cannot determine those credentials.
Discussion:
I have a Python script that needs to run as a specific CentOS user, say "joe".
That script needs to access a protected Web Service.
In order to attempt to make the credentials external to the code I have put them into a configuration file.
I have hidden the file (its name starts with a period) and base64-encoded the credentials for obscurity, but the requirement is to allow only the Python script to "see" and open the file, as opposed to anyone with access to the CentOS account.
Even though the file is hidden, "joe" can still do an ls -a and see the file, and then cat the contents.
As a possible solution, is there a way to set the file permissions in CentOS such that they will only allow that Python script to see and open the file, but still have the script run under the context of a CentOS account?
Naive solution
For this use case I would probably have a script (sh or zsh or whatever, but I guess you use the default one here) create a temporary user iamtemporal-and-wontstayafterifinish, and then create the config file so it is readable ONLY by that specific user (and with no permissions for all the others). For the how, read here: https://www.thegeekdiary.com/understanding-basic-file-permissions-and-ownership-in-linux/
Getting harder
If the problem still stands in case someone has root rights (for whatever reason), then simply forget everything above and start planning a vacation, because this will take a lot longer than anyone would think.
This is no longer a simple Python problem; it needs different business logic. The best you could do is implement (at least the credentials-handling part) in a low-level language, so you can handle memory in a customized way and ask for the credentials only at runtime instead of storing them...
Or maybe you could limit the scope of this user's access towards the protected Web Service, as you say.
Bonus
Even though it wasn't explicitly asked, I would discourage you from storing credentials using simple base64...
For this purpose, a simple solution could be at least the following (without invoking the whole armada of cryptography):
encrypt the password with an asymmetric cryptographic algorithm (probably RSA with a large key)
inject the key for decryption as an env var while you have an open SSH session to the remote terminal
ideally, use this key only while you decrypt and send, and afterwards make sure you delete the references to the variables
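A minimal sketch of that round trip, using the third-party cryptography package as one possible choice (key generation is inlined for brevity; in reality the private key would be created elsewhere and only injected at runtime, as described above):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Generate a key pair; normally only the public key would live here.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
ciphertext = private_key.public_key().encrypt(b'the-password', oaep)

# ...later, decrypt, use, and drop the reference as soon as possible:
plaintext = private_key.decrypt(ciphertext, oaep)
del plaintext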
Sidenote: this is still full of 'flaws'. If security is really a concern, I would consider changing technology or using some sort of library that handles this stuff more securely. I would probably start here: Securely Erasing Password in Memory (Python)
Not to mention that memory dumps can be read 'easily' (if you know what you are looking for...): https://cyberarms.wordpress.com/2011/11/04/memory-forensics-how-to-pull-passwords-from-a-memory-dump/
So yeah, having a key server that sends you the private key for decryption is not enough, if you read those last two links...
I'm developing a service which has to copy multiple files from a central node to remote servers.
The problem is that each time the service runs, there are new servers and new files to dispatch to them. That is, on each execution I receive the information about which files have to be copied to each server, and into which directory.
Obviously, this information changes very dynamically, so I would like to automate the task. I have tried to find a solution with Ansible, FTP, and SCP via Python.
With Ansible I find it very difficult to automate every scp task for each execution.
SCP is OK, but I need to build each scp command in Python in order to launch it.
FTP is too much for this problem, because there are not many files to dispatch to a single server.
Is there any better solution than the ones I have thought of?
In case you send the same file (or files) to different destinations (which can be organized into sets), you could benefit from solutions such as dsh or parallel-scp.
Whether this makes sense depends on your use case.
Parallel-SSH Documentation
from __future__ import print_function
from pssh.pssh_client import ParallelSSHClient

hosts = ['myhost1', 'myhost2']
client = ParallelSSHClient(hosts)

output = client.run_command('uname')
for host, host_output in output.items():
    for line in host_output.stdout:
        print(line)
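Since the question is about copying files rather than running commands, note that the same client also exposes a copy_file helper in the parallel-ssh 1.x documentation; reusing client from above, a sketch:

from gevent import joinall

# copy_file returns one greenlet per host; wait for them all to finish.
greenlets = client.copy_file('local_file', 'remote_dir/local_file')
joinall(greenlets, raise_error=True)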
How about rsync?
Example: rsync -rave ssh source destination
Where source or destination could be:
user@IP:/path/to/dest
It does incremental transfers, and you can cron it or trigger a small script when anything changes.
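Since the original question already builds each command in Python, a minimal sketch of driving rsync the same way (the plan structure and user name are assumptions for illustration):

import subprocess

# Assumed shape: {'server1': [('local.csv', '/remote/dir/')], ...}
def dispatch(plan, user='deploy'):
    for server, transfers in plan.items():
        for local_path, remote_dir in transfers:
            subprocess.check_call(['rsync', '-av', '-e', 'ssh', local_path,
                                   '%s@%s:%s' % (user, server, remote_dir)])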
For a git repository that is shared with others, is it a vulnerability to expose your database password in the settings.py file? (My initial thought was no, since you still need the ssh password.)
It depends on who has read access to the repository, but it's generally a good idea not to put passwords into version control. It's probably better to put the password in a separate file like password.py, containing only the password, like this:
password = 'asdasd'
and import or execfile this in your settings.py. You can then add password.py to your .gitignore.
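As a small sketch of that layout (the template file name is hypothetical), settings.py can also fail with a clear message when password.py is missing:

# settings.py
try:
    from password import password
except ImportError:
    raise SystemExit('copy password_template.py to password.py '
                     'and set your database password there')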
That assumes your database is only accessible from one specific host, and even then, why would you want to give a potential attacker another piece of information? Suppose you deploy this to a shared host and I have an account there: I could connect to your database just by logging into my account on that box.
Also, depending on who you are writing this for and what kind of auditing they need to go through (PCI, state audits, etc), this might just not be allowed.
I would try to find a way around checking in the password.
In my Django project I need to be able to check whether a host on the LAN is up using an ICMP ping. I found this SO question, which answers how to ping something in Python, and this SO question, which links to resources explaining how to use the sudoers file.
The Setting
A Device model stores an IP address for a host on the LAN, and after adding a new Device instance to the DB (via a custom view, not the admin) I envisage checking to see if the device responds to a ping using an AJAX call to an API which exposes the capability.
The Problem
However (from the docstring of a library suggested in the first SO question), "Note that ICMP messages can only be sent from processes running as root."
I don't want to run Django as the root user, since that is bad practice. However, this part of the process (sending an ICMP ping) needs to run as root. If a Django view is to send off a ping packet to test the liveness of a host, then Django itself must be running as root, since that is the process invoking the ping.
Solutions
These are the solutions I can think of, and my question is: are there any better ways to run only select parts of a Django project as root, other than these?
Run Django as root (please no!)
Put a "ping request" in a queue that another process, running as root, can periodically check and fulfil. Maybe something like Celery.
Is there not a simpler way?
I want something like a "Django run as root" library, is this possible?
Absolutely no way, do not run the Django code as root!
I would run a daemon as root (written in Python, why not) and then use IPC between the Django instance and your daemon. As long as you're sure to validate the content and handle it properly (e.g. use subprocess.call with an argument list, etc.) and only pass in data (not commands to execute), it should be fine.
Here is an example client and server, using web.py
Server: http://gist.github.com/788639
Client: http://gist.github.com/788658
You'll need to install web.py (webpy.org), but it's worth having around anyway. If you can hard-wire the IP (or hostname) into the server and remove the argument, all the better.
What's your OS here? You might be able to write a little program that does what you want given a parameter, stick that in the sudoers file, and give your Django user permission to run it as root.
/etc/sudoers
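A sketch of what such an entry could look like; the helper path, drop-in file name, and user are hypothetical, and sudoers files should always be edited with visudo:

# /etc/sudoers.d/django-ping (hypothetical)
django  ALL=(root) NOPASSWD: /usr/local/bin/ping-helper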
I don't know what kind of system you're on, but on every box I've encountered, you don't have to be root to run the command-line ping program (it has the setuid bit set, so it becomes root as necessary). So you could just invoke that. It's a bit more overhead, but probably negligible compared to network latency.
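For example, a minimal wrapper around the system ping binary (the -c/-W flags shown are the common Linux ones; other systems differ):

import subprocess

def host_is_up(ip):
    # -c 1: send a single echo request; -W 1: wait at most 1 second.
    return subprocess.call(['ping', '-c', '1', '-W', '1', ip],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0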