I am reading through the uWSGI documentation and it warns to always avoid running your uWSGI instances as root. What is the reason behind this?
Does it matter if it is the only process (besides nginx) running in a docker container, serving up a flask application?
In general, security reasoning says that running as root is bad. If there were any kind of bug, for example a code execution bug that allows anybody to execute arbitrary code, they would be able to destroy your entire system.
If you don't run the process as root, any code execution vulnerabilities would need to be paired with a secondary privilege escalation vulnerability in order to destroy your system.
In a docker container this is mitigated slightly, in that you'll be able to recover your old system relatively easily. However, it is still a bad practice to let processes run as root: a malicious attacker can and will steal whatever information exists on your server, or turn your server into a malware delivery mechanism.
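If you want uWSGI itself to handle the privilege drop, a minimal sketch of the relevant uwsgi.ini options (the module path is a placeholder, and the www-data user/group is assumed to exist in the image):

    [uwsgi]
    # placeholder entry point for the Flask app
    module = app:app
    master = true
    # start as root if you must (e.g. to bind a privileged port),
    # then drop to an unprivileged user before serving requests
    uid = www-data
    gid = www-data

With this in place the workers run as www-data, so a code execution bug in the application no longer hands out root on its own.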
My Python/Django code behaves differently in production on Heroku than on my development machine.
I would like to debug/trace it.
Since it runs on Heroku, AFAIK I can't insert import pydevd_pycharm; pydevd_pycharm.settrace(... into the code.
I use PyCharm.
But I don't need a fancy GUI. A command-line tool would be fine, too.
I would be happy if I could see all the lines that get executed during a particular HTTP request.
How to solve this for production systems?
In order to understand the difference between your local development environment and production on Heroku, I would first deploy the application on another Heroku dyno, for example a free dyno, which you can easily create and manage.
You can then integrate the tools you want and add the log statements as needed.
Even if you are able to debug/inspect the production runtime, it is very important to be able to test on production-like systems, so you can catch problems early and investigate them without guessing.
On the production system you have limited options for debugging the application:
consider code changes (e.g. adding logging statements), but as you have pointed out this involves PRs and a new release
debugger: connect your favourite debugger (e.g. PyCharm) to the remote application. This is something that (almost) no one does, given the security aspects and the likely impact on application performance, and I doubt your system admins/DevOps would agree
I don't know of any tool which can do that, but you shouldn't run into this problem very often. So I wouldn't bother trying to solve it in general; instead, just add logging statements where you think they will be handy for debugging this one problem.
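That said, the "all executed lines for one request" part can be done with the standard library alone, no GUI needed. A minimal sketch, assuming a synchronous Django deployment and a hypothetical project package name (myproject); sys.settrace is per-thread and very slow, so treat it as a temporary diagnostic:

    # linetrace.py - hypothetical middleware that logs every executed
    # line in your own modules while one request is handled.
    import sys

    class LineTraceMiddleware:
        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            def tracer(frame, event, arg):
                # only report lines from the project's own code
                if event == "line" and "myproject" in frame.f_code.co_filename:
                    print(f"{frame.f_code.co_filename}:{frame.f_lineno}")
                return tracer

            sys.settrace(tracer)
            try:
                return self.get_response(request)
            finally:
                sys.settrace(None)

Register it temporarily at the top of MIDDLEWARE; on Heroku the print output shows up in heroku logs, so you get the trace without attaching anything to the dyno.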
I am about to decide on a programming language for the project.
The requirements are that some customers want to run the application on isolated servers without external internet access.
Because of that, I need to distribute the application to them and cannot use a SaaS approach running on, for example, my own cloud (which is what I'd prefer to do...).
The problem is that if I decide to use Python for development, I would need to provide the customer with easily readable code, which is not really what I'd like to do (of course, I know about all the "do you really need to protect your source code" kind of questions, but that's out of scope for now).
One of my colleagues told me about Docker. I can find dozens of answers about Docker container security; the problem is that they are all about protecting (isolating) the host from the code running in the container.
What I need to know is whether Python source code inside a Docker image, running in a Docker container, is secure from access: can a user somehow (it doesn't need to be easy) get at that Python code?
I know I can't protect everything, and I know it is possible to decompile/crack anything. I just want to know whether getting at my code inside Docker is hard enough that I can take the risk.
Docker images are an open and documented "application packaging" format. There are countless ways to inspect an image's contents, including all of the Python source code shipped inside it.
Running an application inside a container isolates the host from an application escaping the container; it does not protect you from users on the host inspecting what is occurring inside the container.
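As one concrete illustration, a sketch using the Docker SDK for Python (pip install docker; the image tag is a placeholder) that dumps an image's entire filesystem:

    # dump_image.py - export every layer of an image as a tar archive;
    # all .py files ship in there as plain, readable text.
    import docker

    client = docker.from_env()
    image = client.images.get("yourapp:latest")  # placeholder tag

    with open("yourapp.tar", "wb") as f:
        for chunk in image.save():
            f.write(chunk)

Unpacking yourapp.tar yields each layer as a nested tar of ordinary files, so "hiding" source in an image buys you nothing.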
Python programs are distributed as source code. If it can run on a client machine, the code is readable on that machine. A docker container only contains the application and its libraries, external binaries and files, not a full OS. As security can only be managed at the OS level (or through encryption), and the OS is under client control, the client can read any file in the docker container, including your Python source.
If you really want to go that way, you should consider providing a full Virtual Machine to your client. In that case, the VM contains a full OS with its account-based security (administrative account passwords on the VM can be different from those of the host). It is far from still waters, though, because it means the client will need to be able to set up or adapt networking on the VM, among other problems...
And you should be aware that the client's security officer could give a firm NO when it comes to running an uncontrolled VM on their network. I would never accept it.
Anyway, as the client has full access to the VM, really securing it will be hard if at all possible (disabling booting from an additional device may not even be possible). It is a commonplace in security that if the attacker has physical access, you have lost.
TL/DR: It is not the answer you expected, but just don't. If you sell your solution, you will have a legal contract with your customer, and this kind of problem should be handled at the legal level, not the technical one. You can try, and I have even given you a hint, but IMHO the risks are higher than the gain.
I know it's been more than 3 years, but... looking for the same kind of solution, I think that including compiled Python code (not your source code) inside the container would make for a challenging trial for someone trying to access your valuable source code.
If you run pyinstaller --onefile yourscript.py you will get a single compiled file that can be run as an executable. I have only tested it on a Raspberry Pi, but as far as I know it's the same for, say, Windows.
Of course anything can be reverse engineered, but hopefully it won't be worth the effort to the regular end user.
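Building on that idea, a hypothetical multi-stage Dockerfile that ships only the PyInstaller output (image tags and the script name are placeholders; PyInstaller on Debian-based images needs binutils for its binary analysis):

    # build stage: bundle the script into a single executable
    FROM python:3.11-slim AS build
    RUN apt-get update && apt-get install -y --no-install-recommends binutils
    RUN pip install pyinstaller
    WORKDIR /src
    COPY yourscript.py .
    RUN pyinstaller --onefile yourscript.py

    # final stage: no .py files on board, only the bundled executable
    FROM debian:bookworm-slim
    COPY --from=build /src/dist/yourscript /usr/local/bin/yourscript
    CMD ["yourscript"]

Keep in mind the bundle still embeds bytecode that extraction tools can pull out and decompile, so this raises the effort bar rather than removing it.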
Using a "container" to protect our code from people we don't want accessing it could be a solution; the problem is that Docker is not a secure container. Since root on the host machine has the most powerful control over a Docker container, we have no method to keep the host's root out of the inside of the container.
I just have some ideas about what a secure container would look like:
build the container from an init file like a Dockerfile, and require a password to be set when the container is created;
once the container is built, a password is required for any access to the inside, including reading/copying/modifying files;
all the files stored on the host machine should be encrypted;
no "retrieve password" or "--skip-grant-" mode is offered, meaning nobody can access the data inside the container if you lose the password.
If we had a trustable container like that, where we could run a Tomcat or Django server, code obfuscation would not be necessary.
I want my django app to communicate with a remote computer over a TCP/IP socket, and I would like that socket to be available at all times. I would like to use the tornado library. Since I'm only familiar with writing views, models and such, I'm not entirely sure where to fit this into my codebase.
I was thinking about writing a management command that would run the tornado's server (see http://tornado.readthedocs.io/en/latest/tcpserver.html), but how could I call .stop() on my server once the management command quits? I wouldn't want it to spawn any threads that wouldn't exit upon my management command exiting, or end up with multiple open sockets, because I just want one.
Of course I would also like the listener to live somewhere in my django project so I can access it, not only from within the management command code. I was thinking about importing a class from django's settings.
Am I thinking in the right direction, or is there a different, better approach?
EDIT: As to why I want to do this:
I've got a microcontroller I want to communicate with, and I wouldn't want to go implementing/parsing HTTP on it. I would also like to periodically send some indication that the connection is alive, and HTTP doesn't seem like the way to go.
A management command is a nice approach, but I'd be reluctant to launch a server with it. A tornado server is a complex thing with a lot of state (including state outside of your codebase, like nginx, apache or HAProxy) and varying health; management commands aren't designed to deal with all this.
It's probably a nice thing for development, though, and in this case you can easily make your management command not exit before the server does by calling IOLoop.current().start() right inside the command, as in the sketch below.
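A minimal sketch of such a command (the handler, module layout and port are placeholders; adapt to your Tornado version). Calling server.stop() and then stopping the IOLoop from a signal handler covers the "how do I call .stop()" part without leaving threads or extra sockets behind:

    # myapp/management/commands/run_tcp_server.py - hypothetical layout
    import signal

    from django.core.management.base import BaseCommand
    from tornado.ioloop import IOLoop
    from tornado.iostream import StreamClosedError
    from tornado.tcpserver import TCPServer


    class DeviceServer(TCPServer):
        async def handle_stream(self, stream, address):
            # echo bytes back until the microcontroller hangs up
            try:
                while True:
                    data = await stream.read_bytes(1024, partial=True)
                    await stream.write(data)
            except StreamClosedError:
                pass


    class Command(BaseCommand):
        help = "Run the TCP listener (development use)"

        def handle(self, *args, **options):
            server = DeviceServer()
            server.listen(8888)  # placeholder port

            def shutdown(signum, frame):
                # stop() closes the listening socket; stopping the
                # IOLoop then lets the command exit cleanly
                loop = IOLoop.current()
                loop.add_callback_from_signal(server.stop)
                loop.add_callback_from_signal(loop.stop)

            signal.signal(signal.SIGINT, shutdown)
            signal.signal(signal.SIGTERM, shutdown)
            IOLoop.current().start()

Run it with python manage.py run_tcp_server (a placeholder command name matching the file path above).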
For a production environment I would advise using contemporary orchestration tools like Docker Compose, or, if you plan to span your system over several machines, Docker Swarm or Kubernetes. These tools let you start, shut down, scale and health-check individual components in a reliable manner, without reinventing the wheel with a set of management commands.
Either way, if your Tornado code lives in the same place as your Django code, you're able to access the database using your Django models and reuse other parts of the project. Beyond that, something launched from a management command doesn't gain any advantage from the running Django server.
I'm thinking about building a web app that would involve users writing small segments of python and the server testing that code. However, this presents a ton of security concerns. Would docker be a good isolation tool for running this potentially malicious code? From what I've read, checking system calls with ptrace is a possibility, but I would prefer to use a preexisting tool.
Docker is indeed very suitable for this kind of usage. However, please note that docker is NOT yet ready for production usage.
I would recommend creating a new container and giving your users non-root privileges inside it. One container per user.
This way you can prepare your docker image, prepare the environment, and control precisely what your users are doing :)
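A sketch of what spinning up such a throwaway, locked-down container could look like with the Docker SDK for Python (pip install docker; the image tag, resource limits and snippet are placeholders, and there is no timeout handling here):

    # sandbox.py - run an untrusted snippet in a disposable container
    import docker

    client = docker.from_env()

    untrusted = "print(sum(range(10)))"  # user-submitted code (placeholder)

    output = client.containers.run(
        "python:3.11-slim",
        ["python", "-c", untrusted],
        network_disabled=True,  # no network access from inside
        mem_limit="128m",       # cap memory usage
        pids_limit=64,          # blunt fork bombs
        user="nobody",          # non-root inside the container
        remove=True,            # discard the container afterwards
    )
    print(output.decode())

You would still want a wall-clock timeout around the run call and per-user rate limits on top of this.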
So I've been working on my first Django / Python project and I got my production server up and running. I was wondering if it's possible to make Python/FastCGI (not really sure which is responsible for the task) reload my code. As of right now, when I upload updated code, I need to restart the server for the changes to take effect. I read that you can add some kind of mysite.fcgi file to lighttpd so it sees that you've updated the code; can you do the same for Nginx / FastCGI?
For anyone else who was interested in my question: this is only a partial solution, but I ended up finding my answer here: How to gracefully restart django running fcgi behind nginx?
You can just run the script (I'm going to modify it a bit) every time you edit your code, and it will gracefully restart everything without dropping connections.
There is a general guide from the mod_wsgi project that outlines how you can monitor code changes from your app_wsgi.py and restart the current process if any of the modules have changed. You need to restart the Python process because, with threads contending over modules, a freshly reloaded module can hold outdated references to other modules that are still waiting to be discovered for reload.
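The core of that guide boils down to a polling thread; a condensed sketch (not the project's verbatim code) that you could start from your WSGI entry point:

    # reload_monitor.py - restart the process when any loaded module's
    # source file changes on disk.
    import os
    import signal
    import sys
    import threading
    import time

    _mtimes = {}

    def _code_changed():
        for module in list(sys.modules.values()):
            path = getattr(module, "__file__", None)
            if not path or not os.path.exists(path):
                continue
            mtime = os.path.getmtime(path)
            if path not in _mtimes:
                _mtimes[path] = mtime
            elif mtime > _mtimes[path]:
                return True
        return False

    def _monitor():
        while True:
            if _code_changed():
                # die and let the server/supervisor respawn us with
                # fresh code, avoiding stale cross-module references
                os.kill(os.getpid(), signal.SIGINT)
            time.sleep(1.0)

    threading.Thread(target=_monitor, daemon=True).start()

This assumes whatever supervises the FastCGI process (e.g. your process manager) restarts it after exit.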
If you want something that works nicely with nginx, Django and WSGI apps in general, take a peek at Spawning as your WSGI server. Its approach to code reloading is about as graceful as it gets.
It has great documentation and a well-documented request handling model, and it just works, which makes it a no-brainer to configure. You'd need less than five minutes from now to have your Django instance running on Spawning. Here's another topical blog to get your juices flowing.