Managing the build and running of Docker containers on one host - Python

I have one server which runs multiple containers:
Nginx
Portainer
Several custom HTTP servers
RabbitMQ
I have a folder structure like this in the home directory:
/docker/dockerfiles/nginx/Dockerfile
/docker/dockerfiles/nginx/README
/docker/dockerfiles/nginx/NOTES
/docker/dockerfiles/portainer/Dockerfile
...
/docker/dockerfiles/rabbitmq/Dockerfile
/docker/volumes/nginx/sites/...
/docker/volumes/nginx/logs/...
/docker/volumes/portainer/
...
/docker/volumes/rabbitmq/
/docker/volumes/ contains all the files the containers use; they are bind-mounted into the containers. The containers don't use real Docker volumes, and I really want to avoid using them.
I also have 3 Python files:
containers_info.py
containers_build.py
containers_run.py
containers_info.py is basically a dictionary holding rudimentary information about the containers, such as each container's version and build date, and whether it should be included in or excluded from a build pass and a run pass.
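Roughly, the dictionary looks something like this (simplified and with illustrative field names, not the actual ones):

# containers_info.py - illustrative sketch only; the real fields differ
containers = {
    'websites': {
        'version': '1.7',
        'build_date': '2019-03-01',
        'build': True,   # include in a build pass
        'run': True,     # include in a run pass
    },
    'rabbitmq': {
        'version': '3.7',
        'build_date': '2019-02-11',
        'build': False,
        'run': True,
    },
}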
containers_build.py imports containers_info.py, checks which containers should be built, reads the corresponding Dockerfile from /docker/dockerfiles/.../Dockerfile, builds the image(s) with the Docker Python API, collects some stats, creates summaries, notifies of failures, and the like.
containers_run.py also imports containers_info.py and checks which containers should be run. It holds the information about which volumes to map, which ports to use - basically everything that would go into a YAML file describing the container - plus a bit of management of the currently running container.
It contains multiple snippets like this:
def run_websites(info):
    #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    container_name = 'websites'
    #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    new_container_name = container_name
    if info['auto-run']: rename_and_stop_container(container_name)
    else: new_container_name = container_name + '-prep'
    #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    container = client.containers.run(
        detach=True,
        name=new_container_name,
        hostname='docker-websites',
        image='myscope/websites:{}'.format(versions['websites']),
        command='python -u server.py settings:docker-lean app:websites id:hp-1 port:8080 domain:www.example.com',
        ports={'8080/tcp': ('172.17.0.1', 10001)},
        working_dir='/home/user/python/server/app',
        volumes={
            '/home/user/docker/volumes/websites': {'bind': '/home/user/python/server', 'mode': 'rw'},
        }
    )
    #patch = 'sed -i.bak s/raise\ ImportError/name\ =\ \\"libc.so.6\\"\ #\ raise\ ImportError/g /usr/lib/python2.7/site-packages/twisted/python/_inotify.py'
    #print container.exec_run(patch, user='root')
    #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    if info['auto-run'] and info['auto-delete-old']: remove_container(container_name + '-old')
    #~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Now I want to move away from this custom solution and use something open source that will allow me to scale this approach to multiple machines. Currently I can copy ~/docker/ between servers and execute the modified scripts to obtain the machines I need, but I think Docker Swarm or Kubernetes is designed to solve these issues. At least that's the impression I have.
My Python solution was born while I was learning Docker; automating it via the Docker Python API helped me a lot with learning Dockerfiles, since I could automate the entire process and mistakes in the Dockerfiles only meant a little bit of lost time.
Another important benefit of this Python script approach was that I was able to automate the creation of dozens of instances of the webserver on the same machine (assuming that this would make sense to do) and have Nginx adapt perfectly to this change (adding/removing proxies dynamically, reloading its configuration).
So, which technology should I start looking into in order to replace my current system? Also, I don't intend to run many machines - initially only two (main + backup) - but I would like to be able to add more machines at any point in time and distribute the load among them, ideally by just changing some settings in a configuration file.
What is the current approach to solving these issues?

There are a number of tools you could use in this scenario. If you just plan on using a single machine, docker-compose could be the solution you are looking for. It uses a YAML file in a makefile-like spirit and supports the same build contexts (as do standard Docker and Kubernetes). It is really easy to get multiple instances of a service or container running; just using the --scale flag eliminates a lot of the headache.
If you are planning on running this on multiple machines, I'd say Kubernetes is probably going to be your best bet; it's really well set up for that. Admittedly, I don't have a lot of experience with Swarm, but from what I understand it's analogous. The benefit of Kubernetes is that it can also handle the load balancing for you, whereas docker-compose does not, so you'd have to use some sort of proxy (like Nginx) for that. It's not horrible, but also not the most straightforward thing if you haven't done something like it before.
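To make the docker-compose suggestion concrete, here is a rough sketch of how part of the setup from the question might translate into a compose file. The image tag, ports and paths are taken from or guessed at from the question, so treat them as placeholders rather than a verified configuration:

version: "3"
services:
  websites:
    image: myscope/websites:1.0        # tag would come from your own versioning
    command: "python -u server.py settings:docker-lean app:websites id:hp-1 port:8080 domain:www.example.com"
    working_dir: /home/user/python/server/app
    ports:
      - "172.17.0.1:10001:8080"
    volumes:
      - /home/user/docker/volumes/websites:/home/user/python/server

  nginx:
    build: ./dockerfiles/nginx         # reuses the existing Dockerfile build context
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./volumes/nginx/sites:/etc/nginx/conf.d
      - ./volumes/nginx/logs:/var/log/nginx

docker-compose up -d then brings everything up, and docker-compose up -d --scale websites=3 would start three instances of the webserver, although the fixed host port above would have to be dropped (or left for Docker to assign) for scaling to work.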

Related

Python compiler call another python compiler to execute a script (execute a script from one independent machine to another)

I know the question title is weird!
I have two virtual machines. The first one has limited resources, while the second one has enough resources, just like a normal machine. The first machine will receive a signal from an external device. This signal will trigger a Python interpreter to execute a script. The script is big and the first machine does not have enough resources to execute it.
I can copy the script to the second machine and run it there, but I can't make the second machine receive the external signal. I am wondering if there is a way to make the interpreter on the first machine (once the external signal is received) call the interpreter on the second machine, so that the interpreter on the second machine executes the script using the second machine's resources.
Assume that the connection between the two machines is established and they can see each other, and that the second machine has a copy of the script. I just need the commands that pass the execution to the second machine and make it use its own resources.
You should look into the microservice architecture to do this.
You can achieve this either by using Flask and sending HTTP requests between the machines, or with something like nameko, which will allow you to create a "bridge" between machines and call functions across them (this seems closer to what you are interested in). Example for nameko:
Machine 2 (executor of resource-intensive script):
from nameko.rpc import rpc

class Stuff(object):
    name = 'Stuff'  # service name, used by the RPC proxy below

    @rpc
    def example(self):
        return "Function running on Machine 2."
You would run the above script through the Nameko shell, as detailed in the docs.
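Assuming the class above is saved as stuff_service.py (the module name is up to you), starting the service host would look something like:

$ nameko run stuff_service --broker pyamqp://guest:guest@localhost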
Machine 1:
from nameko.standalone.rpc import ClusterRpcProxy

# This is the AMQP server that machine 2 would be running.
config = {
    'AMQP_URI': AMQP_URI  # e.g. "pyamqp://guest:guest@localhost"
}

with ClusterRpcProxy(config) as cluster_rpc:
    cluster_rpc.Stuff.example()  # "Function running on Machine 2."
More info in the nameko documentation.
There are many approaches to this problem.
If you want a Python-only solution, you can check out
dispy (http://dispy.sourceforge.net/)
or Dask (https://dask.org/).
If you want a robust solution (what I use on my home computing cluster, but in my opinion overkill for your problem), you can use
SLURM. SLURM is basically a way to string multiple computers together into a "supercomputer": https://slurm.schedmd.com/documentation.html
For a semi-quick, hacky solution, you can write a microservice. Essentially, your "weak" computer will receive the message and then send an HTTP request to your "strong" computer. The strong computer will contain the actual program, compute the results, and pass them back to the weak computer.
Flask is an easy and lightweight solution for this; see the sketch below.
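As an illustration, a minimal sketch of the "strong" side with Flask (the endpoint name, port and workload are all made up):

# strong.py - runs on the "strong" computer
from flask import Flask, jsonify

app = Flask(__name__)

def do_heavy_work():
    # stand-in for the resource-intensive script
    return sum(i * i for i in range(10_000_000))

@app.route('/run', methods=['POST'])
def run_job():
    return jsonify({'result': do_heavy_work()})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

The "weak" computer then only needs something like requests.post('http://strong-machine:5000/run').json() in its signal handler to trigger the job and read back the result.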
All of these solutions require some type of networking. At the very least, the computers need to be on the same LAN or both have access over the web.
There are many other approaches not mentioned here. For example, you could export an NFS (network file system) share and have one computer put a file in the shared folder while the other computer performs the work on that file. I'm sure there are plenty of other contrived ways to accomplish this task :). I'd be happy to expand on a particular method if you want.

How to schedule r and python script on unix server used by many users in a centralized way

I am finalizing a new Unix server with R/RStudio Server and Python/Anaconda/JupyterLab, to let users (around 15 end users of different skill levels) run their own analyses, save output, and maybe schedule and run some jobs.
I need a way to schedule R and Python scripts and jobs (ideally a single tool that can handle both) in an easy and transparent way (so that even low-skill users can use it) that is centralized for all the users who get access to it.
I have seen cronR for R, but it seems to be per-user rather than centralized.
Is there a way, or maybe some other open source tool, to make scheduling transparent?
Ideally the scheduling would be time-based, but also trigger/event-based.
What firewall ports, shell permissions and commands should I have installed and configured?
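For the time-based part, I imagine something like a system-wide /etc/cron.d entry run by a dedicated service user; the paths, user name and schedule below are only an illustration of what "centralized" could mean here:

# /etc/cron.d/analytics-jobs - system-wide, runs as a dedicated service user
0 6 * * *   analytics   Rscript /srv/jobs/daily_report.R >> /var/log/jobs/daily_report.log 2>&1
30 6 * * *  analytics   /opt/anaconda/bin/python /srv/jobs/etl.py >> /var/log/jobs/etl.log 2>&1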

How can I safely run untrusted python code?

Here is the scenario: my website has some unsafe code, generated by website users, that has to run on my server.
I want to disable some reserved words for Python to protect my running environment, such as eval, exec, print and so on.
Is there a simple way (without changing the Python interpreter; my Python version is 2.7.10) to implement the feature described above?
Many thanks.
Disabling names at the Python level won't help, as there are numerous ways around it (see this post and this one for more info). This is what you need to do:
For CPython, use RestrictedPython to define a restricted subset of Python.
For PyPy, use sandboxing. It allows you to run arbitrary Python code in a special environment that serializes all input/output, so you can inspect it and decide which commands are allowed before actually running them.
Since version 3.8, Python supports audit hooks, so you can completely prevent certain actions:
import sys

def audit(event, args):
    if event == 'compile':
        sys.exit('nice try!')

sys.addaudithook(audit)
eval('5')
Additionally, to protect your host OS, use
either virtualization (safer), such as KVM or VirtualBox,
or containerization (much lighter), such as LXD or Docker.
In the case of containerization with Docker you may need to add AppArmor or SELinux policies for extra safety. LXD already comes with AppArmor policies by default.
Make sure you run the code as a user with as little privileges as possible.
Rebuild the virtual machine/container for each user.
Whichever solution you use, don't forget to limit resource usage (RAM, CPU, storage, network). Use cgroups if your chosen virtualization/containerization solution does not support these kinds of limits.
Last but not least, use timeouts to prevent your users' code from running forever.
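For illustration, a rough sketch of such limits with the Docker SDK for Python; the image, limit values and timeout are placeholders, not a hardened setup:

import docker

client = docker.from_env()
user_code = "print('hello from untrusted code')"   # whatever the user submitted

# Run the untrusted code in a throwaway container with tight limits.
container = client.containers.run(
    'python:3.11-alpine',
    ['python', '-c', user_code],
    user='nobody',            # as little privilege as possible
    network_disabled=True,    # no network access
    mem_limit='128m',         # cap RAM
    nano_cpus=500_000_000,    # roughly half a CPU
    pids_limit=64,            # blunt a fork bomb
    detach=True,
)
try:
    container.wait(timeout=10)    # wall-clock timeout
finally:
    print(container.logs().decode())
    container.remove(force=True)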
One way is to shadow the built-in names:
def not_available(*args, **kwargs):
    return 'Not allowed'

eval = not_available
exec = not_available
print = not_available
However, someone smart can always do this:
import builtins
builtins.print('this works!')
So the real solution is to parse the code and reject the input if it contains such statements, rather than trying to disable them.

Starting and stopping processes in a cluster

I'm writing software that runs a bunch of different programs (via Twisted's twistd); that is, N daemons of various kinds must be started across multiple machines. If I did this manually, I would be running commands like twistd foo_worker, twistd bar_worker and so on on the machines involved.
Basically there will be a list of machines, and the daemon(s) I need them to run. Additionally, I need to shut them all down when the need arises.
If I were to program this from scratch, I would write a "spawner" daemon that would run permanently on each machine in the cluster with the following features accessible through the network for an authenticated administrator client:
Start a process with a given command line. Return a handle to manage it.
Kill a process given a handle.
Optionally, query stuff like cpu time given a handle.
It would be fairly trivial to program the above, but I cannot imagine this is a new problem. Surely there are existing solutions that do exactly this? I do, however, lack experience with server administration and don't even know what the related terms are.
What existing ways are there to do this on a linux cluster, and what are some of the important terms involved? Python specific solutions are welcome, but not necessary.
Another way to put it: Given a bunch of machines in a lan, how do I programmatically work with them as a cluster?
The most familiar and universal way is just to use ssh. To automate it you could use Fabric.
To start foo_worker on all hosts:
$ fab all_hosts start:foo_worker
To stop bar_worker on a particular list of hosts:
$ fab -H host1,host2 stop:bar_worker
Here's an example fabfile.py:
from fabric.api import env, run, hide  # pip install fabric

def all_hosts():
    env.hosts = ['host1', 'host2', 'host3']

def start(daemon):
    run("twistd --pid %s.pid %s" % (daemon, daemon))

def stop(daemon):
    run("kill %s" % getpid(daemon))

def getpid(daemon):
    with hide('stdout'):
        return run("cat %s.pid" % daemon)

def ps(daemon):
    """Get process info for the `daemon`."""
    run("ps --pid %s" % getpid(daemon))
There are a number of ways to configure host lists in Fabric, with scopes varying from global to per-task, and it's possible to mix and match them as needed.
To streamline process management on a particular host, you could write init.d scripts for the daemons (and run service daemon_name start/stop/restart) or use supervisord (and run supervisorctl, e.g. supervisorctl stop all). To control "what is installed where" and to push configuration in a centralized manner, something like Puppet could be used.
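For example, a supervisord program section for one of the daemons might look roughly like this (paths and names are placeholders; note that supervisord wants the process to stay in the foreground, hence --nodaemon):

; /etc/supervisor/conf.d/foo_worker.conf
[program:foo_worker]
command=twistd --nodaemon foo_worker
directory=/srv/app
autostart=true
autorestart=true
stdout_logfile=/var/log/foo_worker.log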
The usual tool is a batch queue system, such as SLURM, SGE, Torque/Moab, LSF, and so on.
Circus:
Documentation:
http://docs.circus.io/en/0.5/index.html
Code:
http://pypi.python.org/pypi/circus/0.5
Summary from the documentation:
Circus is a process & socket manager. It can be used to monitor and control processes and sockets.
Circus can be driven via a command-line interface or programmatically through its Python API.
It shares some of the goals of Supervisord, BluePill and Daemontools. If you are curious about what Circus brings compared to other projects, read "Why should I use Circus instead of X?".
Circus is designed using ZeroMQ (http://www.zeromq.org/). See the Design documentation for more details.
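A minimal circus.ini for this kind of setup could look something like the following (watcher names and commands are placeholders):

[circus]
check_delay = 5

[watcher:foo_worker]
cmd = twistd
args = --nodaemon foo_worker
numprocesses = 1

[watcher:bar_worker]
cmd = twistd
args = --nodaemon bar_worker
numprocesses = 2

circusd circus.ini starts everything, and circusctl lets you stop, start or inspect individual watchers.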

Execute arbitrary python code remotely - can it be done?

I'm working on a grid system which has a number of very powerful computers. These can be used to execute Python functions very quickly. My users have a number of Python functions which take a long time to calculate on workstations; ideally, they would like to be able to call some functions on a remote, powerful server but have it appear to be running locally.
Python has an old function called "apply" - it's mostly useless these days now that Python supports the extended call syntax (e.g. **arguments). However, I need to implement something that works a bit like this:
rapply = Rapply( server_hostname ) # Set up a connection
result = rapply( fn, args, kwargs ) # Remotely call the function
assert result == fn( *args, **kwargs ) #Just as a test, verify that it has the expected value.
Rapply should be a class which can be used to remotely execute some arbitrary code (fn could be literally anything) on a remote server. It will send back the result, which the rapply call will return. The result should have the same value as if I had called the function locally.
Now let's suppose that fn is a user-provided function; I need some way of sending it over the wire to the execution server. If I could guarantee that fn was always something simple, it could just be a string containing Python source code... but what if it were not so simple?
What if fn has local dependencies? It could be a simple function which uses a class defined in a different module. Is there a way of encapsulating fn and everything fn requires that is not in the standard library? An ideal solution would not require the users of this system to have much knowledge of Python development; they simply want to write their function and call it.
Just to clarify, I'm not interested in discussing what kind of network protocol might be used to implement the communication between the client and the server. My problem is how to encapsulate a function and its dependencies as a single object which can be serialized and remotely executed.
I'm also not interested in the security implications of running arbitrary code on remote servers - let's just say that this system is intended purely for research and that it sits within a heavily firewalled environment.
Take a look at Pyro (Python Remote Objects). It has the ability to set up services on all the computers in your cluster and invoke them directly, or indirectly through a name server and a publish-subscribe mechanism.
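For illustration, a bare-bones Pyro4 setup might look like this; the object and service names are made up, and a name server started with pyro4-ns is assumed to be reachable on the LAN:

# server.py - runs on a powerful grid node
import Pyro4

@Pyro4.expose
class Worker(object):
    def heavy_task(self, x):
        return x * x    # stand-in for an expensive computation

daemon = Pyro4.Daemon(host="grid-node-1")   # bind to a LAN-reachable hostname
uri = daemon.register(Worker)
Pyro4.locateNS().register("example.worker", uri)
daemon.requestLoop()

# client.py - runs on the workstation
# import Pyro4
# worker = Pyro4.Proxy("PYRONAME:example.worker")
# print(worker.heavy_task(12))

Note that Pyro invokes pre-registered services; it does not by itself ship arbitrary user functions and their dependencies to the server, which is the harder part of the question.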
It sounds like you want to do the following.
Define a shared filesystem space.
Put ALL of your Python source in this shared filesystem space.
Define simple agents or servers that will "execfile" a block of code.
Your client then contacts an agent (a REST protocol with POST methods works well for this) with the block of code.
The agent saves the block of code and does an execfile on that block of code.
Since all agents share a common filesystem, they all have the same Python library structure.
We do this with a simple WSGI application we call a "batch server". We have a RESTful protocol for creating and checking on remote requests.
Stackless had the ability to pickle and unpickle running code. Unfortunately, the current implementation doesn't support this feature.
You could use a ready-made clustering solution like Parallel Python. You can relatively easily set up multiple remote slaves and run arbitrary code on them.
You could use an SSH connection to the remote PC and run the commands on the other machine directly. You could even copy the Python code to that machine and execute it there.
Syntax:
cat ./test.py | sshpass -p 'password' ssh user@remote-ip "python - script-arguments-if-any for test.py script"
1) Here "test.py" is the local Python script.
2) sshpass is used to pass the SSH password to the ssh connection.
