Can I allow my server process to restart without killing existing connections? - python

In an attempt to make my terminal-based program survive longer, I was told to look into forking the process off of the system. I can't find much on spawning a new process off of a specific PID.
Is this possible in Linux? I am mainly a Windows guy.
My program will be dealing with sockets, and if my application crashed I would lose lots of information. I was under the impression that if it were forked from the system, the sockets would stay alive?
EDIT: Here is what I am trying to do. I have multiple computers that I want to communicate with. So I am building a program that lets me listen on a socket (simple). Then I will connect to it from each of my remote computers (simple).
Once I have a connection, I want to open a new terminal and use my program to start interacting with the remote computer (simple).
The question comes from this portion: the client shell will send all traffic to the main shell, which will then send it out to the remote computer. When a response is received, it goes to the main shell, which forwards it to the client shell.
The issue is keeping each client shell in the loop. I want all client shells to know who is connected to whom on each client shell. So client shell 1 should tell me if I have a client shell 2, 3, 4, 5, etc., and who is connected to it. This turned into sharing resources between different processes, so I was thinking about using local sockets to send data between all these client shells. But then I ran into a problem: if the main shell were to die, everything is lost. So I wanted a way to try to secure it.
If that makes sense.

So, you want to be able to reload a program without losing your open socket connections?
The first thing to understand is that when a process exits, all open file descriptors are closed. This includes socket connections. Running as a daemon does not change that. A process becomes a daemon by becoming independent of your terminal session, so that it will continue to run when your terminal session ends. But, like any other process, when a daemon terminates for any reason (normal exit, crash, kill, machine restart, etc.), all connections to it cease to exist. BTW, this is not specific to unix; Windows is the same.
So, the short answer to your question is NO, there's no way to tell unix/linux to not close your sockets when your process stops, it will close them and that's that.
The long answer is, there are a few ways to re-engineer things to get around this:
1) You can have your program exec() itself when you send it a special message or signal (e.g. SIGHUP). In unix, exec() (or one of its several variants) does not end or start any process; it simply loads code into the current process and starts executing it. The new code takes the place of the old within the same process. Since the process remains the same, any open files remain open. However, you will lose any data that you had in memory, so the sockets will be open, but your program will know nothing about them. On startup you'd have to use various system calls to discover which descriptors are open in your process and whether any of them are socket connections to clients. One way around this is to pass critical information as command line arguments or environment variables, which survive the exec() call and are thus available to the new code when it starts executing.
Keep in mind that this only works when the process calls exec ITSELF while it is still running, so you cannot recover from a crash or any other cause of your process ending: your connections will be gone. But this method does solve the problem of loading new code without losing your connections.
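A minimal sketch of the exec-yourself approach (the handler body is illustrative; note that since Python 3.4, descriptors are non-inheritable by default, so any socket you want to survive the exec must be marked inheritable first):
import os
import signal
import sys

def restart(signum, frame):
    # Replace this process image with a fresh copy of the same script.
    # Open descriptors survive the exec, but in-memory state does not,
    # so pass anything critical via argv or the environment.
    # NOTE: since Python 3.4 (PEP 446) descriptors are non-inheritable
    # by default; call sock.set_inheritable(True) on sockets you keep.
    os.execv(sys.executable, [sys.executable] + sys.argv)

signal.signal(signal.SIGHUP, restart)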
2) You can bypass the issue by dividing your server (master) into two processes. The first (call it the "proxy") accepts the TCP connections from the clients and keeps them open. The proxy can never exit, so it should be kept so simple that you'll rarely want to change that code. The second process runs the "worker", which is the code that implements your application logic. All the code you might want to change often should go in the worker. Now all you need to do is establish interprocess communication from the proxy to the worker, and make sure that if the worker exits, there's enough information in the proxy to re-establish your application state when the worker starts up again. In a really simple, low-volume application, the mechanism can be as simple as the proxy doing a fork() + exec() of the worker each time it needs to do something. A fancier way to do this, which I have used with good results, is a unix domain datagram (SOCK_DGRAM) socket. The proxy receives messages from the clients, forwards them to the worker through the datagram socket, the worker does the work, and responds with the result back to the proxy, which in turn forwards it back to the client. This works well because as long as the proxy is running and has opened the unix domain socket, the worker can restart at will. Shared memory can also work as a way to communicate between proxy and worker.
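A rough sketch of the datagram-socket variant (the socket paths and the "work" are placeholders; real code should also unlink stale socket paths before binding):
import socket

WORKER_ADDR = "/tmp/worker.sock"   # hypothetical paths
PROXY_ADDR = "/tmp/proxy.sock"

def worker_loop():
    # The restartable worker: bind a well-known address and serve forever.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    sock.bind(WORKER_ADDR)
    while True:
        request, sender = sock.recvfrom(4096)
        sock.sendto(request.upper(), sender)   # stand-in for real work

def proxy_forward(payload):
    # The never-exiting proxy: forward one client message and await the
    # result. It binds its own address so the worker has somewhere to reply.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    sock.bind(PROXY_ADDR)
    sock.sendto(payload, WORKER_ADDR)
    reply, _ = sock.recvfrom(4096)
    return reply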
3) You can use a unix domain socket with the sendmsg() and recvmsg() functions and the SCM_RIGHTS ancillary message type to pass not the client data itself, but the open socket file descriptors from the old instance to the new. This is the only way to pass open file descriptors between unrelated processes. Using this mechanism, there are all sorts of strategies you can implement.. for example, you could start a new instance of your master program, and have it connect (via a unix domain socket) to the old instance and transfer all the sockets over. Then your old instance can exit. Or, you can use the proxy/worker model, but instead of passing messages through the proxy, you can just have the proxy hand the socket descriptor to the worker via the unix domain socket between them, and then the worker can talk directly to the client using that descriptor. Or, you could have your master send all its socket file descriptors to another "stash" process that holds on to them in case the master needs to restart. There are all sorts of architectures possible. Keep in mind that the operating system just provides the ability to ship the descriptors around; all the other logic you have to code for yourself.
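On the Python side, socket.send_fds() and socket.recv_fds() (available since Python 3.9) wrap the sendmsg()/recvmsg() SCM_RIGHTS machinery. A sketch of a hand-over, assuming the unix-socket connection between the two instances already exists:
import socket

def hand_over(unix_sock, client_sock):
    # Old instance: ship an open client descriptor to the new instance.
    socket.send_fds(unix_sock, [b"fd"], [client_sock.fileno()])

def take_over(unix_sock):
    # New instance: receive the descriptor and wrap it in a socket object.
    msg, fds, flags, addr = socket.recv_fds(unix_sock, 1024, 1)
    return socket.socket(fileno=fds[0])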
4) You can accept that no matter how careful you are, inevitably connections will be lost. Networks are unreliable, programs crash sometimes, machines are restarted. So rather than going to significant effort to make sure your connections don't close, you can instead engineer your system to recover when they inevitably do.
The simplest approach to this would be: Since your clients know who they wish to connect to, you could have your client processes run a loop where, if the connection to the master is lost for any reason, they periodically try to reconnect (let's say every 10-30 seconds), until they succeed. So all the master has to do is to open up the rendezvous (listening) socket and wait, and the connections will be re-established from every client that is still out there running. The client then has to re-send any information it has which is necessary to re-establish proper state in the master.
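A client-side reconnect loop can be as small as this (register() and serve() are hypothetical stand-ins for your own re-registration and message-handling code):
import socket
import time

def connect_forever(host, port, retry_seconds=15):
    # Keep (re)connecting to the master until it comes back.
    while True:
        try:
            sock = socket.create_connection((host, port))
            register(sock)   # hypothetical: re-send state the master needs
            serve(sock)      # hypothetical: normal message loop
        except OSError:
            pass             # connect failed or connection dropped
        time.sleep(retry_seconds)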
The list of connected computers can be kept in the memory of the master, there is no reason to write it to disk or anywhere else, since when the master exits (for any reason), those connections don't exist anymore. Any client can then connect to your server (master) process and ask it for a list of clients that are connected.
Personally, I would take this last approach. Since it seems that in your system, the connections themselves are much more valuable than the state of the master, being able to recover them in the event of a loss would be the first priority.
In any case, since it seems that the role of the master is to simply pass data back and forth among clients, this would be a good application of "asynchronous" socket I/O using the select() or poll() functions, which let you handle multiple sockets in one process without blocking. Here's a good example of a poll() based server that accepts multiple connections:
https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzab6/poll.htm
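In Python, the selectors module gives you the same pattern without the C plumbing. A minimal sketch (the port and the echo logic are placeholders; a real master would forward the data instead):
import selectors
import socket

sel = selectors.DefaultSelector()
listener = socket.socket()
listener.bind(("", 12345))    # port is an assumption
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

while True:
    for key, events in sel.select():
        if key.fileobj is listener:
            conn, _ = listener.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = key.fileobj.recv(4096)
            if data:
                key.fileobj.sendall(data)   # echo; a master would forward it
            else:
                sel.unregister(key.fileobj)
                key.fileobj.close()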
As far as running your process "off System".. in Unix/Linux this is referred to as running as a daemon. In *ix, these processes are children of process id 1, the init process, which is the first process that starts when the system boots. You can't tell your process to become a child of init; this happens automatically when the existing parent exits. All "orphaned" processes are adopted by init. Since there are many easily found examples of writing a unix daemon (at this point the code you need to write to do this has become pretty standardized), I won't paste any code here, but here's one good example I found: http://web.archive.org/web/20060603181849/http://www.linuxprofilm.com/articles/linux-daemon-howto.html#ss4.1
If your linux distribution uses systemd (a recent replacement for init in some distributions), then you can do it as a systemd service, which is systemd's idea of a daemon, except that systemd does some of the work for you (for better or for worse; there are a lot of complaints about systemd, and wars have nearly been fought over it).

Forking from your own program is one approach; however, a much simpler and easier one is to create a service. A service is a little wrapper around your program that deals with keeping it running, restarting it if it fails, and providing ways to start and stop it.
This link shows you how to write a service. Although it's specifically about a web server application, the same logic can be applied to anything.
https://medium.com/@benmorel/creating-a-linux-service-with-systemd-611b5c8b91d6
Then to start the program you would write:
sudo systemctl start my_service_name
To stop it:
sudo systemctl stop my_service_name
To view its outputs:
sudo journalctl -u my_service_name
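For reference, the unit file the linked article walks you through is roughly this shape (every path and name below is a placeholder):
# /etc/systemd/system/my_service_name.service  (illustrative)
[Unit]
Description=My long-running program
After=network.target

[Service]
ExecStart=/usr/bin/python3 /opt/myapp/server.py
Restart=on-failure

[Install]
WantedBy=multi-user.target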

Related

How to notify a daemon given a pid

So I have been getting my feet wet with python, attempting to build a reminder system that ties into the gnome notification ui. The basic idea is you type a command into your shell like remind me to check on dinner in 20 min and then in 20 min you get a desktop notification saying "check on dinner". The way I am doing this is by having a script parse the message and write the time the notification should be sent and the message that should be sent to a log file.
The notifications are getting triggered by a python daemon. I am using this daemon design I found online. The issue I am seeing is that when this daemon is running it is taking 100% of my cpu! I stripped down all the code the daemon was doing, and I still have this problem when all the daemon is doing is:
while True:
    last_modified = os.path.getmtime(self.logfile)
I presume that this is a bad approach and that I should instead notify the daemon when there is a new reminder, so that most of the time the reminder daemon is sleeping. Now this is just an idea, but I am having a hard time finding resources on 'how to notify a process' when all I know is the daemon's pid. So if I suspend the daemon with something like time.sleep(time_to_next_notification), would there be a way for me to send a signal to the daemon letting it know that there is a new reminder?
Though I believe you're better off using a server-client type solution that listens on a port, what you are asking is 100% possible using the signal and os libraries. This approach will not work well with multi-threaded programs, however, as signals are only handled by the main thread in Python. Additionally, Windows doesn't implement signals in the same way, so the options there are more limited.
Signals
The "client" process can send arbitrary signals using os.kill(pid, signal). You will have to go through the available signals and determine which one you want to use (signal.NSIG may be a good option because it shouldn't stomp on any other default behavior).
The "daemon" process on startup must register a handler for what to do when it receives your chosen signal. The handler is a function you must define that receives the signal itself that was received as well as the current stack frame of execuiton (def handler(signum, frame):). If you're only doing one thing with this handler, and it doesn't need to know what was happening when it was called, you can probably ignore both these parameters. Then you must register the handler with signal.signal ex: signal.signal(signal.NSIG, handler).
From there you will want to find some appropriate way to wait until the next signal without consuming too many resources. This could be as simple as looping on a time.sleep() call, or you could try to get fancy. I'm not 100% sure how execution resumes on returning from a signal handler, so you may need to concern yourself with recursion depth (i.e., make sure you don't recurse every time a signal is handled, or you'll only ever be able to handle a limited number of signals before needing to restart).
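A minimal sketch of both halves (SIGUSR1 and the empty handler body are illustrative choices):
import os
import signal

# daemon side: register a handler, then sleep until a signal arrives
def on_reminder(signum, frame):
    pass    # re-read the reminder log here

signal.signal(signal.SIGUSR1, on_reminder)
while True:
    signal.pause()   # blocks, using no CPU, until a signal is delivered

# client side, in a separate process, pokes the daemon by pid:
#   os.kill(daemon_pid, signal.SIGUSR1)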
Server
Having a process listen on a port (generally referred to as a server, but functionally the same as your 'daemon' description) instead of listening for operating system signals has several main advantages.
Ports are able to send data, whereas signals are only able to trigger events
Ports behave more consistently across platforms
Ports play nice[r] with multi-threading
Ports make it easy to send messages across a network (e.g. create a reminder from your phone and have it execute on your PC)
Waiting for multiple things at once
In order to address the need to wait for multiple processes at once (listening for input as well as waiting to deliver next notification) you have quite a few options:
Signals may actually be a good fit here, as signal.SIGALRM can be used as a conveniently re-settable alarm clock (if you're using UNIX). You would set up the handler in the same way as before and simply set an alarm for the next notification. After setting the alarm, you can resume listening on the port for new tasks. If a new task comes in, setting the alarm again overrides the existing one, so the handler needs to retrieve the next queued notification and re-set the alarm once done with the first task (see the sketch after this list).
Threads could either be used to poll a queue of notification tasks, or an individual thread could be created to wait for each task. This is not a particularly elegant solution, however it would be effective and easy to implement.
The most elegant solution would likely be to use asyncio coroutines; however, I am not as well versed in asyncio, and will admit they're a bit more confusing than threads.
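A sketch of the SIGALRM option from the first bullet (seconds_until_next() is a hypothetical helper that consults your task queue):
import signal

def deliver(signum, frame):
    # pop the due reminder, show the notification, then re-arm the
    # alarm for whatever is queued next
    signal.alarm(seconds_until_next())   # hypothetical helper

signal.signal(signal.SIGALRM, deliver)
signal.alarm(30)   # example: first notification due in 30 seconds
# ...resume listening on the port; a newly arrived task simply calls
# signal.alarm() again, replacing the pending alarm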

Reload Functions Without Restarting Server [duplicate]

I've developed a set of audio streaming servers, all of them using Twisted, and all in Python, of course. They work, but one problem keeps troubling me: when I find bugs in a running server, or I want to add something to it, I need to stop it and start it again. With HTTP servers it's okay to restart them whenever, but not with audio streaming servers. Once I restart my streaming server, my users suffer a disconnection.
I did try to set up a manhole (an SSH service for Twisted servers; you can log in and type Python code into the console to do things), connect to the console, and reload Python modules on the fly. It works sometimes, but it's hard to control. You never know how many instances of the old class are still in the server; some of them might be hard to reach, and the relationships between classes can be very complex. Also, it may work in some situations, but sometimes you really need to restart the server. For example, if you are running the server with the selector reactor and you want to run it with the epoll reactor instead, you have to restart it. Another example: when the memory usage goes too high, you have to restart, too.
To build such a system, an idea came to mind: is it possible to hand over those connections and data from one process to another? For example:
We have a server named Broadcasting; the running instance is at rev.123, and we want to replace it with rev.124.
Broadcasting rev.123 is running....
Startup Broadcasting rev.124 ....
Broadcasting rev.124 is stand by
Hand over connections from instance of rev.123 to instance of rev.124
Stop Broadcasting rev. 123 instance
Is this possible? I don't know whether the lifetime of a socket handle is bound to the process; I thought sockets created by a process are closed when the creating process is killed, but I'm not sure. If it is possible, are there any guidelines or articles on designing this kind of hot code swapping mechanism? And is there something that already achieves what I want for Twisted?
Thanks.
I gave a talk about this at PyCon 2004. There's also some effort to add more functionality to help with this to Twisted itself.

Best practice: Monitor processes

I was wondering what the best-practice solution would be to constantly monitor and restart processes, because there are multiple ways of doing it.
Additional info:
I have a unix program which uses multiple processes to work. There's a main process; it always starts first and is not likely to die or terminate except when the program is stopped.
Then I spawn multiple "module" processes, which take care of some work and communicate through the main process. Those modules sometimes die because of exceptions, and because it's an external program I can't resolve the issues, so I have to restart them if they die.
I've made a program to check if any of the modules died and restart them, but I need to run it manually. My program checks whether the pid files of the modules exist and whether they listen on a specific TCP port. If the pid file doesn't exist or the socket can't establish a connection, it restarts the module.
My thoughts so far:
Cron job to run the checks every minute and restart any dead modules. (kind of overkill, because they don't die that frequently)
Daemon running in the background, which starts the modules and receives notifications if they die, so it doesn't have to check them constantly. (SIGCHLD signal, os.wait)
If I use the daemon method, how should I communicate with the daemon through my interface? (socket, or maybe a file which gets read if the daemon receives a specific signal)
Usually I would just go with the daemon because it seems to be the best-practice method to restart the modules ASAP (cron only runs once a minute), but I wanted to get some opinions from more experienced users. (I've never done something like this before, and asking doesn't hurt anyone :D)
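For what it's worth, the SIGCHLD variant described above can be sketched like this (restart_module() is a stand-in for your own restart logic):
import os
import signal

def reap_and_restart(signum, frame):
    # Collect every module that has died and start it again.
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            return           # no children at all
        if pid == 0:
            return           # children exist, but none have exited
        restart_module(pid)  # hypothetical: map pid back to its module

signal.signal(signal.SIGCHLD, reap_and_restart)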
I apologize if these questions are answered somewhere else, but I couldn't find any related question.
P.S. If I forgot something or you need more infos, please feel free to ask. :)
I would investigate running the monitoring process as part of a dedicated monitoring framework. Monit is one example, however there are of course others.
This has the advantage of providing additional features which might be useful, such as email alerts and analytics. In my experience, you should be able to use your existing program without too much modification, and Monit itself uses few system resources if that is a concern.

SSL error after python/django fork

I've got a python django app where part of it is parsing a large file. This takes forever, so I put a fork in to deal with the processing, allowing the user to continue to browse the site. Within the fork code, there's a bunch of calls to our postgres database, hosted on amazon.
I'm getting the following error:
SSL error: decryption failed or bad record mac
Here's the code:
pid = os.fork()
if pid == 0:
    lengthy_code_here(long)
    database_queries(my_database)
    os._exit(0)
None of my database calls are working, although they were working just fine before I inserted the fork. After looking around a little, it seems like it might be a stale database connection, but I'm not sure how to fix it. Does anyone have any ideas?
Forking while holding a socket open (such as a database connection) is generally not safe, as both processes will end up trying to use the same socket at once.
You will need, at a minimum, to close and reopen the database connection after forking.
Ideally, though, this is probably better suited for a task queueing system like Celery.
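Concretely, in a reasonably recent Django you can drop the stale connections before forking so each process opens its own afterwards. A sketch using the names from the question:
import os
from django.db import connections

connections.close_all()   # neither process inherits a shared DB socket

pid = os.fork()
if pid == 0:
    lengthy_code_here(long)          # names from the question
    database_queries(my_database)    # Django reconnects lazily here
    os._exit(0)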
Django in production typically has a process dispatching to a bunch of processes that house django/python. These processes are long running, i.e. they do NOT terminate after handling one request. Rather they handle a request, and then another, and then another, etc. What this means is that changes which are not restored/cleaned up at the end of servicing a request will affect future requests.
When you fork a process, the child inherits various things from the parent, including all open descriptors (file, queue, directory). Even if you do nothing with the descriptors, there is still a problem, because when a process dies all its open descriptors will be cleaned up.
So when you fork from a long running process you are setting yourself up to close all the open descriptors (such as the ssl connection) when the child process dies after it finishes processing. There are ways to prevent this from happening in a fork, but they can sometimes be difficult to get right.
A better design is to not fork, and instead hand off to another process that is either running, or started in a safer manner. For example:
at(1) can be used to queue up jobs for later (or immediate) execution
message queues can be used to pass messages to other daemons
standard IPC constructs such as pipes can be used to communicate to other daemons
update:
If you want to use at(1) you will have to create a standalone script. You can use a serializer to pass the data from django to the script.
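A sketch of that hand-off, assuming JSON is an acceptable serializer (the standalone script path is hypothetical; at(1) reads the commands to run from stdin):
import json
import subprocess
import tempfile

def queue_job(data):
    # Serialize the work for the standalone script, then queue it with at(1).
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(data, f)
    cmd = "python3 /opt/myapp/process_upload.py %s\n" % f.name  # hypothetical
    subprocess.run(["at", "now"], input=cmd.encode(), check=True)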

What are the ways to run a server side script forever?

I need to run a server side script like Python "forever" (or as long as possible without losing state), so it can keep sockets open and asynchronously react to events like data being received. For example, I might use Twisted for socket communication.
How would I manage something like this?
Am I confused? Or are there better ways to implement asynchronous socket communication?
After starting the script once via Apache server, how do I stop it running?
If you are using twisted then it has a whole infrastructure for starting and stopping daemons.
http://twistedmatrix.com/projects/core/documentation/howto/application.html
How would I manage something like this?
Twisted works well for this, read the link above
Am I confused? or are there are better ways to implement asynchronous socket communication?
Twisted is very good at asynchronous socket communications. It is hard on the brain until you get the hang of it though!
After starting the script once via Apache server, how do I stop it running?
The twisted tools assume command line access, so you'd have to write a cgi wrapper for starting / stopping them if I understand what you want to do.
You can just write a script that sits in a while loop waiting for connections to happen and waits for a signal to close it.
http://docs.python.org/library/signal.html
Then to stop it you just need to run another script that sends that signal to it.
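A minimal sketch of that pattern (the port and handle() are placeholders; the companion stop script just does os.kill(pid, signal.SIGTERM)):
import signal
import socket

running = True

def stop(signum, frame):
    # flipped by the stop script's SIGTERM
    global running
    running = False

signal.signal(signal.SIGTERM, stop)

server = socket.socket()
server.bind(("", 9000))   # port is an assumption
server.listen()
server.settimeout(1.0)    # wake periodically so the flag gets noticed

while running:
    try:
        conn, _ = server.accept()
    except socket.timeout:
        continue
    handle(conn)           # hypothetical request handler
    conn.close()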
You can use a ‘double fork’ to run your code in a new background process unbound to the old one. See eg this recipe with more explanatory comments than you could possibly want.
I wouldn't recommend this as a primary way of running background tasks for a web site. If your Python is embedded in an Apache process, for example, you'll be forking more than you want. Better to invoke the daemon separately (just under a similar low-privilege user).
After starting the script once via Apache server, how do I stop it running?
You have your second fork write the process number (pid) of the daemon process to a file, and then read the pid from that file and send it a terminate signal (os.kill(pid, signal.SIGTERM)).
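A stripped-down sketch of the double fork plus the pid file used for stopping (paths are placeholders):
import os
import sys

def daemonize(pidfile):
    # Classic double fork: detach from the controlling terminal.
    if os.fork() > 0:
        sys.exit(0)     # first parent exits
    os.setsid()         # become session leader
    if os.fork() > 0:
        sys.exit(0)     # second parent exits; child is adopted by init
    with open(pidfile, "w") as f:
        f.write(str(os.getpid()))

# stopping it later, from another process:
#   import signal
#   os.kill(int(open("/var/run/mydaemon.pid").read()), signal.SIGTERM)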
Am I confused?
That's the question! I'm assuming you are trying to have a background process that responds on a different port from the web interface for some sort of unusual net service. If you're merely talking about responding to normal web requests, you shouldn't be doing this; you should rely on Apache to handle your sockets and service one request at a time.
I think Comet is what you're looking for. Make sure to take a look at Tornado too.
You may want to look at FastCGI; it sounds exactly like what you are looking for, though I'm not sure if it's under current development. It uses a CGI daemon and a special apache module to communicate with it. Since the daemon is long running, you don't have the fork/exec cost, but at the cost of managing your own resources (no automagic cleanup on every request).
One reason why this style of FastCGI isn't used much anymore is that there are ways to embed interpreters into the Apache binary and have them run in the server. I'm not familiar with mod_python, but I know mod_perl has configuration to allow long running processes. Be careful here, since a long running process in the server can cause resource leaks.
A general question is: what do you want to do? Why do you need this second process, yet somehow controlled by apache? Why can't you just build a daemon that talks to apache; why does it have to be controlled by apache?
