Save a script's variables inside code and restore them after reboot - python

On my VPS I have four Python scripts running, and it's been 60 days since I last rebooted. Now I have to reboot, but if I do, my Python variables and data will be lost, because I don't store them in a file; they live only in variables inside the scripts.
My OS is Ubuntu Server 16.04 LTS, and I run my Python scripts with the nohup command so that they keep running in the background.
Now I need a way to stop my scripts without losing their variables, and to start them again with the same variables and data after I reboot the VPS.
Is there any way I can do this?
In addition, I'm sorry for any writing mistakes in my question.

Python doesn't provide any way of doing this.
But you might be able to use CRIU, or a similar tool, to freeze and snapshot the interpreter process. Then, after restart, you can resume the snapshot into a new process that just picks up exactly where you left off.
It may not work.[1] But there's a good chance it will. This is essentially the same thing as a Live Migration in the CRIU docs, except that you're not migrating to a new computer/container/etc., just to the future of the same computer. So, start reading with that page, and follow the links from there.
You should probably test before you commit to it.
* Try it (obviously don't include the system restart; just kill -9 the process) on a Python script that doesn't do anything important (maybe one that increments a counter, prints it out, sleeps for a second, and repeats); there's a minimal sketch of such a script after this list.
* Maybe try it on a script that does similar kinds of stuff to what yours are doing.
* If it's safe to have two copies of one of your programs running at the same time (they're not going to stomp all over each other writing to the same file, or fight over the same socket, or whatever), start a second copy and test dump/kill/resume that.
* Try it on one of your real processes, still without restart.
* Try it on all four.
* Cross your fingers, sacrifice a chicken, and do it for real.
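For the first step in that list, a trivial guinea-pig script might look like this (a minimal sketch; the file name and counter are made up):

# counter.py - does nothing important; good for rehearsing the dump/restore cycle
import time

count = 0
while True:
    count += 1
    print(count)
    time.sleep(1)

Then, with CRIU installed, freeze and resume it with something along the lines of criu dump -t <pid> -D <images-dir> --shell-job and later criu restore -D <images-dir> --shell-job (check the CRIU docs for the exact flags your version wants). If the restored process keeps counting from where it stopped, the rehearsal worked.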
If that doesn't pan out, the only option I can think of is to go through your scripts, manually figure out everything that needs to be saved and how it can be reached from the top-level globals, and do that in the debugger.
Ideally, you'll write a script that will automate accessing and saving all that stuff—plus another one to feed it into a new instance at restart. Then you just pdb the live interpreters and start dumping everything.
This is guaranteed to be a whole lot of work, and not much fun. On the plus side, it is guaranteed to work if you do it right. On the third hand, it's pretty easy to not do it right.
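As a rough illustration, the save/restore helpers might look like this (a minimal sketch, assuming the state you care about lives in picklable module-level globals; all names here are hypothetical):

# state_io.py - hypothetical helpers for manually saving/restoring globals
import pickle

def dump_globals(module, names, path):
    # Pickle the chosen top-level variables from `module` into `path`.
    state = {name: getattr(module, name) for name in names}
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_globals(module, path):
    # Restore previously pickled variables into `module` at startup.
    with open(path, "rb") as f:
        state = pickle.load(f)
    for name, value in state.items():
        setattr(module, name, value)

You'd attach to each live script with pdb, import dump_globals, and call it with the list of names you worked out; after the reboot, each script calls load_globals before resuming its main loop. Anything unpicklable (open sockets, file handles) still has to be recreated by hand.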
[1] If you rely on open files, pipes, sockets, etc., CRIU does about as much as you could do, which is more than you might expect at first, but still not everything you could possibly want… Also, if you're using almost all of your RAM, it can be hard to wedge things back into exactly the same state. And there are probably other possible issues.

Related

running 2 python scripts without them affecting each other

I have 2 Python scripts I'm trying to run side by side. However, each of them has to open, close, and reopen independently of the other. Also, one of the scripts runs inside a shell script.
Flaskserver.py & ./pyinit.sh
Flaskserver.py is just a Flask server that needs to be restarted every now and again to load a new page (I can't define all pages, as the HTML is interchangeable). pyinit.sh runs as xinit ./pyinit.sh (it's selenium-webdriver Python code).
So when the Flask server changes and restarts, ./pyinit.sh needs to wait about 20 seconds and then restart as well.
Either one of these can produce errors, so I need to be able to check whether the Flask server has an error before restarting ./pyinit.sh; if ./pyinit.sh errors, I need to set the Flask server to a default value and then relaunch both of them.
I know a little about subprocess, but I'm unsure how it can deal with errors and stop/start code.
Rather than driving everything with bare subprocess calls, I would recommend creating a separate thread for each of your processes using the threading module.
Multithreading will not solve the problem if global variables are colliding, and while running them as separate scripts might solve that, they could still collide on something else, like a log file.
Now, if you keep both processes running under a single parent process that takes care of keeping them separated and assigning different global variables where necessary, you can keep much better control. Using things like join and Lock from the threading module will also ensure that they don't collide, and it makes it easy to put one process to sleep while the other is running (as with your 20-second wait).
You can keep a thread list as a global variable, as well as your lock. I have done this successfully with CherryPy's server, for example. For more details about multithreading, look into the question I linked above; it's very well explained.
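A minimal sketch of that supervisor pattern might look like the following (the two commands come from the question; the restart policy is deliberately simplified, and the 20-second delay is the one you mentioned; your real cross-restart rules would hang off the same wait() return codes):

# supervisor.py - one parent process, one watcher thread per child
import subprocess
import threading
import time

restart_lock = threading.Lock()

def keep_alive(cmd, delay=0):
    # Launch `cmd`; if it exits with an error, wait `delay` seconds and relaunch.
    while True:
        proc = subprocess.Popen(cmd)
        if proc.wait() == 0:
            break
        with restart_lock:        # serialize restarts so the children don't collide
            time.sleep(delay)

watchers = [
    threading.Thread(target=keep_alive, args=(["python", "Flaskserver.py"],)),
    threading.Thread(target=keep_alive, args=(["xinit", "./pyinit.sh"], 20)),
]
for t in watchers:
    t.start()
for t in watchers:
    t.join()

Each watcher spends almost all its time blocked in proc.wait(), so the threads are cheap; the lock only matters during restarts.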

Multiprocessing with Screen and Bash

Running a python script on different nodes at school using SSH. Each node has 8 cores. I use GNU Screen to be able to detach from a single process.
Is it more desirable to:
1. run several different sessions of screen, or
2. run a single screen session and use & in a bash terminal?
Are they equivalent?
I am not sure whether my experiments are poorly coded and taking an inordinate amount of time (very possible), or whether my choice of option 1 is slowing the process down considerably. Thank you!
With bash I imagine you're doing something like this (assuming /home is under network mount):
#!/bin/bash
# launch the script on every node, in the background
for i in $(seq 1 "$NUM_NODES")   # note: {1..$NUM_NODES} would not expand the variable
do
    ssh "node$i" 'python /home/ryan/my_script.py' &
done
Launching this script from behind a single screen will work fine. Starting up several sessions of screen provides no performance gains but adds in the extra complication of starting multiple screens.
Keep in mind that there are much better ways to distribute load across a cluster (e.g. if someone else is using up all of node7 you'd want a way to detect that and send your job elsewhere). Most clusters I've worked with have Torque, Maui or the qsub command installed. I suggest giving those a look.
I would think they are about the same. I would prefer screen, just because I have an easier time managing it. Depending on the scripts' resource usage, that could also have some effect on processing time.

How do I load up python in the shell from Vim, but not use it right away?

I'm doing TDD, but the system I'm working with takes 6 seconds to get through boilerplate code. This code is not part of my work, nor of my tests (it's Autodesk Maya's headless/batch/CLI Python mode). I've talked to support, and there's no way around the load time, so I thought maybe I could load and initialize Python first in the background, as I code, and then my mapping would simply run the nosetests inside of that when I'm ready. My tests take something like 0.01 seconds, so this should feel instantaneous, which would really help the red/green/refactor cycle.
In short, instead of firing off /path/to/mayapy /path/to/runtests.py /current/buffer/path, Vim would just fire up /path/to/mayapy with the boilerplate stuff from runtests.py, then somehow hold onto that running instance. When I hit my mapping, it would send into that running instance the call to nosetest with the current buffer's path (and then fire up another instance to hold onto while waiting for the next run). How do I hold onto the running instance and call into it later? I'm even considering having a chain of 2 or 3, for the times when I make minor mistakes and rerun 2 seconds later.
Vim-ipython, the excellent work of Paul Ivanov, is an interface between vim and ipython sessions (demo video). This may relieve you of some of the boilerplate of sending buffers to python and waiting on results.
I'm not entirely sure this is exactly what you want, but with a bit of Python and Vim glue code it may be a good step in the right direction; I'm guessing you'd need to do a bit of experimentation to get a workflow you're happy with.
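Another direction, if vim-ipython doesn't fit: keep a warm mayapy process around yourself. The following is a hypothetical sketch (the FIFO path and the test-module naming convention are assumptions, and it presumes nose is importable from mayapy's site-packages); it pays the 6-second Maya startup once, then runs tests on demand:

# warm_runner.py - run with /path/to/mayapy; holds a pre-initialized interpreter
import os
import sys
import nose                      # assumes nose is on mayapy's path
import maya.standalone

maya.standalone.initialize()     # the slow boilerplate, paid once

FIFO = "/tmp/mayapy_tests"       # illustrative rendezvous point
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

while True:
    with open(FIFO) as f:        # blocks until something writes a path
        test_path = f.read().strip()
    if not test_path:
        continue
    # drop cached test modules so edits are picked up on the next run
    for name in [n for n in sys.modules if n.startswith("test_")]:
        del sys.modules[name]
    nose.run(argv=["nosetests", test_path])

From Vim, the mapping then just writes the current buffer's path into the FIFO (e.g. :!echo % > /tmp/mayapy_tests) instead of launching mayapy itself.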

Simplest way to have Python output, from a compiled package?

Prior info: I'm on a Mac.
Q: How can I get terminal-like text output from the program execution, if I compile it with py2app for redistribution?
My case is a program that copies a lot of big files and takes a while to process, so I would like to at least have an output notification every time a file is copied.
This is easy if I run it on the command line; I can just print a new line.
But when I make a self-sufficient package, it simply opens in the Dock, with no window, and closes upon completion.
A simple text window would be fine.
Thanks in advance.
If you want to create a simple text window, you need to pick a GUI framework to do that with. For something this simple, there's no reason not to use Tkinter (which comes with any Python) or PyObjC (which is pre-installed with Apple's Python 2.7), unless you happen to be more familiar with wx, gobject, Qt, etc.
At any rate, however you do it, you'll need to write a function that takes a message and appends it to the text window (maybe creating it lazily, if necessary), and call that function wherever you would normally print. You may also want to write and install a logging handler that does the same thing, so you can just log.info stuff. (You could instead create a file-like object that does this and redirect stdout and/or stderr, but unless you have no control over the printing code, that's going to be a lot more work.)
The only real problem here is that a GUI needs an event loop, and you probably just wrote your code as a sequential script.
One way around that is to turn your whole current script into a background thread. If you're using a GUI library that allows you to access the widgets from background threads, everything is easy; your printfunc just does textwidget.append(msg). If not, it may at least have a call_on_main_thread type function, so your printfunc does call_on_main_thread(textwidget.append, msg). If worst comes to worst (and I believe with Tkinter, it does), you have to create an explicit queue to push messages through, and write a queue handler in the event loop. This recipe should give you an idea. Replace the body of workerThread with your code, and end it with self.endApplication(). (There are probably better examples out there; this was just what I found first in a quick search.)
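Here's a minimal sketch of that thread-plus-queue pattern with Tkinter (Python 3 module names; the widget layout, polling interval, and worker body are illustrative):

# gui_log.py - background worker pushes messages; the Tk main loop drains them
import queue
import threading
import tkinter as tk

msg_queue = queue.Queue()

def printfunc(msg):
    # call this from the worker thread wherever you would normally print
    msg_queue.put(msg)

def worker():
    # stand-in for your real sequential copy script
    for i in range(10):
        printfunc("copied file %d" % i)

def poll_queue():
    # runs on the Tk main thread: drain pending messages into the text widget
    while True:
        try:
            msg = msg_queue.get_nowait()
        except queue.Empty:
            break
        text.insert(tk.END, msg + "\n")
    root.after(100, poll_queue)   # poll again in 100 ms

root = tk.Tk()
text = tk.Text(root)
text.pack()
threading.Thread(target=worker, daemon=True).start()
poll_queue()
root.mainloop()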
The other way around that is to have your code cooperatively operate with the event loop. Some libraries, like wx, have functions like SafeYield that make things work if you just call it after every chunk of processing. Others don't have that, but have a way to explicitly drive the event loop from your code. Others have neither—but every event loop framework has to have a way to schedule new events, so you can break your code up into a sequence of functions that each finish quickly and then do something like root.after_idle(nextfunc).
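And a sketch of the cooperative version, breaking the work into event-sized pieces (the file list is a stand-in for your real copy loop):

# chunked.py - no threads: each step reschedules the next on the event loop
import tkinter as tk

root = tk.Tk()
text = tk.Text(root)
text.pack()

files = ["a.dat", "b.dat", "c.dat"]    # stand-in for the real copy list

def copy_next(i=0):
    if i >= len(files):
        text.insert(tk.END, "done\n")
        return
    # ... copy files[i] here ...
    text.insert(tk.END, "copied %s\n" % files[i])
    # root.after(1, ...) also works, and gives the event loop more breathing room
    root.after_idle(copy_next, i + 1)

root.after_idle(copy_next)
root.mainloop()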
However… are you sure you need to do this?
First, any app, including one created by py2app, will send its stdout to the terminal if you run it with Foo.app/Contents/MacOS/Foo. And you can even set things up so that open Foo.app works that way, if you want. Obviously this doesn't help for people who just double-click the app in Finder (because then there is no terminal), but sometimes it's sufficient to just have the output available for people who need it and know how to follow instructions.
And you can take this farther: Create a Foo.command file that just does something like $(dirname $0)/Foo.app/Contents/MacOS/Foo, and when you double-click that file, it launches Terminal.app and runs your script.
Or you can get even simpler: Just use logging to syslog the output, and if you want to see when each file is done, just watch the log messages go by in Console.app.
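On a Mac that can be as little as the following (a minimal sketch; the logger name is illustrative, and /var/run/syslog is the local syslog socket on macOS):

# progress messages go to syslog; watch them scroll by in Console.app
import logging
import logging.handlers

log = logging.getLogger("filecopier")   # illustrative name
log.addHandler(logging.handlers.SysLogHandler(address="/var/run/syslog"))
log.setLevel(logging.INFO)

log.info("copied %s", "somefile.dat")   # instead of print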
Finally, do you even need py2app in the first place? If you don't have any external dependencies, just rename your script to Foo.command, and double-clicking it will run it in Terminal.app. If you do have external dependencies, you might still be able to get away with bundling it all together as a folder with a .command in it instead of as a .app.
Obviously none of these ideas are exactly a professional or newbie-friendly way to build an interface, so if that matters, you will have to create a GUI.

Pylons REPL: re-evaluate code in a running web server

I'm programming in python on a pre-existing pylons project (the okfn's ckan), but I'm a lisper by trade and used to that way of doing things.
Please correct me if I make false statements:
In pylons it seems that I should say
$ paster serve --reload
to get a web server that will notice changes.
At that point I can change a function, save the file and then go to my browser to test the change.
If I want to examine variables in a function in the process of making a webpage, then I put in raise "hello", and when I load the page I get a browser-based debugger in which I can examine the program.
This is all very nice and works swimmingly, and I get the impression that that's how people tend to write pylons code.
Unfortunately the reload takes several seconds, and it keeps breaking my train of thought.
What I'd like to do is run the web server from Emacs (although a Python REPL on the command line would be almost as good), so that I can change a function in the editor and then send the new code to the running process without having to restart it. (With a command-line REPL I guess I'd have to copy and paste the new definition, but that would also be workable, just slightly less convenient.)
Python seems very dynamic, and much like lisp in many ways, so I can't see in principle any reason why that wouldn't work.
So I guess the question is:
Is anyone familiar with the lisp way of doing things, and with Pylons, and can they tell me how to program the lisp way in pylons? Or is it impossible or a bad idea for some reason?
Edit:
I can run the webserver from my python interpreter inside emacs with:
from paste.script.serve import ServeCommand
ServeCommand("serve").run(["development.ini"])
And I can get the code to stop and show me what it's doing by inserting:
import pdb
pdb.set_trace()
so now all I need is a way to get the webserver to run on a different thread, so that control returns to the REPL and I can redefine functions and variables in the running process.
import threading

def start_server():
    from paste.script.serve import ServeCommand
    ServeCommand("serve").run(["development.ini"])

server_thread = threading.Thread(target=start_server)
server_thread.start()
This seems to work, except that if I redefine a function at the REPL the change doesn't get reflected in the webserver. Does anyone know why?
It seems that this way of working is impossible in python for the reason given by TokenMacGuy's comment, i.e. because redefining a class doesn't change the code in an instance of that class.
That seems a terrible shame, since in many other respects python seems very flexible, but it does explain why there's no python-swank!
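For what it's worth, the class case is easy to demonstrate (a minimal sketch; the class name is made up):

# rebinding a class name doesn't touch objects built from the old class
class Handler:
    def greet(self):
        return "old"

h = Handler()

class Handler:                 # rebinds the name Handler; h still points
    def greet(self):           # at the original class object
        return "new"

print(h.greet())               # -> old
print(Handler().greet())       # -> new

Tools that do make live redefinition work in Python, such as IPython's autoreload extension, have to go out of their way to patch the old objects in place rather than simply rebinding names.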
