I have two Python scripts I'm trying to run side by side. However, each of them has to open, close, and reopen independently of the other. Also, one of the scripts runs inside a shell script.
Flaskserver.py & ./pyinit.sh
Flaskserver.py is just a Flask server that needs to be restarted every now and again to load a new page (I can't define all the pages up front because the HTML is interchangeable). pyinit.sh is run as xinit ./pyinit.sh (it's selenium-webdriver Python code).
So when Flaskserver.py changes and restarts, ./pyinit.sh needs to wait about 20 seconds and then restart as well.
Either one of these can produce errors, so I need to be able to check whether Flaskserver.py has an error before restarting ./pyinit.sh. If ./pyinit.sh errors, I need to set Flaskserver.py back to a default value and then relaunch both of them.
I know a little about subprocess, but I'm unsure how it can deal with errors and with stop/start logic.
Rather than using subprocess alone, I would recommend running your processes on separate threads using Python's threading module.
Multithreading alone will not solve the problem if global variables collide; running the jobs as separate scripts might avoid that particular collision, but you might still collide on something else, such as a log file.
Now, if you keep both processes running from a single parent process that takes care of keeping them separated and assigning different global variables where necessary, you should have much better control. Using things like join() and Lock from the threading library will also ensure that they don't collide, and it makes it easy to put one process to sleep while the other is running (as with the 20-second wait).
You can keep a thread list as a global variable, as well as your lock. I have done this successfully with CherryPy's server, for example. For more details about multithreading, look into the question I linked above; it's very well explained.
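A minimal sketch of that supervisor idea, purely illustrative: the commands, the 20-second wait, and the restart policy are placeholders for your actual error handling.

import subprocess
import threading
import time

lock = threading.Lock()
threads = []  # global thread list, as mentioned above

def run_flask():
    # placeholder: launch the Flask server and block until it exits
    return subprocess.call(['python', 'Flaskserver.py'])

def run_pyinit():
    # placeholder: launch the selenium script under xinit
    return subprocess.call(['xinit', './pyinit.sh'])

def supervisor():
    while True:
        flask_thread = threading.Thread(target=run_flask)
        pyinit_thread = threading.Thread(target=run_pyinit)
        with lock:
            threads[:] = [flask_thread, pyinit_thread]
        flask_thread.start()
        time.sleep(20)          # let the server settle before the client starts
        pyinit_thread.start()
        flask_thread.join()     # server exited (restart or error) ...
        pyinit_thread.join()    # ... wait for the client, then relaunch both

supervisor()

Inspecting the return codes from subprocess.call is where you would hook in the "reset Flask to a default value" logic.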
On my VPS I have four Python scripts running, and it has been 60 days since I last rebooted. Now I have to reboot, but if I do, my Python variables and data will be lost, because I don't store them in a file; they exist only as variables inside the running scripts.
My OS is Ubuntu Server 16.04 LTS, and I have been running my Python code with the nohup command so it runs in the background.
Now I need a way to stop my scripts without losing their variables, and to start them again with the same variables and data after I reboot my VPS.
Is there any way I can do this?
Also, I'm sorry for any writing mistakes in my question.
Python doesn't provide any way of doing this.
But you might be able to use CRIU, or a similar tool, to freeze and snapshot the interpreter process. Then, after restart, you can resume the snapshot into a new process that just picks up exactly where you left off.
It may not work.[1] But there's a good chance it will. This is essentially the same thing as a Live Migration in the CRIU docs, except that you're not migrating to a new computer/container/etc., just to the future of the same computer. So, start reading with that page, and follow the links from there.
You should probably test before you commit to it.
* Try it (obviously don't include the system restart, just kill -9 the executable) on a Python script that doesn't do anything important (maybe one that increments a counter, prints it out, sleeps for a second, and repeats; see the sketch after this list).
* Maybe try it on a script that does similar kinds of stuff to what yours are doing.
* If it's safe to have two copies of one of your programs running at the same time (they're not going to stomp all over each other writing to the same file, or fight over the same socket, or whatever), start a second copy and test dump/kill/resume that.
* Try it on one of your real processes, still without restart.
* Try it on all four.
* Cross your fingers, sacrifice a chicken, and do it for real.
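For reference, a throwaway test script of the kind described in the first step might look like this; the criu invocations in the comments assume a stock CRIU install, so check them against your version's docs:

# counter.py - trivial long-running script for dump/restore testing
import time

count = 0
while True:
    count += 1
    print(count, flush=True)
    time.sleep(1)

# Then, in a shell (verify these flags against your CRIU version):
#   criu dump -t <pid-of-counter.py> -D images/ --shell-job   # snapshot and stop it
#   criu restore -D images/ --shell-job                       # resume from the snapshot
# If the counter picks up where it left off, the snapshot worked.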
If that doesn't pan out, the only option I can think of is to go through your scripts, manually figure out everything that needs to be saved and how it could be accessed from the top-level global, and do that in the debugger.
Ideally, you'll write a script that will automate accessing and saving all that stuff—plus another one to feed it into a new instance at restart. Then you just pdb the live interpreters and start dumping everything.
This is guaranteed to be a whole lot of work, and not much fun. On the plus side, it is guaranteed to work if you do it right. On the third hand, it's pretty easy to not do it right.
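A hypothetical skeleton of that save/restore pair, assuming the state you care about is picklable and lives in module-level globals; myscript and the variable names are placeholders:

# state_io.py - import this inside pdb in the live interpreter
import pickle

import myscript  # placeholder: the module whose globals hold your state

STATE_VARS = ['counter', 'cache', 'pending_jobs']  # placeholders

def dump_state(path='state.pkl'):
    # collect the named globals and write them out
    state = {name: getattr(myscript, name) for name in STATE_VARS}
    with open(path, 'wb') as f:
        pickle.dump(state, f)

def load_state(path='state.pkl'):
    # read them back into a fresh instance after the reboot
    with open(path, 'rb') as f:
        state = pickle.load(f)
    for name, value in state.items():
        setattr(myscript, name, value)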
[1] If you rely on open files, pipes, sockets, etc., CRIU does about as much as you could do, which is more than you might expect at first, but still not everything you could possibly want… Also, if you're using almost all of your RAM, it can be hard to wedge things back into exactly the same state. And there are probably other possible issues.
I have an event-based application where I select() on various things, and depending on what gets selected, the application handles it. On top of this, I want to add an interactive menu implemented in Python. A typical update() (called from within the while(!done) loop) looks like:
while (!done) {
    select() with timeout
    give control to python menu with timeout
}
Also, I don't want to run the Python menu on a separate thread, since that brings along with it the issue of race conditions, and due to the nature of the code/application/customers, I cannot have locks in the part of the code where the menu interacts. So it has to be done in a single-threaded environment.
I have been reading online, but I couldn't find a way to do this in a single-threaded environment with a timeout. Can anyone point me in the right direction, or offer any pointers on how to do this?
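To illustrate the shape of the loop being asked about (a sketch only, with stdin standing in for the menu's input source): the menu's file descriptor can be folded into the same select() call, so it is serviced from the same single thread whenever input arrives or the timeout expires.

import select
import sys

done = False
while not done:
    # wait up to 1 second on the descriptors the app cares about;
    # here only stdin (the menu) is watched, as a stand-in
    readable, _, _ = select.select([sys.stdin], [], [], 1.0)
    if sys.stdin in readable:
        command = sys.stdin.readline().strip()
        if command == 'quit':
            done = True
        else:
            print('menu got:', command)
    else:
        pass  # timeout expired: do the normal event handling here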
Is it possible to run a small set of code automatically after a script has run?
I am asking because, for some reason, if I add this set of code into the main script, it works, but it displays a list of tab errors (the tab is already there, but Maya states that it cannot find it, or something of that sort).
I realized that after running my script, Maya seems to 'load' its own refresh routine, along with some plugins made by my company. As such, if I run the small set of code after my main script's execution and after the Maya/plugin 'refresh', it works with no problem. I would like to make the process as automated as possible, all within one script if that is possible...
So is it possible to do this? Some sort of delayed execution?
FYI, the main script's execution time depends on the number of elements in the scene; the more there are, the longer it takes...
Maya has a command, maya.cmds.evalDeferred, that is meant for this purpose. It waits until no more Maya processing is pending and then evaluates itself.
You can also use maya.cmds.scriptJob for the same purpose.
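For example (a minimal sketch; run_after_refresh is a placeholder for the small set of code in question):

import maya.cmds as cmds

def run_after_refresh():
    # placeholder for the code that must wait for Maya to settle
    print('running after the main script and the plugin refresh')

# queue it to run once Maya has no pending processing
cmds.evalDeferred(run_after_refresh)

# or, equivalently, fire once on the next idle event
cmds.scriptJob(event=['idle', run_after_refresh], runOnce=True)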
Note: While eval is considered dangerous and insecure, in a Maya context it's really normal, mainly because everything in Maya is inherently insecure: nearly all GUI items are just eval commands that the user may modify. So the second you let anybody use your Maya shell, your security is breached.
I'm doing TDD, but the system I'm working with takes 6 seconds to get through boilerplate code. This code is not part of my work, nor of my tests (it's Autodesk Maya's headless/batch/CLI Python mode). I've talked to support, and there's no way around the load time, so I thought maybe I could load and initialize Python first in the background, as I code, and then my mapping would simply run the nosetests inside of that when I'm ready. My tests take something like 0.01 seconds, so this should feel instantaneous, which would really help the red/green/refactor cycle.
In short, instead of firing off /path/to/mayapy /path/to/runtests.py /current/buffer/path, Vim would just fire up /path/to/mayapy with the boilerplate stuff from runtests.py, then somehow hold onto that running instance. When I hit my mapping, it would send into that running instance the call to nosetest with the current buffer's path (and then fire up another instance to hold onto while waiting for the next run). How do I hold onto the running instance and call into it later? I'm even considering having a chain of 2 or 3, for the times when I make minor mistakes and rerun 2 seconds later.
Vim-ipython, the excellent work of Paul Ivanov, is an interface between vim and ipython sessions (demo video). This may relieve you of some of the boilerplate of sending buffers to python and waiting on results.
I'm not entirely sure this is exactly what you want, but with a bit of Python and Vim glue code it may be a good step in the right direction, though I'm guessing you'd need to do a bit of experimentation to get a workflow you're happy with.
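If vim-ipython doesn't fit, here is one way the "warm interpreter" idea from the question could be sketched: a tiny listener that mayapy runs after its slow startup, so each test run only pays for nose itself. The port, paths, and protocol here are all placeholders.

# warmrunner.py - start in the background with: /path/to/mayapy warmrunner.py
import socket

import nose

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('127.0.0.1', 9999))
server.listen(1)
print('mayapy warmed up; waiting for test paths')

while True:
    conn, _ = server.accept()
    test_path = conn.recv(4096).decode().strip()
    # nose.run (unlike nose.main) returns instead of exiting,
    # so the warmed-up interpreter survives for the next run
    nose.run(argv=['nosetests', test_path])
    conn.close()

A Vim mapping could then send the current buffer's path with something like :!echo % | nc 127.0.0.1 9999. Note that reusing one interpreter lets stale module state leak between runs, which is presumably why the question proposes a chain of fresh instances instead.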
I'm starting out in Python and have done a lot of programming in the past in VB. Python seems much easier to work with and far more powerful. I'm in the process of ditching Windows altogether and have quite a few VB programs I've written that I want to use on Linux without having to touch anything that involves Windows.
I'm trying to take one of my VB programs and convert it to Python. I think I pretty much have.
One thing I never could find a way to do in VB was to use Program 1, a calling program, to call Program 2 and run it multiple times. I had a program that would search a website looking for newly updated material; everything was numbered sequentially (1234567890_1.mp3, for example). Not every value was used, so I had to search to find which files existed and which didn't. Typically the site would run through around 100,000 possible files a day, with only 2-3 files actually being used each day.

I had the program set up to search 10,000 files: if it found a file that existed, it downloaded it, then moved on to the next possible file and tested it. I would run this program simultaneously 10 times, with each copy set up to search a separate 10,000-file block.

I always wanted to set it up so that a calling program would have the user set the Main Block (1234) and the Secondary Block (5), with the Secondary Block possibly being a range of values. The calling program would then start up 10 separate programs (numbered 0-9, in reality) and use the Main Block and Secondary Block values to build the call for each of the 10 copies of Program 2. When each of the 10 programs got called, all running at the same time, each would get its own search locations, so together they would be searching the website to find what new files had been added over the previous day. It would only take 35-45 minutes to complete each day, versus multiple hours if I ran through everything in one long continuous program.

I think I could do this with Python by having Program 1 set, and Program 2 read, a .txt file. I'm not sure whether I would run into problems with the set value changing before Program 2 could read it and start using it. I think I would have to add a pause into the program to play it safe... I'm not really sure.

Is there another way I could pass a value from Program 1 to Program 2 to accomplish this?
It seems simple enough to have a master class (a block, as you would call it) that contains all of the different threads/classes in a list. Communication-wise, there could simply be a structure like so:
Thread <--> Master <--> Thread
or
Thread <---> Thread <---> Thread
The latter would look more like a web structure.
The first option would probably be a bit more complicated, since you have a "middle man". However, it does allow for mass communication. Also, all that would need to be passed to the other classes is the Master class, or a function that provides the communication.
The second option allows for direct communication between threads. So if two threads need to work together a lot, and you don't want other threads possibly interfering with the commands, the second option is best. However, the list of classes would have to be passed to the function.
Either way, you are going to need a Master thread at the center of the communication (for simplicity's sake). Otherwise you could get into sockets and files, but a master thread is faster, more efficient, and less likely to cause headaches.
Hopefully you understood what I was saying. It is all theoretical, but I have used similar systems in my programs.
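A rough sketch of the first (middle-man) layout, using queues rather than raw locks; the worker logic and names are placeholders:

import queue
import threading

class Master:
    """Central hub: each thread registers an inbox; the master routes messages."""
    def __init__(self):
        self.inboxes = {}

    def register(self, name):
        self.inboxes[name] = queue.Queue()
        return self.inboxes[name]

    def send(self, to, message):
        # Thread <--> Master <--> Thread routing
        self.inboxes[to].put(message)

def worker(name, inbox):
    msg = inbox.get()  # block until the master delivers something
    print(name, 'received:', msg)

master = Master()
for name in ('a', 'b'):
    inbox = master.register(name)
    threading.Thread(target=worker, args=(name, inbox)).start()

master.send('a', 'hello a')
master.send('b', 'hello b')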
There are a bunch of ways to do this; probably the simplest is to use os.spawnv to run multiple instances of Program 2 and pass the appropriate values to each, e.g.

import os
import sys

def main(urlfmt, start, end, threads):
    start, end, threads = int(start), int(end), int(threads)
    per_thread = (end - start + 1) // threads
    for i in range(threads):
        s = start + i * per_thread
        e = min(s + per_thread - 1, end)
        outfile = 'output{}.txt'.format(i + 1)
        # argv[0] is conventionally the program's own name
        os.spawnv(os.P_NOWAIT, 'program2.exe',
                  ['program2.exe', urlfmt, str(s), str(e), outfile])

if __name__ == "__main__":
    if len(sys.argv) != 5:
        print('Usage: web_search.py urlformat start end threads')
    else:
        main(*sys.argv[1:])
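A hypothetical invocation, with a made-up URL format and number range, might then look like:

python web_search.py 'http://example.com/{}_1.mp3' 1234500000 1234599999 10

Each spawned copy of Program 2 receives its own sub-range and output file, so the ten searches run in parallel without needing to pass values through a shared .txt file.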