First of all, a short overview of my current goal:
I want to use a scheduler to execute a simple Python program every second. This program reads some data and enters the results into a database. Because the scheduled task will operate over several days on a Raspberry Pi, the process should run in the background. Therefore I want to create a Python file which can start and stop the background job and query its current status. Furthermore, it should be possible to exit and re-enter the control file without stopping the background job.
So far I have tried apscheduler to execute the Python file every second. The actual problem is that I can't access the running Python process from another, external file to control its status. Overall I have found no real solution for how to control a subprocess from an external file, and how to find the same subprocess again after restarting the controlling Python file.
EDIT:
So overall, as far as I have got it now, I'm able to find the current process by its PID. With that I'm able to send a terminate signal to the process. Inside my scheduled file I'm able to catch these signals and shut the program down cleanly.
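Roughly, the signal handling in my scheduled file now looks like this (a simplified sketch; the cleanup step is just an example):

import signal
import sys

def shutdown(signum, frame):
    # close database connections etc., then exit cleanly
    sys.exit(0)

# catch the terminate signal sent by the controlling script
signal.signal(signal.SIGTERM, shutdown)

The controlling script then only needs the stored PID and can call os.kill(pid, signal.SIGTERM).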
To control (start, restart, stop, schedule) the background process, use subprocess. Here is an example of subprocess' Popen with a timeout.
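A minimal sketch (the script name and the 10-second timeout are placeholders):

import subprocess

proc = subprocess.Popen(['python', 'background_job.py'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
    out, err = proc.communicate(timeout=10)  # wait at most 10 seconds
except subprocess.TimeoutExpired:
    proc.kill()  # the job overran the timeout, so stop it
    out, err = proc.communicate()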
To pass data between the scheduler and the background job, use one of the IPC mechanisms, for example sockets.
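For instance, the background job could report its status over a local socket (a rough sketch; the port and the status text are made up):

import socket

def serve_status(get_status, port=50007):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(('127.0.0.1', port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        conn.sendall(get_status().encode())  # e.g. b'running'
        conn.close()

The controlling file connects, reads the reply and exits, so it can be closed and reopened without touching the job itself.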
I have a program that produces a CSV file, and right at the end I am using os.startfile(fileName). But because the program then finishes execution, the newly opened file closes as well. The same happens if I add a sleep afterwards: the file loads up, then once the sleep ends it closes again.
Any help would be appreciated.
From the documentation for os.startfile:
startfile() returns as soon as the associated application is launched. There is no option to wait for the application to close, and no way to retrieve the application’s exit status.
When using this function, there is no way to make your script wait for the program to complete, because you have no way of knowing when it is complete. Because the program is launched as a subprocess of your Python script, it will exit when the Python script exits.
Since you don't say in your question exactly what the desired behavior is, I'm going to guess that you want the Python script to block until the program finishes execution (as opposed to detaching the subprocess). There are multiple ways to do this.
Use the subprocess module
The subprocess module allows you to make a subprocess call that will not return until the subprocess completes. The exact call you make to launch the subprocess depends heavily on your specific situation, but this is a starting point:
import subprocess

# 'start /WAIT' makes the shell block until the launched application exits;
# the empty "" is the window title that start expects once the file name
# itself is quoted
subprocess.call('start /WAIT "" "%s"' % fileName, shell=True)
Use input to allow user to close script
You can have your script block until the user tells the python script that the external program has closed. This probably requires the least modification to your code, but I don't think it's a good solution, as it depends on user input.
os.startfile(fileName)
input('Press enter when external program has completed...')
I'm trying to kill a secondary task of a process using PowerShell, batch, Python... anything I can save as a script and run remotely. In Task Manager the process shows up as a tree of tasks, and I'd like to kill the one with the longer title while leaving the "SAP Logon 740" one open. Every task in the tree has the same PID, so I can't just kill the whole process.
I guess this is possible, because I can do it manually by going to Task Manager, expanding the process and ending that specific task, but everything I've found consists of killing the whole process, which isn't an option in my case.
So far I've tried tasklist/taskkill and PowerShell (Get-Process, Get-WmiObject Win32_Process...), but I haven't been able to find out how.
In the output of tasklist (filtered to Status=Running), only one of the tasks (the one in front) shows up.
As you have used the powershell tag, and even ran your tasklist command using powershell.exe, I have decided to provide examples using it.
If your criterion is to stop the process named saplogon with the longest window title string:
Get-Process saplogon | Sort-Object {$_.MainWindowTitle.Length} | Select-Object -Last 1 | Stop-Process -WhatIf
If your criterion is to stop all processes named saplogon except for the one with the shortest window title string:
Get-Process saplogon | Sort-Object {$_.MainWindowTitle.Length} | Select-Object -Skip 1 | Stop-Process -WhatIf
If you're happy with the output, you can remove -WhatIf to actually perform the operation. If needed, you could even replace it with -Force.
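If you would rather do it from Python, a rough equivalent (assuming the pywin32 and psutil packages are available, and that the process image is saplogon.exe) is to post WM_CLOSE to just the window with the longest title:

import psutil
import win32con
import win32gui
import win32process

def saplogon_windows():
    hwnds = []
    def collect(hwnd, _):
        try:
            _, pid = win32process.GetWindowThreadProcessId(hwnd)
            if psutil.Process(pid).name().lower() == 'saplogon.exe':
                hwnds.append(hwnd)
        except psutil.Error:
            pass  # the window's process vanished or is inaccessible
        return True  # keep enumerating
    win32gui.EnumWindows(collect, None)
    return hwnds

# ask the window with the longest title to close; the process stays alive
target = max(saplogon_windows(), key=lambda h: len(win32gui.GetWindowText(h)))
win32gui.PostMessage(target, win32con.WM_CLOSE, 0, 0)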
I'm writing a small application that uses an "index file" to open folders in Explorer with just a few button presses. I would like to update that index file in a background process every time the application shuts down. Updating the index file means scanning through our network, and for some remote users it can take a few minutes. That's why I would like to hide the console during the scanning process, to prevent the user from aborting it.
I tried several things similar to:
# these are just dummy lines
import subprocess
import multiprocessing

path = get_user_input()
subprocess.Popen(r'explorer "%s"' % path)

# here I start my update process
multiprocessing.Process(target=update_index).start()

# end of script; now I want that process to continue until finished while the
# main console closes. I only seem to get one or the other.
I also tried using:
import subprocess

DETACHED_PROCESS = 0x00000008
CREATE_NO_WINDOW = 0x08000000
subprocess.Popen(command, shell=True, stdin=None, stdout=None, stderr=None,
                 creationflags=DETACHED_PROCESS | CREATE_NO_WINDOW)
and managed to get a separate console window, but still no way of preventing the user from closing down the process.
Also keep in mind that I would like to distribute this script with something like py2exe later, to make it accessible to those without Python, so I guess using pythonw.exe is out of the question. Or is it?
That's not really the answer you're looking for, but you could redesign your system architecture: write your index updater as a server process that communicates with your actual application over sockets. The index updater server process then runs continuously (maybe even on another machine) and does all the time-consuming work.
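A bare-bones sketch of that design (update_index is the function from your script; the port is arbitrary):

import socketserver

class UpdaterHandler(socketserver.StreamRequestHandler):
    def handle(self):
        if self.rfile.readline().strip() == b'update':
            update_index()              # your existing long-running network scan
            self.wfile.write(b'done\n')

server = socketserver.TCPServer(('127.0.0.1', 8765), UpdaterHandler)
server.serve_forever()

Your main application then just connects, sends 'update' and exits; the scan keeps running inside the server process, out of the user's reach.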
If you just want to perform background tasks at certain intervals, use cron. If you want to run a command in the background and keep it running even if you log out of the console, use nohup.
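For example (the paths are placeholders):

# run the updater every five minutes via cron (add with 'crontab -e')
*/5 * * * * python /home/user/update_index.py

# or keep one long run alive across logout
nohup python /home/user/update_index.py &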
I am trying to build a Node app which calls a Python script that takes a long time to run. The user chooses parameters and then clicks run, which triggers an event in socket.on('python-event') and runs the Python script. I am using socket.io to send the user real-time status updates about the Python program, based on the stdout stream I get from Python. The problem I am facing is that if the user clicks the run button twice, the event handler is triggered twice and runs two instances of the Python script, which corrupts stdout. How can I ensure that only one event trigger happens at a time, and that a new trigger kills the previous instance (and its stdout stream) before running a new instance of the Python script with the updated parameters? I tried using socket.once(), but it only allows the event to trigger once per connection.
I would use a job queue for this kind of work: store each job's info in the queue, so you can cancel it and query its status. You can use a Node module like kue.
I am trying to constantly monitor a process, which is basically a Python program. If the program stops, I have to start it again. I am using another Python program to do so.
For example, say I have to constantly run a process called run_constantly.py. I initially run this program manually, which writes its process ID to the file "PID" (in the location out/PROCESSID/PID).
Now I run another program which has the following code to monitor the program run_constantly.py from a Linux environment:
import imp
import time
from datetime import datetime

def Monitor_Periodic_Process():
    TIMER_RUNIN = 1800      # check interval while the program is running
    TIMER_NOT_RUNIN = 60    # shorter wait after (re)scheduling the program
    foo = imp.load_source("Run_Module", "run_constantly.py")
    PROGRAM_TO_MONITOR = ['run_constantly.py', 'out/PROCESSID/PID']
    while True:
        # call the function checkPID (defined elsewhere) to see if the program is running
        res = checkPID(PROGRAM_TO_MONITOR)
        if res == 0:
            # the program is not running, so schedule it to start in two minutes
            date_time = datetime.now()
            scheduler.add_cron_job(foo.Run_Module, year=date_time.year,
                                   day=date_time.day, month=date_time.month,
                                   hour=date_time.hour, minute=date_time.minute + 2)
            scheduler.start()
            scheduler.get_jobs()
            time.sleep(TIMER_NOT_RUNIN)
        else:
            # the process is running: sleep and then monitor again
            time.sleep(TIMER_RUNIN)
I have not included the checkPID() function here. checkPID() basically checks whether the process ID still exists (i.e. whether the program is still running), and if it does not exist, it returns 0. In the above program I check if res == 0, and if so, I use Python's scheduler to schedule the program. However, the major problem I am currently facing is that the process ID of this monitoring program and of run_constantly.py turn out to be the same once I schedule run_constantly.py using the scheduler.add_cron_job() function. So if run_constantly.py crashes, the monitoring program still thinks run_constantly.py is running (since both process IDs are the same), and therefore keeps going into the else branch to sleep and monitor again.
Can someone tell me how to solve this issue? Is there a simple way to constantly monitor a program and reschedule it when it has crashed?
There are many programs that can do this.
On Ubuntu there is upstart (installed by default)
Lots of people like http://supervisord.org/
monit as mentioned by #nathan
If you are looking for a Python alternative, there is a newly released library called circus which looks interesting.
And pretty much every linux distro probably has one of these built in.
The choice is really just down to which one you like better, but you would be far better off using one of these than writing it yourself.
Hope that helps
If you are willing to control the monitored program directly from Python instead of using cron, have a look at the subprocess module:
The subprocess module allows you to spawn new processes,
connect to their input/output/error pipes, and obtain their return codes.
See questions like "track process status with python" on SO for examples and references.
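A bare-bones sketch of that approach (the script path and the restart pause are placeholders):

import subprocess
import time

while True:
    proc = subprocess.Popen(['python', 'run_constantly.py'])
    returncode = proc.wait()  # blocks until the child exits or crashes
    print('run_constantly.py exited with code', returncode, '- restarting')
    time.sleep(5)

Because the child is spawned directly, its PID is distinct from the monitor's, which avoids the identical-PID problem described in the question.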
You could just use monit
http://mmonit.com/monit/
It monitors processes and restarts them (among other things).
I thought I'd add a more versatile solution, which is one that I personally use all the time as well.
Its name is Immortal (the source is at https://github.com/immortal/immortal).
To have it monitor and instantly restart a program if it stops, simply run the following command:
immortal <command>
So in your case I would run run_constantly.py like so:
immortal python run_constantly.py
The command ps aux | grep run_constantly.py should return 2 process IDs: one for the Immortal command, and one for the separate command Immortal started (just the regular command). As long as the Immortal process is running, run_constantly.py will stay running.