TaskManager: Process is Terminated At Exactly 2 Hours - python

I'm running a Python script remotely from a task machine, and it creates a process that is supposed to run for 3 hours. However, it terminates prematurely at exactly 2 hours. I don't believe it is a problem with the code, because I write to a log file after the while loop ends, and the log file never shows the loop exiting successfully. Is there a specific setting on the machine that I need to look into that could be interrupting my Python process?

Is this perhaps a Scheduled Task? If so, have you checked the task's properties?
On my Windows 7 machine under the "Settings" tab is a checkbox for "Stop the task if it runs longer than:" with a box where you can specify the duration.
One of the suggested durations on my machine is "2 hours."
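If that setting is the culprit, the Python process is killed outright and never gets a chance to log. One way to confirm an external kill (a minimal sketch, not from the original answer; the log path is an assumption) is to write a timestamped heartbeat and register a handler that only fires on a clean interpreter exit:

import atexit
import logging
import time

logging.basicConfig(filename="heartbeat.log",  # hypothetical path
                    format="%(asctime)s %(message)s", level=logging.INFO)

# atexit handlers run on a normal interpreter shutdown, but NOT when
# the process is terminated externally (e.g. by Task Scheduler)
atexit.register(lambda: logging.info("clean interpreter exit"))

start = time.time()
while time.time() - start < 3 * 3600:  # the intended 3-hour run
    logging.info("heartbeat")
    time.sleep(60)
logging.info("loop finished normally")

If the last entry is a plain heartbeat near the 2-hour mark with no "clean interpreter exit" line, the process was killed from outside, which points at the task setting above.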

Related

Python/Django Prevent a script from being run twice

I have a big script that retrieves a lot of data via an API (Magento invoices). The script is launched by a cron job every 20 minutes, but sometimes we need to refresh manually to get the latest invoiced orders, so I have a dedicated page for this.
I would like to prevent a manual launch of the script when it's already running, by testing for that, because both the API and the script take a lot of resources and time.
I tried adding a "process" model with an is_active = True/False field that is tested to avoid re-launching when the script is already active. At the start of the script I switch the status to True, and I set it back to False when the script has finished.
But it seems that the second instance of the script waits for the first to finish before starting. In the end, both scripts run, because process.is_active is always False by then.
I also tried a request.session variable, but had the same issue.
I've spent a lot of time on this but haven't found a way to achieve my goal.
Has anyone faced such a case before?
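A database flag like is_active is both racy (the second request may read it before the first instance's write is visible) and not crash-safe (a crash leaves it stuck at True). One common alternative, sketched below assuming a POSIX host (fcntl is Unix-only; the lock path and run_sync() are hypothetical), is an OS-level lock file that the kernel releases automatically when the process exits:

import fcntl
import sys

lock_file = open("/tmp/invoice_sync.lock", "w")  # hypothetical path
try:
    # non-blocking exclusive lock: fails immediately if another
    # instance of the script already holds it
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:  # BlockingIOError on Python 3, IOError on Python 2
    print("sync already running, exiting")
    sys.exit(0)

run_sync()  # hypothetical: the actual invoice-retrieval work
# the lock is released by the OS when the process exits, even on a
# crash, so no stale "is_active" flag is ever left behind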

My python script, which contains an infinite loop, breaks, which I don't want. What should I do?

I have a Python script which uses Selenium and chromedriver to get some info from a website. I have used the Python schedule module to make the script run every 2 minutes. It runs fine and gets the info I want. But whenever another script scheduled in Windows Task Scheduler gets triggered, this script automatically stops, even though it contains an infinite loop. I don't want another script causing this script to stop. How can I handle that?
This is how my code is structured.
import time
import schedule

def get_info():
    # code here
    pass

schedule.every(2).minutes.until("18:30").do(get_info)

while True:
    schedule.run_pending()
    time.sleep(1)
I want this infinite loop to run for a long time without being interrupted by other tasks and scripts scheduled in the Windows Task Scheduler. What should I do? Any suggestion is appreciated.
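A first step is to tell an exception inside the loop apart from an external kill. A minimal diagnostic sketch (same schedule setup as above; the log path is an assumption):

import logging
import time
import schedule

logging.basicConfig(filename="scheduler.log",  # hypothetical path
                    format="%(asctime)s %(message)s", level=logging.INFO)

schedule.every(2).minutes.until("18:30").do(get_info)

try:
    while True:
        schedule.run_pending()
        time.sleep(1)
except Exception:
    # an exception raised inside get_info() lands here with a traceback
    logging.exception("loop stopped by an exception")
    raise

If the script dies with nothing in the log, it was terminated from outside; in that case check how both tasks are configured in Task Scheduler (options such as "If the task is already running: Stop the existing instance" can terminate a running process).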

pythonw.exe creates network activity when running any script

When I run any Python script that doesn't even contain any code or imports that could access the internet in any way, I get 2 pythonw.exe processes popping up in my Resource Monitor under network activity. One of them is always sending more than receiving, while the other has the same activity but with the amounts of sending and receiving reversed. The amount of overall activity depends on the file size, regardless of how many lines are commented out. Even a blank .py document will create network activity of about 200 kb/s. The activity drops from its peak, which is as high as 15,000 kb/s for a file with 10,000 lines, to around zero after about 20 seconds, and then the processes quit on their own. The actual script has finished running long before the network processes stop.
Because the activity is dependent on file size I'm suspicious that every time I run a python script, the whole thing is being transmitted to a server somewhere else in the world.
Is this something that could be built into Python, a virus that's infecting my computer, or just something that Python is supposed to do, i.e. innocent activity?
If anyone doesn't have an answer but could check to see if this activity affects their own installation of python, that would be great. Thanks!
EDIT:
Peter Wood, to start the process just run any Python script from the editor; it runs on its own, at least for me. I'm on 2.7.8.
Robert B, I think you may be right, but why would the communication continue after the script has finished running?
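Two things worth checking here. First, Resource Monitor also counts loopback traffic, and many editors run scripts through a pythonw.exe subprocess that they talk to over a local socket, which would explain traffic proportional to file size without anything leaving the machine. Second, the connections can be inspected directly; a sketch using psutil (a third-party package, pip install psutil, not mentioned in the thread):

import psutil

# list every connection owned by a pythonw.exe process; a remote
# address of 127.0.0.1 means loopback-only (editor <-> interpreter)
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "pythonw.exe":
        try:
            conns = proc.connections(kind="inet")
        except psutil.AccessDenied:
            continue
        for conn in conns:
            print(proc.pid, conn.laddr, conn.raddr, conn.status)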

Run python job every x minutes

I have a small Python script that basically connects to a SQL Server (Microsoft) database and gets users from there, then syncs them to another MySQL database; basically I'm just running queries to check if a user exists and, if not, adding that user to the MySQL database.
The script usually takes around 1 min to sync. I need the script to do its work every 5 mins (for example), exactly once per interval (one sync per 5 mins).
What would be the best way to go about building this?
I have some test data for the users, but on the real site there are a lot more users, so I can't guarantee the script takes 1 min to execute; it might even take 20 mins. However, having an interval of, say, 15 mins between executions of the script would be ideal for this problem...
Update:
I have the connection params for the SQL Server Windows DB, and I'm using a small Ubuntu server to sync between the two databases located on different servers. So let's say db1 (Windows) and db2 (Linux) are the database servers, and I'm using s1 (the Python server) with the pymssql and MySQL Python modules to sync.
Regards
I am not sure cron is right for the job. It seems to me that if you have it run every 15 minutes, but sometimes a sync takes 20 minutes, you could have multiple processes running at once, and they could collide.
If the driving force is a constant wait time between the variable execution times, then you might need a continuously running process with a wait.
import time

def main():
    loopInt = 0
    while loopInt < 10000:
        synchDatabase()  # the sync routine, defined elsewhere
        loopInt += 1
        print("call #" + str(loopInt))
        time.sleep(300)  # sleep 5 minutes

main()
(Obviously not continuous, but long-running.) You can change the while condition to True to make it truly continuous (and comment out loopInt += 1).
Edited to add: please see the note in the comments about monitoring the process, as you don't want the script to hang or crash without you being aware of it.
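If the requirement is one sync per fixed interval regardless of how long each sync takes, the sleep can be computed from the elapsed time instead of being constant (a variation on the sketch above; synchDatabase is the same function):

import time

INTERVAL = 300  # 5 minutes between the starts of successive syncs

def main():
    while True:
        started = time.time()
        synchDatabase()
        elapsed = time.time() - started
        # if a sync overruns the interval, start the next one immediately
        time.sleep(max(0, INTERVAL - elapsed))

main()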
You might want to use a system that handles queues, for example RabbitMQ, and use Celery as the python interface to implement it. With Celery, you can add tasks (like execution of a script) to a queue or run a schedule that'll perform a task after a given interval (just like cron).
Get started: http://celery.readthedocs.org/en/latest/
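A minimal sketch of a periodic Celery task (the broker URL and module name are assumptions; newer Celery releases use app.conf.beat_schedule, while older ones spelled it CELERYBEAT_SCHEDULE):

# tasks.py
from celery import Celery

app = Celery("sync", broker="amqp://guest@localhost//")  # hypothetical broker

@app.task
def sync_users():
    pass  # hypothetical placeholder for the actual sync work

app.conf.beat_schedule = {
    "sync-every-5-minutes": {
        "task": "tasks.sync_users",
        "schedule": 300.0,  # seconds
    },
}

Run a worker with the beat scheduler enabled (celery -A tasks worker --beat) and the task is enqueued every 5 minutes.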

Constantly monitor a program/process using Python

I am trying to constantly monitor a process which is basically a Python program. If the program stops, then I have to start the program again. I am using another Python program to do so.
For example, say I have to constantly run a process called run_constantly.py. I initially run this program manually, which writes its process ID to the file "PID" (in the location out/PROCESSID/PID).
Now I run another program which has the following code to monitor the program run_constantly.py from a Linux environment:
import imp
import time
from datetime import datetime

def Monitor_Periodic_Process():
    TIMER_RUNIN = 1800      # seconds to sleep while the process is running
    TIMER_NOT_RUNIN = 120   # seconds to sleep after (re)scheduling it
    foo = imp.load_source("Run_Module", "run_constantly.py")
    PROGRAM_TO_MONITOR = ['run_constantly.py', 'out/PROCESSID/PID']
    while True:
        # call checkPID (defined elsewhere) to see if the program is running
        res = checkPID(PROGRAM_TO_MONITOR)
        if res == 0:
            # the program is not running, so schedule it two minutes from now
            date_time = datetime.now()
            scheduler.add_cron_job(foo.Run_Module, year=date_time.year,
                                   day=date_time.day, month=date_time.month,
                                   hour=date_time.hour,
                                   minute=date_time.minute + 2)
            scheduler.start()
            scheduler.get_jobs()
            time.sleep(TIMER_NOT_RUNIN)
        else:
            # the process is running: sleep, then monitor again
            time.sleep(TIMER_RUNIN)
I have not included the checkPID() function here. checkPID() basically checks whether the process ID still exists (i.e. whether the program is still running); if it does not exist, it returns 0. In the above program, I check if res == 0, and if so, I use Python's scheduler to schedule the program. However, the major problem I am currently facing is that the process ID of this program and of run_constantly.py turn out to be the same once I schedule run_constantly.py using the scheduler.add_cron_job() function. So if run_constantly.py crashes, the monitoring program still thinks it is running (since both process IDs are the same) and therefore keeps taking the else branch to sleep and monitor again.
Can someone tell me how to solve this issue? Is there a simple way to constantly monitor a program and reschedule it when it has crashed?
There are many programs that can do this.
On Ubuntu there is upstart (installed by default).
Lots of people like supervisord (http://supervisord.org/); see the example config after this answer.
monit, as mentioned by @nathan.
If you are looking for a Python alternative, there is a recently released library called circus which looks interesting.
And pretty much every Linux distro probably has one of these built in.
The choice is really just down to which one you like better, but you would be far better off using one of these than writing it yourself.
Hope that helps
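For example, supervisord (mentioned above) restarts a crashed program with nothing more than a program section in its config file (a sketch; the command and paths are assumptions):

[program:run_constantly]
command=python /path/to/run_constantly.py
autostart=true
autorestart=true
stderr_logfile=/var/log/run_constantly.err.log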
If you are willing to control the monitored program directly from Python instead of using cron, have a look at the subprocess module:
The subprocess module allows you to spawn new processes,
connect to their input/output/error pipes, and obtain their return codes.
Check questions like "track process status with python" on SO for examples and references.
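A minimal sketch of that idea: the monitor starts the program as a child process, which gives it a PID distinct from the monitor's own, and simply restarts it whenever it exits:

import subprocess
import sys
import time

while True:
    # run run_constantly.py as a child process; wait() blocks until
    # it exits or crashes, so no PID file or checkPID() is needed
    child = subprocess.Popen([sys.executable, "run_constantly.py"])
    child.wait()
    time.sleep(5)  # brief pause to avoid a tight restart loop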
You could just use monit
http://mmonit.com/monit/
It monitors processes and restarts them (and other things.)
I thought I'd add a more versatile solution, which is one that I personally use all the time as well.
Its name is Immortal (source is at https://github.com/immortal/immortal).
To have it monitor and instantly restart a program if it stops, simply run the following command:
immortal <command>
So in your case I would run run_constantly.py like so:
immortal python run_constantly.py
The command ps aux | grep run_constantly.py should return 2 process IDs: one for the Immortal process and one for the command Immortal started (just the regular command). As long as the Immortal process is running, run_constantly.py will stay running.
