sys.exit() Is not killing process - python

I'm trying to make a program that does something every hour, then relaunches itself, and then kills itself.
The problem I'm having is that the program does not kill itself completely. I can see in System Monitor that the process is not gone.
Over time I just get more and more python2 processes that take up RAM.
I am using Python 2.7.12 on a 64-bit machine running Arch Linux.
This is the code I'm running:
def GoToWebsite(username, password):
    chrome_options = webdriver.ChromeOptions()
    prefs = {"profile.default_content_setting_values.notifications": 2}
    chrome_options.add_experimental_option("prefs", prefs)
    chromeBrowser = webdriver.Chrome('/home/daniel/Dropbox/Code/BrowserDrivers/chromedriver',
                                     chrome_options=chrome_options)
    chromeBrowser.get('http://www.website.com')
    while True:
        try:
            picX, picY = pyautogui.locateCenterOnScreen(currentPythonDirectory + '/picture.png')
            break
        except:
            pass
    pyautogui.click(picX, picY)
    time.sleep(3)
    url = chromeBrowser.command_executor._url
    session_id = chromeBrowser.session_id
    return url, session_id

websiteUrl, websiteSessionId = GoToWebsite("username", "password")

#Do Stuff

originalStartTime = time.time()
currentPythonDirectory = os.path.dirname(os.path.realpath(__file__))

while True:
    if (time.time() - originalStartTime) >= 3: # 3600:
        chromeDriver = webdriver.Remote(command_executor=websiteUrl, desired_capabilities={})
        chromeDriver.session_id = websiteSessionId
        chromeDriver.quit()
        try:
            chromeDriver.close()
        except:
            pass
        os.system("python2 " + currentPythonDirectory + "/PythonScript.py")
        time.sleep(1)
        sys.exit(1)
        break
    #Other Stuff

I had exactly the same issue when trying to build a better version of crontab. I've adapted my code here so you can see the approach. With this method you will not run into any maximum recursion problems.
import os, re, subprocess

allActivePythonProcesses = os.popen('pgrep -lf python').read()
thisIsYourPythonFileProcess = re.search(r'\d+ python myRepeatingFile\.py', allActivePythonProcesses)
if thisIsYourPythonFileProcess:
    # Store the process ID
    convPID = re.search(r'\d+', thisIsYourPythonFileProcess.group()).group()
    print "Your Python file is running at PID: " + convPID
else:
    print "Process Controller: Your Python file is not running"
    try:
        print "...Calling your Python file"
        subprocess.check_call('python myRepeatingFile.py', shell=True)
    except subprocess.CalledProcessError as e:
        print "Process Call Error: " + str(e)
If you want it to run 24/7, just put it into a while True loop; import the time module if you want to limit how often it checks.
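For example, a minimal sketch of that wrapper (the 60-second interval is just an assumption):

import time

while True:
    # ... run the process-checking code shown above ...
    time.sleep(60)  # throttle the checks to whatever interval you need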

As far as I am aware, os.system() launches the subprocess and only returns once it has completed. This is a blocking call: Python waits for the subprocess you launched to finish before executing any other code. Adding a print statement after the os.system() call will show that the program never reaches the sys.exit(1).
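To illustrate (a sketch, not part of the original answer; the path is a placeholder): launching the child with subprocess.Popen instead of os.system lets the parent exit straight away, so only one copy of the script stays alive at a time.

import subprocess
import sys

# os.system("python2 /path/to/PythonScript.py") would block right here until
# the child process (and the children it spawns in turn) had finished.

# Non-blocking alternative: start the child and exit immediately.
subprocess.Popen(["python2", "/path/to/PythonScript.py"])
sys.exit(0)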

Related

How do I avoid creating multiple python process w/ os.system?

When the two consecutive if statements execute, my Python program shuts down. But that's not what I want. I want it to loop again and not exit the script. The problem I found with simply looping is that Python processes are created every time the os.system(command) line runs. I've individually tried the following:
os.exit()
sys.exit()
def kill_process():
    pid = os.getpid()
    sig = signal.SIGKILL
    os.kill(pid, sig)
All of those options were individually paired with an os.system("python3 script.py") call, yet none of them did the trick. Every scenario simply exits the script.
How do I make it so that after os.system(command) runs it just loops again, without killing/exiting the script and without creating another Python process every time?
Here's the function in question:
def bluetoothLoop():
    while True:
        time.sleep(5)
        BT_state = subprocess.run(['''system_profiler SPBluetoothDataType'''], shell=True, capture_output=True, encoding="utf", errors="ignore")
        BT_state = BT_state.stdout
        sound = "Blow"
        title = "TURN OFF BLUETOOTH"
        message = "-------------------------"
        if "State: On" in BT_state and not " Connected:" in BT_state:
            time.sleep(1)
            BT_state = subprocess.run(['''system_profiler SPBluetoothDataType'''], shell=True, capture_output=True, encoding="utf", errors="ignore")
            BT_state = BT_state.stdout
            if "State: On" in BT_state and not " Connected:" in BT_state:
                command = f'''
                osascript -e 'display notification "{message}" with title "{title}" sound name "{sound}"'
                '''
                os.system(command)
                os.exit()
        time.sleep(1)
        notify.restart()
Thanks a bunch, I've been struggling with this for a while now.
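For what it's worth, here is a sketch of one way to keep the loop alive (not from the original thread): os.system() already returns once osascript finishes, and it never starts another Python interpreter, so the exit call can simply be dropped and the loop allowed to continue.

import subprocess

def notify(message, title, sound):
    # Blocks until osascript returns, then control falls back to the caller;
    # no extra Python process is created, so nothing needs to be killed.
    subprocess.run(
        ['osascript', '-e',
         f'display notification "{message}" with title "{title}" sound name "{sound}"'],
        check=False,
    )

Inside bluetoothLoop() the inner if-branch would then just call notify(message, title, sound) and fall through to the next iteration.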

Python: How to resume the python script as soon as vpn network is up?

I have a Python script (xyz.py) that I run through the command prompt. My question: is there any method that resumes the Python code automatically from the point where the VPN connection was lost, without any manual intervention? That would save me from having to monitor the code frequently. Below is my code, but it starts again from the beginning if there is any disconnection. Please suggest.
filename = 'xyz.py'
while True:
    p = subprocess.Popen('python ' + filename, shell=True).wait()
    """ If there is an error from running 'xyz.py',
    the while loop will be repeated;
    otherwise the program will break out of the loop. """
    if p != 0:
        continue
    else:
        break
If it were me, I would use time.sleep:
import os
import time
from datetime import datetime

import requests

script = 'xyz.py'

def main():
    network_check_url = 'http://8.8.8.8'
    while True:
        try:
            requests.get(network_check_url)
        except Exception as e:
            print(datetime.now(), e)
            time.sleep(1)
        else:
            print(f'Network is ok. {datetime.now():%Y-%m-%d_%H:%M:%S}')
            os.system(f'python {script}')
            return

if __name__ == '__main__':
    main()
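The loop above only restarts xyz.py from scratch, though. To actually resume where it left off, xyz.py itself needs to persist its progress. A rough sketch of that idea (progress.json and the item loop are assumptions, not from the original post):

import json
import os

STATE_FILE = 'progress.json'   # hypothetical checkpoint file

def load_done():
    # Items already processed on a previous run.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return set(json.load(f))
    return set()

def mark_done(done, item):
    # Record a finished item so a restart skips it.
    done.add(item)
    with open(STATE_FILE, 'w') as f:
        json.dump(sorted(done), f)

done = load_done()
for item in ['a', 'b', 'c']:   # placeholder for the real work items
    if item in done:
        continue
    # ... do the work that needs the VPN ...
    mark_done(done, item)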

Subprocess not finishing but script is run correctly

In order to run several scheduled python scripts and capture errors that may arise, I'm using a controller module that opens a subprocess for each script with the following:
def _process_script(nm_script, return_val):
    try:
        env = os.environ
        p = subprocess.Popen(['python', nm_script], stdout=subprocess.PIPE,
                             shell=False, env=env, stderr=subprocess.PIPE)
        p.wait()
        (result, error) = p.communicate()
        if p.returncode != 0:
            cd_exec = -1
            tx_exec = error.decode('utf-8')
        else:
            cd_exec = 1
            tx_exec = None
    except subprocess.CalledProcessError as e:
        cd_exec = -1
        tx_exec = e.output
    return_val += [cd_exec, tx_exec]

def run_script(script):
    return_val = multiprocessing.Manager().list()
    # 20 min timeout for the script
    maxDelay = 1200
    p = multiprocessing.Process(target=_process_script, args=(script.nm_script, return_val))
    p.start()
    p.join(maxDelay)
    # Checks if the process is still alive
    if p.is_alive:
        p.terminate()
        p.join()
        cd_exec = -1
        tx_exec = 'Timeout'
    else:
        cd_exec = return_val[0]
        tx_exec = return_val[1]
    return return_val
The code above has worked for all the scripts so far. There is, however, one Selenium data-scraping script that seems to hang after its execution, and I can't seem to find the problem. This is it:
import time

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

def main():
    path = r'\\path'
    browser = webdriver.Chrome()
    url = '...'
    browser.get(url)
    # ...Several steps that do the necessary scraping and export data to the path
    # I have added the following steps to kill the browser instance:
    browser.get('about:blank')
    browser.find_element_by_tag_name('body').send_keys(Keys.ESCAPE)
    time.sleep(2)
    browser.quit()
    del browser
    return
The process executes successfully. The data is exported as expected and the browser instance that had been opened is closed. However, the main script keeps displaying the message regarding DevTools until the timeout of 20 minutes is reached:
DevTools listening on ws://...
I have several other selenium data scraping scripts that work in a similar manner to this one, and without any trouble so far.
I'm completely lost on this one, what could be the issue?
I see that in the above code the author uses Popen() directly and then calls p.wait(), where p is the Popen object.
I use subprocess.run() in my code, and the Python library reference says that run() waits for the process to complete. Is p.wait() necessary here?
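As a side note, p.communicate() itself waits for the child to exit, so the earlier p.wait() is redundant (and calling wait() before communicate() while using PIPEs can deadlock if the child fills a pipe buffer). A small sketch of the two styles (the script name is a placeholder):

import subprocess

# Popen + communicate: communicate() waits for the child and drains the pipes.
p = subprocess.Popen(['python', 'some_script.py'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()      # no separate p.wait() needed
print(p.returncode)

# subprocess.run() wraps the same Popen/communicate pattern and also waits.
result = subprocess.run(['python', 'some_script.py'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result.returncode, result.stderr.decode('utf-8'))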

Python threads hang and don't close

This is my first try with threads in Python.
I wrote the following program as a very simple example. It just gets a list and prints it using some threads. However, whenever there is an error, the program just hangs in Ubuntu, and I can't seem to do anything to get the prompt back, so I have to start another SSH session to get back in.
I also have no idea what the issue with my program is.
Is there some kind of error handling I can put in to ensure it doesn't hang?
Also, any idea why Ctrl-C doesn't work (I don't have a Break key)?
from Queue import Queue
from threading import Thread
import HAInstances
import logging

log = logging.getLogger()
logging.basicConfig()

class GetHAInstances:
    def oraHAInstanceData(self):
        log.info('Getting HA instance routing data')
        # HAData = SolrGetHAInstances.TalkToOracle.main()
        HAData = HAInstances.main()
        log.info('Query fetched ' + str(len(HAData)) + ' HA Instances to query')
        # for row in HAData:
        #     print row
        return(HAData)

def do_stuff(q):
    while True:
        print q.get()
        print threading.current_thread().name
        q.task_done()

oraHAInstances = GetHAInstances()
mainHAData = oraHAInstances.oraHAInstanceData()
q = Queue(maxsize=0)
num_threads = 10

for i in range(num_threads):
    worker = Thread(target=do_stuff, args=(q,))
    worker.setDaemon(True)
    worker.start()

for row in mainHAData:
    #print str(row[0]) + ':' + str(row[1]) + ':' + str(row[2]) + ':' + str(row[3])
    q.put((row[0], row[1], row[2], row[3]))

q.join()
In your thread method, it is recommended to use try ... except ... finally. This structure guarantees that control returns to the main thread even when an error occurs.
def do_stuff(q):
    while True:
        try:
            pass  # do your work here
        except:
            pass  # log the error
        finally:
            q.task_done()
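Applied to the worker from the question, that might look like this (a sketch; it also adds the missing threading import):

import threading

def do_stuff(q):
    while True:
        item = q.get()
        try:
            print item
            print threading.current_thread().name
        except Exception as e:
            log.error('worker failed: %s' % e)
        finally:
            q.task_done()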
Also, in case you want to kill your program, find the PID of the main process and use kill <pid> to kill it. On Ubuntu or Mint, run ps -Ao pid,cmd; in the output you can find the PID (first column) by searching for the command (second column) you typed to run your Python script.
Your q is hanging because your worker has errored, so q.task_done() never got called.
You also need to import threading in order to use print threading.current_thread().name.

Python script - log failed icmp/ping response - problems

I'm attempting to write a Python script that will ping/ICMP an IP address and tell me if it's alive. I'm doing this because I have an intermittent issue. I want to ping, log the outcome, sleep for a period and attempt the ping again. I tried a while loop, but I'm still getting errors like these:
line 33, in (module) systemPing('192.168.1.1')
line 30, in systemPing time.sleep(30)
KeyboardInterrupt
I'm using Python 2.6.
Ideally my question is: how do I loop through this method/function systemPing, and what errors are there in my code? The script seems to work, but I get these errors when I hit Ctrl-C.
from subprocess import Popen, PIPE
import datetime, time, re

logFile = open("textlog.txt", "a")

def getmyTime():
    now = datetime.datetime.now()
    return now.strftime("%Y-%m-%d %H:%M \n")

startTime = "Starting ..." + getmyTime()
logFile.write(startTime)
logFile.write("\n")

def systemPing(x):
    cmd = Popen("ping -n 1 " + x, stdout=PIPE)
    #print getmyTime()
    for line in cmd.stdout:
        if 'timed out' in line:
            loggedTime = "Failure detected - " + getmyTime()
            logFile.write(loggedTime)
        if 'Reply' in line:
            print "Replied..."
    logFile.close()
    print "Sleeping 30mins ... CTRL C to end"
    time.sleep(30) #1800 is 30mins
    systemPing('192.168.1.1')

if __name__ =='__main__':
    systemPing('192.168.1.1')
Any help is always appreciated.
Thank you.
It's not really an error per se, it's just the default behavior for Python, upon receipt of a SIGINT (which is what happens when you press CTRL-C), to raise a KeyboardInterrupt exception.
You'll get the same thing if you send the signal with kill(1), like...
$ kill -INT <pid>
If you want to handle it, then you can change the code to something like...
if __name__ =='__main__':
    try:
        systemPing('192.168.1.1')
    except KeyboardInterrupt:
        print 'Finished'
...or whatever you want it to do.
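As for the looping part of the question, here is a sketch (not from the original answer, reusing logFile, getmyTime and the imports from the question) that repeats with a while loop instead of having systemPing call itself, so the call stack doesn't grow and the log file stays open between cycles:

def systemPing(x):
    while True:
        cmd = Popen("ping -n 1 " + x, stdout=PIPE)
        for line in cmd.stdout:
            if 'timed out' in line:
                logFile.write("Failure detected - " + getmyTime())
                logFile.flush()
            if 'Reply' in line:
                print "Replied..."
        print "Sleeping 30mins ... CTRL C to end"
        time.sleep(1800)  # 30 minutes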
