I have the following function in my script:
import os
import re

from pyngrok import ngrok

def server():
    os.system('kill -9 $(pgrep ngrok)')
    ngrok.connect(443, "tcp")
    while True:
        ngrok_tunnels = ngrok.get_tunnels()
        url = ngrok_tunnels[0].public_url
        if re.match("tcp://[0-9]*.tcp.ngrok.io:[0-9]*", url) is not None:
            print("your url is : " + url)
            break
This is responsible for generating an ngrok TCP link, and it works, but afterwards the script hangs and never returns to the prompt.
How can I prevent it from hanging and just print the link? I was told about setting monitor_thread to False, but I don't know how to configure it in my function. Thank you very much in advance.
The script appears "stuck" because pyngrok starts ngrok with a thread that monitors its logs, and the Python process can't exit until all non-daemon threads have finished. You can stop the monitor thread, as shown in the documentation, or, if you have no use for it, you can prevent it from starting in the first place:
import os

from pyngrok import ngrok
from pyngrok.conf import PyngrokConfig

def server():
    os.system('kill -9 $(pgrep ngrok)')
    ngrok_tunnel = ngrok.connect(443, "tcp", pyngrok_config=PyngrokConfig(monitor_thread=False))
    print("your url is : " + ngrok_tunnel.public_url)
However, this still won't do what you want. You will be returned to the console, but the ngrok process will also be stopped, since it is a child process of Python at this point. To keep the tunnels open, you need to keep the process running.
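To print the link and still keep the tunnel usable, the usual pattern is to block the main thread after connecting. A minimal stdlib sketch (the `keep_alive` helper is a hypothetical name, not a pyngrok API):

```python
import time

def keep_alive():
    # Block the main thread so the process (and the ngrok subprocess it
    # owns) stays up; Ctrl+C breaks the loop and allows a clean exit.
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        pass
```

Calling `server()` followed by `keep_alive()` prints the URL and then holds the tunnel open until you press Ctrl+C.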
How to reset Systemd Watchdog using Python?
I'm implementing a watchdog for multi-threaded picture-detection software with many dependencies. Previously, the service started a shell script, but now it starts the Python file directly. However, the watchdog implementation is not functioning correctly. Is there a more effective alternative? The goal is to restart the "Picture Detection Main Application" service if the program gets stuck in a loop for 30 seconds or more.
This is the service file in the systemd folder:
[Unit]
Description=Picturedetection Main application
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
User=user
WorkingDirectory=/home/user/detection/
ExecStart=/usr/bin/python3 /home/user/detection/picturedetection.py
Environment=TF_CUDNN_USE_AUTOTUNE=0
WatchdogSec=30
Restart=always
WatchdogTimestamp=30
[Install]
WantedBy=multi-user.target
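One likely reason the service is killed every 30 seconds: for the watchdog to work, systemd must actually accept the `sd_notify` messages, and with `Type=simple` the `NotifyAccess=` setting defaults to `none`, so `WATCHDOG=1` pings are ignored. `WatchdogTimestamp=` also does not appear to be a valid unit-file setting. A hedged sketch of a corrected `[Service]` section (everything else taken from the unit above):

```ini
[Service]
Type=notify
User=user
WorkingDirectory=/home/user/detection/
ExecStart=/usr/bin/python3 /home/user/detection/picturedetection.py
Environment=TF_CUDNN_USE_AUTOTUNE=0
# The service must send READY=1 once at startup, then WATCHDOG=1
# at least once every 30 seconds.
WatchdogSec=30
Restart=always
NotifyAccess=main
```

With `Type=notify`, systemd waits for the service's `READY=1` before considering it started.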
This is the Python main I currently use:
import sys
import syslog
from multiprocessing import Queue
from DetectionDefines import Detection_Version as OV
import time
print("OPTICONTROL START")
syslog.syslog(syslog.LOG_NOTICE, "PICTUREDETECTION START --- Version " + OV.major + "." + OV.minor)
from config.Config import Config as conf
from prediction.ImageFeed import ImageFeed
from prediction.ResultHandler import ResultHandler
from dataflow.CommServer import CommServer
from dataflow.FTLComm import FTLComm
from dataflow.MiniHTTPServer import MiniHTTPServer
from dataflow.GraphDownloader import GraphDownloader
from tools.Logger import Logger
from dataflow.FTPHandler import FTPHandler
from tools.FileJanitor import FileJanitor
from prediction.PredictionPipeline import PredictionPipeline
# Watchdog test
import os
import systemd.daemon
# Communication
CommServer().start()
FTLComm()
#Experimental not working right now. Probably even delete
test = Logger("<WATCHDOGWATCHDOG> ")
def WatchdogReset():
    test.notice("WATCHDOG has been reset")
    with open("/dev/watchdog", "w") as f:
        f.write("1")
#End of Experimental
# Other subprocesses
MiniHTTPServer().start()
FileJanitor().start()
FTPHandler().start()
GraphDownloader().start()
# Detection subprocesses
img_queue = Queue(maxsize = 1)
rst_queue = Queue(maxsize = conf.result_buffer)
ImageFeed(img_queue).start()
ResultHandler(rst_queue).start()
while True:
    # CUDA / TensorFlow need to be in the main process
    PredictionPipeline(img_queue, rst_queue).predict()
    systemd.daemon.notify("WATCHDOG=1")
Additionally, I want to ensure that the program restarts if it gets stuck in an infinite loop. However, this is a multi-threaded program. Will it still be able to restart while other processes are running?
I attempted to activate the watchdog using this method, but it seems to have no effect: the service restarts every 30 seconds regardless. I considered the possibility of an error in my implementation, but querying via "os" didn't resolve the issue either.
I also attempted to use a custom "FileWatchdog" that sends error messages and restarts the service by executing a shell script. However, this requires superuser rights, and I don't want to distribute software with a hardcoded password. I also believe this solution would pose a challenge in the long term.
I found the solution.
Instead, I used the sdnotify library, which you can install via pip. Then I checked whether the current processes are still alive. Like this:
import sdnotify

from tools.Logger import Logger
from tools import Watchdog

test = Logger("<WATCHDOGWATCHDOG> ")
n = sdnotify.SystemdNotifier()
n.notify("READY=1")

imdfg = ImageFeed(img_queue)
rslt = ResultHandler(rst_queue)
imdfg.start()
rslt.start()

while True:
    # Only pet the watchdog while the image feed process is alive
    if Watchdog.check(imdfg):
        n.notify("WATCHDOG=1")
        test.notice("OPTICONTROL_WATCHDOG Reset")
    time.sleep(2)
# Watchdog file
def check(prc):
    # The process counts as healthy while it is still running
    return prc.is_alive()
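The liveness check can be sanity-tested with only the standard library, no systemd or sdnotify required; this sketch mirrors the helper above against a short-lived process:

```python
import time
from multiprocessing import Process

def worker():
    time.sleep(0.5)  # stand-in for real detection work

def check(prc):
    # Same idea as the Watchdog helper: healthy while still running.
    return prc.is_alive()

p = Process(target=worker)
p.start()
print(check(p))  # True while the worker is sleeping
p.join()
print(check(p))  # False once it has exited
```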
I have the below code to download a file inside a loop:
import wget

try:
    wget.download(url)
except:
    pass
But if the Internet goes down, wget.download() never returns, so my whole loop gets stuck.
I want to retry the same download if the Internet goes down, so I need to know when an error happens.
How can I mitigate this?
One simple solution is to move your download code into a separate thread that can be interrupted.
You can use Python's Thread and Timer classes to achieve it.
from threading import Thread, Timer
from functools import partial
import urllib.request

def check_connectivity(t):
    try:
        urllib.request.urlopen("http://google.com", timeout=2)
    except Exception:
        # _Thread__stop() is a private CPython 2 hook; it no longer
        # exists in Python 3, where a different cancellation scheme
        # (e.g. a timeout on the download itself) is needed.
        t._Thread__stop()

class Download(Thread):
    def run(self):
        print("Trying to download file....")
        con = partial(check_connectivity, self)
        while True:
            t = Timer(5, con)  # checks the connectivity after 5 seconds
            t.start()
            # your download code....

def main():
    down = Download()
    down.start()
    down.join()
You can move your main download loop inside the thread's run method and start a timer that checks for network connectivity.
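Since `_Thread__stop()` is gone in Python 3, a simpler, version-independent approach is to put a timeout on the download itself and retry, so a dead connection raises an error instead of blocking forever. A minimal sketch (the `download_with_retry` wrapper and its arguments are hypothetical, not part of the wget package; wget uses urllib internally, so the global socket timeout should apply to it, but verify for your version):

```python
import socket

# A global socket timeout makes blocking network reads fail instead of
# hanging indefinitely when the connection drops.
socket.setdefaulttimeout(10)

def download_with_retry(download, url, retries=3):
    """Call download(url); on a network error (OSError covers socket
    timeouts and connection failures), retry up to `retries` times.
    Returns True on success, False if every attempt failed."""
    for _ in range(retries):
        try:
            download(url)
            return True
        except OSError:
            continue
    return False
```

Usage would then be `download_with_retry(wget.download, url)` inside your loop.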
I'm currently trying to use "taskkill" in Python to kill tasks, but I'm running into several roadblocks. I was wondering if I'm going about this the right way or if there is a better way. Right now the code is not working at all; just a black DOS box appears briefly and goes away. I need to be able to pass variables into the taskkill command, because I need to use it on my network over a distance. Basically it needs a username (for elevated privileges), the IP address of the computer I'm connecting to, and the task that needs killing. The password for the username will need to be entered manually.
This is the relevant part of the code:
import tkinter as tk
from tkinter import ttk
from tkinter import *
from socket import *
from subprocess import call
import platform
import os
import sys
import getpass
def kill():
    ipad = ipaddress.get()
    tsk = tskille.get()
    # Admin accounts are represented by an "A" in front of the actual username.
    # Note: os.system() expects a command string, not a list.
    os.system(['taskkill', '/u', 'domain\\A' + usrnme, '/s ' + ipad, '/F ', '/IM ' + tsk])
    return

usrnme = getpass.getuser()
tskille = StringVar()  # label that can be overwritten manually
tskille.set("notepad.exe")  # just to put something in the box
ipaddress = StringVar()  # other code determines the IP address of a PC
ipaddress.set("0.0.0.0")
Initially I tried using "call" or "runCmd" and finally "os.system", but I'm definitely doing something wrong. Any help would be appreciated.
To terminate a process using PowerShell, you can either use the WMI interface or use the Stop-Process cmdlet, which comes by default with PowerShell.
OK, so I found out it doesn't like the username part: it will not prompt for a password. But this part works as intended:
def kill():
    usrnme = getpass.getuser()
    ipad = ipaddress.get()
    tsk = tskille.get()
    os.system("taskkill.exe /s %s /F /IM %s" % (ipad, tsk))
    return
getpass.getuser() works as well; basically I just wanted it to be dynamic, so that if others use it, it will pick up their username and add:
os.system("taskkill.exe /u:domain\A%s /s %s /F /IM %s" % (usrnme, ipad, tsk))
But this code does not work; it causes the DOS window to open and close immediately.
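Two things worth checking: `taskkill`'s user switch is `/U domain\user` (with `/P` given no value to be prompted for the password), not `/u:`, and passing an argument list to `subprocess.run` avoids quoting problems entirely. The sketch below only builds the command; the variable names and the "A" domain-prefix convention follow the question and are assumptions:

```python
import subprocess

def build_taskkill_cmd(user, ip, task):
    # Admin accounts are represented by an "A" in front of the username
    # (per the question's convention). /P with no value makes taskkill
    # prompt for the password on the console.
    return ["taskkill", "/S", ip, "/U", "domain\\A" + user, "/P", "/F", "/IM", task]

cmd = build_taskkill_cmd("jdoe", "192.168.0.12", "notepad.exe")
# On Windows this would then be executed with:
#   subprocess.run(cmd)
print(cmd)
```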
I'm writing python unit tests that test against a REST API that needs to be running as another process.
The REST server is a tomcat application that I call from the shell to run in development mode, so what I am looking to do in the python test is:
Start the server, return when the server is up.
Run unit tests
Send the server Ctrl+D so it shuts down gracefully.
Is there a way to use a single point of entry for python so that the server starts and unit tests run all from one python script call?
I've looked at Python's subprocess and multithreading modules, but I still don't quite see how to get there from here.
For those that are familiar, this is an Atlassian JIRA plugin we are developing, so the actual shell command is "atlas-run".
Since no one has offered any code for this problem, I would do something like the following. It turns out pexpect is very powerful, and you don't need the signal module.
import os
import sys
import pexpect

def run_server():
    server_dir = '/path/to/server/root'
    current_dir = os.path.abspath(os.curdir)
    os.chdir(server_dir)
    server_call = pexpect.spawn('atlas-run')
    # expect() returns the index of the matched pattern (0 or 1 here)
    server_response = server_call.expect(['Server Error!', 'Server is running!'])
    os.chdir(current_dir)
    if server_response:
        return server_call  # return the spawn object so we can shut down later
    else:
        print('Error starting the server: %s' % server_call.after)
        sys.exit(1)

def run_unittests():
    # Several ways to do this: either build a unittest.TestSuite or run
    # from the command line. Here is the second option.
    unittest_dir = '/path/to/tests'
    test_call = pexpect.spawn('python -m unittest discover -s %s -p "*test.py"' % unittest_dir)
    test_call.expect(r'Ran [0-9]+ tests in [0-9.]+s')  # wait for the summary line
    print(test_call.before)  # print the unittest output produced before the summary
    return

def main():
    server = run_server()
    run_unittests()
    server.sendcontrol('d')  # shut down the server

if __name__ == "__main__":
    main()
I need to launch a GUI application, wait for the application to quit, and then start the other processes.
import subprocess

res = subprocess.check_output(["/usr/bin/open", "-a", "/Applications/Mou.app", "p.py"])
print("Finished")
# ... start the other processes
However, the process returns right away without waiting for the Mou.app to finish.
How can I make the Python process wait? I use Mac OS X.
According to the open man page, the -W flag causes open to wait until the app exits.
Therefore try:
import subprocess

res = subprocess.check_output(["/usr/bin/open", "-a", "-W", "/Applications/Mou.app", "p.py"])
print("Finished")
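The key point is that with `-W`, `open` itself does not exit until the app quits, and `check_output` blocks until the launched command exits. That blocking behavior can be demonstrated with any command; a portable sketch using a short Python child in place of `open`:

```python
import subprocess
import sys

# check_output blocks until the child process exits, so the final print
# only runs once the child is done.
out = subprocess.check_output([sys.executable, "-c", "print('app finished')"])
print(out.decode().strip())
print("Finished")
```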