Subprocess not receiving SIGINT on ubuntu: asyncio - python

I have written code using asyncio that runs a process and, after a condition, tries to interrupt it safely by sending SIGINT and waiting for it to close. I tested this on an Oracle Cloud server with the Oracle Linux 8 image and it worked as I expected. Once I moved over to my main server running Ubuntu 22.04, everything worked the same except that the process never receives the SIGINT signal. I've tried sending other signals and nothing reaches the process.
Code Example:
async def openProc(...):
    ...
    proc = await asyncio.create_subprocess_shell(STARTCOMMAND,
                                                 stdout=asyncio.subprocess.PIPE,
                                                 stderr=asyncio.subprocess.STDOUT,
                                                 stdin=asyncio.subprocess.PIPE)
    try:
        await stop_request.wait()
        proc.send_signal(signal.SIGINT)  # The signal that works on one OS but not the other
    finally:
        await safe_quit.wait()
        response = await server.sendData({"request": "STATE", "data": "STOPPED", "returncode": proc.returncode})
        logger.info(response)
It just passes straight through proc.send_signal(...) with no error message.
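One likely cause (not confirmed in the post): create_subprocess_shell puts a /bin/sh process between Python and the real program, and send_signal delivers SIGINT to that shell, which may not forward it to its child. On Ubuntu /bin/sh is dash, while on Oracle Linux it is bash, which can explain behaviour differing between the two machines. A minimal sketch of the workaround, assuming the command can be started without a shell (here `sleep 30` stands in for STARTCOMMAND):

```python
import asyncio
import signal

async def run_and_interrupt():
    # create_subprocess_exec spawns the program directly, with no
    # intermediate shell, so the signal lands on the program itself.
    # (Alternatively, keep create_subprocess_shell but prefix the
    # command with `exec` so the shell replaces itself.)
    proc = await asyncio.create_subprocess_exec(
        "sleep", "30",                      # stand-in for STARTCOMMAND
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,
    )
    await asyncio.sleep(0.2)                # give it time to start
    proc.send_signal(signal.SIGINT)         # now reaches `sleep` directly
    return await proc.wait()                # negative returncode = killed by signal
```

With the direct exec, `proc.wait()` returns `-signal.SIGINT`, confirming the child actually received the signal.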

Related

Asyncio - run tasks cyclically and politely stop them with ctrl+C

I am writing a pyModbus server with asyncio, based on this example.
Alongside the server I've got a serial device which I'm communicating with and a server updating task.
One task should check the status of the serial device every 500ms.
The server updating task should check if there are any changes in the status of the serial device and update the info on the server. Moreover, if there is a request waiting on the server it should call another task which will send necessary info to the serial device.
My three questions are:
How should I stop the server politely? For now the app is running only in console so it is stopped by ctrl+c - how can I stop the server without causing an avalanche of errors?
How can I implement tasks to be executed cyclically (let's say I want to refresh the server data every 500ms)? I've found the aiocron module, but as far as I can tell its functionality is a bit limited, as it is intended just for calling functions at intervals.
How can I politely cancel all the tasks before stopping the server (the infinitely, cyclically running ones) when closing the app?
Thanks!
EDIT:
Speaking of running cyclical tasks and cancelling them - is this a proper way to do that? It doesn't raise any errors, but does it clean everything up correctly? (I put this sketch together from a dozen Stack Overflow questions, so I am not sure it makes sense.)
import asyncio

async def periodic():
    try:
        while True:
            print('periodic')
            await asyncio.sleep(1)
    except asyncio.CancelledError as ex:
        print('task1', type(ex))
        raise

async def periodic2():
    try:
        while True:
            print('periodic2')
            await asyncio.sleep(0.5)
    except asyncio.CancelledError as ex:
        print('task2', type(ex))
        raise

async def main():
    tasks = []
    task = asyncio.create_task(periodic())
    tasks.append(task)
    task2 = asyncio.create_task(periodic2())
    tasks.append(task2)
    for task in tasks:
        await task

if __name__ == "__main__":
    try:
        asyncio.run(main())
    except KeyboardInterrupt:
        pass
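The sketch above only ever cancels the tasks implicitly, when KeyboardInterrupt tears down asyncio.run. A hedged variant of the same pattern that cancels the periodic tasks explicitly and then awaits them, so each one gets to run its CancelledError handler before shutdown (the 0.3-second run time stands in for "until Ctrl+C"):

```python
import asyncio
import contextlib

async def periodic(name, interval):
    try:
        while True:
            print(name)
            await asyncio.sleep(interval)
    except asyncio.CancelledError:
        print(name, "cancelled")   # cleanup hook runs here
        raise                      # re-raise so the task is marked cancelled

async def main(run_for=1.0):
    tasks = [
        asyncio.create_task(periodic("task1", 0.3)),
        asyncio.create_task(periodic("task2", 0.2)),
    ]
    await asyncio.sleep(run_for)   # stand-in for "run until told to stop"
    for task in tasks:
        task.cancel()
    for task in tasks:
        # Awaiting after cancel() lets the cleanup code finish;
        # suppress() swallows the re-raised CancelledError here.
        with contextlib.suppress(asyncio.CancelledError):
            await task
    return all(t.cancelled() for t in tasks)
```

Because each task re-raises CancelledError, `t.cancelled()` reports True afterwards, which is the signal that cleanup completed in an orderly way.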

How to get realtime top command output using asyncio.create_subprocess_exec when python code is run as a background service?

I have a python code that runs Linux top command using asyncio.create_subprocess_exec to get the real-time output and send it to a WebSocket connection.
The code works well when I run it in the bash shell, but it doesn't work when I run the code as a background service.
I get this error in the logs b'Error opening terminal: unknown.\r\n'.
I suspect that the background service is not able to execute the Linux top command in a terminal.
My Question is
How do I make the background service access the tty (terminal) to execute the top command and read its output?
I am using Python 3.6 with the asyncio and aiohttp libraries.
code snippet
async def execute_top(self, request):
    cmd = ['top']
    line_count = 0
    ws = request.pop('ws')
    try:
        async for res in self.run_commands(cmd):
            await ws.send_str(res)
    except Exception as e:
        self.log.error("{}".format(e))

async def run_commands(self, command):
    master, slave = pty.openpty()
    self.log.info("tty status : {}".format(sys.stdout.isatty()))
    process = await asyncio.create_subprocess_exec(*command, stdout=slave, start_new_session=True)
    yield os.read(master, 1000)
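The generator above reads from the pty only once and never closes the slave descriptor. A standalone sketch of a continuously-reading variant (assuming Linux; `run_pty_command` is a hypothetical name, and the blocking `os.read` is pushed to the default executor so the event loop stays responsive):

```python
import asyncio
import os
import pty

async def run_pty_command(command):
    # Allocate a pseudo-terminal so programs like `top`, which refuse to
    # run without a terminal, work even under a background service.
    master, slave = pty.openpty()
    process = await asyncio.create_subprocess_exec(
        *command, stdout=slave, stderr=slave, start_new_session=True)
    os.close(slave)  # the child holds its own copy of the slave end
    loop = asyncio.get_running_loop()
    try:
        while True:
            # os.read blocks, so run it in the default thread executor.
            data = await loop.run_in_executor(None, os.read, master, 1024)
            if not data:
                break
            yield data.decode(errors="replace")
    except OSError:
        pass  # EIO: the child exited and the pty closed
    finally:
        os.close(master)
        await process.wait()
```

The `async for res in ...` consumer from the question works unchanged against this generator; each chunk can be forwarded to the WebSocket as it arrives.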

How to kill a twisted websocket server programmatically

How do you kill a websocket server programmatically? I'll be deploying this server to production alongside other things, and I'd like to build a single Python script that sends a kill signal to everything. I cannot figure out how to kill this thing without a user keyboard interrupt or a kill -9.
sys.exit() didn't work.
psutil and terminate() didn't work either
import os
import psutil
current_system_pid = os.getpid()
ThisSystem = psutil.Process(current_system_pid)
ThisSystem.terminate()
I'm out of ideas. For now I'm killing it on the command line with kill -9.
When I kill it in various ways, I tend to see the message below, but the script is still running:
2020-12-12 12:24:54-0500 [autobahn.twisted.websocket.WebSocketServerFactory] (TCP Port 8080 Closed)
2020-12-12 12:24:54-0500 [-] Stopping factory <autobahn.twisted.websocket.WebSocketServerFactory object at 0x110680f28>
autobahn install:
pip install autobahn[twisted]
Code:
from autobahn.twisted.websocket import WebSocketServerProtocol, WebSocketServerFactory
import sys
from twisted.python import log
from twisted.internet import reactor

class MyServerProtocol(WebSocketServerProtocol):
    def onConnect(self, request):
        print("Client connecting: {0}".format(request.peer))

    def onOpen(self):
        print("WebSocket connection open.")

    def onMessage(self, payload, isBinary):
        print("Text message received: {0}".format(payload.decode('utf8')))
        # echo back message verbatim
        # self.sendMessage(payload, isBinary)

    def onClose(self, wasClean, code, reason):
        print("WebSocket connection closed: {0}".format(reason))

def StopWebsocketServer():
    PrintAndLog_FuncNameHeader("Begin")
    reactor.stop()
    PrintAndLog_FuncNameHeader("End")

if __name__ == '__main__':
    # TODO remove the logging that came in the example
    log.startLogging(sys.stdout)
    factory = WebSocketServerFactory("ws://127.0.0.1:8080")
    factory.protocol = MyServerProtocol
    # note to self: if using putChild, the child must be bytes...
    reactor.listenTCP(Port_ws, factory)
    reactor.run()
Solution using #Jean-Paul Calderone's answer:
import os
import signal
os.kill(os.getpid(), signal.SIGKILL)
I have an external python script that sends a kill signal to each of my python scripts. The kill signal is simply the existence of a file that every script knows to look for. Once that kill signal appears, each script knows it has x seconds before it will be killed. This way they have a few seconds to gracefully finish something.
twisted.internet.reactor.stop() is how you cause the reactor to shut down. This is usually what results in a Twisted-based program exiting (though of course it doesn't necessarily have to, if the program does more things after the reactor shuts down - but this is uncommon).
However, it sounds like you don't want to know what Python code to run inside the process to end it. You want to know what some other process can do to your Twisted-based process to make it exit. You gave two solutions - KeyboardInterrupt and SIGKILL. You didn't mention why either of these two solutions is inappropriate. They seem fine to me.
If you're uncomfortable with SIGKILL (which you shouldn't be, after all, your program might meet an untimely demise for many reasons and you should be prepared to deal with this) then what you might have overlooked about KeyboardInterrupt is that it is merely the exception that is raised inside a Python program by the default SIGINT handler.
If you send SIGINT to a Twisted-based process then, under normal usage, this will stop the reactor and allow an orderly shutdown.
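Building on that point, the external kill script from the question could send SIGINT first for an orderly reactor shutdown and only fall back to SIGKILL. A minimal sketch (the helper name and grace period are made up; note that `os.kill(pid, 0)` reports a zombie as alive, so this is meant to run from a separate process, as in the question, not from the server's own parent):

```python
import os
import signal
import time

def stop_gracefully(pid, grace_seconds=5.0):
    # Hypothetical helper: SIGINT triggers Python's default handler
    # (KeyboardInterrupt), which Twisted turns into a clean
    # reactor.stop(); SIGKILL is only the last resort.
    try:
        os.kill(pid, signal.SIGINT)
        time.sleep(grace_seconds)     # give the reactor time to shut down
        os.kill(pid, signal.SIGKILL)  # harmless if the process already exited
    except ProcessLookupError:
        pass                          # process already gone
```

This keeps the existing file-based kill signal idea intact; the helper is just what the watching script runs once the deadline passes.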

discord.py on_message stops working after disowning linux job & disconnecting

I'm writing a Discord bot in Python and it runs fine from IntelliJ and even from the terminal.
The problem starts when I try to let it run on a Linux server while not being connected to it.
# Called when a message is created and sent to a server.
# Parameters: message – A Message of the current message.
async def on_message(self, message):
    print('Message from {0.author}: {0.content}'.format(message))
    if message.author == self.user:
        return
    try:
        await self.serverLog.on_message(message)
    except Exception as e:
        logger.exception(e)
    try:
        await self.werwolfBot.on_message(message)
    except Exception as e:
        logger.exception(e)
I start the bot via the command line:
cd WerwolfBot
python3.6 -m werwolf &
disown
While still connected via PuTTY, on_message and all other events trigger. When I disconnect the SSH connection to the Linux server, from that moment on it triggers other events like on_voice_state_update, but not on_message.
I expected that I could let the bot run disowned and it would still work. But it only works for events other than on_message.
See this answer on Unix SE about what disown actually does. Here's the part relevant to your question:
However note that it still is connected to the terminal, so if the terminal is destroyed (which can happen if it was a pty, like those created by xterm or ssh, and the controlling program is terminated, by closing the xterm or terminating the SSH connection), the program will fail as soon as it tries to read from standard input or write to standard output.
So your on_message fails as soon as the print tries to write to stdout.
I can think of a couple of solutions you could try:
Use nohup instead (this is the most turnkey solution)
Redirect stdout (and maybe stderr) to some other file
print to someplace other than stdout by passing a file= argument
Use logging to handle your logs and don't print them to stdout (See Setting Up Logging)

asyncssh - how to create listener while still getting user inputs

I am working with network equipment, writing a program that uses asyncssh to SSH into a network device.
The network device sometimes sends updates over the session via broadcast.
What I am trying to do is make a listener that listens on stdout while still being able to get user input (probably via some sort of REST API functions).
Is it possible to do so?
This stack overflow answer shows you how to listen to stdin async:
https://stackoverflow.com/a/35514777/10840818
Write some code. What are you trying to do? The following is the example from the asyncSSH docs. Is your question about asyncSSH or something else?
import asyncio, asyncssh, sys

async def run_client():
    async with asyncssh.connect('localhost') as conn:
        result = await conn.run('echo "Hello!"', check=True)
        print(result.stdout, end='')

try:
    asyncio.get_event_loop().run_until_complete(run_client())
except (OSError, asyncssh.Error) as exc:
    sys.exit('SSH connection failed: ' + str(exc))
If you need a server process look at sanic and set up a rest API instead of stdin for communicating.
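Independent of asyncssh itself, the concurrent-listener shape the question asks for can be sketched with plain asyncio: one task drains the session's output stream while another keeps feeding input. Here an asyncio.Queue stands in for the SSH stdout, and the names are illustrative only:

```python
import asyncio

async def listener(stream: asyncio.Queue, seen: list):
    # Drain the stream forever; in the real program each item would be
    # a broadcast update read from the asyncssh session's stdout.
    while True:
        line = await stream.get()
        if line is None:          # sentinel: session closed
            break
        seen.append(line)         # e.g. react to a device update

async def feed_inputs(stream: asyncio.Queue, lines):
    # Stand-in for the user-input side (REST handler, stdin, ...).
    for line in lines:
        await stream.put(line)
        await asyncio.sleep(0)    # yield so the listener can run
    await stream.put(None)

async def main():
    stream: asyncio.Queue = asyncio.Queue()
    seen: list = []
    # Both tasks run concurrently on one event loop: that is the whole
    # trick - listening does not block input handling, and vice versa.
    await asyncio.gather(
        listener(stream, seen),
        feed_inputs(stream, ["update 1", "update 2"]),
    )
    return seen
```

Swapping the queue for the real session stream (and feed_inputs for aiohttp/sanic request handlers) keeps the same structure, so yes, it is possible.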
