I am trying to write a Windows system service. It is meant to be an interface for a POS machine, so ideally I would like to include that code inside the service. However, some experimentation has led me to believe that the service will only execute its basic wait loop and not any additional iteration of my own.
I have another function that I need to call every x seconds. That additional function is a while loop, but I cannot get it and the win32 loop that waits for system calls to play nicely together. I go into greater detail in my code below.
import win32service
import win32serviceutil
import win32event

class PySvc(win32serviceutil.ServiceFramework):
    # net name
    _svc_name_ = "test"
    _svc_display_name_ = "test"
    _svc_description_ = "Protects your computer."

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        # create an event to listen for stop requests on
        self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)

    # core logic of the service
    def SvcDoRun(self):
        rc = None
        # if the stop event hasn't been fired keep looping
        while rc != win32event.WAIT_OBJECT_0:
            # block for 60 seconds and listen for a stop event
            rc = win32event.WaitForSingleObject(self.hWaitStop, 60000)
            ## I want to put an additional function that uses a while loop here.
            ## The service will not work correctly with additional iterations,
            ## inside or outside the above API calls.
            ## Due to the nature of the service and the API call above,
            ## this leads me to have to compile an additional .exe and somehow
            ## call that from the service.

    # called when we're being shut down
    def SvcStop(self):
        # tell the SCM we're shutting down
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        # fire the stop event
        win32event.SetEvent(self.hWaitStop)

if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(PySvc)
My research has shown me that I need to somehow call a .exe from a Windows system service. Does anyone know how to do this? I have tried using os.system and several variants of subprocess calls to no avail; it seems that Windows simply ignores them. Any ideas?
I can't say I'm familiar with Windows development, but on *nix I've found sockets very useful in situations where two things shouldn't be able to talk by definition but you need them to anyway, e.g. making web browsers launch desktop apps, making the clipboard interact with the browser, etc.
In most cases UDP sockets are all you need for a little IPC, and they are trivial to code for in Python. You do have to be extra careful, though: often restrictions are there for a good reason, and you need to really understand a rule before you go breaking it. Bear in mind anyone can send a UDP packet, so make sure the receiving app only accepts packets from localhost, and make sure you sanity-check all incoming packets to protect against local hackers/malware. If the data transmitted is particularly sensitive or the action initiated is powerful, it may not be a good idea at all; only you know your app well enough to say, really.
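To make that concrete, here is a minimal sketch of the receiving side; the port number (9999) and the "launch" message are placeholders, not anything your service requires:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Bind to the loopback interface so only local processes can reach us.
sock.bind(("127.0.0.1", 9999))

while True:
    payload, (host, port) = sock.recvfrom(1024)
    if host != "127.0.0.1":
        continue  # belt and braces: ignore anything not from localhost
    message = payload.decode("utf-8", errors="replace")
    # Sanity-check the message before acting on it.
    if message == "launch":
        print("received launch request")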
I'm building photovoltaic motorized solar trackers. They're controlled by Raspberry Pis running a Python script. The RPis are connected to my public OpenVPN server for remote control and continuous software development. That's working fine. Recently a passionate customer asked me for some sort of telemetry data for his tracker: let's say, its current orientation, measured wind speed, etc. Being new to Python, I'm really struggling with this part.
I've decided to use the socket approach from guides like this. The Python script listens on a socket, and my OpenVPN server, which is also a web server, connects to it using PHP's fsockopen. Python sends telemetry data, PHP makes it user-friendly and displays it on the web. Everything so far works; however, I don't know how to design my Python script around it.
The problem is that my script has to run continuously, and socket.accept() halts its execution while waiting for a connection. I didn't find any obvious solution on the web. Would multi-threading work for this? It sounds a bit like overkill.
Is there a way to run the socket listening asynchronously, like, for example, the pigpio callbacks which I'm using abundantly?
Or alternatively, is there a better way to accomplish my goal?
I tried remotely accessing a status file that my script maintains, but that proved to be extremely involved to set up and prone to errors while the file was being written.
I also tried running a second script. The problem is that then I have no access to the relevant data, or I need to read the aforementioned status file, which leads to the same problems as above.
The relevant bit of code is literally only this:
# Main loop
try:
    while True:
        # Telemetry
        conn, addr = S.accept()
        conn.send(data.encode())
        conn.close()
Best regards.
For a simple case like this I would probably just wrap the socket code into a separate thread.
With multithreading in Python, the Global Interpreter Lock (GIL) means that only one thread executes Python bytecode at a time, so you don't really need to add any further locks around the data if you're just reading the values and don't care that they may also be being updated at the same time.
Your code would essentially read something like:
from threading import Thread

def handle_telemetry_requests():
    # Main loop
    try:
        while True:
            # Telemetry
            conn, addr = S.accept()
            conn.send(data.encode())
            conn.close()
    except:
        # Error handling here (this will cause the thread to exit if any error occurs)
        pass

socket_thread = Thread(target=handle_telemetry_requests)
socket_thread.daemon = True
socket_thread.start()
Setting the daemon flag means that when the main application ends, the thread will also be terminated.
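For completeness, a rough sketch of the setup that the snippet above assumes (it would go before starting the thread); the port, read_telemetry() and do_tracking_work() are placeholders for your own code:

import socket

S = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
S.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
S.bind(("0.0.0.0", 5005))  # placeholder port
S.listen(1)

data = ""

# ... start socket_thread as above, then keep doing the tracker's real work:
while True:
    data = read_telemetry()   # hypothetical: current orientation, wind speed, etc.
    do_tracking_work()        # hypothetical: the tracker's normal control loop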
Python does provide the asyncio module, which may provide the callbacks you're looking for (though I don't have any experience with it).
Other options are to run a Flask server in the Python app, which will handle the sockets for you, so you can just code the endpoints that serve the data. Or think about using an MQTT broker: the current data can be published to it, and other apps can subscribe to updates.
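A rough sketch of the Flask variant; the endpoint name, port, and the get_orientation()/get_wind_speed() helpers are all placeholders:

from threading import Thread

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/telemetry")
def telemetry():
    # Placeholder accessors for the tracker's current readings.
    return jsonify(orientation=get_orientation(), wind_speed=get_wind_speed())

# Serve in a daemon thread so the main loop keeps running.
Thread(target=lambda: app.run(host="0.0.0.0", port=8080), daemon=True).start()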
I am trying to launch a Steam game on my computer through an SSH connection (into a Win10 machine). When run locally, the following Python call works.
subprocess.run("start steam://rungameid/[gameid]", shell=True)
However, whenever I run this over an SSH connection (either in an interactive interpreter or by invoking a script on the target machine), my Steam client suddenly exits.
I haven't noticed anything in the Steam logs, except that Steam\logs\connection_log.txt contains a logoff and a new session start each time. This is not the case when I run the command locally on my machine. Why is Steam aware of the different sources of this command, and why is this causing the Steam connection to drop? Can anyone suggest a workaround?
Thanks.
Steam is likely failing to launch the application because Windows services, including OpenSSH server, cannot access the desktop, and, hence, cannot launch GUI applications. Presumably, Steam does not expect to run an application in an environment in which it cannot interact with the desktop, and this is what eventually causes Steam to crash. (Admittedly, this is just a guess—it's hard to be sure exactly what is happening when the crash does not seem to appear in the logs or crash dumps.)
You can see a somewhat more detailed explanation of why starting GUI applications over SSH fails when the server is run as a Windows service in this answer by domih to this question about running GUI applications over SSH on Windows.
domih also suggests some workarounds. If it is an option for you, the simplest one is probably to download and run OpenSSH server manually instead of running the server as a service. You can find the latest release of Win32-OpenSSH/Windows for OpenSSH here.
The other workaround that still seems to work is to use schtasks. The idea is to create a scheduled task that runs your command—the Task Scheduler can access the desktop. Unfortunately, this is only an acceptable solution if you don't mind waiting until the next minute at least; schtasks can only schedule tasks to occur exactly on the minute. Moreover, to be safe to run at any time, code should probably schedule the task for at least one minute into the future, meaning that wait times could be anywhere between 1–2 minutes.
There are also other drawbacks to this approach. For example, it's probably harder to monitor the running process this way. However, it might be an acceptable solution in some circumstances, so I've written some Python code that can be used to run a program with schtasks, along with an example. The code depends on the shortuuid package; you will need to install it before trying the example.
import subprocess
import tempfile
import datetime

import shortuuid

def run_with_schtasks_soon(s, delay=2):
    """
    Run a program with schtasks with a delay of no more than
    delay minutes and no less than delay - 1 minutes.
    """
    # delay needs to be no less than 2 since, at best, we
    # could be calling subprocess at the end of the minute.
    assert delay >= 2
    task_name = shortuuid.uuid()
    # The batch file runs the command, deletes the scheduled task,
    # and then deletes itself.
    temp_file = tempfile.NamedTemporaryFile(mode="w", suffix=".bat", delete=False)
    temp_file.write('{}\nschtasks /delete /tn {} /f\ndel "{}"'.format(s, task_name, temp_file.name))
    temp_file.close()
    run_time = datetime.datetime.now() + datetime.timedelta(minutes=delay)
    time_string = run_time.strftime("%H:%M")
    # This is locale-specific. You will need to change this to
    # match your locale. (locale.setlocale and the "%x" format
    # does not seem to work here)
    date_string = run_time.strftime("%m/%d/%Y")
    return subprocess.run("schtasks /create /tn {} /tr {} /sc once /st {} /sd {}".format(
        task_name, temp_file.name, time_string, date_string), shell=True)

if __name__ == "__main__":
    # Runs The Witness (if you have it)
    run_with_schtasks_soon("start steam://rungameid/210970")
After reading A LOT of data on the subject I still couldn't find any actual solution to my problem (there might not be any).
My problem is as follows:
In my project I have multiple drivers working with various hardware (IO managers, programmable loads, power supplies, and more).
Initializing the connection to this hardware is costly (in time), and I can't open and then close the connection for every communication iteration between us.
Meaning I can't do this (assuming programmable_load implements __enter__/__exit__):
# start of code...
with programmable_load(args) as program_instance:
    program_instance.do_something()
# rest of code...
So I went for a different solution:
class programmable_load():
    def __init__(self):
        self.handler = handler_creator()

    def close_connection(self):
        self.handler.close_connection()
        self.handler = None

    def __del__(self):
        if self.handler is not None:
            self.close_connection()
For obvious reasons I don't 'trust' the destructor to actually get called, so I explicitly call close_connection() when I want to end my program (for all drivers).
The problem happens when I abruptly terminate the process, for example when I run in debug mode and quit debugging.
In these cases the process terminates without running any destructors.
I understand that the OS will reclaim all the process's memory at that point, but is there any way to clean up in an organized manner?
And if not, is there a way to make the quit-debugging function pass through a certain set of functions? Does the Python process know it got a quit-debugging event, or does it treat it like a normal termination?
Operating system: Windows
According to this documentation:
If a process is terminated by TerminateProcess, all threads of the
process are terminated immediately with no chance to run additional
code.
(Emphasis mine.) This implies that there is nothing you can do in this case.
As detailed here, signals don't work very well on ms-windows.
As was mentioned in a comment, you could use atexit to do the cleanup. But that only works if the process is asked to close (e.g. a QUIT signal on Linux) and not if it is simply killed (as is likely the case when stopping a debugging session). Similarly, if you force your computer to turn off (e.g. long-press the power button or remove power), it won't be called either. There is no 'solution' to that, for obvious reasons: your program can't expect to be called when the power suddenly goes off or when it is forcefully killed. The point of forcefully killing a process is to definitely kill it now; if it first called your clean-up code, then you could delay that, which defeats the purpose. That is why there are signals, such as to ask your process to stop. This is not Python-specific; the same concept applies across operating systems.
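For the cases where the process does get to shut down normally, a minimal atexit sketch (the cleanup body is a placeholder):

import atexit

def close_all_connections():
    # Placeholder: call close_connection() on every open driver here.
    print("closing hardware connections")

# Runs on normal interpreter shutdown, but not when the process is killed.
atexit.register(close_all_connections)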
Bonus (design suggestion, not a solution): I would argue that you can still make use of the context manager (using with). Your problem is not unique; database connections are usually kept alive for longer as well. It is a question of scope. Move the context further up, to the application level. Then it is clear where the boundary is, and you don't need any magic (you are probably also aware of @contextmanager to make that a breeze).
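A sketch of what moving the context up to the application level could look like, reusing the programmable_load class from the question (run_application is a hypothetical entry point):

from contextlib import contextmanager

@contextmanager
def hardware_session():
    # Open the costly connection once, up front.
    load = programmable_load()
    try:
        yield load
    finally:
        load.close_connection()

# The whole application runs inside a single session, so the
# connection is opened and closed exactly once.
with hardware_session() as load:
    run_application(load)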
I haven't tested this properly as I don't have Wing IDE installed over here, so I can't guarantee this will work, but what about using SetConsoleCtrlHandler? For instance, try something like this:
import sys

import win32api

if __name__ == "__main__":

    def callback(sig, func=None):
        print("Exit handler called!")

    try:
        win32api.SetConsoleCtrlHandler(callback, True)
    except Exception as e:
        print("Captured exception", e)
        sys.exit(1)

    print("Press to quit")
    input()
    print("Bye!")
It'll be able to handle CTRL+C and CTRL+BREAK signals.
Most Python Windows service examples based on win32serviceutil.ServiceFramework use win32event for synchronization.
For example:
http://tools.cherrypy.org/wiki/WindowsService (the example for cherrypy 3.0)
(sorry, I don't have the reputation to post more links, but many similar examples can be googled)
Can somebody clearly explain why the win32 events are necessary (self.stop_event in the above example)?
I guess it's necessary to use win32event because different threads call SvcStop and SvcDoRun? But I'm getting confused; there are so many other things happening: the split between python.exe and pythonservice.exe, system vs. local threads (?), the Python GIL...
From the top of PythonService.cpp:
PURPOSE: An executable that hosts Python services.
This source file is used to compile 2 discrete targets:
* servicemanager.pyd - A Python extension that contains
all the functionality.
* PythonService.exe - This simply loads servicemanager.pyd, and
calls a public function. Note that PythonService.exe may one
day die - it is now possible for python.exe to directly host
services.
What exactly do you mean by system threads vs local threads? You mean threads created directly from C outside the GIL?
PythonService.cpp just relates the service names to callable Python objects and a bunch of properties, like the accepted controls.
For example, the accepted controls from the ServiceFramework:
def GetAcceptedControls(self):
    # Setup the service controls we accept based on our attributes. Note
    # that if you need to handle controls via SvcOther[Ex](), you must
    # override this.
    accepted = 0
    if hasattr(self, "SvcStop"): accepted = accepted | win32service.SERVICE_ACCEPT_STOP
    if hasattr(self, "SvcPause") and hasattr(self, "SvcContinue"):
        accepted = accepted | win32service.SERVICE_ACCEPT_PAUSE_CONTINUE
    if hasattr(self, "SvcShutdown"): accepted = accepted | win32service.SERVICE_ACCEPT_SHUTDOWN
    return accepted
I suppose the events are recommended because that way you can interrupt the interpreter from outside the GIL, even if Python is in a blocking call on the main thread (e.g. time.sleep(10)); you can interrupt at those points from outside the GIL and avoid having an unresponsive service.
Most of the win32 service calls are made between the Python C macros Py_BEGIN_ALLOW_THREADS/Py_END_ALLOW_THREADS, which release the GIL.
It may be that, being examples, they don't have anything otherwise interesting to do in SvcDoRun. SvcStop will be called from another thread, so using an event is just an easy way to do the cross-thread communication to have SvcDoRun exit at the appropriate time.
If there were some service-like functionality that blocks in SvcDoRun, they wouldn't necessarily need the events. Consider the second example in the CherryPy page that you linked to. It starts the web server in blocking mode, so there's no need to wait on an event.
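A sketch of that pattern, assuming a hypothetical make_server() whose start() blocks until stop() is called:

import win32service
import win32serviceutil

class BlockingSvc(win32serviceutil.ServiceFramework):
    _svc_name_ = "blockingtest"
    _svc_display_name_ = "blockingtest"

    def SvcDoRun(self):
        self.server = make_server()  # hypothetical blocking server
        # Blocks here until SvcStop asks the server to shut down,
        # so no win32event is needed.
        self.server.start()

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        # Called from another thread; unblocks start() above.
        self.server.stop()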
I have an application that uses PyQt4 and python-twisted to maintain a connection to another program. I am using "qt4reactor.py" as found here. This is all packaged up using py2exe. The application works wonderfully for 99% of users, but one user has reported that networking is failing completely on his Windows system. No other users report the issue, and I cannot replicate it on my own Windows VM. The user reports no abnormal configuration.
The debugging logs show that the reactor.connectTCP() call is executing immediately, even though the reactor hasn't been started yet! There's no mistaking run order because this is a single-threaded process with 60 sec of computation and multiple log messages between this line and when the reactor is supposed to start.
There's a lot of code, so I am only putting in pseudo-code, hoping that there is a general solution for this issue. I will link to the actual code below it.
import qt4reactor
qt4reactor.install()
# Start setting up main window
# ...
from twisted.internet import reactor
# Separate listener for detecting/processing multiple instances
self.InstanceListener = ListenerFactory(...)
reactor.listenTCP(LISTEN_PORT, self.InstanceListener)
# The active/main connection
self.NetworkingFactory = ClientFactory(...)
reactor.connectTCP(ACTIVE_IP, ACTIVE_PORT, self.NetworkingFactory)
# Finish setting up main window
# ...
from twisted.internet import reactor
reactor.runReturn()
The code is nested throughout the Armory project files. ArmoryQt.py (containing the above code) and armoryengine.py (containing the ReconnectingClientFactory subclass used for this connection).
So, the reactor.connectTCP() call executes immediately. The client code executes the send command, and then connectionLost() gets called immediately. It does not appear to try to reconnect. It also doesn't throw any errors other than connectionLost(). Even more mysteriously, it receives messages from the remote node later on, and this app even processes them! But it believes it's not connected (and the handshake never finished, so the remote node shouldn't be sending messages, but that might be a bug/oversight in that program).
What on earth is going on!? How could the reactor get started before I tell it to start? I searched the code and found no other code that (I believe) could start the reactor.
The API that you're looking for is twisted.internet.reactor.callWhenRunning.
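A minimal sketch of deferring the connections until the reactor is actually running (instance_listener and networking_factory stand in for the factories built in the pseudo-code above):

from twisted.internet import reactor

def start_networking():
    reactor.listenTCP(LISTEN_PORT, instance_listener)
    reactor.connectTCP(ACTIVE_IP, ACTIVE_PORT, networking_factory)

# The callback fires only after the reactor has actually started,
# so neither call can execute early.
reactor.callWhenRunning(start_networking)
reactor.runReturn()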
However, it wouldn't hurt to have less than 60 seconds of computation at startup, either :). Perhaps you should spread that out, or delegate it to a thread, if it's relatively independent?