explain python windows service example (win32serviceutil.serviceframework) - python

Most Python Windows service examples based on win32serviceutil.ServiceFramework use a win32event for synchronization.
For example:
http://tools.cherrypy.org/wiki/WindowsService (the example for cherrypy 3.0)
(Sorry, I don't have the reputation to post more links, but many similar examples can be googled.)
Can somebody clearly explain why the win32 events are necessary (self.stop_event in the above example)?
I guess the win32event is necessary because SvcStop and SvcRun are called from different threads? But I'm getting confused; there are so many other things happening: the split between python.exe and pythonservice.exe, system vs. local threads (?), the Python GIL...

From the top of PythonService.cpp:
PURPOSE: An executable that hosts Python services.
This source file is used to compile 2 discrete targets:
* servicemanager.pyd - A Python extension that contains
all the functionality.
* PythonService.exe - This simply loads servicemanager.pyd, and
calls a public function. Note that PythonService.exe may one
day die - it is now possible for python.exe to directly host
services.
What exactly do you mean by system threads vs. local threads? Do you mean threads created directly from C, outside the GIL?
PythonService.cpp just relates the names to callable Python objects and a bunch of properties, like the accepted controls.
For example, the accepted controls from the ServiceFramework:
def GetAcceptedControls(self):
    # Setup the service controls we accept based on our attributes. Note
    # that if you need to handle controls via SvcOther[Ex](), you must
    # override this.
    accepted = 0
    if hasattr(self, "SvcStop"): accepted = accepted | win32service.SERVICE_ACCEPT_STOP
    if hasattr(self, "SvcPause") and hasattr(self, "SvcContinue"):
        accepted = accepted | win32service.SERVICE_ACCEPT_PAUSE_CONTINUE
    if hasattr(self, "SvcShutdown"): accepted = accepted | win32service.SERVICE_ACCEPT_SHUTDOWN
    return accepted
I suppose the events are recommended because that way you can interrupt the interpreter from outside the GIL, even if Python is in a blocking call on the main thread (e.g. time.sleep(10)); you can interrupt at those points outside the GIL and avoid ending up with an unresponsive service.
Most of the win32 service calls are wrapped between the Python C macros Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS, so they release the GIL while they block.

It may be that, being examples, they don't have anything otherwise interesting to do in SvcDoRun. SvcStop will be called from another thread, so using an event is just an easy way to do the cross-thread communication to have SvcDoRun exit at the appropriate time.
If there were some service-like functionality that blocks in SvcDoRun, they wouldn't necessarily need the events. Consider the second example in the CherryPy page that you linked to. It starts the web server in blocking mode, so there's no need to wait on an event.
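For reference, the pattern those examples use boils down to something like the following. This is only a minimal sketch (untested here; the service name and the periodic "work" are placeholders): SvcStop runs on a different thread than SvcDoRun, so a win32 event is an easy way to tell the main loop to exit, and waiting on the event instead of sleeping means a stop request is noticed immediately.
import win32event
import win32service
import win32serviceutil
import servicemanager

class SketchService(win32serviceutil.ServiceFramework):
    _svc_name_ = "SketchService"            # hypothetical name
    _svc_display_name_ = "Sketch Service"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        self.stop_event = win32event.CreateEvent(None, 0, 0, None)

    def SvcStop(self):
        # Called by the service control manager on another thread.
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.stop_event)

    def SvcDoRun(self):
        # Instead of time.sleep(), wait on the event so a stop request
        # interrupts the wait immediately.
        while True:
            rc = win32event.WaitForSingleObject(self.stop_event, 5000)
            if rc == win32event.WAIT_OBJECT_0:
                break
            servicemanager.LogInfoMsg("still running")  # periodic work goes here

if __name__ == "__main__":
    win32serviceutil.HandleCommandLine(SketchService)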


Guaranteeing calling to destruction on process termination

After reading A LOT of data on the subject I still couldn't find any actual solution to my problem (there might not be any).
My problem is as follows:
In my project I have multiple drivers working with various hardware (IO managers, programmable loads, power supplies and more).
Initializing a connection to this hardware is costly (in time), and I can't open and then close the connection for every communication iteration between us.
Meaning I can't do this (assuming programmable_load implements __enter__/__exit__):
# start of code...
with programmable_load(args) as program_instance:
    program_instance.do_something()
# rest of code...
So I went for a different solution :
class programmable_load():
    def __init__(self):
        self.handler = handler_creator()

    def close_connection(self):
        self.handler.close_connection()
        self.handler = None

    def __del__(self):
        if self.handler is not None:
            self.close_connection()
For obvious reasons I don't 'trust' the destructor to actually get called, so I explicitly call close_connection() when I want to end my program (for all drivers).
The problem happens when I abruptly terminate the process, for example when I run via debug mode and quit debugging.
In these cases the process terminates without running any destructors.
I understand that the OS will reclaim the process's memory at this point, but is there any way to clean up in an organized manner?
And if not, is there a way to make the quit-debugging action pass through a certain set of functions? Does the Python process know it got a quit-debugging event, or does it treat it as a normal termination?
Operating system: Windows
According to this documentation:
If a process is terminated by TerminateProcess, all threads of the
process are terminated immediately with no chance to run additional
code.
(Emphasis mine.) This implies that there is nothing you can do in this case.
As detailed here, signals don't work very well on ms-windows.
As was mentioned in a comment, you could use atexit to do the cleanup. But that only works if the process is asked to close (e.g. a QUIT signal on Linux) and not if it is just killed (as is likely the case when stopping a debugging session). Similarly, if you force your computer to turn off (e.g. long-press the power button or remove power), it won't be called either. There is no 'solution' to that, for obvious reasons: your program can't expect to be called when the power suddenly goes off or when it is forcefully killed. The point of forcefully killing is to definitely kill the process now; if it first called your clean-up code, that could delay things, which defeats the purpose. That is why there are signals that ask your process to stop. This is not Python-specific; the same concept applies across operating systems.
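For the cases where the interpreter does get to shut down normally, a minimal atexit sketch (the driver objects here stand in for the question's hypothetical drivers):
import atexit

drivers = []  # e.g. the programmable_load / power supply instances from the question

def close_all_connections():
    # Runs on normal interpreter shutdown (including sys.exit() and unhandled
    # exceptions), but NOT when the process is killed abruptly.
    for driver in drivers:
        driver.close_connection()

atexit.register(close_all_connections)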
Bonus (design suggestion, not a solution): I would argue that you can still make use of a context manager (using with). Your problem is not unique; database connections are usually kept alive for longer as well. It is a question of scope: move the context further up, to the application level. Then it is clear where the boundary is and you don't need any magic (you are probably also aware of contextlib.contextmanager, which makes that a breeze).
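A minimal sketch of that application-level scope, assuming the programmable_load class from the question:
from contextlib import contextmanager

@contextmanager
def application():
    # Open the expensive connection(s) once, at the top of the program.
    load = programmable_load()
    try:
        yield load
    finally:
        # Runs on normal exit and on exceptions (still not on a hard kill).
        load.close_connection()

# The whole program then runs inside one application-level context:
#
# with application() as load:
#     load.do_something()
#     ...rest of the program...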
I haven't tested this properly, as I don't have Wing IDE installed over here, so I can't guarantee this will work, but what about using SetConsoleCtrlHandler? For instance, try something like this:
import sys
import win32api

if __name__ == "__main__":
    def callback(sig, func=None):
        print("Exit handler called!")

    try:
        win32api.SetConsoleCtrlHandler(callback, True)
    except Exception as e:
        print("Captured exception", e)
        sys.exit(1)

    print("Press enter to quit")
    input()
    print("Bye!")
It'll be able to handle the CTRL+C and CTRL+BREAK signals.

How can I safely run untrusted python code?

Here is the scenario: my website has some unsafe code, generated by website users, that is to run on my server.
I want to disable some names in Python, such as eval, exec, print and so on, to protect my running environment.
Is there a simple way (without changing the python interpreter, my python version is 2.7.10) to implement the feature I described before?
Many thanks.
Disabling names at the Python level won't help, as there are numerous ways around it. See this and this post for more info. This is what you need to do:
For CPython, use RestrictedPython to define a restricted subset of Python.
For PyPy, use sandboxing. It allows you to run arbitrary python code in a special environment that serializes all input/output so you can check it and decide which commands are allowed before actually running them.
Since version 3.8 Python supports audit hooks so you can completely prevent certain actions:
import sys

def audit(event, args):
    if event == 'compile':
        sys.exit('nice try!')

sys.addaudithook(audit)
eval('5')
Additionally, to protect your host OS, use
either virtualization (safer), such as KVM or VirtualBox,
or containerization (much lighter), such as LXD or Docker.
In the case of containerization with Docker you may need to add AppArmor or SELinux policies for extra safety. LXD already comes with AppArmor policies by default.
Make sure you run the code as a user with as little privileges as possible.
Rebuild the virtual machine/container for each user.
Whichever solution you use, don't forget to limit resource usage (RAM, CPU, storage, network). Use cgroups if your chosen virtualization/containerization solution does not support these kinds of limits.
Last but not least, use timeouts to prevent your users' code from running forever.
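A minimal sketch of that last point, assuming the untrusted code has already been written to a file inside the sandboxed environment:
import subprocess
import sys

# Hypothetical: 'untrusted.py' was written into the sandbox beforehand.
try:
    result = subprocess.run(
        [sys.executable, "untrusted.py"],
        capture_output=True,
        text=True,
        timeout=5,  # hard limit on wall-clock run time
    )
    print(result.stdout)
except subprocess.TimeoutExpired:
    print("User code exceeded the time limit and was killed.")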
One way is to shadow the names:
def not_available(*args, **kwargs):
    return 'Not allowed'

eval = not_available
exec = not_available
print = not_available
However, someone smart can always do this:
import builtins
builtins.print('this works!')
So the real solution is to parse the code and not allow the input if it has such statements (rather than trying to disable them).
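A minimal sketch of that kind of static check using the ast module (the set of banned names is only an example):
import ast

BANNED_NAMES = {"eval", "exec", "compile", "__import__", "open"}  # example set

def is_allowed(source):
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        # Reject any use of a banned name and any import statement.
        if isinstance(node, ast.Name) and node.id in BANNED_NAMES:
            return False
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
    return True

print(is_allowed("print('hello')"))  # True
print(is_allowed("eval('1 + 1')"))   # False
Bear in mind that, as the other answer points out, static checks like this are easy to circumvent (for example via getattr on builtins), so they should only supplement sandboxing, not replace it.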

Is it possible to pass a callable programmatically to a python instance (invoked with different permissions)

Assume I have Python code
def my_great_func(an_arg):
    a_file = open("/user/or/root/file", "w")
    a_file.write("bla")
which I want to maintain without paying attention to whether it is invoked with or without privileges. At the same time I don't want to invoke the script with sudo / enforce invocation with sudo (although this would be a legitimate practice) or enable setuid for my Python interpreter (generally a bad idea...). One idea is to start a second instance of the Python interpreter and communicate over processes/pipes. In order to maximize the maintainability of the code, it would be nice to simply pass the callable to that instance (e.g. started with subprocess.Popen and addressed by its PID), like I would pass it to multiprocessing.Process (which I can't use because I can't setuid in the subprocess). I imagine something like
# please consider this pseudo python code
pid = subprocess.Popen(["sudo", "python"]).get_pid()
thelib.pass_callable(pid, target, args)
or even
interpreter_instance = greatlib.Python(target, args)
interpreter_instance.start()
interpreter_instance.wait()
Is that possible and covered by existing libs?
Generally speaking, you don't want any script to run as the superuser unless the script invoking it was itself run as the superuser. This is not only an issue of good practice and secure programming, but also programmer etiquette. If any part of your program requires superuser rights, this intention should be made known before you even begin the program.
With that in mind, the Python thread library should work just fine for this.
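For completeness, the idea sketched in the question (ship the callable to a separately launched interpreter over a pipe) can be approximated with pickle. A minimal sketch, with made-up helper names, assuming the target function is defined in a module the child process can also import:
import pickle
import subprocess
import sys

# The child unpickles (fn, args, kwargs) from stdin, calls fn, and
# pickles the result back to stdout.
_RUNNER = (
    "import pickle, sys; "
    "fn, args, kwargs = pickle.load(sys.stdin.buffer); "
    "pickle.dump(fn(*args, **kwargs), sys.stdout.buffer)"
)

def rapply(fn, *args, **kwargs):
    # Prepend "sudo" to the command list to run the child with elevated rights.
    proc = subprocess.run(
        [sys.executable, "-c", _RUNNER],
        input=pickle.dumps((fn, args, kwargs)),
        stdout=subprocess.PIPE,
        check=True,
    )
    return pickle.loads(proc.stdout)
Note that pickle serializes a function by reference (module plus name), so this only works for module-level functions the child can import; it does not carry closures or local dependencies along.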

How can I access Ring 0 with Python?

This answer, stating that the naming of classes in Python is not done because of special privileges, confuses me.
How can I access lower rings in Python?
Is the low-level io for accessing lower level rings?
If it is, which rings I can access with that?
Is the statement "This function is intended for low-level I/O." referring to lower level rings or to something else?
C tends to be a prominent language in OS programming. When there is the os class in Python, does it mean that I can access C code through that class?
Suppose I am playing with bizarre machine-language code and I want to somehow understand what it means. Are there some tools in Python which I can use to analyze such things? If there is not, is there some way that I could still use Python to control some tool which controls the bizarre machine language? [ctypes suggested in comments]
If Python has nothing to do with the low-level privileged stuff, does it still offer some wrappers to control privileged operations?
Windows and Linux both use ring 0 for kernel code and ring 3 for user processes. The advantage of this is that user processes can be isolated from one another, so the system continues to run even if a process crashes. By contrast, a bug in ring 0 code can potentially crash the entire machine.
One of the reasons ring 0 code is so critical is that it can access hardware directly. By contrast, when a user-mode (ring 3) process needs to read some data from a disk:
1. the process executes a special instruction telling the CPU it wants to make a system call
2. the CPU switches to ring 0 and starts executing kernel code
3. the kernel checks that the process is allowed to perform the operation
4. if permitted, the operation is carried out
5. the kernel tells the CPU it has finished
6. the CPU switches back to ring 3 and returns control to the process
Processes belonging to "privileged" users (e.g. root/Administrator) run in ring 3 just like any other user-mode code; the only difference is that the check at step 3 always succeeds. This is a good thing because:
root-owned processes can crash without taking the entire system down
many user-mode features are unavailable in the kernel, e.g. swappable memory, private address space
As for running Python code in lower rings - kernel-mode is a very different environment, and the Python interpreter simply isn't designed to run in it, e.g. the procedure for allocating memory is completely different.
In the other question you reference, both os.open() and open() end up making the open() system call, which checks whether the process is allowed to open the corresponding file and performs the actual operation.
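To illustrate the difference (which is about the interface you get back, not about privilege rings), a small sketch:
import os

# "Low-level" I/O: os.open() returns a raw file descriptor (an int) and maps
# closely onto the underlying open() system call.
fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"written via the os module\n")
os.close(fd)

# "High-level" I/O: the built-in open() returns a buffered file object, but it
# ends up making the same system call underneath.
with open("example.txt") as f:
    print(f.read())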
I think SimonJ's answer is very good, but I'm going to post my own because from your comments it appears you're not quite understanding things.
Firstly, when you boot an operating system, what you're doing is loading the kernel into memory and saying "start executing at address X". The kernel, that code, is essentially just a program, but of course nothing else is loaded, so if it wants to do anything it has to know the exact commands for the specific hardware it has attached to it.
You don't have to run a kernel. If you know how to control all the attached hardware, you don't need one, in fact. However, it was rapidly realised way back when that there are many types of hardware one might face and having an identical interface across systems to program against would make code portable and generally help get things done faster.
So the function of the kernel, then, is to control all the hardware attached to the system and present it through a common interface, called an API (application programming interface). Programs that run on the system don't talk directly to hardware; they talk to the kernel. So userland programs don't need to know how to ask a specific hard disk to read sector 0x213E or whatever, but the kernel does.
Now, the description of ring 3 provided in SimonJ's answer is how userland is implemented - with isolated, unprivileged processes with virtual private address spaces that cannot interfere with each other, for the benefits he describes.
There's also another level of complexity in here, namely the concept of permissions. Most operating systems have some form of access control, whereby "administrators" have total control of the system and "users" have a restricted subset of options. So a request to the kernel to open a file belonging to an administrator, made by an ordinary user's program, should fail under this sort of approach. The user who runs the program forms part of the program's context, if you like, and what the program can do is constrained by what that user can do.
Most of what you could ever want to achieve (unless your intention is to write a kernel) can be done in userland as the root/administrator user, where the kernel does not deny any API requests made to it. It's still a userland program. It's still a ring 3 program. But for most (nearly all) uses it is sufficient. A lot can be achieved as a non-root/administrative user.
That applies to the python interpreter and by extension all python code running on that interpreter.
Let's deal with some uncertainties:
The naming of os and sys I think is because these are "systems" tasks (as opposed to say urllib2). They give you ways to manipulate and open files, for example. However, these go through the python interpreter which in turn makes a call to the kernel.
I do not know of any kernel-mode python implementations. Therefore to my knowledge there is no way to write code in python that will run in the kernel (linux/windows).
There are two types of privileged: privileged in terms of hardware access and privileged in terms of the access control system provided by the kernel. Python can be run as root/an administrator (indeed on Linux many of the administration gui tools are written in python), so in a sense it can access privileged code.
Writing a C extension for Python, or controlling a C application from Python, would ostensibly mean you are either using code added to the interpreter (userland) or controlling another userland application. However, if you wrote a kernel module in C (Linux) or a driver in C (Windows), it would be possible to load it and interact with it via the kernel APIs from Python. An example might be creating a /proc entry in C and then having your Python application pass messages via read/write to that /proc entry (which the kernel module would have to handle via read/write handlers). Essentially, you write the code you want to run in kernel space and add to/extend the kernel API in one of many ways so that your program can interact with that code (a tiny sketch of the userland side follows).
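For example, if a hypothetical kernel module exposed /proc/mydriver, the Python side would be ordinary file I/O:
# Hypothetical /proc entry exposed by a custom kernel module (Linux).
PROC_ENTRY = "/proc/mydriver"

# Send a command to the kernel module via its write handler...
with open(PROC_ENTRY, "w") as f:
    f.write("do-something\n")

# ...and read its reply via the read handler.
with open(PROC_ENTRY) as f:
    print(f.read())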
"Low-level" IO means having more control over the type of IO that takes place and how you get that data from the operating system. It is low level compared to higher level functions still in Python that give you easier ways to read files (convenience at the cost of control). It is comparable to the difference between read() calls and fread() or fscanf() in C.
Health warning: Writing kernel modules, if you get it wrong, will at best result in that module not being properly loaded; at worst your system will panic/bluescreen and you'll have to reboot.
The final point about machine instructions I cannot answer here. It's a totally separate question and it depends. There are many tools capable of analysing code like that, I'm sure, but I'm not a reverse engineer. However, I do know that many such tools that hook into binary code (e.g. gdb, valgrind) do not need kernel modules to do their work.
You can use the inpout library: http://logix4u.net/parallel-port/index.php
import ctypes

# Example of strobing data out with nStrobe pin (note - inverted)
# Get 50kbaud without the read, 30kbaud with
read = []
for n in range(4):
    ctypes.windll.inpout32.Out32(0x37a, 1)
    ctypes.windll.inpout32.Out32(0x378, n)
    read.append(ctypes.windll.inpout32.Inp32(0x378))  # Dummy read to see what is going on
    ctypes.windll.inpout32.Out32(0x37a, 0)
print read
[note: I was wrong. usermode code can no longer access ring 0 on modern unix systems. -- jc 2019-01-17]
I've forgotten what little I ever knew about Windows privileges. In all Unix systems with which I'm familiar, the root user can access all ring0 privileges. But I can't think of any mapping of Python modules with privilege rings.
That is, the 'os' and 'sys' modules don't give you any special privileges. You have them, or not, due to your login credentials.
How can I access lower rings in Python?
ctypes
Is the low-level io for accessing lower level rings?
No.
Is the statement "This function is intended for low-level I/O." referring to lower level rings or to something else?
Something else.
C tends to be a prominent language in OS programming. When there is the os class in Python, does it mean that I can access C code through that class?
All of CPython is implemented in C.
The os module (it's not a class, it's a module) is for accessing OS API's. C has nothing to do with access to OS API's. Python accesses the API's "directly".
Suppose I am playing with bizarre machine-language code and I want to somehow understand what it means. Are there some tools in Python which I can use to analyze such things?
"playing with"?
"understand what it means"? is your problem. You read the code, you understand it. Whether or not Python can help is impossible to say. What don't you understand?
If there is not, is there some way that I could still use Python to control some tool which controls the bizarre machine language? [ctypes suggested in comments]
ctypes
If Python has nothing to do with the low-level privileged stuff, does it still offer some wrappers to control privileged operations?
You don't "wrap" things to control privileges.
Most OS's work like this.
You grant privileges to a user account.
The OS API's check the privileges granted to the user making the OS API request.
If the user has the privileges, the OS API works.
If the user lacks the privileges, the OS API raises an exception.
That's all there is to it.
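A small illustration of that sequence from Python (assuming a file your user cannot read, such as /etc/shadow on a typical Linux box):
try:
    # The open() system call is made; the kernel checks our privileges.
    with open("/etc/shadow") as f:
        print(f.read())
except PermissionError as exc:
    # The kernel refused; Python surfaces that as an exception.
    print("Denied by the OS:", exc)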

Execute arbitrary python code remotely - can it be done?

I'm working on a grid system which has a number of very powerful computers. These can be used to execute Python functions very quickly. My users have a number of Python functions which take a long time to calculate on workstations; ideally they would like to be able to call some functions on a remote, powerful server but have it appear to be running locally.
Python has an old function called "apply". It's mostly useless these days, now that Python supports the extended-call syntax (e.g. **arguments); however, I need to implement something that works a bit like this:
rapply = Rapply( server_hostname ) # Set up a connection
result = rapply( fn, args, kwargs ) # Remotely call the function
assert result == fn( *args, **kwargs ) #Just as a test, verify that it has the expected value.
Rapply should be a class which can be used to remotely execute some arbitrary code (fn could be literally anything) on a remote server. It will send back the result which the rapply function will return. The "result" should have the same value as if I had called the function locally.
Now let's suppose that fn is a user-provided function. I need some way of sending it over the wire to the execution server. If I could guarantee that fn was always something simple, it could just be a string containing Python source code... but what if it were not so simple?
What if fn has local dependencies? It could be a simple function which uses a class defined in a different module. Is there a way of encapsulating fn and everything fn requires that is not in the standard library? An ideal solution would not require the users of this system to have much knowledge about Python development. They simply want to write their function and call it.
Just to clarify, I'm not interested in discussing what kind of network protocol might be used to implement the communication between the client & server. My problem is how to encapsulate a function and its dependencies as a single object which can be serialized and remotely executed.
I'm also not interested in the security implications of running arbitrary code on remote servers - let's just say that this system is intended purely for research and it is within a heavily firewalled environment.
Take a look at Pyro (Python Remote Objects). It has the ability to set up services on all the computers in your cluster and invoke them directly, or indirectly through a name server and a publish-subscribe mechanism.
It sounds like you want to do the following.
Define a shared filesystem space.
Put ALL your python source in this shared filesystem space.
Define simple agents or servers that will "execfile" a block of code.
Your client then contacts the agent (a REST protocol with POST methods works well for this) with the block of code.
The agent saves the block of code and does an execfile on that block of code.
Since all agents share a common filesystem, they all have the same Python library structure.
We do this with a simple WSGI application we call a "batch server". We have a RESTful protocol for creating and checking on remote requests.
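A minimal sketch of such an agent (hypothetical, with no authentication, so only suitable inside the heavily firewalled environment the question describes):
from wsgiref.simple_server import make_server

def agent(environ, start_response):
    # Read the block of code POSTed by the client.
    length = int(environ.get("CONTENT_LENGTH") or 0)
    source = environ["wsgi.input"].read(length).decode("utf-8")

    # Run it; by convention the posted code leaves its output in 'result'.
    namespace = {}
    exec(source, namespace)
    body = repr(namespace.get("result")).encode("utf-8")

    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, agent).serve_forever()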
Stackless had the ability to pickle and unpickle running code. Unfortunately the current implementation doesn't support this feature.
You could use a ready-made clustering solution like Parallel Python. You can relatively easily set up multiple remote slaves and run arbitrary code on them.
You could use an SSH connection to the remote PC and run the commands on the other machine directly. You could even copy the Python code to the machine and execute it.
Syntax:
cat ./test.py | sshpass -p 'password' ssh user@remote-ip "python - script-arguments-if-any for test.py script"
1) Here "test.py" is the local Python script.
2) sshpass is used to pass the SSH password to the SSH connection.
