Python process contest with root privileges

So, I am participating in this Python process competition where every candidate writes a script that should "kill" the others.
The winner is the one whose name appears in the last message in dmesg (the kernel ring buffer).
All scripts will run with root privileges.
There are no actual rules; in fact, you can reboot the system and so on.
All the processes will be running at the same time on a Linux machine.
I'd appreciate some advice and ideas. Thanks!

It appears you would like to write a Python script that can:
1. Catch signals
2. Write to the kernel ring buffer.
Item 1 can be handled with the signal module.
Item 2 can be handled with C code that calls printk() (which must live in kernel space, e.g. a small kernel module driven from Python). A simpler alternative is to write to /dev/kmsg directly.
If you get these pieces working, try writing to the kernel ring buffer when you catch a signal.
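For example, here is a minimal sketch of the /dev/kmsg route (the name string is a placeholder; root is required, which the contest provides, and note that SIGKILL can never be caught):

import signal

MY_NAME = "my_name"  # placeholder: your contest name

def shout(signum, frame):
    # /dev/kmsg feeds the kernel ring buffer, which is what dmesg prints.
    with open("/dev/kmsg", "w") as kmsg:
        kmsg.write("%s wins\n" % MY_NAME)

# Trap the catchable termination signals (SIGKILL cannot be trapped).
for sig in (signal.SIGTERM, signal.SIGINT, signal.SIGHUP):
    signal.signal(sig, shout)

while True:
    signal.pause()  # sleep until a signal arrives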

Related

If I Keyboard Interrupt a running Python script, is there a way to begin again where it left off? (Windows)

I have a long-running script at work (Windows, unfortunately) which I programmed to print the current analysis results if I hit ctrl-c. However, I was curious whether, after doing ctrl-c, I could start the script running again where it left off.
This is actually 3 questions:
- Is it possible to do this without any programming changes? E.g. I accidentally hit ctrl-c and want to retroactively start it where it left off.
- Can I use a command like ctrl-z (only on Mac, I believe) on Windows, and program the script to print results when I issue it?
- What is the best programmatic way of automatically finishing the execution of the line I am on (in a massive .txt file of data) when I use an interrupt command, storing that line number (in a file, maybe), and restarting the program on the next line at the next execution?
Thanks!
(FYI: I'm a novice Pythoner and my script currently takes about 10 min to process 1 million lines. Files I will use in the future will often have 100+ million lines.)
The short answer to your first question is No. Ctrl-C signals the interpreter, which unwinds the stack, presents you with a stack trace, and halts. You can't recover from ctrl-C for the same reason that you can't recover from any other untrapped exception. What you are asking for is a quick way to put Humpty Dumpty back together again.
You can restart a chess game from any point simply by laying out the pieces according to a picture you made before abandoning the game. But you can't easily do that with a program. The problem is that knowing the line number where the program stopped is not nearly enough information to recreate the state of the program at the time: the values of all the variables, the state of the stack, how much of the input it had read, and so forth. In other words, the picture is complicated, and laying out the pieces accurately is hard.
If your program is writing to the Windows console, you can suspend output by pressing ctrl-S and restart it by pressing ctrl-Q. These control characters are holdovers from the days of Teletype machines, but modern terminal emulators still obey them. This is a quick way to do what you want without program changes. Unsophisticated, but maybe good enough to begin with.
And your program will probably run a lot faster if it writes its output to file, for later examination in a text editor, rather than writing directly to the Windows console.
A full-on solution to your problem is something that I hesitate to recommend to a novice. The idea is to split calculation and display into two processes. The calculation process does its thing and feeds its results line by line to the display process. The display process listens to the calculation process and puts the results that it gets on the screen, but can also accept pause and resume commands. What happens while it is in the paused state is a design decision. You can decide either that the calculation process should block (easier option) or that it should buffer its results until the display process is ready to accept them again (harder option).
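For the third sub-question, a minimal checkpoint sketch (process(), the input file, and the state file are placeholders; the interrupted line is simply re-done on the next run) could look like this:

import os

CHECKPOINT = "progress.txt"  # placeholder state file

start = 0
if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT) as f:
        start = int(f.read().strip() or 0)

lineno = start
with open("data.txt") as data:  # placeholder input file
    try:
        for lineno, line in enumerate(data):
            if lineno < start:
                continue       # already processed on an earlier run
            process(line)      # placeholder for the real analysis
    except KeyboardInterrupt:
        pass                   # fall through and record our position

with open(CHECKPOINT, "w") as f:
    f.write(str(lineno))       # resume from this line next time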

Is there a way to determine if multiple users are running a particular Python script?

I have a script which can be run by any user who is connected to a server. This script writes to a single log file, but there is no restriction on who can use it at one time. So multiple people could attempt to write to the log and data might be lost. Is there a way for one instance of the code to know if other instances of that code are running? Moreover, is it possible to gather this information dynamically? (i.e. not allowing data saving for the second user until the first user has completed his/her task)
I know I could do this with a text file. So I could write the user name to the file when they start, then delete it when they finish, but this could lead to errors if either step is missed, such as on an unexpected script termination. So what other reliable ways are there?
Some information on the system: Python 2.7 is installed on a Windows 7 64-bit server via Anaconda. All connected machines are also Windows 7 64-bit. Thanks in advance.
Here is an implementation:
http://www.evanfosmark.com/2009/01/cross-platform-file-locking-support-in-python/
If you are using a lock, be aware that stale locks (ones left behind by hung or crashed processes) can be a bitch. Have a process that periodically searches for locks that were created longer than X minutes ago and frees them.
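A minimal sketch of that idea, assuming every user can write to a shared lock path (the path and timeout below are placeholders); os.O_CREAT | os.O_EXCL makes lock creation atomic:

import os
import time

LOCK = r"\\server\share\script.lock"  # placeholder path all users can write to
STALE_AFTER = 10 * 60                 # consider locks older than 10 min stale

def acquire_lock():
    while True:
        try:
            fd = os.open(LOCK, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)              # we created the file: the lock is ours
            return
        except OSError:
            try:
                # Lock exists; free it if its owner appears to have died.
                if time.time() - os.path.getmtime(LOCK) > STALE_AFTER:
                    os.remove(LOCK)
                    continue
            except OSError:
                pass                  # lock vanished between checks; retry
            time.sleep(1)

def release_lock():
    os.remove(LOCK)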
It just isn't clean to allow multiple users to write to a single log and hope things go OK.
Why don't you write a daemon that handles logs? Other processes connect to a "logging port", and in the simplest case they only succeed if no one else is connected.
You can just modify the echo server example given here (keep a timeout in the server for all connections):
http://docs.python.org/release/2.5.2/lib/socket-example.html
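As a very rough sketch of that single-writer idea (the port number and log file name are assumptions), a backlog of 0 plus serving one accepted connection at a time keeps the second writer out:

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 5140))  # placeholder "logging port"
server.listen(0)

with open("shared.log", "a") as log:
    while True:
        conn, addr = server.accept()  # one writer at a time
        conn.settimeout(30)           # drop hung clients
        try:
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                log.write(data.decode("utf-8", "replace"))
                log.flush()
        except socket.timeout:
            pass
        finally:
            conn.close()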
If you want to know exactly who logged what, and want to make sure no one unauthorized gets in, you can use Unix sockets to restrict it to only certain UIDs/GIDs, etc.
Here is a very good example.
NTEventLogHandler is probably the easiest way for logging to a given Windows machine/server, but it might make more sense to use SyslogHandler if you have a syslog sink on a Unix server.
The catch I can think of with SyslogHandler is that you'll likely need to poke holes through the Windows firewall in order to send packets over the syslog protocol, i.e., 514/TCP ("reliable syslog") and 514/UDP (traditional or "unreliable syslog").
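For the SyslogHandler route, a minimal sketch (the sink address below is a placeholder) might look like:

import logging
import logging.handlers

logger = logging.getLogger("myscript")
logger.setLevel(logging.INFO)

# 192.0.2.10 is a placeholder address; 514/UDP is the traditional transport.
handler = logging.handlers.SysLogHandler(address=("192.0.2.10", 514))
logger.addHandler(handler)

logger.info("user %s finished writing to the shared log", "alice")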

Python socket program with shell script?

I have two machines connected by a switch. I have a popular server application which we can call "SXC_SERVER" on machine A and I interrogate the "SXC_SERVER" with the corresponding application from machine B, which I'll call "SXC_CLIENT". What I am trying to do is two-fold:
firstly, gain the traffic flow of SXC_SERVER and SXC_CLIENT interaction through tcpdump. The interaction between the two is a simple GET and RESPONSE, but I require the traffic traces.
secondly, I want to log the Resident Set Size (RSS) usage of the SXC_SERVER process during each interaction/iteration
Moreover, I don't just need one traffic trace of the communication and one memory usage log of the SXC_SERVER process; otherwise I wouldn't be writing this, because I could go away and do that in ten minutes. In fact, I am aiming to do very many! But let's say here, for simplicity, that I want to do 10.
Since this will be very labor-intensive (it would require me to be at both machines, stopping and starting the SXC_CLIENT-to-SXC_SERVER interrogation, the tcpdump traffic capture, and the RSS memory logging of SXC_SERVER), I want to write an automation script.
But! I am not a programmer, or software guy...(darn)
However, that said, I can imagine a separate client/server program that oversees this automation, which we can call AUTO_SERVER and AUTO_CLIENT. My thoughts are that machine B would run AUTO_CLIENT and machine A would run AUTO_SERVER. The aim of both is to facilitate the automation, i.e. the stopping and starting of the tcpdump, and the memory logging on machine A of the SXC_SERVER process before machine B queries SXC_SERVER with SXC_CLIENT (if you follow me!).
Effectively after one run of the SXC_SERVER-to-SXC_CLIENT GET/RESPONSE interaction I'll end up with:
one traffic capture *.pcap file called n1.pcap
and one memory log dump (of the RSS associated to the process) called n1.csv.
Despite that, I can see a rough method (to the best of my ability) to achieve this, as follows:
Machine A: AUTO_SERVER
BEGIN:
msgReceived = open socket (listen on port *n*)
DO
1. wait for machine B to tell me when to start watch (as in the program) to log the RSS memory usage of the SXC_SERVER process, using the hardcoded command:
watch -n 0.1 'ps -p $(pgrep -d"," -x snmpd) -o rss= | awk '\''{ i += $1 } END { print i }'\'' >> ~/Desktop/mem_logs/mem_i.csv
UNTIL (msgReceived == "FINISH")
quit
END.
Machine B: AUTO_CLIENT
BEGIN:
open socket(new)
for i in 1 to 10, do
1. locally start tcpdump with a hardcoded command, using the relevant filter to capture only the SXC_SERVER-to-SXC_CLIENT traffic, and set the output flag to write all captured traffic to a PCAP file called n*i*.pcap, where *i* is the integer of the current for loop, saving the file in the folder "~/Desktop/test_captures/".
2. send the GET request to SXC_SERVER
3. wait for the RESPONSE reply from SXC_SERVER
4. after receiving the reply, tell machine A to stop the watch command
i++
5. after the loop finishes, send the string "FINISH" to machine A.
END.
As you can see, I assume this would be achieved by the use of a separate, small client/server-like program (which here I've called AUTO_SERVER and AUTO_CLIENT) on both machines. The really rough pseudo-code design should be self-explanatory.
I have found a small client/server socket program located here: http://www.velvetcache.org/2010/06/14/python-unix-sockets which I would think may be suitable if I edit it, but I am not sure how exactly I can feasibly achieve this, which is where you may be able to provide some assistance.
Can Python do this kind of automation?
Can it be done with a single bash script?
Do you think I am on the right path with this?
Or have you any helpful suggestions?
Regards.
You can use Python for this kind of thing, but I would strongly recommend using SSH for the bulk of the work (rather than coding the connection stuff yourself), and then using either a bash script or Python script to launch the tcpdump etc. processes.
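To make that concrete, here is a hedged sketch of the loop on machine B, assuming passwordless SSH to machine A (the host name "machine-a", the network interface, and the sxc_client command are all placeholders):

import os
import subprocess

for i in range(1, 11):
    # Start the local capture of the SXC traffic (placeholder interface/filter).
    pcap = os.path.expanduser("~/Desktop/test_captures/n%d.pcap" % i)
    tcpdump = subprocess.Popen(
        ["tcpdump", "-i", "eth0", "host", "machine-a", "-w", pcap])

    # Start the RSS logger on machine A over SSH; -tt forces a tty so the
    # remote loop is hung up when we terminate the local ssh client.
    mem_logger = subprocess.Popen(
        ["ssh", "-tt", "machine-a",
         "while true; do ps -p $(pgrep -d, -x snmpd) -o rss= "
         ">> ~/Desktop/mem_logs/mem_%d.csv; sleep 0.1; done" % i])

    # One SXC_CLIENT-to-SXC_SERVER GET/RESPONSE interaction (placeholder).
    subprocess.call(["sxc_client", "--get", "machine-a"])

    # Stop both loggers for this iteration.
    mem_logger.terminate()
    tcpdump.terminate()
    tcpdump.wait()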
Your question, however, is a bit too open-ended for stackoverflow - it sounds like you are asking someone to write this program for you, rather than for help with a specific problem.

How to start daemon process from python on windows?

Can my python script spawn a process that will run indefinitely?
I'm not too familiar with python, nor with spawning daemons, so I came up with this:
si = subprocess.STARTUPINFO()
si.dwFlags = subprocess.CREATE_NEW_PROCESS_GROUP | subprocess.CREATE_NEW_CONSOLE
subprocess.Popen(executable, close_fds = True, startupinfo = si)
The process continues to run past python.exe, but is closed as soon as I close the cmd window.
Using the answer Janne Karila pointed out, this is how you can run a process that doesn't die when its parent dies; there is no need to use the win32process module.
DETACHED_PROCESS = 8
subprocess.Popen(executable, creationflags=DETACHED_PROCESS, close_fds=True)
DETACHED_PROCESS is a Process Creation Flag that is passed to the underlying CreateProcess function.
This question was asked 3 years ago, and though the fundamental details of the answer haven't changed, given its prevalence in "Windows Python daemon" searches, I thought it might be helpful to add some discussion for the benefit of future Google arrivees.
There are really two parts to the question:
Can a Python script spawn an independent process that will run indefinitely?
Can a Python script act like a Unix daemon on a Windows system?
The answer to the first is an unambiguous yes: as already pointed out, using subprocess.Popen with the creationflags=subprocess.CREATE_NEW_PROCESS_GROUP keyword will suffice:
import subprocess

independent_process = subprocess.Popen(
    'python /path/to/file.py',
    creationflags=subprocess.CREATE_NEW_PROCESS_GROUP
)
Note that, at least in my experience, CREATE_NEW_CONSOLE is not necessary here.
That being said, the behavior of this strategy isn't quite the same as what you'd expect from a Unix daemon. What constitutes a well-behaved Unix daemon is better explained elsewhere, but to summarize:
Close open file descriptors (typically all of them, but some applications may need to protect some descriptors from closure)
Change the working directory for the process to a suitable location to prevent "Directory Busy" errors
Change the file access creation mask (os.umask in the Python world)
Move the application into the background and make it dissociate itself from the initiating process
Completely divorce from the terminal, including redirecting STDIN, STDOUT, and STDERR to different streams (often DEVNULL), and prevent reacquisition of a controlling terminal
Handle signals, in particular, SIGTERM.
The reality of the situation is that Windows, as an operating system, really doesn't support the notion of a daemon: applications that start from a terminal (or in any other interactive context, including launching from Explorer, etc) will continue to run with a visible window, unless the controlling application (in this example, Python) has included a windowless GUI. Furthermore, Windows signal handling is woefully inadequate, and attempts to send signals to an independent Python process (as opposed to a subprocess, which would not survive terminal closure) will almost always result in the immediate exit of that Python process without any cleanup (no finally:, no atexit, no __del__, etc).
Rolling your application into a Windows service, though a viable alternative in many cases, also doesn't quite fit. The same is true of using pythonw.exe (a windowless version of Python that ships with all recent Windows Python binaries). In particular, they fail to improve the situation for signal handling, and they cannot easily launch an application from a terminal and interact with it during startup (for example, to deliver dynamic startup arguments to your script, say, perhaps, a password, file path, etc), before "daemonizing". Additionally, Windows services require installation, which -- though perfectly possible to do quickly at runtime when you first call up your "daemon" -- modifies the user's system (registry, etc), which would be highly unexpected if you're coming from a Unix world.
In light of that, I would argue that launching a pythonw.exe subprocess using subprocess.CREATE_NEW_PROCESS_GROUP is probably the closest Windows equivalent for a Python process to emulate a traditional Unix daemon. However, that still leaves you with the added challenge of signal handling and startup communications (not to mention making your code platform-dependent, which is always frustrating).
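A minimal sketch of that strategy (the script path is a placeholder, and the pythonw.exe lookup assumes the standard Windows install layout where it sits next to python.exe):

import subprocess
import sys

# Derive pythonw.exe from the current interpreter (assumes both live
# in the same directory, as in a standard Windows install).
pythonw = sys.executable.replace("python.exe", "pythonw.exe")

daemon = subprocess.Popen(
    [pythonw, r"C:\path\to\daemon_script.py"],  # placeholder script path
    creationflags=subprocess.CREATE_NEW_PROCESS_GROUP,
    close_fds=True,
)
print("started detached process with pid", daemon.pid)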
That all being said, for anyone encountering this problem in the future, I've rolled a library called daemoniker that wraps both proper Unix daemonization and the above strategy. It also implements signal handling (for both Unix and Windows systems), and allows you to pass objects to the "daemon" process using pickle. Best of all, it has a cross-platform API:
from daemoniker import Daemonizer

with Daemonizer() as (is_setup, daemonizer):
    if is_setup:
        # This code is run before daemonization.
        do_things_here()

    # We need to explicitly pass resources to the daemon; other variables
    # may not be correct
    is_parent, my_arg1, my_arg2 = daemonizer(
        path_to_pid_file,
        my_arg1,
        my_arg2
    )

    if is_parent:
        # Run code in the parent after daemonization
        parent_only_code()

# We are now daemonized, and the parent just exited.
code_continues_here()
For that purpose you could daemonize your Python process, or, since you are in a Windows environment, run it as a Windows service.
You know I hate posting only web links, but for more information according to your requirements:
A simple way to implement a Windows Service. Read all the comments; they will resolve any doubts.
If you really want to learn more, first read this:
what is a daemon process, or creating-a-daemon-the-python-way
Update: subprocess is not the right way to achieve this kind of thing.

How can I access Ring 0 with Python?

This answer here, stating that the naming of classes in Python is not done because of special privileges, confuses me.
How can I access lower rings in Python?
Is the low-level I/O for accessing lower-level rings?
If it is, which rings I can access with that?
Is the statement "This function is intended for low-level I/O." referring to lower level rings or to something else?
C tends to be a prominent language in OS programming. When there is the OS class in Python, does it mean that I can access C code through that class?
Suppose I am playing with bizarre machine-language code and I want to somehow understand what it means. Are there some tools in Python which I can use to analyze such things? If there is not, is there some way that I could still use Python to control some tool which controls the bizarre machine language? [ctypes suggested in comments]
If Python has nothing to do with the low-level privileged stuff, does it still offer some wrappers to control the privileged parts?
Windows and Linux both use ring 0 for kernel code and ring 3 for user processes. The advantage of this is that user processes can be isolated from one another, so the system continues to run even if a process crashes. By contrast, a bug in ring 0 code can potentially crash the entire machine.
One of the reasons ring 0 code is so critical is that it can access hardware directly. By contrast, when a user-mode (ring 3) process needs to read some data from a disk:
the process executes a special instruction telling the CPU it wants to make a system call
CPU switches to ring 0 and starts executing kernel code
kernel checks that the process is allowed to perform the operation
if permitted, the operation is carried out
kernel tells the CPU it has finished
CPU switches back to ring 3 and returns control to the process
Processes belonging to "privileged" users (e.g. root/Administrator) run in ring 3 just like any other user-mode code; the only difference is that the check at step 3 always succeeds. This is a good thing because:
root-owned processes can crash without taking the entire system down
many user-mode features are unavailable in the kernel, e.g. swappable memory, private address space
As for running Python code in lower rings - kernel-mode is a very different environment, and the Python interpreter simply isn't designed to run in it, e.g. the procedure for allocating memory is completely different.
In the other question you reference, both os.open() and open() end up making the open() system call, which checks whether the process is allowed to open the corresponding file and performs the actual operation.
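To illustrate with the example from that question, both calls below end in the same open() system call; the kernel, not Python, does the ring-0 work and the permission check:

import os

# Low-level: returns a raw integer file descriptor.
fd = os.open("/etc/hostname", os.O_RDONLY)
data = os.read(fd, 100)
os.close(fd)

# High-level: returns a buffered file object, but the same
# open() system call happens underneath.
with open("/etc/hostname") as f:
    data2 = f.read(100)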
I think SimonJ's answer is very good, but I'm going to post my own because from your comments it appears you're not quite understanding things.
Firstly, when you boot an operating system, what you're doing is loading the kernel into memory and saying "start executing at address X". The kernel, that code, is essentially just a program, but of course nothing else is loaded, so if it wants to do anything it has to know the exact commands for the specific hardware it has attached to it.
You don't have to run a kernel. If you know how to control all the attached hardware, you don't need one, in fact. However, it was rapidly realised way back when that there are many types of hardware one might face and having an identical interface across systems to program against would make code portable and generally help get things done faster.
So the function of the kernel, then, is to control all the hardware attached to the system and present it in a common interface, called an API (application programming interface). Code for programs that run on the system don't talk directly to hardware. They talk to the kernel. So user land programs don't need to know how to ask a specific hard disk to read sector 0x213E or whatever, but the kernel does.
Now, the description of ring 3 provided in SimonJ's answer is how userland is implemented - with isolated, unprivileged processes with virtual private address spaces that cannot interfere with each other, for the benefits he describes.
There's also another level of complexity in here, namely the concept of permissions. Most operating systems have some form of access control, whereby "administrators" have total control of the system and "users" have a restricted subset of options. So a kernel request made by an ordinary user to open a file belonging to an administrator should fail under this sort of approach. The user who runs the program forms part of the program's context, if you like, and what the program can do is constrained by what that user can do.
Most of what you could ever want to achieve (unless your intention is to write a kernel) can be done in userland as the root/administrator user, where the kernel does not deny any API requests made to it. It's still a userland program. It's still a ring 3 program. But for most (nearly all) uses it is sufficient. A lot can be achieved as a non-root/administrative user.
That applies to the python interpreter and by extension all python code running on that interpreter.
Let's deal with some uncertainties:
The naming of os and sys I think is because these are "systems" tasks (as opposed to say urllib2). They give you ways to manipulate and open files, for example. However, these go through the python interpreter which in turn makes a call to the kernel.
I do not know of any kernel-mode python implementations. Therefore to my knowledge there is no way to write code in python that will run in the kernel (linux/windows).
There are two types of privileged: privileged in terms of hardware access and privileged in terms of the access control system provided by the kernel. Python can be run as root/an administrator (indeed on Linux many of the administration gui tools are written in python), so in a sense it can access privileged code.
Writing a C extension for Python or controlling a C application from Python would ostensibly mean you are either using code added to the interpreter (userland) or controlling another userland application. However, if you wrote a kernel module in C (Linux) or a driver in C (Windows), it would be possible to load that code and interact with it via kernel APIs from Python. An example might be creating a /proc entry in C and then having your Python application pass messages via read/write to that /proc entry (which the kernel module would have to handle via read/write handlers). Essentially, you write the code you want to run in kernel space and add to/extend the kernel API in one of many ways so that your program can interact with that code.
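From the Python side, talking to such a module is ordinary file I/O. A sketch, assuming a hypothetical /proc/mymodule entry created by your kernel module:

# /proc/mymodule is hypothetical: your kernel module would create it
# and implement the corresponding read/write handlers in C.
with open("/proc/mymodule", "w") as proc:
    proc.write("ping")       # delivered to the module's write handler

with open("/proc/mymodule") as proc:
    print(proc.read())       # produced by the module's read handler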
"Low-level" IO means having more control over the type of IO that takes place and how you get that data from the operating system. It is low level compared to higher level functions still in Python that give you easier ways to read files (convenience at the cost of control). It is comparable to the difference between read() calls and fread() or fscanf() in C.
Health warning: Writing kernel modules, if you get it wrong, will at best result in that module not being properly loaded; at worst your system will panic/bluescreen and you'll have to reboot.
The final point about machine instructions I cannot answer here. It's a totally separate question and it depends. There are many tools capable of analysing code like that I'm sure, but I'm not a reverse engineer. However, I do know that many of these tools (gdb, valgrind) e.g. tools that hook into binary code do not need kernel modules to do their work.
You can use the inpout library: http://logix4u.net/parallel-port/index.php
import ctypes

# Example of strobing data out with nStrobe pin (note - inverted)
# Get 50kbaud without the read, 30kbaud with
read = []
for n in range(4):
    ctypes.windll.inpout32.Out32(0x37a, 1)
    ctypes.windll.inpout32.Out32(0x378, n)
    read.append(ctypes.windll.inpout32.Inp32(0x378))  # Dummy read to see what is going on
    ctypes.windll.inpout32.Out32(0x37a, 0)
print read
[note: I was wrong. usermode code can no longer access ring 0 on modern unix systems. -- jc 2019-01-17]
I've forgotten what little I ever knew about Windows privileges. In all Unix systems with which I'm familiar, the root user can access all ring0 privileges. But I can't think of any mapping of Python modules with privilege rings.
That is, the 'os' and 'sys' modules don't give you any special privileges. You have them, or not, due to your login credentials.
How can I access lower rings in Python?
ctypes
Is the low-level I/O for accessing lower-level rings?
No.
Is the statement "This function is intended for low-level I/O." referring to lower level rings or to something else?
Something else.
C tends to be a prominent language in OS programming. When there is the OS class in Python, does it mean that I can access C code through that class?
All of CPython is implemented in C.
The os module (it's not a class, it's a module) is for accessing OS APIs. C has nothing to do with access to OS APIs. Python accesses the APIs "directly".
Suppose I am playing with bizarre machine-language code and I want to somehow understand what it means. Are there some tools in Python which I can use to analyze such things?
"playing with"?
"understand what it means"? is your problem. You read the code, you understand it. Whether or not Python can help is impossible to say. What don't you understand?
If there is not, is there some way that I could still use Python to control some tool which controls the bizarre machine language? [ctypes suggested in comments]
ctypes
If Python has nothing to do with the low-level privileged stuff, does it still offer some wrappers to control the privileged parts?
You don't "wrap" things to control privileges.
Most OS's work like this.
You grant privileges to a user account.
The OS API's check the privileges granted to the user making the OS API request.
If the user has the privileges, the OS API works.
If the user lacks the privileges, the OS API raises an exception.
That's all there is to it.
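A quick illustration (the file below is a typical root-only example on Linux):

import os

try:
    os.open("/etc/shadow", os.O_RDONLY)  # readable only by root on most systems
except OSError as exc:
    # The kernel checked our privileges and refused the request.
    print("OS API raised:", exc)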
