Python watching for process start up?

Is there any way to watch for a new process with name 'X' starting, in Python (ideally) or bash? I know that I can look at running processes, but that is not fast enough for my needs. The only thing I can think of is somehow hooking into new process creation and registering it, but how?
More background: I am part of a CCDC team (http://www.nationalccdc.org/) and am on the blue team. The premise of the competition is to give students a network to defend against professional pen testers, to help the next generation of security experts get better. What I want to do is load this Python script on the Linux boxes and watch for certain commands being run that would likely only be used by the red team, for example the 'chattr' command. Ideally I would like to be able to provide the script with a list of processes to watch. I can figure out that part, but I do not know how to watch for a process spawning.
Any direction is appreciated. Thank you.

I know of no way for a process which does not have root privileges to be notified when a process is started via any means on a fully-running Linux system. If polling isn't fast enough, you're going to have to do some serious hackery.
If you've got root, this is possible. If not, I can't see it.
With root, you could install a system-wide hook around the fork and exec system calls that provides your desired notification. This could be done in the kernel, or as an LD_PRELOAD hack.
This applies not just to Python; even with a C program, I don't know of an "inotify for process creation".

I have not tested this idea, but on Linux each process is given a directory under /proc/<its process ID>/. If you set up an inotify watch for directory creation in /proc, you might be able to track the creation of process directories and then see whether /proc/<dir>/cmdline matches the process you're looking for. This is just a thought; hope it helps!
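To illustrate the cmdline-matching part, here is an untested sketch. Note that procfs does not reliably generate inotify events for new process directories, so this version simply polls /proc on a short interval instead; the WATCHED set and the 0.1-second interval are arbitrary placeholders:

    import os
    import time

    WATCHED = {'chattr'}  # hypothetical set of command names to look for

    def command_name(pid):
        # Return the basename of a PID's command, or None if it already exited.
        try:
            with open('/proc/%s/cmdline' % pid, 'rb') as f:
                argv = f.read().split(b'\0')
            return os.path.basename(argv[0]).decode() if argv and argv[0] else None
        except (IOError, OSError):
            return None

    seen = set(p for p in os.listdir('/proc') if p.isdigit())
    while True:
        current = set(p for p in os.listdir('/proc') if p.isdigit())
        for pid in current - seen:
            name = command_name(pid)
            if name in WATCHED:
                print('watched command started: pid=%s (%s)' % (pid, name))
        seen = current
        time.sleep(0.1)  # very short-lived processes can still slip between scans

Very short-lived commands can still start and exit between two scans, which is exactly the "polling isn't fast enough" problem; catching those reliably is where the root-only hooks described above come in.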

Related

Save a script's variables inside the code and restore them after a reboot

On my VPS I have 4 Python scripts running, and it's been 60 days since I last rebooted it. Now I have to reboot, but if I do, my Python variables and data will be lost, because I don't store them in a file; they exist only as variables inside the running scripts.
My OS is Ubuntu Server 16.04 LTS, and I started my Python scripts with the nohup command so they can run in the background.
Now I need a way to stop my scripts without losing their variables, and to start them again with the same variables and data after I reboot the VPS.
Is there any way I can do this?
Also, I'm sorry for any writing mistakes in my question.
Python doesn't provide any way of doing this.
But you might be able to use CRIU, or a similar tool, to freeze and snapshot the interpreter process. Then, after restart, you can resume the snapshot into a new process that just picks up exactly where you left off.
It may not work.[1] But there's a good chance it will. This is essentially the same thing as a Live Migration in the CRIU docs, except that you're not migrating to a new computer/container/etc., just to the future of the same computer. So, start reading with that page, and follow the links from there.
You should probably test before you commit to it.
* Try it (obviously don't include the system restart, just kill -9 the executable) on a Python script that doesn't do anything important (maybe increment a counter, print it out, sleep for a second, repeat).
* Maybe try it on a script that does similar kinds of stuff to what yours are doing.
* If it's safe to have two copies of one of your programs running at the same time (they're not going to stomp all over each other writing to the same file, or fight over the same socket, or whatever), start a second copy and test dump/kill/resume that.
* Try it on one of your real processes, still without restart.
* Try it on all four.
* Cross your fingers, sacrifice a chicken, and do it for real.
If that doesn't pan out, the only option I can think of is to go through your scripts, manually figure out everything that needs to be saved and how it can be reached from the top-level globals, and do that in the debugger.
Ideally, you'll write a script that will automate accessing and saving all that stuff—plus another one to feed it into a new instance at restart. Then you just pdb the live interpreters and start dumping everything.
This is guaranteed to be a whole lot of work, and not much fun. On the plus side, it is guaranteed to work if you do it right. On the third hand, it's pretty easy to not do it right.
1. If you rely on open files, pipes, sockets, etc., CRIU does about as much as you could do, which is more than you might expect at first, but still not everything you could possibly want… Also, if you're using almost all of your RAM, it can be hard to wedge things back into exactly the same state. And there are probably other possible issues.
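If you do end up scripting that manual save/restore route, a much simplified sketch of the save/load pair might look like the following. This is purely illustrative: the checkpoint path and the counter/results names are hypothetical stand-ins for whatever state your scripts actually hold.

    import pickle

    CHECKPOINT = '/var/tmp/script_state.pickle'  # hypothetical location

    def save_state(**names):
        # Dump the named top-level objects to disk before a planned shutdown.
        with open(CHECKPOINT, 'wb') as f:
            pickle.dump(names, f)

    def load_state():
        # Reload the saved objects after the reboot; empty dict on the first run.
        try:
            with open(CHECKPOINT, 'rb') as f:
                return pickle.load(f)
        except (IOError, OSError):
            return {}

    # at startup:
    saved = load_state()
    counter = saved.get('counter', 0)
    results = saved.get('results', [])

    # just before shutting down (e.g. from a signal handler, or by hand in pdb):
    save_state(counter=counter, results=results)

This only works for state that pickle can serialize; open files, sockets, and the like still have to be reopened and reattached by hand after the restart.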

How do I load up python in the shell from Vim, but not use it right away?

I'm doing TDD, but the system I'm working with takes 6 seconds to get through boilerplate code. This code is not part of my work, nor of my tests (it's Autodesk Maya's headless/batch/CLI Python mode). I've talked to support, and there's no way around the load time, so I thought maybe I could load and initialize Python first in the background, as I code, and then my mapping would simply run nosetests inside that instance when I'm ready. My tests take something like 0.01 seconds, so this should feel instantaneous, which would really help the red/green/refactor cycle.
In short, instead of firing off /path/to/mayapy /path/to/runtests.py /current/buffer/path, Vim would just fire up /path/to/mayapy with the boilerplate stuff from runtests.py, then somehow hold onto that running instance. When I hit my mapping, it would send into that running instance the call to nosetest with the current buffer's path (and then fire up another instance to hold onto while waiting for the next run). How do I hold onto the running instance and call into it later? I'm even considering having a chain of 2 or 3, for the times when I make minor mistakes and rerun 2 seconds later.
Vim-ipython, the excellent work of Paul Ivanov, is an interface between vim and ipython sessions (demo video). This may relieve you of some of the boilerplate of sending buffers to python and waiting on results.
I'm not entirely sure this is exactly what you want, but with a bit of Python and Vim glue code it may be a good step in the right direction, though I'm guessing you'd need to experiment a bit to get a workflow you're happy with.
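If vim-ipython turns out not to fit, another way to sketch the "pre-warmed interpreter" idea is a tiny server script that does the slow start-up once and then runs nosetests on request. Everything here is an assumption rather than anything from Maya's or vim-ipython's documentation: the port number, the commented-out warm-up import, and the choice of nose.run are all placeholders.

    # warm_runner.py, a hypothetical sketch; launch once in the background with
    #   /path/to/mayapy warm_runner.py &
    # then send it a test file path, e.g. from Vim:  :!echo % | nc 127.0.0.1 9999
    import socket
    import nose

    # the expensive boilerplate happens here, once, while you keep coding
    # import maya.standalone; maya.standalone.initialize()   # assumed warm-up step

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('127.0.0.1', 9999))  # hypothetical local port
    server.listen(1)

    while True:
        conn, _ = server.accept()
        test_path = conn.recv(4096).decode().strip()
        if test_path:
            # run the tests inside this already-initialized interpreter; note that
            # state can leak between runs, which is why the question talks about
            # keeping a chain of fresh instances rather than reusing one forever
            nose.run(argv=['nosetests', test_path])
        conn.close()

The chain-of-instances idea from the question is not covered here; you would need a small supervisor that starts a fresh warm_runner after each test run and hands the port over.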

Starting an IPython cluster from the notebook with a delay

Our SGE cluster setup requires there to be a delay between controller and engines starting. If this delay is not there, some of the servers use "old" ipcontroller-client.json files and attempt to connect to previous (and no longer running) controllers. This is an NFS "feature", so to remedy it, I set c.IPClusterStart.delay = 30 in the ipcluster_config.py file and things work well. The controller gets submitted to SGE, has enough time to start and write its json files, and then the engines can start correctly against the newly running controller. However, I'd like to also be able to start the cluster from the notebook. Unfortunately, it appears that this timeout is not used: the controller and engines start up at the same time (as seen with watch qstat), some of the engines connect (because they pick up the new settings from the json file) and some do not (because of NFS).
I ran an strace on the notebook and saw that it's using sge_controller and sge_engines scripts (created by the notebook when you press start) to start these processes.
I'm wondering if there's any way to implement a delay here, as well. It's starting the controller and engines the right way (SGE) so I know it's reading the ipcluster_config.py.
I've Googled around and searched this site, with no luck. Hoping maybe someone can shed some light on the deeper workings of this behavior.
Thanks,
Chris
Well, this is probably too late for the OP, but hopefully it helps someone.
If it is a timeout issue, just set c.EngineFactory.timeout and c.IPEngineApp.wait_for_url_file to some larger times.
If it is due to failure after the first run, it is probably caused by lingering security files (ipcontroller-engine.json and ipcontroller-client.json), which should be deleted from the relevant IPython profile; IPython.utils.path.get_security_file will give you the full paths. To automate this and make it somewhat less painful, the deletion step can be tacked on to the beginning of the same profile's ipcluster_config.py.
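For example, an untested sketch of what that might look like, assuming the two-argument form get_security_file(filename, profile) from the IPython versions this answer targets; the profile name 'sge' and the timeout values are placeholders for whatever your setup needs:

    # at the top of the profile's ipcluster_config.py
    import os
    from IPython.utils.path import get_security_file

    # remove stale connection files from the previous run before the controller
    # writes fresh ones, so engines on NFS cannot pick up old copies
    for name in ('ipcontroller-engine.json', 'ipcontroller-client.json'):
        try:
            os.remove(get_security_file(name, profile='sge'))
        except OSError:
            pass

    c = get_config()  # already present in the generated config file

    # give the engines more time to find the controller's freshly written files
    c.EngineFactory.timeout = 30
    c.IPEngineApp.wait_for_url_file = 60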
These changes alone were enough for me to get the cluster running with the notebook easily.
If neither of these solve the problem, there are some other thoughts ( http://mail.scipy.org/pipermail/ipython-user/2011-November/008741.html ).

Python: Running Daemon Processes in Windows 7

I have a program that scrapes certain data from certain web pages and, when those pages change, acts accordingly.
How would one set up the program so it continues to run in the background?
I don't need any specifics; I'm just really confused about this concept and would appreciate whatever help anybody has to offer.
start path-to-pythonw.exe your-code.py
pythonw means "run without a console window".
start means "launch it in the background".
If your Python is installed system-wide, you can probably just run start your-code.pyw, since the .pyw extension is associated with pythonw.exe.
Remember that you cannot use print (to stdout) in this case.
If you want to be able to just start your process and have it background itself and do a few more typical things that "daemon" processes do in Unix, look here: How do you create a daemon in Python?
There is no concept of "background" in Windows. But the UNIX shell concept of a background process can be reasonably emulated by running your Python script as a Windows service. There are a couple of suggestions in this question: Is it possible to run a Python script as a service in Windows? If possible, how?
For casual use, I suggest that you learn how to use srvany from the second answer.
You simply need to leave your program running! Please google "python daemon" and see how to implement a persistent background process in Python.
Now, you cannot know when a website changes unless you poll it. If the website is well designed, the page you are trying to poll will have a "Last-Modified" header; you can make a "HEAD" request every so often (be nice: don't poll like crazy) and act when Last-Modified is newer than the one on record. If the site is not well designed, it will not have a reliable Last-Modified or ETag header, in which case you will have to fetch the page, parse it manually, and check for changes yourself.
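A rough sketch of that polling loop, using only the Python 3 standard library; the URL and the five-minute interval are placeholders, and it just compares the Last-Modified strings for any change rather than parsing the dates:

    import time
    import urllib.request

    URL = 'http://example.com/page.html'  # placeholder page to watch
    POLL_SECONDS = 300                    # be nice: don't poll like crazy

    last_seen = None
    while True:
        req = urllib.request.Request(URL, method='HEAD')
        with urllib.request.urlopen(req) as resp:
            modified = resp.headers.get('Last-Modified')
        if modified and modified != last_seen:
            if last_seen is not None:
                print('page changed, time to scrape it again')  # real handling goes here
            last_seen = modified
        time.sleep(POLL_SECONDS)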
Cheers.

auto detect file in a folder

Sorry, I wasn't sure how best to word this question.
My scenario is that I have some Python code (on a Linux machine) that uses an XML file to acquire its arguments for performing a task. On completion of the task it disposes of the XML file and waits for another XML file to arrive, to do it all over again.
I'm trying to find out the best way to be alerted an xml file has arrived in a specified folder.
One way would be to continually monitor the folder in the Python code, but that would mean a lot of excess resources used while waiting for something to turn up (which may be as little as a few times a day). Another way would be to set up a cron job, but its efficiency wouldn't be any better than monitoring from within the code. An option I was hoping was possible would be to set up some sort of interrupt that would alert the code when an XML file appeared.
Any thoughts?
Thanks.
If you're looking for something "easy" to just run a specific script when new files arrive, the incron daemon provides a very handy combination of inotify(7) and cron(8)-like support for executing programs on demand.
If you want something a little better integrated into your application, or if you can't afford the constant fork(2) and execve(2) of the incron approach, then you should probably use the inotify(7) interface directly in your script. The pyinotify module can integrate with the underlying inotify(7) interfaces.
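For the pyinotify route, a minimal sketch might look like this. The watch directory is a placeholder, and it reacts to files being closed after writing or moved into the folder, which covers the usual ways an XML file "arrives":

    import pyinotify

    WATCH_DIR = '/path/to/incoming'  # placeholder folder to watch

    class XmlHandler(pyinotify.ProcessEvent):
        def handle(self, event):
            if event.name.endswith('.xml'):
                print('new xml file: %s' % event.pathname)  # kick off the task here

        # fired when a file in the folder is closed after being written
        process_IN_CLOSE_WRITE = handle
        # fired when a file is moved into the folder
        process_IN_MOVED_TO = handle

    wm = pyinotify.WatchManager()
    wm.add_watch(WATCH_DIR, pyinotify.IN_CLOSE_WRITE | pyinotify.IN_MOVED_TO)

    # blocks in the kernel until an event arrives, so there is no busy polling
    pyinotify.Notifier(wm, XmlHandler()).loop()

Because the script sleeps until the kernel delivers an event, this addresses the "excess resources" concern from the question: nothing runs between arrivals.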
