Check Idle Time when running as a Windows Service - python

Using win32api.GetLastInputInfo() is an easy way to determine a USER's idle time. However, when running as a SERVICE this does not apply (it always returns 0).
Does anyone know a simple way for a WINDOWS SERVICE to determine last keypress/mouse activity? (or some other effective way to determine idle time)

Not in Python, but the approach proposed in http://www.codeproject.com/KB/DLL/trackuseridle.aspx looks interesting.
[edit]
The code is a standard C DLL, so you should be able to use it with ctypes. The way the C code uses SetWindowsHookEx means you could perhaps rewrite it directly in Python + pywin32. See stackoverflow.com/questions/6458812 and python-forum.org/pythonforum/viewtopic.php?f=2&t=11154 for more on this (the first link covers the kinds of events you can get without writing a DLL, and the second shows a Python example).
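If you go the ctypes route, the loading side is only a few lines. A minimal sketch, assuming the compiled DLL is named IdleTrack.dll and exports a GetIdleTime() function returning idle milliseconds (both names are placeholders for whatever the actual DLL exports):

import ctypes

# Load the compiled hook DLL (the name is a placeholder).
idle_dll = ctypes.WinDLL("IdleTrack.dll")

# Assume an exported function returning the idle time in
# milliseconds; the name and signature are hypothetical.
idle_dll.GetIdleTime.restype = ctypes.c_ulong
print("Idle for %d ms" % idle_dll.GetIdleTime())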

Related

How does one verify that a Python script is a pure math function?

I have a Python project that dynamically loads Python scripts from a set of specified directories and executes an expected function off of them. To harden the security of this application, I would like to analyze the scripts to ensure that they are just pure math functions and, therefore, not interacting with any system components such as the HDD/SSD, the network, a database, etc. Is this even possible to do in Python?
This question has been moved to https://security.stackexchange.com/questions/131283/how-does-one-verify-that-a-python-script-is-a-pure-math-function, but I'm leaving this here, for now, to keep the comments and answers that have already been provided.
It appears that sandboxing to disable things like I/O, network, etc. isn't fully reliable.
Since Python doesn't have any permission system built in, it will be pretty hard to do what you want.
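As a quick illustration of why stripping builtins is not enough: code executed with an empty __builtins__ can still walk the object graph back to dangerous classes (subprocess.Popen here, provided the host process imported subprocess somewhere). A classic escape:

import subprocess  # imported by the host application, not by the sandboxed code

found = []
# Code a naive sandbox might run with all builtins removed:
hostile = """
for cls in ().__class__.__bases__[0].__subclasses__():
    if cls.__name__ == 'Popen':
        found.append(cls)
"""
exec(hostile, {"__builtins__": {}, "found": found})
print(found)  # [<class 'subprocess.Popen'>] -- process execution recovered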

How to reproduce the less/more effect in Python?

I have a long text to display to the user in a console, so that they can make a choice, and I still haven't found how to reproduce the less/more effect with Python.
I'd be grateful for some directions on the proper way to achieve that. After a lot of googling I just understood that I don't know the tools or the appropriate vocabulary to get my way around this.
less and more mainly use terminal capabilities.
The main problem with these programs is that most of them are written in C using termios.h/curses.h, so not much documentation about terminal capabilities exists on the Python side; a good start is the Python termios doc and the GNU C library reference.
After a quick look, the curses wrapper in Python should also be able to do the job.
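Note that if all you need is to page a long text through less/more, the standard library already wraps this for you: pydoc.pager() hands the text to the system pager (honoring the PAGER environment variable) and falls back to a plain built-in pager when none is available. For example:

import pydoc

long_text = "\n".join("line %d of the menu" % i for i in range(500))
# Hands the text to the system pager (less/more, honoring $PAGER)
# and falls back to a simple built-in pager when none is available.
pydoc.pager(long_text)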

Python - How to know if Matlab is complaining about missing free network license

One of the tools I use at work is Matlab; however, due to a server license, only a limited number of users can use it at the same time.
I decided to write a short script that will open Matlab - a simple script with an infinite loop.
Now I want to improve my code a bit, to determine whether Matlab actually opened (otherwise a licence error pops up).
The easy way would be just to check the process in the task manager; unfortunately, even if the error occurs, a Matlab.exe process still shows up (the same as it would for a properly opened program).
So I figured maybe it would be possible to check the name of the window header to determine whether there is an error or not. I tried to find some solution on the internet, with no luck. Could you provide me with some hint? Or maybe some other solution to the problem?
You can check with
$MATLABROOT/etc/lmstat -c yourlicencefile -a
and parse its output to see whether a license is allocated to your computer or not.
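A rough sketch of that check from Python, using subprocess; the MATLABROOT environment variable, the licence file name, and the "start" substring matched in the output are all assumptions to adapt to your FlexLM setup:

import getpass
import os
import subprocess

# Path and licence file are placeholders for your installation.
lmstat = os.path.join(os.environ["MATLABROOT"], "etc", "lmstat")
output = subprocess.check_output(
    [lmstat, "-c", "yourlicencefile", "-a"], text=True)

# FlexLM lists active checkouts one per line; look for our user name.
# The exact wording varies between lmstat versions, so adapt as needed.
user = getpass.getuser()
have_license = any(user in line and "start" in line
                   for line in output.splitlines())
print("license allocated to us:", have_license)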

Is there a statistical profiler for python? If not, how could I go about writing one?

I would need to run a Python script for some random amount of time, pause it, get a stack traceback, and unpause it. I've googled around for a way to do this, but I see no obvious solution.
There's the statprof module
pip install statprof (or easy_install statprof), then to use:
import statprof

statprof.start()
try:
    my_questionable_function()
finally:
    statprof.stop()
    statprof.display()
There's a bit of background on the module from this blog post:
Why would this matter, though? Python already has two built-in profilers: lsprof and the long-deprecated hotshot. The trouble with lsprof is that it only tracks function calls. If you have a few hot loops within a function, lsprof is nearly worthless for figuring out which ones are actually important.
A few days ago, I found myself in exactly the situation in which lsprof fails: it was telling me that I had a hot function, but the function was unfamiliar to me, and long enough that it wasn’t immediately obvious where the problem was.
After a bit of begging on Twitter and Google+, someone pointed me at statprof. But there was a problem: although it was doing statistical sampling (yay!), it was only tracking the first line of a function when sampling (wtf!?). So I fixed that, spiffed up the documentation, and now it’s both usable and not misleading. Here’s an example of its output, locating the offending line in that hot function more accurately:
  %   cumulative      self
 time    seconds   seconds  name
68.75        0.14      0.14  scmutil.py:546:revrange
 6.25        0.01      0.01  cmdutil.py:1006:walkchangerevs
 6.25        0.01      0.01  revlog.py:241:__init__
[...blah blah blah...]
 0.00        0.01      0.00  util.py:237:__get__
---
Sample count: 16
Total time: 0.200000 seconds
I have uploaded statprof to the Python package index, so it’s almost trivial to install: "easy_install statprof" and you’re up and running.
Since the code is up on github, please feel welcome to contribute bug reports and improvements. Enjoy!
I can think of a few ways to do this:
Rather than trying to get a stack trace while the program is running, just fire an interrupt at it, and parse the output. You could do this with a shell script or with another Python script that invokes your app as a subprocess. The basic idea is explained and rather thoroughly defended in this answer to a C++-specific question.
Actually, rather than having to parse the output, you could register a postmortem routine (using sys.excepthook) that logs the stack trace. Unfortunately, Python doesn't have any way to continue from the point at which an exception occurred, so you can't resume execution after logging.
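A minimal sketch of such a post-mortem hook (keeping in mind that, as said, execution cannot resume afterwards):

import sys
import traceback

def log_crash(exc_type, exc_value, tb):
    # Write the traceback somewhere persistent before the process dies.
    with open("crash.log", "a") as fp:
        traceback.print_exception(exc_type, exc_value, tb, file=fp)

sys.excepthook = log_crash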
In order to actually get a stack trace from a running program, you may have to hack the implementation. So if you really want to do that, it may be worth your time to check out pypy, a Python implementation written mostly in Python. I've no idea how convenient it would be to do this in pypy, and I'm guessing it wouldn't be particularly convenient, since it would involve introducing a hook into basically every instruction, which I think would be prohibitively inefficient. Also, I don't think there will be much advantage over the first option, unless it takes a very long time to reach the state where you want to start doing stack traces.
There exists a set of macros for the gdb debugger intended to facilitate debugging Python itself. gdb can attach to an external process (in this case the instance of python which is executing your application) and do, well, pretty much anything with it. It seems that the macro pystack will get you a backtrace of the Python stack at the current point of execution. I think it would be pretty easy to automate this procedure, since you can (at worst) just feed text into gdb using expect or whatever.
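If you'd rather drive gdb from Python than from expect, a sketch using gdb's batch mode could look like this; the macros path and the pid are placeholders, and you need the Python gdb macros installed for pystack to exist:

import subprocess

pid = 12345  # pid of the running Python process (placeholder)
output = subprocess.check_output([
    "gdb", "-batch", "-p", str(pid),
    "-ex", "source /path/to/python-gdb-macros.gdb",  # placeholder path
    "-ex", "pystack",
], text=True)
print(output)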
Python already contains everything you need to do what you described, no need to hack the interpreter.
You just have to use the traceback module in conjunction with the sys._current_frames() function. All you need is a way to dump the tracebacks you need at the frequency you want, for example using UNIX signals, or another thread.
To jump-start your code, you can do exactly what is done in this commit:
Copy the threads.py module from that commit, or at least the stack trace dumping function (ZPL license, very liberal):
Hook it up to a signal handler, say, SIGUSR1
Then you just need to run your code and "kill" it with SIGUSR1 as frequently as you need.
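A minimal self-contained version of that scheme (UNIX-only, since it relies on signals), printing every thread's stack instead of using the threads.py module:

import signal
import sys
import traceback

def dump_stacks(signum, frame):
    # Print the current stack of every running thread.
    for thread_id, stack in sys._current_frames().items():
        print("\n--- thread %d ---" % thread_id, file=sys.stderr)
        traceback.print_stack(stack, file=sys.stderr)

signal.signal(signal.SIGUSR1, dump_stacks)

# Now run the real workload; from a shell, sample it with:
#   kill -USR1 <pid>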
For the case where a single function of a single thread is "sampled" from time to time with the same technique, using another thread for timing, I suggest dissecting the code of Products.LongRequestLogger and its tests (developed by yours truly, while under the employ of Nexedi):
Whether or not this is proper "statistical" profiling, the answer by Mike Dunlavey referenced by intuited makes a compelling argument that this is a very powerful "performance debugging" technique, and I have personal experience that it really helps zoom in quickly on the real causes of performance issues.
To implement an external statistical profiler for Python, you're going to need some general debugging tools that let you interrogate another process, as well as some Python specific tools to get a hold of the interpreter state.
That's not an easy problem in general, but you may want to try starting with GDB 7 and the associated CPython analysis tools.
Seven years after the question was asked, there are now several good statistical profilers available for Python. In addition to vmprof, already mentioned by Dmitry Trofimov in this answer, there are also vprof and pyflame. All of them support flame graphs one way or another, giving you a nice overview of where the time was spent.
Austin is a frame stack sampler for CPython that can be used to make statistical profilers for Python that require no instrumentation and introduce minimal overhead. The simplest thing to do is to pipe the output of Austin into FlameGraph. However, you can also grab Austin's output with a custom application to make your very own profiler targeted at precisely your needs.
Austin TUI is a terminal application that provides a top-like view of everything that is happening inside a running Python application.
Web Austin is a web application that shows you a live flame graph of the collected samples; you can configure the address the application is served on, which allows you to do remote profiling.
There is a cross-platform sampling (statistical) Python profiler written in C called vmprof-python.
Developed by members of the PyPy team, it supports PyPy as well as CPython.
It works on Linux, Mac OS X, and Windows. Being written in C, it has a very small overhead.
It profiles Python code as well as native calls made from Python code.
Also, it has a very useful option to collect statistics about execution lines inside functions in addition to function names.
It can also profile memory usage (by tracing the heap size).
It can be called from the Python code via API or from the console.
There is a Web UI to view the profile dumps: vmprof.com, which is also open sourced.
Also, some Python IDEs (for example PyCharm) have integration with it, allowing you to run the profiler and see the results in the editor.
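For reference, the API route looks roughly like this; vmprof writes its samples to a file descriptor you hand it (check the vmprof docs for the exact signature in your version):

import vmprof

def run_workload():
    # Placeholder for whatever you actually want to profile.
    sum(i * i for i in range(10 ** 6))

with open("profile.dat", "w+b") as fp:
    vmprof.enable(fp.fileno())
    try:
        run_workload()
    finally:
        vmprof.disable()

# Then inspect the dump with the bundled viewer, e.g.:
#   vmprofshow profile.dat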
For Python there is py-spy to dump stack traces. The dumps can then be analyzed with speedscope.

Python interface to dynamic binary instrumentation framework PIN

I work on analyzing binary files, using Python. I have been using debuggers to do dynamic analysis (i.e. running the application and using breakpoints to inspect runtime execution). However, results can be improved if I can use a dynamic binary instrumentation framework like PIN. PIN is developed in C++ and provided as closed source (only DLLs). We write something called a pintool to describe where and what we want to intercept. I want to port PIN functionality into Python so that I can continue using Python. I am aware of ctypes and Boost.Python.
My problem is: in order to use PIN, we write a pintool and run our binary executable with PIN and the pintool (it is like running the application under a JIT). Now, I have no idea if I can use ctypes etc. to import PIN functions and use that Python code for dynamically analyzing the binary. Can you please provide some suggestions or guidelines on how to proceed with this task?
So, in a nutshell, I want to create a Python interface (wrapper) to the PIN framework.
Check out the ProcessTap project. It appears to implement exactly what you are looking for: http://code.google.com/p/processtap/
I was thinking about this recently. While I haven't looked into it, I would approach the problem like this: write a pintool that, upon initialization, starts an embedded Python interpreter and imports a Python module. I'd look at using SWIG to generate bindings for all the PIN API calls you want to use. Then the pintool would call a hardcoded function in the imported Python module, which would issue calls to the API to register more functions and do whatever you want to do.
I'm not sure how the callbacks would work; I don't know enough about SWIG. Also, this may fail if the program you're trying to instrument itself uses Python. But that's how I'd try to solve this problem to start out.
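To make the shape of that design concrete, the Python half might look something like this; the pin module and its register_instruction_callback() function are purely hypothetical stand-ins for whatever SWIG would actually generate from the PIN headers:

# mytool.py -- imported by the embedded interpreter inside the pintool.
# The "pin" module and everything it exports are hypothetical
# SWIG-generated bindings, not a real package.
import pin

def on_instruction(address):
    # Called back from the instrumented process for each instruction.
    print("executing instruction at 0x%x" % address)

def init():
    # The hardcoded entry point the pintool calls after embedding Python.
    pin.register_instruction_callback(on_instruction)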
