Python - How to know if Matlab is complaining about missing free network license

One of the tools I use at work is Matlab; however, because of the network license server, there is a limited number of users who can use it at the same time.
I decided to write a short script that will open Matlab - a simple script with an infinite loop.
Now I want to improve my code a bit, to determine whether Matlab actually opened (otherwise a license error pops up).
An easy way would be just to check the process in Task Manager - unfortunately, when the license error occurs the process still shows up as Matlab.exe (the same as it would for a properly opened program).
So I figured maybe it would be possible to check the text of the window title to determine whether there is an error or not. I tried to find a solution on the internet, with no luck. Could you give me some hint? Or maybe some other solution to the problem?

You can check with
$MATLABROOT/etc/lmstat -c yourlicencefile -a
and parse its output to see whether a license has been allocated to your computer or not.
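A hedged sketch of doing that from Python is below. The lmstat path, the license-file name, and the exact wording of the summary line ("Users of MATLAB: ... Total of N licenses issued; Total of M licenses in use") are assumptions about a typical FlexLM setup, so adjust them for your installation.

# Sketch: run lmstat and check whether any MATLAB licenses are still free.
# Both paths below are placeholders.
import re
import subprocess

LMSTAT = r"C:\MATLAB\etc\lmstat"              # placeholder path to lmstat
LICENSE_FILE = r"C:\MATLAB\etc\license.dat"   # placeholder license file

def matlab_license_free():
    output = subprocess.check_output(
        [LMSTAT, "-c", LICENSE_FILE, "-a"], universal_newlines=True)
    match = re.search(r"Users of MATLAB:.*?Total of (\d+) licenses? issued;\s*"
                      r"Total of (\d+) licenses? in use", output)
    if not match:
        return None  # could not parse the output -- treat as unknown
    issued, in_use = int(match.group(1)), int(match.group(2))
    return in_use < issued

if __name__ == "__main__":
    print(matlab_license_free())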

Related

Programmatically control Xcode/iOS testing (preferably with Python)?

I've got an iOS app that needs to run efficiently and accurately. There's a lot of parameters involved in the code and different combinations of them provide varying results of success. So I plan to write a genetic algorithm to go ahead and find some good parameter sets for me. While I could do this using Objective-C directly in Xcode, I would complete this task much faster and enjoy it more writing the genetic algorithm part in Python. Is there any good way to control Xcode through Python? As in, be able to execute the simulator/device app through Xcode and get feedback from it using external code? I know a keyboard macro would be possible, but that approach would probably be a bit clumsy. If there's some way to directly control it programmatically, that would be great. Thanks!
Your question is not very clear.
Do you want to control Xcode to run your app? Do you want to be able to change things like flags automatically?
If you want to control Xcode by writing code that explains what to do, or even where to click, you can write an AppleScript.
Maybe also take a look at this post.
But if your question is about performing automated UI tests, maybe look at UIAutomation, Calabash or MonkeyTalk.
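If you go the AppleScript route, a hedged sketch of driving it from Python via the osascript command-line tool is below. The exact AppleScript needed to build and run a scheme depends on your Xcode version; this snippet only activates Xcode and sends the Run shortcut (Cmd-R) through System Events, which requires accessibility permissions, so treat it as a starting point rather than a recipe.

# Sketch: ask Xcode to run the current scheme by sending Cmd-R via AppleScript.
# Requires macOS accessibility permissions for System Events keystrokes.
import subprocess

def run_current_scheme():
    subprocess.check_call([
        "osascript",
        "-e", 'tell application "Xcode" to activate',
        "-e", 'tell application "System Events" to keystroke "r" using {command down}',
    ])

if __name__ == "__main__":
    run_current_scheme()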

How to protect my Python program

What I want to do is protect a Python program from being stolen by people with no computer knowledge. I accept the inevitability of the program being pirated, all I want to do is protect it from average users.
I have come up with two ideas.
1.) Set a time restriction by checking online for the date and time, e.g. 10 days from the download time.
2.) Check the IP or name of the computer that downloaded it and make the program run only on that computer (to prevent friends from simply sharing the file).
The problem with both of these is that I'll need to create a .py file "on the fly" and then use something like py2exe to make it into an .exe, so that the user doesn't need to have Python installed.
The problem with the second is that, to my understanding, IPs change, and getting the computer name is a security risk and might scare away users.
So to sum it up, here are my two questions:
1.) Is there a good way in Python to allow the program to run only on that single computer?
2.) What is the best way to implement the "on the fly" creation of the exe? (I was going to host the website on my computer and learn PHP(?)/servers.)
I have moderate C/C++ and basic HTML/CSS, Java, and Python experience.
Thank you for your time!
Messy business. You probably already understand that compiled does not mean encrypted.
However, if your boss considers C-compiled code satisfactory, you can use Cython to compile your Python code to C code and then gcc to compile the C code.
Check here on how to build your setup.py script.
http://docs.cython.org/src/reference/compilation.html#compiling-with-distutils
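For the distutils route described at that link, a minimal setup.py sketch is shown below; "target.py" is just a placeholder module name.

# setup.py -- minimal sketch for compiling a Python module with Cython/distutils.
from distutils.core import setup
from Cython.Build import cythonize

setup(ext_modules=cythonize("target.py"))

Running python setup.py build_ext --inplace then produces the compiled extension next to the source.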
And you can embed the Python interpreter into the resulting C code using the --embed option:
# will generate the target.c
$ cython target.py --embed
Give each user a customized installer that has a unique key in it. When it runs, it contacts a server (with the key) and requests the actual program. Server-side, you check if the key is valid and if so, serve the program customized with the key, and mark the key as used. The installer saves the program somewhere the user can access it, and creates a hidden file that contains the key somewhere deep in the bowels of the computer, where the "average user" won't think of looking. When the program is run, the first thing it does is check if the hidden file exists and if it contains the correct key, and refuses to run if not.
(I am assuming that unzipping an executable and reading source code is beyond the ability of the "average user" (read: "grandma"), so using py2exe is ok.)
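A hedged sketch of the startup check described above is shown here; the key value, the file name, and its location are all placeholders, not recommendations.

# Sketch: refuse to run unless a hidden file contains the key baked into this build.
# EXPECTED_KEY would be customized per user by the installer; the path is a
# placeholder for "somewhere deep in the bowels of the computer".
import os
import sys

EXPECTED_KEY = "REPLACE-WITH-PER-USER-KEY"                       # placeholder
KEY_FILE = os.path.join(os.environ.get("APPDATA", "."),
                        ".cache_index", "settings.dat")          # placeholder

def key_is_valid():
    try:
        with open(KEY_FILE) as f:
            return f.read().strip() == EXPECTED_KEY
    except IOError:
        return False

if not key_is_valid():
    sys.exit("This copy is not activated for this computer.")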
To avoid having to contact anything on the Internet, you could use the following way to 'dongle' your program:
Take a vital part of your program (something without which the rest won't be useful),
put it into a string,
encrypt that string symmetrically with the MAC address of the computer it shall run on,
then in your program do this:
decrypt that string with the MAC address of the current machine it runs on, and
call that decrypted string using exec.
Example with the vital part being print "hello world":
import uuid

def getMyMac():
    return 123456789L        # for testing now
    return uuid.getnode()    # this would be the real thing

def strxor(s, key):
    # repeat the key so it is at least as long as s, then XOR character by character
    key *= len(s) / len(key) + 1
    return ''.join(chr(ord(key[i]) ^ ord(s[i])) for i in range(len(s)))

def performVitalCode():
    code = 'A#ZZA\x16\x15P\\]^\\\x14BYET]\x13'
    # I found that code by entering:
    #   strxor('print "hello world"', str(getMyMac()))
    realCode = strxor(code, str(getMyMac()))
    exec realCode
Here I used a simple XOR on strings for encryption (which is not a hard cipher to crack).
Of course the user of the "allowed" computer can hand over his MAC address to the next user, who can then
either patch getMyMac(), or
spoof his own MAC address; most network cards allow this.
So this is not a "safe" solution to your problem.
But if a person with just a little computer knowledge gave your code to some other guy without any further information (maybe by putting it online in a forum or similar), the receiver won't be able to execute it out of the box.
Finally, I need to point out that any tying of code to a specific computer may well become a hassle for the rightful user of the code. If a program I'm using stops working just because I switch to different hardware (maybe just because I get a new laptop), I'm usually pissed and curse the writer of that code. You might not want to annoy your customers.
Seems like a combination of things here. Encapsulating Python, as others have suggested, is a good way to go for bundling. You may also consider obfuscation, as discussed in another StackOverflow thread:
Python Code Obfuscation
Which references:
http://freecode.com/projects/pyobfuscate
As for making it so that someone can't just download your program and run it elsewhere or distribute it, how willing are you to inconvenience your end users? :)
As others have noted above, you can generate a compiled and bundled bit of code with an ID specific to the user. That way, if/when the application phones home, you can track usage.
If your end users/customers have the following requirements fulfilled:
An account with you, where they can log in and check their subscription/account status
Internet connectivity from the machine running the program
You can make the installation process do the following:
The installer is NOT the full program. It generates a profile of the machine (a sketch of such a profile hash follows after this list).
This profile is bundled up and a hash is generated. Both are uploaded to your servers.
The hash is displayed to the end user to enter once they have signed into their online web account.
The user installing the application submits the hash and gets a download link and an unlock key. The key is only good for that dynamically generated download and only on that machine. The downloaded program, however, will accept a range of keys unique to itself (iteratively generated, etc.).
The user can then complete the install and run their program. The program will periodically check the profile of the machine it is on, to make sure that the hash does not deviate. If it does, the program can prompt them to login to their web account to generate a new key. The new key is then used to refresh their application. This can be automated by the application, but with the internet laws and such, having the user sign into their account and perhaps agree to any updates to the EULA, would be better.
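A hedged sketch of generating such a machine-profile hash is below. Which attributes go into the profile (hostname, platform, MAC address, and so on) is a design choice; the fields used here are only an illustration, not a recommendation.

# Sketch: build a machine "profile" and hash it, for the activation scheme above.
# The chosen attributes are illustrative; pick ones stable enough that ordinary
# use does not invalidate the key too often.
import hashlib
import platform
import uuid

def machine_profile():
    return "|".join([
        platform.node(),       # hostname
        platform.system(),     # e.g. "Windows"
        platform.machine(),    # e.g. "AMD64"
        str(uuid.getnode()),   # MAC address as an integer
    ])

def profile_hash():
    return hashlib.sha256(machine_profile().encode("utf-8")).hexdigest()

if __name__ == "__main__":
    print(profile_hash())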
If they decide to share the program in a VM bundle, they would still need an active account to get keys.
Note, this does not prevent someone from bypassing the hash check. But for the average user, this will serve as a good way for you to deter people from giving away or reselling your program.
Just bear in mind, no system is foolproof.

Use Python script to check whether or not a program is hung/crashed in Windows? Also, Pipes

I have two questions, so I figured I would cram them into a single post instead of filling the board up with useless information.
Simple description of the situation: I am attempting to create a Python script that opens an executable for a simple C++ program with an unknown number of inputs in a Windows environment, sends some data into that program, and then checks to see whether it has crashed; rinse and repeat.
Question 1: This is a pipes question. Bear with me, I am still learning about pipes, so I may have a misunderstanding of exactly how they work; forgive me if I do. Is it possible to detect how many inputs a program has? Basically what I'm attempting to do is open an executable using my Python script, one that I personally know nothing about, and send garbage data into each available input. If it is NOT possible to detect how many inputs there are: would there be an adverse reaction (like crashing the program I'm sending the data into) if I send in more arguments than there are inputs? As in, the C++ program takes 3 inputs and I send in 6 arguments?
Question 2: Does anyone know whether it is possible, using a Python script, to detect whether a program has hung or not? So far the best information I've been able to find on this is simply detecting whether the program is running or not via FindWindow, and then I suppose I could monitor the CPU usage to see if it continues to rise... but that is hardly an ideal method (and may not even work properly!). If there are any better known methods out there I would be thrilled!
Thanks for your time :)
An Answer to Question 2
You should investigate psutil, hosted at https://github.com/giampaolo/psutil. I don't know whether you'll find exactly what you're looking for, but psutil is a decent API, offering access to info such as the number of CPUs in addition to process information, which is what you want.
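A hedged sketch of using psutil for the "is it still running" half of the problem is below; detecting a genuine hang is heuristic, as the question notes, and polling CPU usage is only one imperfect signal. The process name "notepad.exe" is just a placeholder.

# Sketch: locate a process by name with psutil and poll it as a rough health check.
# A low CPU reading is NOT proof of a hang -- this is only a heuristic.
import psutil

def find_process(name):
    for proc in psutil.process_iter():
        try:
            if proc.name().lower() == name.lower():
                return proc
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return None

def watch(name, interval=1.0):
    proc = find_process(name)
    if proc is None:
        print("%s is not running" % name)
        return
    while True:
        try:
            cpu = proc.cpu_percent(interval=interval)  # blocks for `interval` seconds
            print("status=%s cpu=%.1f%%" % (proc.status(), cpu))
        except psutil.NoSuchProcess:
            break
    print("%s has exited" % name)

if __name__ == "__main__":
    watch("notepad.exe")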

Check Idle Time when running as a Windows Service

Using win32api.GetLastInputInfo() is an easy way to determine a USER's idle time. However, when running as a SERVICE this does not apply (it always returns 0).
Does anyone know a simple way for a WINDOWS SERVICE to determine last keypress/mouse activity? (or some other effective way to determine idle time)
Not in Python, but the approach proposed in http://www.codeproject.com/KB/DLL/trackuseridle.aspx looks interesting.
[edit]
The code is a standard C DLL, so you should be able to use it with ctypes. The way the C code is written, using SetWindowsHookEx, means you could maybe rewrite it directly in Python + pywin32. See stackoverflow.com/questions/6458812 and python-forum.org/pythonforum/viewtopic.php?f=2&t=11154 for more on this (the first link mentions the kinds of events you can get without writing a DLL, and the second shows a Python example).

Is there a statistical profiler for python? If not, how could I go about writing one?

I would need to run a python script for some random amount of time, pause it, get a stack traceback, and unpause it. I've googled around for a way to do this, but I see no obvious solution.
There's the statprof module
pip install statprof (or easy_install statprof), then to use:
import statprof

statprof.start()
try:
    my_questionable_function()
finally:
    statprof.stop()
    statprof.display()
There's a bit of background on the module from this blog post:
Why would this matter, though? Python already has two built-in profilers: lsprof and the long-deprecated hotshot. The trouble with lsprof is that it only tracks function calls. If you have a few hot loops within a function, lsprof is nearly worthless for figuring out which ones are actually important.
A few days ago, I found myself in exactly the situation in which lsprof fails: it was telling me that I had a hot function, but the function was unfamiliar to me, and long enough that it wasn’t immediately obvious where the problem was.
After a bit of begging on Twitter and Google+, someone pointed me at statprof. But there was a problem: although it was doing statistical sampling (yay!), it was only tracking the first line of a function when sampling (wtf!?). So I fixed that, spiffed up the documentation, and now it’s both usable and not misleading. Here’s an example of its output, locating the offending line in that hot function more accurately:
% cumulative self
time seconds seconds name
68.75 0.14 0.14 scmutil.py:546:revrange
6.25 0.01 0.01 cmdutil.py:1006:walkchangerevs
6.25 0.01 0.01 revlog.py:241:__init__
[...blah blah blah...]
0.00 0.01 0.00 util.py:237:__get__
---
Sample count: 16
Total time: 0.200000 seconds
I have uploaded statprof to the Python package index, so it’s almost trivial to install: "easy_install statprof" and you’re up and running.
Since the code is up on github, please feel welcome to contribute bug reports and improvements. Enjoy!
I can think of a couple of ways to do this:
Rather than trying to get a stack trace while the program is running, just fire an interrupt at it, and parse the output. You could do this with a shell script or with another python script that invokes your app as a subprocess. The basic idea is explained and rather thoroughly defended in this answer to a C++-specific question.
Actually, rather than having to parse the output, you could register a postmortem routine (using sys.excepthook) that logs the stack trace. Unfortunately, Python doesn't have any way to continue from the point at which an exception occurred, so you can't resume execution after logging.
In order to actually get a stack trace from a running program, you may have to hack the implementation. So if you really want to do that, it may be worth your time to check out pypy, a Python implementation written mostly in Python. I've no idea how convenient it would be to do this in pypy. I'm guessing that it wouldn't be particularly convenient, since it would involve introducing a hook into basically every instruction, which I think would be prohibitively inefficient. Also, I don't think there will be much advantage over the first option, unless it takes a very long time to reach the state where you want to start doing stack traces.
There exists a set of macros for the gdb debugger intended to facilitate debugging Python itself. gdb can attach to an external process (in this case the instance of python which is executing your application) and do, well, pretty much anything with it. It seems that the macro pystack will get you a backtrace of the Python stack at the current point of execution. I think it would be pretty easy to automate this procedure, since you can (at worst) just feed text into gdb using expect or whatever.
Python already contains everything you need to do what you described, no need to hack the interpreter.
You just have to use the traceback module in conjunction with the sys._current_frames() function. All you need is a way to dump the tracebacks you need at the frequency you want, for example using UNIX signals, or another thread.
To jump-start your code, you can do exactly what is done in this commit:
Copy the threads.py module from that commit, or at least the stack trace dumping function (ZPL license, very liberal):
Hook it up to a signal handler, say, SIGUSR1
Then you just need to run your code and "kill" it with SIGUSR1 as frequently as you need.
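A hedged, self-contained sketch of that idea (without the threads.py module from the commit) is below; it only uses the traceback module and sys._current_frames() mentioned above, with SIGUSR1 as suggested, so as written it is UNIX-only.

# Sketch: dump the stack of every thread whenever the process receives SIGUSR1.
# Trigger it from a shell with:  kill -USR1 <pid>
import signal
import sys
import traceback

def dump_stacks(signum, frame):
    for thread_id, stack in sys._current_frames().items():
        print("--- thread %s ---" % thread_id)
        traceback.print_stack(stack)

signal.signal(signal.SIGUSR1, dump_stacks)

# ... now run your normal code; each SIGUSR1 prints a snapshot of all stacks.

Collecting many such snapshots and counting which lines show up most often gives the statistical picture the question is after.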
For the case where a single function of a single thread is "sampled" from time to time with the same technique, using another thread for timing, I suggest dissecting the code of Products.LongRequestLogger and its tests (developed by yours truly, while under the employ of Nexedi):
Whether or not this is proper "statistical" profiling, the answer by Mike Dunlavey referenced by intuited makes a compelling argument that this is a very powerful "performance debugging" technique, and I have personal experience that it really helps zoom in quickly on the real causes of performance issues.
To implement an external statistical profiler for Python, you're going to need some general debugging tools that let you interrogate another process, as well as some Python specific tools to get a hold of the interpreter state.
That's not an easy problem in general, but you may want to try starting with GDB 7 and the associated CPython analysis tools.
Seven years after the question was asked, there are now several good statistical profilers available for Python. In addition to vmprof, already mentioned by Dmitry Trofimov in this answer, there are also vprof and pyflame. All of them support flame graphs one way or another, giving you a nice overview of where the time was spent.
Austin is a frame stack sampler for CPython that can be used to make statistical profilers for Python that require no instrumentation and introduce minimal overhead. The simplest thing to do is to pipe Austin's output into FlameGraph. However, you can also grab Austin's output with a custom application to make your very own profiler targeted at precisely your needs.
Austin TUI is a terminal application that provides a top-like view of everything that is happening inside a running Python application.
Web Austin is a web application that shows you a live flame graph of the collected samples. You can configure the address the application is served on, which allows you to do remote profiling.
There is a cross-platform sampling (statistical) Python profiler written in C called vmprof-python.
Developed by members of the PyPy team, it supports PyPy as well as CPython.
It works on Linux, Mac OS X, and Windows. It is written in C and thus has very small overhead.
It profiles Python code as well as native calls made from Python code.
Also, it has a very useful option to collect statistics about execution lines inside functions in addition to function names.
It can also profile memory usage (by tracing the heap size).
It can be called from the Python code via API or from the console.
There is a Web UI to view the profile dumps: vmprof.com, which is also open sourced.
Also, some Python IDEs (for example PyCharm) have integration with it, allowing you to run the profiler and see the results in the editor.
For Python there is py-spy to dump the stack traces. The dumps can be analyzed with speedscope.
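A hedged example of what that workflow typically looks like from the command line is below; flag names can differ between py-spy versions, so check py-spy --help, and 12345 is a placeholder PID.

# print a one-off dump of the current Python stack traces of process 12345
py-spy dump --pid 12345

# sample the process for a while and write a file that speedscope can open
py-spy record --format speedscope -o profile.speedscope.json --pid 12345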