Run a Python program from another Python program (with certain requirements)

Let's say I have two python scripts A.py and B.py. I'm looking for a way to run B from within A in such a way that:
B believes it is __main__ (so that code in an if __name__=="__main__" block in B will run)
B is not actually __main__ (so that it does not, e.g., overwrite the "__main__" entry in sys.modules)
Exceptions raised within B propagate to A (i.e., could be caught with an except clause in A).
Those exceptions, if not caught, generate a correct traceback referencing line numbers within B.
I've tried various techniques, but none seem to satisfy all my requirements.
using tools from the subprocess module means exceptions in B do not propagate to A.
execfile("B.py", {}) runs B, but it doesn't think it's main.
execfile("B.py", {'__name__': '__main__'}) makes B.py think it's main, but it also seems to screw up the exception traceback printing, so that the tracebacks refer to lines within A (i.e., the real __main__).
using imp.load_source with __main__ as the name almost works, except that it actually modifies sys.modules, thus stomping on the existing value of __main__
Is there any way to get what I want?
(The reason I'm doing this is because I'm doing some cleanup on an existing library. This library has no real test suite, just a set of "example" scripts that produce certain output. I'm trying to leverage these as tests to ensure that my cleanup doesn't affect the library's ability to execute these examples, so I want to run each example script from within my test suite. I'd like to be able to see exceptions from these scripts within the test script so the test script can report the type of failure, instead of just reporting a generic SubprocessError whenever an example script raises some exception.)

Your use case makes sense, but I still think you'd be better off refactoring the tests such that they can be run externally.
Do your test scripts have something like this?
def test():
    pass

if __name__ == '__main__':
    test()
If not, perhaps you should convert your tests to calling a function such as test. Then, from your main test script, you can just:
import test1
test1.test()
import test2
test2.test()
Provide a common interface for running tests that the tests themselves use. Having a big block of code in a __main__ check is Not A Good Thing.
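For example, a minimal runner built on that common interface might look like this (test1 and test2 are placeholders for your real example scripts):
# Minimal sketch of a runner that drives every example through the common
# test() interface; the module names here are placeholders.
import traceback

def run_all(module_names):
    failures = []
    for name in module_names:
        module = __import__(name)
        try:
            module.test()
        except Exception:
            failures.append(name)
            traceback.print_exc()
    return failures

if __name__ == '__main__':
    failed = run_all(['test1', 'test2'])
    print('Failed: %s' % (failed or 'none'))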
Sorry that I didn't answer the question you asked, but I feel this is the correct solution without deviating too far from the original test code.

Answering my own question because the result is kind of interesting and might be useful to others:
It turns out I was wrong: execfile("B.py", {'__name__': '__main__'}) is the way to go after all. It does correctly produce the tracebacks. What I was seeing with incorrect line numbers weren't exceptions but warnings. These warnings were produced using warnings.warn("blah", stacklevel=2). The stacklevel=2 argument is supposed to allow for things like deprecation warnings to be raised where the deprecated thing is used, rather than at the warning call (see the documentation).
However, it seems that the execfile-d file doesn't count as a "stack level" for this purpose, and is invisible to stacklevel. So if code at the top level of an execfile-d module causes a warning with stacklevel=2, the warning is not reported at the right line number in the execfile-d source file; instead it is reported at the corresponding line number of the file which is running the execfile.
This is unfortunate, but I can live with it, since these are only warnings, so they won't impact the actual performance of the tests. (I didn't notice at first that it was only the warnings that were affected by the line-number mismatches, because there were lots of warnings and exceptions intermixed in the test output.)
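For reference, a rough sketch of the harness this leads to (Python 2, since it relies on execfile; the script paths below are made up):
# Rough sketch only: run each example script as if it were __main__, letting
# its exceptions propagate into this harness so their type can be reported.
import traceback

def run_example(path):
    try:
        execfile(path, {'__name__': '__main__', '__file__': path})
    except Exception as exc:
        # The traceback's line numbers point into the example script itself.
        print 'FAILED %s (%s)' % (path, type(exc).__name__)
        traceback.print_exc()
        return False
    return True

if __name__ == '__main__':
    for script in ['examples/example1.py', 'examples/example2.py']:
        run_example(script)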

Related

Is there a way to run a combination of Python code and pytest tests several times automatically?

I am looking to automate the process where:
I run some python code,
then run a set of tests using pytest
then, if all tests are validated, start the process again with new data.
I am thinking of writing a script that executes the Python code, then calls pytest using pytest.main(), checks via the exit code that all tests passed, and in case of success starts the process again.
The issue is that it is stated in pytest docs (https://docs.pytest.org/en/stable/usage.html) that it is not recommended to make multiple calls to pytest.main():
Note from pytest docs:
"Calling pytest.main() will result in importing your tests and any modules that they import. Due to the caching mechanism of python’s import system, making subsequent calls to pytest.main() from the same process will not reflect changes to those files between the calls. For this reason, making multiple calls to pytest.main() from the same process (in order to re-run tests, for example) is not recommended."
I was wondering if it is OK to call pytest.main() the way I intend to, or if there is a better way to achieve what I am looking for.
I've made a simple example to make the problem clearer:
import pytest

A = [0]

def some_action(x):
    x[0] += 1

if __name__ == '__main__':
    print('Initial value of A: {}'.format(A))
    for i in range(10):
        if i == 5:
            # one test in test_mock2 that fails
            test_dir = "./tests/functional_tests/test_mock2.py"
        else:
            # two tests in test_mock that pass
            test_dir = "./tests/functional_tests/test_mock.py"
        some_action(A)
        check_tests = int(pytest.main(["-q", "--tb=no", test_dir]))
        if check_tests != 0:
            print('Interrupted at i={} because of test failures'.format(i))
            break
    if i > 5:
        print('All tests validated, final value of A: {}'.format(A))
    else:
        print('final value of A: {}'.format(A))
In this example some_action is executed until i reaches 5, at which point the tests fail and the process of executing/testing is interrupted. It seems to work fine; I'm only concerned because of the note in the pytest docs quoted above.
The warning applies to the following sequence of events:
1. Run pytest.main on some folder which imports a.py, directly or indirectly.
2. Modify a.py (manually or programmatically).
3. Attempt to rerun pytest.main on the same directory, in the same Python process as in step #1.
The run in step #3 will not see the changes you made to a.py in step #2. That is because Python does not import a file twice. Instead, it checks whether the file has an entry in sys.modules and uses that instead. It's what lets you import large libraries multiple times without incurring a huge penalty every time.
Modifying the values in imported modules is fine. Python binds names to references, so if you bind something (like a new integer value) to the right name, everyone will be able to see it. Your some_action function is a good example of this. Future tests will run with the modified value if they import your script as a module.
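A tiny illustration of that name-binding point, using a hypothetical module my_script that contains A = [0]:
# Sketch: Python caches imported modules in sys.modules, so every importer
# sees the same module object and therefore the same (possibly modified) A.
import sys
import my_script            # hypothetical module defining A = [0]
import my_script as again   # no re-import: the cached module is returned

my_script.A[0] += 1
assert again.A[0] == 1                       # same object, change is visible
assert sys.modules['my_script'] is my_script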
The reason that the caveat is there is that pytest is usually used to test code after it has been modified. The warning is simply telling you that if you modify your code, you need to start pytest.main in a new python process to see the changes.
Since you do not appear to be modifying the code of the files in your test and expecting the changes to show up, the caveat you cite does not apply to you. Keep doing what you are doing.

Isolating Python unittest from imports in other test modules

When running tests that target a specific method which uses reflection, I encounter the problem that the output of tests is dependent on whether I run them with PTVS ('run all tests' in Test Explorer) or with the command line Python tool (both on Windows and Linux systems):
$ python -m unittest
I assumed from the start that it has something to do with differences in how the test runners work in PTVS and Python's unittest framework (because I've noticed other differences, too).
# method to be tested
# written in Python 3
def create_line(self):
    problems = []
    for creator in LineCreator.__subclasses__():
        item = creator(self.structure)
        cls = item.get_subtype()
        line = cls(self.text)
        try:
            line.parse()
            return line
        except ParseException as exc:
            problems.append(exc)
    raise ParseException("parsing did not succeed", problems)

""" subclasses of LineCreator are defined in separate files.
They implement get_subtype() and return the class objects of the actual types they must instantiate.
"""
I have noticed that the subclasses found in this way will vary, depending on which modules have been loaded in the code that calls this method. This is exactly what I want (for now). Given this knowledge, I am always careful to only have access to one subclass of LineCreator in any given test module, class, or method.
However, when I run the tests from the Python command line, it is clear from the ParseException.problems attribute that both are loaded at all times. It is also easy to reproduce: inserting the following code makes all tests fail on the command line, yet they succeed on PTVS.
if len(LineCreator.__subclasses__()) > 1:
    raise ImportError()
I know that my tests should run independently from each other and from any contextual factors. That is actually what I'm trying to achieve here.
In case I wasn't clear, my question is why the behaviors are different, and which one is correct. And if you're feeling really generous, how to change my code so the tests succeed on all platforms.
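To illustrate the __subclasses__() behaviour I am relying on, here is a toy example (not my real code):
# Toy example: __subclasses__() only reports subclasses whose defining
# modules have already been imported somewhere in the current process.
class LineCreator(object):
    pass

print(len(LineCreator.__subclasses__()))   # 0: nothing imported yet

class CommentLineCreator(LineCreator):     # normally lives in its own module
    pass

print(len(LineCreator.__subclasses__()))   # 1: visible once its module loads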

Is using Python modules main function for validation testing a bad idea?

I'll quickly explain exactly what I mean by this.
I'm working on a project using python, where I have multiple modules doing segments of work. Let's say for example I have a module called Parser.py and this module has a function parseFile() which my main module Main.py calls in order to parse some files.
As of right now, I'm using a main block inside of Parser.py:
if __name__ == "__main__":
line_list = parseFile(sys.argv[1])
out_file = open(sys.argv[2], "w")
for i in range(len(line_list)):
out_file.write(line_list[i].get_string(True))
It's not important exactly what the parsing does; the important part is that if you call it, the first argument is the input file for the parsing and the second argument is the output file.
So what I'm doing, essentially, is using a batch file to validate the results of my parser with a typical input/output/baseline system...
ECHO Set the test, source, input, output and baseline directories
set TESTDIR=%CD%
set SRCDIR=%CD%\..\pypro\src
set INDIR=%CD%\input
set OUTDIR=%CD%\output
set BASEDIR=%CD%\baseline
:: Parser.py main method is base for unit testing on parsing
ECHO Begin Parser testing
cd %INDIR%\Parser
FOR %%G IN (*.psma) DO %SRCDIR%\Parser.py %%G %OUTDIR%\Parser\%%G
ECHO Parser testing complete
cd %TESTDIR%
"C:\Program Files\WinMerge\winmergeU.exe" "%OUTDIR%" "%BASEDIR%"
As you can see it diffs the results against the baseline, so if anything is changed the programmer knows it is no longer valid, or the requirements are wrong.
Is there anything wrong with this method? I did it because it would be easy. My plan is to continue doing this with as many modules as I can, wherever it is valid and makes sense, alongside a suite of pyunit tests inside PyDev...
I think it's a good idea, and it does seem to be a common use case for the if __name__ == '__main__' construct, though this is a more usual structure:
def main(argv=None):
    if argv is None:
        argv = sys.argv
    # etc.

if __name__ == "__main__":
    sys.exit(main() or 0)
This gives you the additional flexibility to use your main from within the interactive interpreter. There are a few more nice examples from Guido and others here.
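Applied to the Parser.py from the question, that structure might look roughly like this (just a sketch; parseFile is assumed to be defined in the same module):
# Sketch of Parser.py reorganised around a main(argv) entry point.
import sys

def main(argv=None):
    if argv is None:
        argv = sys.argv
    line_list = parseFile(argv[1])          # assumed to exist in this module
    out_file = open(argv[2], "w")
    try:
        for line in line_list:
            out_file.write(line.get_string(True))
    finally:
        out_file.close()
    return 0

if __name__ == "__main__":
    sys.exit(main() or 0)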
Personally, what I do in these situations is create test cases (although these would count more as integration test cases than as unit test cases).
So, usually (in my workflow), those would be regular test cases which diff the actual output against the expected output, although probably in a separate source folder that is not run as often as the unit-test cases.
The bad part of having it in __main__ is that you have to remember to run it as the entry point, and you'll probably forget to do so later on as the project grows and you have many of those files -- or at least have a test case that calls that main() :)
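A bare-bones version of such a test case might look like this (paths and names are only illustrative, mirroring the input/output/baseline layout from the question):
# Illustrative regression test: parse one example input and diff the result
# against a stored baseline file, instead of relying on the batch script.
import unittest
from Parser import parseFile   # assumed importable, as in the question

class ParserBaselineTest(unittest.TestCase):
    def test_example_matches_baseline(self):
        lines = parseFile("input/Parser/example.psma")
        actual = "".join(line.get_string(True) for line in lines)
        with open("baseline/Parser/example.psma") as baseline:
            self.assertEqual(actual, baseline.read())

if __name__ == "__main__":
    unittest.main()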

_shutdown AttributeError (ignored) when linting code that uses M2Crypto

I'm running lint as follows:
$ python -m pylint.lint m2test.py
with this code:
import M2Crypto

def f():
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None).split("\n")
The lint output ends with:
Exception AttributeError: '_shutdown' in <module 'threading' from '/usr/lib/python2.7/site-packages/M2Crypto-0.21.1-py2.7-linux-x86_64.egg/M2Crypto/threading.pyc'> ignored
This code works fine when run (the above is actually a minimal test case, but the full version works too). The exception is ignored, but Bitten considers this a failure, so it stops on this step.
I've tried adding 'M2Crypto.threading.init()'/'M2Crypto.threading.cleanup()' around the definition of the function, but that didn't fix the problem.
How can I prevent this problem from occurring?
I'm using M2Crypto 0.21.1, pylint 0.24 and Python 2.7 (also tried 2.7.2) on Debian Lenny x86_64.
The exception that you are seeing is caused by a bug in the astng package (presumably “Abstract Syntax Tree, Next Generation”?) which is a toolkit on which pylint depends, written by the same people. I should note in passing that I always encourage people to use pyflakes instead of pylint when possible, because it is quick, simple, fast, and predictable, whereas pylint tries to do several kinds of deep magic that are not only slow but that can get it into exactly this kind of trouble. :)
Here are the two packages on PyPI:
http://pypi.python.org/pypi/pylint
http://pypi.python.org/pypi/astng
And note that this problem had to be, necessarily, a bug in pylint and not in your code, because pylint does not run your code in order to produce its report — imagine the havoc that could be wreaked if it did (since code being linted might delete files, etcetera)! Since your code does not get run, no amount of caution, like protecting your call with threading init() or cleanup() functions, could possibly have prevented this error — unless the code snippets happened, for other reasons, to alter the behavior we are about to investigate.
So, on to your actual exception.
I had never actually heard of _shutdown before! A quick search of the Python standard library showed its definition in threading.py but not a call of the function from anywhere; only by searching the Python C source code did I discover where in pythonrun.c, during interpreter shutdown, the function is actually called:
static void
wait_for_thread_shutdown(void)
{
    ...
    PyObject *threading = PyMapping_GetItemString(tstate->interp->modules,
                                                  "threading");
    if (threading == NULL) {
        /* threading not imported */
        PyErr_Clear();
        return;
    }
    result = PyObject_CallMethod(threading, "_shutdown", "");
    if (result == NULL) {
        PyErr_WriteUnraisable(threading);
    }
    ...
}
Apparently it is some sort of cleanup function that the threading Standard Library module requires, and they have special-cased the Python interpreter itself to make sure that it gets called.
As you can see from the code above, Python quietly and without complaint handles the case where the threading module never gets imported during a program's run. But if threading does get imported, and still exists at shutdown time, then the interpreter looks inside for a _shutdown function and goes so far as to print an error message — and then return a non-zero exit status, the cause of your problems — if it cannot call it.
So we have to discover why the threading module exists but has no _shutdown method at the moment when pylint is done examining your program and Python is exiting. Some instrumentation is called for. Can we print out what the module looks like as pylint exits? We can! The pylint/lint.py module, in its last few lines, runs its “main program” by instantiating a Run class it has defined:
if __name__ == '__main__':
    Run(sys.argv[1:])
So I opened lint.py in my editor — one of the magnificent things about having each little project installed in a Python virtual environment is that I can jump in and edit third-party code for quick experiments — and added the following print statement down at the bottom of the Run class's __init__() method:
sys.path.pop(0)
print "*****", sys.modules['threading'].__file__  # added by me!
if exit:
    sys.exit(self.linter.msg_status)
I re-ran the command:
python -m pylint.lint m2test.py
And out came the __file__ string of the threading module:
***** /home/brandon/venv/lib/python2.7/site-packages/M2Crypto/threading.pyc
Well, look at that.
This is the problem!
According to this path, there actually exists an M2Crypto/threading.py module that, under all normal circumstances, should just be called M2Crypto.threading, and therefore sit in the sys.modules dictionary under the name:
sys.modules['M2Crypto.threading']
But somehow that file is also getting loaded as the main Python threading module, shadowing the official threading module that sits in the Standard Library. Because of this, the Python exit logic is quite correctly complaining that the Standard Library _shutdown() function is missing.
How could this happen? Top-level modules can only appear in paths that are listed explicitly in sys.path, not in sub-directories beneath them. This leads to a new question: is there any point during the pylint run that the …/M2Crypto/ directory itself is getting put on sys.path as though it contained top-level modules? Let's see!
We need more instrumentation: we need to have Python tell us the moment that a directory with M2Crypto in the name appears in sys.path. It will really slow things down, but let's add a trace function to pylint's __init__.py — because that is the first module that gets imported when you run -m pylint.lint — that will write an output file telling us, for every line of code executed, whether sys.path has any bad values in it:
def install_tracer():
    import sys
    output = open('mytracer.out', 'w')
    def mytracer(frame, event, arg):
        broken = any(p.endswith('M2Crypto') for p in sys.path)
        output.write('{} {}:{} {}\n'.format(
            broken, frame.f_code.co_filename, frame.f_lineno, event))
        return mytracer
    sys.settrace(mytracer)

install_tracer()
del install_tracer
Note how careful I am here: I define only one name in the module's namespace, and then carefully delete it to clean up after myself before I let pylint continue loading! And all of the resources that the trace function itself needs — namely, the sys module and the output open file — are available in the install_tracer() closure so that, from the outside, pylint looks exactly the same as always. Just in case anyone tries to introspect it, like pylint might!
This generates a file mytracer.out of about 800k lines, that each look something like this:
False /home/brandon/venv/lib/python2.7/posixpath.py:118 call
The False says that sys.path looks clean, the filename and line number are the line of code being executed, and call indicates what stage of execution the interpreter is in.
So does sys.path ever get poisoned? Let's look at just the first True or False on each line, and see how many successive lines start with each value:
$ awk '{print$1}' mytracer.out | uniq -c
607997 False
3173 True
4558 False
33217 True
4304 False
41699 True
2953 False
110503 True
52575 False
Wow! That's a problem! For runs of several thousand lines at a time, our test case is True, which means that the interpreter is running with …/M2Crypto/ — or some variant of a pathname with M2Crypto in it — on the path, where it should not be; only the directory that contains …/M2Crypto should ever be on the path. Looking for the first False to True transition in the file, I see this:
False /home/brandon/venv/lib/python2.7/site-packages/logilab/astng/builder.py:132 line
False /home/brandon/venv/lib/python2.7/posixpath.py:118 call
...
False /home/brandon/venv/lib/python2.7/posixpath.py:124 line
False /home/brandon/venv/lib/python2.7/posixpath.py:124 return
True /home/brandon/venv/lib/python2.7/site-packages/logilab/astng/builder.py:133 line
And looking at lines 132 and 133 in the builder.py file reveals our culprit:
130     # build astng representation
131     try:
132         sys.path.insert(0, dirname(path)) # XXX (syt) iirk
133         node = self.string_build(data, modname, path)
134     finally:
135         sys.path.pop(0)
Note the comment, which is part of the original code, not an addition of my own! Obviously, XXX (syt) iirk is an exclamation in this programmer's strange native language for the phrase, “put this module's parent directory on sys.path so that pylint will break mysteriously every time someone forces pylint to introspect a package with a threading sub-module.” It is, obviously, a very compact native language. :)
If you adjust the tracing module to watch sys.modules for the actual import of threading — an exercise I will leave to the reader — you will see that it happens when SocketServer, which is imported by some other Standard Library module during the analysis, in turn tries to innocently import threading.
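One possible version of that adjusted tracer, sketched along the same lines as the one above (the file name and output format are arbitrary):
# Sketch only: report where 'threading' gets imported from, then stop tracing.
def install_import_watcher():
    import sys
    output = open('mytracer.out', 'w')
    def watcher(frame, event, arg):
        module = sys.modules.get('threading')
        if module is not None:
            output.write('threading loaded from {} while running {}:{}\n'.format(
                getattr(module, '__file__', '?'),
                frame.f_code.co_filename, frame.f_lineno))
            sys.settrace(None)      # found what we wanted; stop tracing
            return None
        return watcher
    sys.settrace(watcher)

install_import_watcher()
del install_import_watcher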
So let us review what is happening:
pylint is dangerous magic.
As part of its magic, if it sees you import foo, then it runs off trying to find foo.py on disk, to parse it, and to predict whether you are loading valid or invalid names from its namespace.
[See my comment, below.] Because you call .split() on the return value of RSA.as_pem(), pylint tries to introspect the as_pem() method, which in turn uses the M2Crypto.BIO module, which in turn makes calls that induce pylint to import threading.
As part of loading any module foo.py, pylint throws the directory containing foo.py on sys.path, even if that directory is inside a package, and therefore gives modules in that directory the privilege of shadowing Standard Library modules of the same name during its analysis.
When Python exits, it is upset that the M2Crypto.threading library is sitting where threading belongs, because it wants to run the _shutdown() method of threading.
You should report this as a bug to the pylint / astng folks at logilab.org. Tell them I sent you.
If you decide to keep using pylint after it has done this to you, then there seem to be two solutions in this case: either don't inspect code that calls M2Crypto, or import threading during the pylint import process — by sticking import threading into the pylint/__init__.py, for example — so that the module gets the chance to grab the sys.modules['threading'] slot before pylint gets all excited and tries to let M2Crypto/threading.py grab the slot instead.
In conclusion, I think the author of astng says it best: XXX (syt) iirk. Indeed.
Many thanks to Brandon Craig Rhodes for tracking this down and for such a detailed post.
I've removed the offending line from astng; the code is available from the hg repository until logilab-astng 0.23.0 is out. And I can confirm this fixes the OP's problem.
This looks more like a hack, but I think it works: copy the result of as_pem() and then split it.
import M2Crypto

def f():
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None)[:].split("\n")
I'm using Python 2.6.7, M2Crypto 0.21.1, pylint 0.23
I was unable to reproduce (pylint 0.24 and M2Crypto 0.21.1 on Ubuntu 11.04 64-bit), but here are two suggestions:
Explicitly initialize threading:
import M2Crypto

def f():
    M2Crypto.threading.init()
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None).split("\n")
    M2Crypto.threading.cleanup()
Or recompile without threading:
m2crypto = Extension(name = 'M2Crypto.__m2crypto',
                     sources = ['SWIG/_m2crypto.i'],
                     extra_compile_args = ['-DTHREADING'],
                     #extra_link_args = ['-Wl,-search_paths_first'], # Uncomment to build Universal Mac binaries
                     )

What cool hacks can be done using sys.settrace?

I love being able to modify the arguments that get sent to a function using settrace, like:
import sys

def trace_func(frame, event, arg):
    value = frame.f_locals["a"]
    if value % 2 == 0:
        value += 1
    frame.f_locals["a"] = value

def f(a):
    print a

if __name__ == "__main__":
    sys.settrace(trace_func)
    for i in range(0, 5):
        f(i)
And this will print:
1
1
3
3
5
What other cool stuff can you do using settrace?
I would strongly recommend against abusing settrace. I'm assuming you understand this stuff, but others coming along later may not. There are a few reasons:
Settrace is a very blunt tool. The OP's example is a simple one, but there's practically no way to extend it for use in a real system.
It's mysterious. Anyone coming to look at your code would be completely stumped why it was doing what it was doing.
It's slow. Invoking a Python function for every line of Python executed is going to slow down your program by many multiples.
It's usually unnecessary. The original example here could have been accomplished in a few other ways (modify the function, wrap the function in a decorator, call it via another function, etc.), any of which would have been better than settrace; see the decorator sketch after this list.
It's hard to get right. In the original example, if you had not called f directly, but instead called g which called f, your trace function wouldn't have done its job, because you returned None from the trace function, so it's only invoked once and then forgotten.
It will keep other tools from working. This program will not be debuggable (because debuggers use settrace), it will not be traceable, it will not be possible to measure its code coverage, etc. Part of this is due to lack of foresight on the part of the Python implementors: they gave us settrace but no gettrace, so it's difficult to have two trace functions that work together.
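To make the "usually unnecessary" point concrete, here is a decorator sketch that reproduces the original example's behavior without settrace:
# Same effect as the trace-function hack above, but explicit and local:
# bump even arguments to the next odd number before calling f.
def make_odd(func):
    def wrapper(a):
        if a % 2 == 0:
            a += 1
        return func(a)
    return wrapper

@make_odd
def f(a):
    print(a)

for i in range(5):
    f(i)        # prints 1 1 3 3 5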
Trace functions make for cool hacks. It's fun to be able to abuse it, but please don't use it for real stuff. If I sound hectoring, I apologize, but this has been done in real code, and it's a pain. For example, DecoratorTools uses a trace function to perform the magic feat of making this syntax work in Python 2.3:
# Method decorator example
from peak.util.decorators import decorate

class Demo1(object):
    decorate(classmethod)   # equivalent to @classmethod
    def example(cls):
        print "hello from", cls
A neat hack, but unfortunately, it meant that any code that used DecoratorTools wouldn't work with coverage.py (or debuggers, I guess). Not a good tradeoff if you ask me. I changed coverage.py to provide a mode that lets it work with DecoratorTools, but I wish I hadn't had to.
Even code in the standard library sometimes gets this stuff wrong. Pyexpat decided to be different from every other extension module, and invoke the trace function as if it were Python code. Too bad they did a bad job of it.
</rant>
I made a module called pycallgraph which generates call graphs using sys.settrace().
Of course, code coverage is accomplished with the trace function. One cool thing we haven't had before is branch coverage measurement, and that's coming along nicely, about to be released in an alpha version of coverage.py.
So for example, consider this function:
def foo(x):
    if x:
        y = 10
    return y
if you test it with this call:
assert foo(1) == 10
then statement coverage will tell you that all the lines of the function were executed. But of course, there's a simple problem in that function: calling it with 0 raises an UnboundLocalError.
Branch measurement would tell you that there's a branch in the code that isn't fully exercised, because only one leg of the branch is ever taken.
For example, get the memory consumption of Python code line-by-line: http://pypi.python.org/pypi/memory_profiler
One recent project that uses settrace heavily is PySnooper.
It helps new programmers trace/log/monitor their program's output. Cheers!
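A quick taste of what it looks like in use (adapted from the project's README; details may differ between versions):
# Decorating a function with pysnooper.snoop() logs every line executed and
# every local-variable change while the function runs.
import pysnooper

@pysnooper.snoop()
def number_to_bits(number):
    if number:
        bits = []
        while number:
            number, remainder = divmod(number, 2)
            bits.insert(0, remainder)
        return bits
    return [0]

number_to_bits(6)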
I don't have an exhaustive answer, but one thing I did with it, with the help of another user on SO, was create a program that generates the trace tables of other Python programs.
The Python debugger pdb uses sys.settrace to analyse lines to debug.
Here's a C optimization/extension for pdb that also uses sys.settrace:
https://bitbucket.org/jagguli/cpdb
