Python script behaves differently when there is a function definition

I'm running a script from a Linux terminal with Python 3.6.8 and the script started failing when I tried to expand it with a function definition. I whittled it down to the basics and found that the device fails to connect when there is a function definition followed by a print statement in the code, but not when there's a print statement followed by a function definition.
This code successfully connects to (and disconnects from) the device:
import DeviceInterface
device_class = DeviceInterface.Device()
print()
def dummy_function_that_does_nothing():
    pass
with device_class:
    pass
This code, which swaps the function definition and print statement, gives a device connection error:
import DeviceInterface
device_class = DeviceInterface.Device()
def dummy_function_that_does_nothing():
    pass
print()
with device_class:
    pass
These examples are the exact file contents of the scripts being run (nothing added or omitted for this post). The DeviceInterface module is a ctypes wrapper around a C-based .so library. That library uses Aravis v0.6.4. The connection failure is caused by a null pointer being returned from a call to arv_camera_new().
I would expect no difference between the 2 versions of code above. There seems to be something deeper going on in Python or Linux libraries that I don't understand.
Why would there be different behavior when the print() comes before the function definition, rather than after? I have workarounds, so my question is not centered around how to get my code working, but rather to understand at a low level why there would be a difference in the way Python is working. I was shocked that there would be a difference between these 2 versions of code.
Reproducibility
Unfortunately, I haven't found a way to reproduce the problem without a library I don't have the rights to distribute. I'm hoping someone who knows why Python would behave differently with a function definition followed by a print statement (vs. a print statement followed by a function definition) stumbles on this. If I understood the difference between the 2 versions of code, I could likely come up with a more generic way to reproduce the problem.
Other things I've tried
I've inserted delays in various places, but none had an effect on whether the device successfully connected, so it doesn't seem to be a timing issue as I originally suspected.
I tried running both versions a number of times, and the problem has been very repeatably linked to the order of the function definition and the print statement (as opposed to being able to randomly connect).
If I remove the print statement entirely, it succeeds regardless of where I put the function definition.
I thought it might have to do with garbage collection killing a socket. I tried disabling the garbage collection with gc.disable() at the start of the script, but it didn't change the behavior.
This code, which adds an additional function definition, successfully connects:
import DeviceInterface
device_class = DeviceInterface.Device()
def dummy_function_that_does_nothing():
    pass
print()
def dummy_function_that_does_nothing_again():
    pass
with device_class:
    pass
This code, which adds an additional function definition and another print statement, fails to connect:
import DeviceInterface
device_class = DeviceInterface.Device()
def dummy_function_that_does_nothing():
    pass
print()
def dummy_function_that_does_nothing_again():
    pass
print()
with device_class:
    pass
Changing the print statement to print(flush=True) or print(file=sys.stderr) did not change the behavior. However, print(end="") made the problem go away.
Running python with unbuffered stdin/stdout/stderr (python3 -u odd_behavior_test.py) caused the failure to go away.
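Since both -u and print(end="") change the outcome, the failure appears tied to whether (and when) a newline is written and flushed to stdout. As a hedged diagnostic sketch (my addition, not part of the original scripts), you can report stdout's buffering state alongside the connection attempt:
import sys
# Diagnostic sketch: report how stdout is set up, to correlate
# buffering mode with the connection failure (Python 3.x).
print("isatty:", sys.stdout.isatty(), file=sys.stderr)
print("line_buffering:", getattr(sys.stdout, "line_buffering", "n/a"),
      file=sys.stderr)
print("buffer type:", type(sys.stdout.buffer).__name__, file=sys.stderr)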

Related

Python asyncio: Can't debug into Task class

When working with Python's asyncio package, I've noticed that I can't step into any code of its tasks.Task class. For example, when the calling code invokes the class's constructor, my next 'step into' gets me into a get_debug() function outside the class. After that, I return to the calling code with an initialised Task object. I've observed similar behaviour with Task.__step(): I'll just step into code that gets called by this method.
All Python versions (3.9, 3.10), IDEs (PyCharm, Visual Studio Code) and OSs (macOS, Windows) that I tested showed the same issue.
Does anyone know the reason for the debugger’s strange behaviour and, possibly, how to overcome it?
The call_soon() in the last line of the screenshot is issued from within Task.__init__. However, as you can see, the debugger never stepped into the initializer.
Update: Surprisingly, with Python 3.6 (Pythonista on iPadOS) I can step into Task.__init__ from base_events.BaseEventLoop.create_task().
The implementation of Task is in C (where available). The debugger cannot step into C code.
You can see this in the asyncio.tasks module:
class Task(futures._PyFuture):
    ...

_PyTask = Task

try:
    import _asyncio
except ImportError:
    pass
else:
    # _CTask is needed for tests.
    Task = _CTask = _asyncio.Task
That last line shows the Python implementation being overridden by the C implementation.
You can verify which implementation you have by inspecting the __module__ attribute of Task, e.g.:
import asyncio
print(asyncio.Task.__module__)
A pure Python implementation will print asyncio.tasks. The C implementation will print _asyncio.
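If the goal is to step through Task in a debugger, one hedged workaround (a sketch of mine, not part of the answer above) is to swap the pure-Python implementation back in before any tasks are created; asyncio keeps it available as asyncio.tasks._PyTask:
import asyncio
import asyncio.tasks

# Sketch: force the pure-Python Task so the debugger can step into it.
# _PyTask is the Python implementation kept alongside the C one.
asyncio.tasks.Task = asyncio.Task = asyncio.tasks._PyTask

async def demo():
    print(asyncio.Task.__module__)  # now prints "asyncio.tasks"

asyncio.get_event_loop().run_until_complete(demo())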

Python how to print full stack, including magic methods (dunder methods) used?

I am trying to debug a Python built-in class. My debugging has brought me into the realm of magic methods (aka dunder methods).
I am trying to figure out which dunder methods are called, if any. Normally I would do something like this:
import sys
import traceback
# This would be located where I'm currently debugging
traceback.print_stack(file=sys.stdout)
However, traceback.print_stack does not give me the level of detail of printing what dunder methods are used in its vicinity.
Is there some way I can print out, in a very verbose manner, what is actually happening inside a block of code?
Sample Code
#!/usr/bin/env python3.6
import sys
import traceback
from enum import Enum

class TestEnum(Enum):
    """Test enum."""
    A = "A"

def main():
    for enum_member in TestEnum:
        traceback.print_stack(file=sys.stdout)
        print(f"enum member = {enum_member}.")

if __name__ == "__main__":
    main()
I would like the above sample code to print out any dunder methods used (ex: __iter__).
Currently it prints out the path to the call to traceback.print_stack:
/path/to/venv/bin/python /path/to/file.py
File "/path/to/file.py", line 56, in <module>
main()
File "/path/to/file.py", line 51, in main
traceback.print_stack(file=sys.stdout)
enum member = TestEnum.A.
P.S. I'm not interested in going to the byte code level given by dis.dis.
I think that, with the stack trace, you are looking in the wrong place. When you call print_stack from a place that is executed only when coming from a dunder method, that method is very much included in the output.
I tried this code to verify:
import sys
import traceback
from enum import Enum

class TestEnum(Enum):
    """Test enum."""
    A = "A"

class MyIter:
    def __init__(self):
        self.i = 0

    def __next__(self):
        self.i += 1
        if self.i <= 1:
            traceback.print_stack(file=sys.stdout)
            return TestEnum.A
        raise StopIteration

    def __iter__(self):
        return self

def main():
    for enum_member in MyIter():
        print(f"enum member = {enum_member}.")

if __name__ == "__main__":
    main()
The last line of the stack trace is printed as
File "/home/lydia/playground/demo.py", line 21, in __next__
traceback.print_stack(file=sys.stdout)
In your original code, you are getting the stack trace at a time when all dunder methods have already returned. Thus they have been removed from the stack.
So I think you want to look at a call graph instead. I know that IntelliJ / PyCharm can do this nicely, at least in the paid editions.
There are other tools that you may want to try. How does pycallgraph look to you?
Update:
Python actually makes it pretty easy to dump a plain list of all the function calls.
Basically all you need to do is
import sys
sys.setprofile(tracefunc)
Write the tracefunc depending on your needs. Find a working example at this SO question: How do I print functions as they are called
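A minimal sketch of such a tracefunc (my addition; it just logs every call and return, which includes dunder methods):
import sys

def tracefunc(frame, event, arg):
    # sys.setprofile delivers "call"/"return" (and c_call/c_return) events;
    # dunder methods such as __iter__ and __next__ show up like any other call.
    if event in ("call", "return"):
        code = frame.f_code
        print("{}: {} ({}:{})".format(
            event, code.co_name, code.co_filename, frame.f_lineno))

sys.setprofile(tracefunc)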
Warning: I needed to start the script from an external shell. Starting it by using the play button in my IDE meant that the script would never terminate but write more and more lines. I assume it collides with the internal profiling done by my IDE.
The official documentation of sys.setprofile: https://docs.python.org/3/library/sys.html#sys.setprofile
And a random tutorial about tracing in Python: https://pymotw.com/2/sys/tracing.html
Note, however, that in my experience you get the best insights into the questions "who is calling whom?" or "where does this value even come from?" by using a plain old debugger.
I also did some research on the subject matter, as information in @LydiaVanDyke's answer fueled better searches.
Printing Entire Call Stack
As @LydiaVanDyke points out, an IDE debugger is a really great way. I use PyCharm, and found that was my favorite solution, because one can:
Follow function calls + exact line numbers in the code
Read code around the calls, better understanding typing
Skip over calls one doesn't care to investigate
Another way is Python's standard library's trace. It offers both command line and embeddable methods for printing the entire call stack.
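For example (a sketch of mine, not from the original answer), the embeddable API can echo every executed line, dunder methods included:
import trace

# trace=True prints each line as it executes; count=False skips
# the coverage-count bookkeeping.
tracer = trace.Trace(trace=True, count=False)
tracer.run('main()')  # assumes a main() defined in __main__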
And yet another one is Python's built-in debugger module, pdb. This (invoked via pdb.set_trace()) really changed the game for me.
Visualization of Profiler Output
gprof2dot is another useful profiler visualization tool.
source code
useful tutorial
Finding Source Code
One of my other problems was not actually seeing the real source code, due to my IDE's stub files (PyCharm).
How to retrieve source code of Python functions details two methods of actually printing source code
With all this tooling, one feels quite empowered!

Fatal Python error: Can't initialize threads for interpreter when calling Python from C

I tried to call Python code from C. The example runs fine for the sample code in my environment (Python 3.6), but when I integrated it into my program, I got the following error when calling Py_Initialize():
...
sem_init: Success
Fatal Python error: Can't initialize threads for interpreter
Could you provide some clues to solve this problem?
It seems the error comes from here, but I am still not sure how to avoid this.
The failing code is
if (head_mutex == NULL)
    Py_FatalError("Can't initialize threads for interpreter");
Searching the code back for head_mutex references finds
#define HEAD_INIT() (void)(head_mutex || (head_mutex = PyThread_allocate_lock()))
which is called right before the failing code.
So, the reason is that PyThread_allocate_lock returns NULL. There are a few different implementations for it in Python codebase depending on the OS and build flags, so you need to debug it or otherwise figure out which one is used in your case to track the error further to an OS call.
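As a hedged sketch of that debugging step (assuming a build with debug symbols; the binary name is hypothetical), you could break on the allocation function and inspect its return value:
$ gdb ./myprogram
(gdb) break PyThread_allocate_lock
(gdb) run
(gdb) finish    # shows the returned pointer; NULL means the allocation failed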
There was a function named sem_init in my program, which conflicted with the system library's sem_init. The program runs fine after I renamed my function, though I'm still not sure of the exact reason; presumably the dynamic linker resolved the C library's internal call to sem_init to my symbol instead.

_shutdown AttributeError (ignored) when linting code that uses M2Crypto

I'm running lint as follows:
$ python -m pylint.lint m2test.py
with this code:
import M2Crypto

def f():
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None).split("\n")
The lint output ends with:
Exception AttributeError: '_shutdown' in <module 'threading' from '/usr/lib/python2.7/site-packages/M2Crypto-0.21.1-py2.7-linux-x86_64.egg/M2Crypto/threading.pyc'> ignored
This code works fine when run (the above is actually a minimal test case, but the full version does work). The exception is ignored, but Bitten considers this a failure, so it stops on this step.
I've tried adding 'M2Crypto.threading.init()'/'M2Crypto.threading.cleanup()' around the definition of the function, but that didn't fix the problem.
How can I prevent this problem from occurring?
I'm using M2Crypto 0.21.1, pylint 0.24 and Python 2.7 (also tried 2.7.2) on Debian Lenny x86_64.
The exception that you are seeing is caused by a bug in the astng package (presumably “Abstract Syntax Tree, Next Generation”?) which is a toolkit on which pylint depends, written by the same people. I should note in passing that I always encourage people to use pyflakes instead of pylint when possible, because it is quick, simple, fast, and predictable, whereas pylint tries to do several kinds of deep magic that are not only slow but that can get it into exactly this kind of trouble. :)
Here are the two packages on PyPI:
http://pypi.python.org/pypi/pylint
http://pypi.python.org/pypi/astng
And note that this problem had to be, necessarily, a bug in pylint and not in your code, because pylint does not run your code in order to produce its report — imagine the havoc that could be wreaked if it did (since code being linted might delete files, etcetera)! Since your code does not get run, no amount of caution, like protecting your call with threading init() or cleanup() functions, could possibly have prevented this error — unless the code snippets happened, for other reasons, to alter the behavior we are about to investigate.
So, on to your actual exception.
I had never actually heard of _shutdown before! A quick search of the Python standard library showed its definition in threading.py but not a call of the function from anywhere; only by searching the Python C source code did I discover where in pythonrun.c, during interpreter shutdown, the function is actually called:
static void
wait_for_thread_shutdown(void)
{
    ...
    PyObject *threading = PyMapping_GetItemString(tstate->interp->modules,
                                                  "threading");
    if (threading == NULL) {
        /* threading not imported */
        PyErr_Clear();
        return;
    }
    result = PyObject_CallMethod(threading, "_shutdown", "");
    if (result == NULL) {
        PyErr_WriteUnraisable(threading);
    }
    ...
}
Apparently it is some sort of cleanup function that the threading Standard Library module requires, and they have special-cased the Python interpreter itself to make sure that it gets called.
As you can see from the code above, Python quietly and without complaint handles the case where the threading module never gets imported during a program's run. But if threading does get imported, and still exists at shutdown time, then the interpreter looks inside for a _shutdown function and goes so far as to print an error message — and then return a non-zero exit status, the cause of your problems — if it cannot call it.
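You can reproduce the symptom directly with a sketch of mine (assuming Python 2.7, as in the question): let an empty file shadow the real module, and the interpreter prints the same ignored AttributeError at exit.
# repro.py -- create an empty threading.py in the same directory first
import threading           # picks up the local, empty threading.py
print threading.__file__   # shows which file shadowed the stdlib module
# On exit: Exception AttributeError: '_shutdown' in <module 'threading' ...> ignored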
So we have to discover why the threading module exists but has no _shutdown method at the moment when pylint is done examining your program and Python is exiting. Some instrumentation is called for. Can we print out what the module looks like as pylint exits? We can! The pylint/lint.py module, in its last few lines, runs its “main program” by instantiating a Run class it has defined:
if __name__ == '__main__':
    Run(sys.argv[1:])
So I opened lint.py in my editor — one of the magnificent things about having each little project installed in a Python Virtual Environment is that I can jump in and edit third-party code for quick experiments — and added the following print statement down at the bottom of the Run class's __init__() method:
sys.path.pop(0)
print "*****", sys.modules['threading'].__file__  # added by me!
if exit:
    sys.exit(self.linter.msg_status)
I re-ran the command:
python -m pylint.lint m2test.py
And out came the __file__ string of the threading module:
***** /home/brandon/venv/lib/python2.7/site-packages/M2Crypto/threading.pyc
Well, look at that.
This is the problem!
According to this path, there actually exists an M2Crypto/threading.py module that, under all normal circumstances, should just be called M2Crypto.threading, and therefore sit in the sys.modules dictionary under the name:
sys.modules['M2Crypto.threading']
But somehow that file is also getting loaded as the main Python threading module, shadowing the official threading module that sits in the Standard Library. Because of this, the Python exit logic is quite correctly complaining that the Standard Library _shutdown() function is missing.
How could this happen? Top-level modules can only appear in paths that are listed explicitly in sys.path, not in sub-directories beneath them. This leads to a new question: is there any point during the pylint run that the …/M2Crypto/ directory itself is getting put on sys.path as though it contained top-level modules? Let's see!
We need more instrumentation: we need to have Python tell us the moment that a directory with M2Crypto in the name appears in sys.path. It will really slow things down, but let's add a trace function to pylint's __init__.py — because that is the first module that gets imported when you run -m pylint.lint — that will write an output file telling us, for every line of code executed, whether sys.path has any bad values in it:
def install_tracer():
    import sys
    output = open('mytracer.out', 'w')
    def mytracer(frame, event, arg):
        broken = any(p.endswith('M2Crypto') for p in sys.path)
        output.write('{} {}:{} {}\n'.format(
            broken, frame.f_code.co_filename, frame.f_lineno, event))
        return mytracer
    sys.settrace(mytracer)

install_tracer()
del install_tracer
Note how careful I am here: I define only one name in the module's namespace, and then carefully delete it to clean up after myself before I let pylint continue loading! And all of the resources that the trace function itself needs — namely, the sys module and the output open file — are available in the install_tracer() closure so that, from the outside, pylint looks exactly the same as always. Just in case anyone tries to introspect it, like pylint might!
This generates a file mytracer.out of about 800k lines, each of which looks something like this:
False /home/brandon/venv/lib/python2.7/posixpath.py:118 call
The False says that sys.path looks clean, the filename and line number are the line of code being executed, and call indicates what stage of execution the interpreter is in.
So does sys.path ever get poisoned? Let's look at just the first True or False on each line, and see how many successive lines start with each value:
$ awk '{print$1}' mytracer.out | uniq -c
607997 False
3173 True
4558 False
33217 True
4304 False
41699 True
2953 False
110503 True
52575 False
Wow! That's a problem! For runs of several thousand lines at a time, our test case is True, which means that the interpreter is running with …/M2Crypto/ — or some variant of a pathname with M2Crypto in it — on the path, where it should not be; only the directory that contains …/M2Crypto should ever be on the path. Looking for the first False to True transition in the file, I see this:
False /home/brandon/venv/lib/python2.7/site-packages/logilab/astng/builder.py:132 line
False /home/brandon/venv/lib/python2.7/posixpath.py:118 call
...
False /home/brandon/venv/lib/python2.7/posixpath.py:124 line
False /home/brandon/venv/lib/python2.7/posixpath.py:124 return
True /home/brandon/venv/lib/python2.7/site-packages/logilab/astng/builder.py:133 line
And looking at lines 132 and 133 in the builder.py file reveals our culprit:
130     # build astng representation
131     try:
132         sys.path.insert(0, dirname(path)) # XXX (syt) iirk
133         node = self.string_build(data, modname, path)
134     finally:
135         sys.path.pop(0)
Note the comment, which is part of the original code, not an addition of my own! Obviously, XXX (syt) iirk is an exclamation in this programmer's strange native language for the phrase, “put this module's parent directory on sys.path so that pylint will break mysteriously every time someone forces pylint to introspect a package with a threading sub-module.” It is, obviously, a very compact native language. :)
If you adjust the tracing module to watch sys.modules for the actual import of threading — an exercise I will leave to the reader — you will see that it happens when SocketServer, which is imported by some other Standard Library module during the analysis, in turn tries to innocently import threading.
So let us review what is happening:
pylint is dangerous magic.
As part of its magic, if it sees you import foo, then it runs off trying to find foo.py on disk, to parse it, and to predict whether you are loading valid or invalid names from its namespace.
[See my comment, below.] Because you call .split() on the return value of RSA.as_pem(), pylint tries to introspect the as_pem() method, which in turn uses the M2Crypto.BIO module, which in turn makes calls that induce pylint to import threading.
As part of loading any module foo.py, pylint throws the directory containing foo.py on sys.path, even if that directory is inside a package, and therefore gives modules in that directory the privilege of shadowing Standard Library modules of the same name during its analysis.
When Python exits, it is upset that the M2Crypto.threading library is sitting where threading belongs, because it wants to run the _shutdown() method of threading.
You should report this as a bug to the pylint / astng folks at logilab.org. Tell them I sent you.
If you decide to keep using pylint after it has done this to you, then there seem to be two solutions in this case: either don't inspect code that calls M2Crypto, or import threading during the pylint import process — by sticking import threading into the pylint/__init__.py, for example — so that the module gets the chance to grab the sys.modules['threading'] slot before pylint gets all excited and tries to let M2Crypto/threading.py grab the slot instead.
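A sketch of that second workaround (my wording; the answer only describes it in prose):
# At the top of pylint/__init__.py: claim sys.modules['threading'] for
# the real Standard Library module before astng's sys.path manipulation
# lets M2Crypto/threading.py shadow it.
import threading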
In conclusion, I think the author of astng says it best: XXX (syt) iirk. Indeed.
Many thanks to Brandon Craig Rhodes for tracing this down and for such a detailed post.
I've removed the offending line from astng; the code is available from the hg repository until logilab-astng 0.23.0 is out. And I can confirm this fixes the OP's problem.
This looks more like a hack, but I think it works: copy the result of as_pem() and then split the copy. (Presumably the slice defeats the linter's inference, so it never introspects as_pem().)
import M2Crypto

def f():
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None)[:].split("\n")
I'm using Python 2.6.7, M2Crypto 0.21.1, pylint 0.23
I was unable to reproduce (pylint 0.24 and M2Crypto 0.21.1 on Ubuntu 11.04 64bit) but two suggestions:
Explicitly initialize threading:
import M2Crypto

def f():
    M2Crypto.threading.init()
    M2Crypto.RSA.new_pub_key("").as_pem(cipher=None).split("\n")
    M2Crypto.threading.cleanup()
Or recompile without threading:
m2crypto = Extension(name = 'M2Crypto.__m2crypto',
                     sources = ['SWIG/_m2crypto.i'],
                     extra_compile_args = ['-DTHREADING'],  # remove this flag to build without threading
                     #extra_link_args = ['-Wl,-search_paths_first'], # Uncomment to build Universal Mac binaries
                     )

What cool hacks can be done using sys.settrace?

I love being able to modify the arguments that get sent to a function using settrace, like:
import sys

def trace_func(frame, event, arg):
    value = frame.f_locals["a"]
    if value % 2 == 0:
        value += 1
    frame.f_locals["a"] = value

def f(a):
    print a

if __name__ == "__main__":
    sys.settrace(trace_func)
    for i in range(0, 5):
        f(i)
And this will print:
1
1
3
3
5
What other cool stuff can you do using settrace?
I would strongly recommend against abusing settrace. I'm assuming you understand this stuff, but others coming along later may not. There are a few reasons:
Settrace is a very blunt tool. The OP's example is a simple one, but there's practically no way to extend it for use in a real system.
It's mysterious. Anyone coming to look at your code would be completely stumped why it was doing what it was doing.
It's slow. Invoking a Python function for every line of Python executed is going to slow down your program by many multiples.
It's usually unnecessary. The original example here could have been accomplished in a few other ways (modify the function, wrap the function in a decorator, call it via another function, etc), any of which would have been better than settrace.
It's hard to get right. In the original example, if you had not called f directly, but instead called g which called f, your trace function wouldn't have done its job, because you returned None from the trace function, so it's only invoked once and then forgotten (a sketch of the pattern that keeps tracing alive follows this list).
It will keep other tools from working. This program will not be debuggable (because debuggers use settrace), it will not be traceable, it will not be possible to measure its code coverage, etc. Part of this is due to lack of foresight on the part of the Python implementors: they gave us settrace but no gettrace, so it's difficult to have two trace functions that work together.
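Here is that sketch (mine, not the OP's) of the return-value contract: the global trace function must return a local trace function to keep receiving events inside a frame.
import sys

def global_trace(frame, event, arg):
    # Called on each "call" event; returning local_trace opts that frame
    # into line/return events. Returning None (the pitfall above) silences
    # tracing inside the frame.
    if event == "call":
        return local_trace
    return None

def local_trace(frame, event, arg):
    print("%s line %d in %s" % (event, frame.f_lineno, frame.f_code.co_name))
    return local_trace

sys.settrace(global_trace)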
Trace functions make for cool hacks. It's fun to be able to abuse it, but please don't use it for real stuff. If I sound hectoring, I apologize, but this has been done in real code, and it's a pain. For example, DecoratorTools uses a trace function to perform the magic feat of making this syntax work in Python 2.3:
# Method decorator example
from peak.util.decorators import decorate

class Demo1(object):
    decorate(classmethod)   # equivalent to @classmethod
    def example(cls):
        print "hello from", cls
A neat hack, but unfortunately, it meant that any code that used DecoratorTools wouldn't work with coverage.py (or debuggers, I guess). Not a good tradeoff if you ask me. I changed coverage.py to provide a mode that lets it work with DecoratorTools, but I wish I hadn't had to.
Even code in the standard library sometimes gets this stuff wrong. Pyexpat decided to be different than every other extension module, and invoke the trace function as if it were Python code. Too bad they did a bad job of it.
</rant>
I made a module called pycallgraph which generates call graphs using sys.settrace().
Of course, code coverage is accomplished with the trace function. One cool thing we haven't had before is branch coverage measurement, and that's coming along nicely, about to be released in an alpha version of coverage.py.
So for example, consider this function:
def foo(x):
    if x:
        y = 10
    return y
if you test it with this call:
assert foo(1) == 10
then statement coverage will tell you that all the lines of the function were executed. But of course, there's a simple problem in that function: calling it with 0 raises an UnboundLocalError.
Branch measurement would tell you that there's a branch in the code that isn't fully exercised, because only one leg of the branch is ever taken.
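In released versions of coverage.py this landed as the --branch option; a quick usage sketch (my addition, with a hypothetical prog.py):
$ coverage run --branch prog.py
$ coverage report -m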
For example, get the memory consumption of Python code line-by-line: http://pypi.python.org/pypi/memory_profiler
One recent project that uses settrace heavily is PySnooper.
It helps new programmers trace/log/monitor their program's output. Cheers!
I don't have a comprehensive answer, but one thing I did with it, with the help of another user on SO, was to create a program that generates the trace tables of other Python programs.
The Python debugger, pdb, uses sys.settrace to analyse the lines being debugged.
Here's a C optimization/extension for pdb that also uses sys.settrace:
https://bitbucket.org/jagguli/cpdb
