print the value of variables after each line of code - python

I'm trying to understand how the following PyTorch code works. To learn what each function does, what it outputs, and each output variable's value and size, I'm adding print() after each line of code.
s = pc()
for _ in trange(max_length):
    if self.onnx:
        outputs = torch.tensor(self.decoder_with_lm_head.run(None, {"input_ids": generated.cpu().numpy(),
                                                                    "encoder_hidden_states": encoder_outputs_prompt})[0][0])
        print(f'decoder output -- {outputs}')
    else:
        outputs = self.decoder_with_lm_head(input_ids=generated, encoder_hidden_states=encoder_outputs_prompt)[0]
    next_token_logits = outputs[-1, :] / (temperature if temperature > 0 else 1.0)
    print(f'next_token_logits -- {next_token_logits}')
    if int(next_token_logits.argmax()) == 1:
        print(f'next token logits argmax -- {int(next_token_logits.argmax())}')
        break
    new_logits.append(next_token_logits)
    print(f'new_logits {i} -- {new_logits}')
    print(f'generated -- {generated}')
    print(f'generated view list -- {set(generated.view(-1).tolist())}')
    for _ in set(generated.view(-1).tolist()):
        next_token_logits[_] /= repetition_penalty
        print(f'next_token_logits[_] -- {next_token_logits[_]}')
    if temperature == 0:  # greedy sampling
        next_token = torch.argmax(next_token_logits).unsqueeze(0)
        print(f'next_token -- {next_token}')
    else:
        filtered_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p)
        next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=1)
    generated = torch.cat((generated, next_token.unsqueeze(0)), dim=1)
    print(f'generated end -- {generated}')
    new_tokens = torch.cat((new_tokens, next_token), 0)
    print(f'new_tokens end -- {new_tokens}')
    i += 1
    print('--------------------------\n')
e = pc()
ap = e - s
print(ap)
print(timedelta(seconds=ap))
return self.tokenizer.decode(new_tokens), new_logits
My question: is there a more efficient way of tracking these values and their shapes, or are there libraries that handle this task?

After doing a lot of searching, I found a library that fits my requirements: pysnooper. Instead of adding print() after each line of code, I can just apply pysnooper's decorator:
@pysnooper.snoop()
def greedy_search(input_text, num_beam, max_length, max_context_length=512):
    ...
and it will print every variable's value along with the execution time of each line.
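For instance, here's a minimal, self-contained sketch of the decorator on a toy function (the function itself is just an example, not part of my code):
import pysnooper

@pysnooper.snoop()  # logs every executed line and every variable change to stderr
def number_to_bits(number):
    bits = []
    while number:
        number, remainder = divmod(number, 2)
        bits.insert(0, remainder)
    return bits

number_to_bits(6)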
For more info, refer to its GitHub page.
I feel like debuggers are a bit complicated, and I also want to run this in notebooks; this is the simplest option I found. Any suggestions are welcome.

The standard practice for monitoring code is to use a debugger. The documentation for the standard library debugger, pdb, can be found here: https://docs.python.org/3/library/pdb.html
There are other options that might suit your needs a bit better; pdb is general-purpose, so while learning it isn't a bad idea, it might be more trouble than your current method.
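As a minimal sketch, the built-in breakpoint() (Python 3.7+) drops you into pdb at any line, where you can inspect values interactively (the function here is just a toy example):
def running_total(values):
    total = 0
    for v in values:
        breakpoint()  # pauses here; in pdb try: p total, p v, n (next line), c (continue)
        total += v
    return total

running_total([1, 2, 3])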
Thonny is a great tool for easily seeing what Python code is doing, as well as being a debugger. It steps through each line of code and shows you each variable's value, the operations it undergoes, and the functions it feeds into.
Apart from those two options, the PyCharm IDE has a built-in visual debugger, and so does Visual Studio.

Related

Intel Vtune cannot find python source file

This is an old problem, as demonstrated in https://community.intel.com/t5/Analyzers/Unable-to-view-source-code-when-analyzing-results/td-p/1153210. I have tried all the listed methods, none of them works, and I cannot find any more solutions on the internet. Basically, VTune cannot find the custom Python source file no matter what I try. I am using the most recent version as of writing. Please let me know whether there is a solution.
For example, suppose you run the following program:
def myfunc(*args):
    # Do a lot of things.

if __name__ == '__main__':
    # Do something and call myfunc
Call this script main.py. Now use the newest VTune version (I am using Ubuntu 18.04), run vtune-gui and do a basic Hotspots analysis. You will not find any information on this file. However, a huge pile of information on Python and its other code is found (related to your Python environment). In theory, you should be able to find the source of main.py as well as the cost of each line in that script. However, that is simply not happening.
Desired behavior: I would really like to find the source file and function in the top-down view (or any view, really). Any advice is welcome.
VTune offers full support for profiling Python code, and the tool should be able to display the source code in your Python file as you expect. Could you please check whether the function you expect to see in the VTune results ran long enough?
Just to confirm that everything is working fine, I wrote the matrix multiplication code shown below (don't worry about the accuracy of the code itself):
def matrix_mul(X, Y):
    result_matrix = [[1 for i in range(len(X))] for j in range(len(Y[0]))]
    # iterate through rows of X
    for i in range(len(X)):
        # iterate through columns of Y
        for j in range(len(Y[0])):
            # iterate through rows of Y
            for k in range(len(Y)):
                result_matrix[i][j] += X[i][k] * Y[k][j]
    return result_matrix
Then I called this function (matrix_mul) on my Ubuntu machine with large enough matrices that the overall execution time was on the order of a few seconds.
I used the below command to start profiling (you can also see the VTune version I used):
/opt/intel/oneapi/vtune/2021.1.1/bin64/vtune -collect hotspots -knob enable-stack-collection=true -data-limit=500 -ring-buffer=10 -app-working-dir /usr/bin -- python3 /home/johnypau/MyIntel/temp/Python_matrix_mul/mat_mul_method.py
Now open the VTune results in the GUI and, under the bottom-up tab, group by "Module / Function / Call Stack" (or whatever grouping you prefer).
You should be able to see the module (mat_mul_method.py in my case) and the function matrix_mul. If you double-click, VTune should be able to load the sources too.

Python unit test advice

Can I get some advice on writing a unit test for the following piece of code?
%python
import sys
import json

sys.argv = []
sys.argv.append('{"product1":{"brand":"x","type":"y"}}')
sys.argv.append('{"product1":{"brand":"z","type":"a"}}')
products = sys.argv

yy = {}
my_products = []
for n, i in enumerate(products[:]):
    xx = json.loads(i)
    for j in xx.keys():
        yy["brand"] = xx[j]['brand']
        yy["type"] = xx[j]["type"]
    my_products.append(yy)
print my_products
As it stands, there aren't any units to test!
A test might consist of:
- packaging your program in a script
- invoking your program from a Python unit test as a subprocess
- piping the output of your command process to a buffer
- asserting the buffer is what you expect it to be
While the above would technically give you an automated test of your code, it comes with a lot of burden:
- multiprocessing
- weak assertions, by not having types
- coarse interaction (you have to invoke a script; you can't just assert on the brand/type logic)
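For what it's worth, that heavier approach might look roughly like this (a sketch only; the script name products_script.py is hypothetical):
import subprocess
import unittest

class ScriptOutputTest(unittest.TestCase):
    def test_script_prints_products(self):
        # Run the script as a child process and capture everything it prints.
        result = subprocess.run(
            ["python", "products_script.py"],
            capture_output=True, text=True, check=True,
        )
        # Weak, string-level assertion on the printed list of products.
        self.assertIn("brand", result.stdout)

if __name__ == "__main__":
    unittest.main()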
One way to address those issues is to package your code into smaller units, i.e. create a method to encapsulate:
for j in xx.keys():
    yy["brand"] = xx[j]['brand']
    yy["type"] = xx[j]["type"]
my_products.append(yy)
Import it, exercise it, and assert on its output. Then there might be something to map the loading and the xx.keys() loop over an array (which you could also encapsulate as a function).
And then there could be a top level taking in args and composing the product mapper, loader, and transformer. Since your code will be thoroughly unit tested at that point, you may get away with not having a test for the top-level script.
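A minimal sketch of that refactor (the names extract_products and the test class are mine, not from your code):
import json
import unittest

def extract_products(product_json):
    # Parse one JSON argument and return its brand/type entries.
    parsed = json.loads(product_json)
    return [{"brand": v["brand"], "type": v["type"]} for v in parsed.values()]

class ExtractProductsTest(unittest.TestCase):
    def test_extracts_brand_and_type(self):
        result = extract_products('{"product1": {"brand": "x", "type": "y"}}')
        self.assertEqual(result, [{"brand": "x", "type": "y"}])

if __name__ == "__main__":
    unittest.main()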

Writing word backwards

I know there are existing options such as:
sampleword[::-1]
or
reversed(string)
but I wanted to write it myself. I don't get why my code doesn't work. Could you help me?
h=input('word\n\n');
rev(h)

def rev(h):
    counter=len(h);
    reverse="";
    while counter>0:
        reverse+=h[counter];
        counter=counter-1;
    return reverse
    #print (reverse); ?

input();
There are a couple of issues with your code; I pointed them out in the comments of this adjusted script:
def rev(h):
    counter = len(h) - 1  # indexes of h go from 0 to len(h) - 1
    reverse = ""
    while counter >= 0:   # changed to >= 0
        reverse += h[counter]
        counter -= 1
    return reverse

h = input('word\n\n')
revers = rev(h)  # call rev(h) after the definition of rev!
print(revers)    # actually print the result
# deleted your last line
In addition, you don't need to terminate lines with ; in Python, and you can write counter = counter - 1 as counter -= 1.
You have a few problems in your code:
- You call rev() before it is defined.
- Indexing starts at 0, so you need >= 0 instead of > 0.
- You want counter to start at len(h) - 1 because, again, indexing starts at 0 (h[len(h)] is out of range).
- You do not need semicolons at the end of your lines.
Here is a much simpler way using recursion:
def reverse(text):
    if len(text) <= 1:
        return text
    return reverse(text[1:]) + text[0]
As you use Python more, you will come across the concept of "Pythonic" code: code that uses Python's features well and shows good programming style.
So, you have a good answer above showing how your code can be corrected to work in Python. But the issue I want to point out is that it's C-style programming. Someone once said you can write C in any language (especially true in C++ and C#), but if you do, you are probably not using the features of the language well. In Python, writing this style of function while ignoring the available built-in, implemented-in-C, lightning-fast methods is not Pythonic.
I guarantee you that ''.join(reversed(string)) and string[::-1] are both faster than your rev() Python code. Python has a ton of built-in functionality and an extensive standard library you can import for extra, already-debugged functionality. See if you can find a good way to time the execution of string[::-1] and your rev() function. There is a good way, in Python.
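For instance, here's a minimal sketch using the standard library's timeit module (exact numbers will vary by machine, but the slice version typically wins by a wide margin):
import timeit

def rev(h):
    # the corrected loop version from the answer above
    counter = len(h) - 1
    reverse = ""
    while counter >= 0:
        reverse += h[counter]
        counter -= 1
    return reverse

word = "palindrome" * 10

print(timeit.timeit(lambda: rev(word), number=10_000))
print(timeit.timeit(lambda: "".join(reversed(word)), number=10_000))
print(timeit.timeit(lambda: word[::-1], number=10_000))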

How to debug Python memory fault?

Edit: I'd really appreciate help in finding the bug, but since it might prove hard to find/reproduce, any general debugging help would be greatly appreciated too! Help me help myself! =)
Edit 2: Narrowing it down, commenting out code.
Edit 3: It seems lxml might not be the culprit, thanks! The full script is here. I need to go over it looking for references. What do they look like?
Edit 4: Actually, the script stops (goes to 100%) in this, the parse_og part of it. So edit 3 is false; it must be lxml somehow.
Edit 5 MAJOR EDIT: As suggested by David Robinson and TankorSmash below, I've found a type of data content that sends lxml.etree.HTML(data) into a wild loop. (I carelessly disregarded it, but find my sins redeemed, as I've paid a price to the tune of an extra two days of debugging! ;) A working crashing script is here. (Also opened a new question.)
Edit 6: Turns out this is a bug in lxml version 2.7.8 and below (at least). Updated to lxml 2.9.0, and the bug is gone. Thanks also to the fine folks over at this follow-up question.
I don't know how to debug this weird problem I'm having.
The code below runs fine for about five minutes, then RAM suddenly fills up completely (from 200 MB to 1700 MB during the 100% CPU period; then, when memory is full, it goes into a blue wait state).
It's due to the code below, specifically the first two lines. That's for sure. But what is going on? What could possibly explain this behaviour?
def parse_og(self, data):
    """ lxml parsing to the bone! """
    try:
        tree = etree.HTML( data )  # << break occurs on this line >>
        m = tree.xpath("//meta[@property]")
        #for i in m:
        #    y = i.attrib['property']
        #    x = i.attrib['content']
        #    # self.rj[y] = x  # commented out in this example because code fails anyway
        tree = ''
        m = ''
        x = ''
        y = ''
        i = ''
        del tree
        del m
        del x
        del y
        del i
    except Exception:
        print 'lxml error: ', sys.exc_info()[1:3]
        print len(data)
        pass
You can try low-level Python debugging with GDB. There is probably a bug in the Python interpreter or in the lxml library, and it is hard to find without extra tools.
You can interrupt your script running under gdb when CPU usage goes to 100% and look at the stack trace. It will probably help you understand what's going on inside the script.
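A minimal sketch of that workflow (12345 stands in for your process id; the py-bt command assumes CPython's python-gdb extensions are loaded, while plain bt always works):
# attach to the running interpreter by PID when CPU usage hits 100%
gdb -p 12345
(gdb) bt       # C-level stack trace (shows where lxml/libxml2 is spinning)
(gdb) py-bt    # Python-level stack trace, if the python-gdb extensions are available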
It must be due to some references which keep the documents alive. One must always be careful with string results from XPath evaluation. I see you have assigned None to tree and m, but not to y, x and i.
Can you also assign None to y, x and i?
Tools are also helpful when trying to track down memory problems. I've found guppy to be a very useful Python memory profiling and exploration tool.
It is not the easiest to get started with due to a lack of good tutorials / documentation, but once you get to grips with it you will find it very useful. Features I make use of:
Remote memory profiling (via sockets)
Basic GUI for charting usage, optionally showing live data
Powerful, and consistent, interfaces for exploring data usage in a Python shell
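A minimal sketch of how I use it (on Python 3 the package is published as guppy3, but the import is the same):
from guppy import hpy

hp = hpy()
hp.setrelheap()    # measure only allocations made from this point on
# ... run the suspect code here, e.g. parse_og(data) in a loop ...
print(hp.heap())   # prints live objects grouped by type, with counts and sizes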

What is the easiest way to generate a Control Flow-Graph for a method in Python?

I am writing a program that tries to compare two methods. I would like to generate control-flow graphs (CFGs) for all matched methods and use a topological sort to compare the two graphs.
RPython, the translation toolchain behind PyPy, offers a way of grabbing the flow graph (in the pypy/rpython/flowspace directory of the PyPy project) that it uses for type inference.
This works quite well in most cases, but generators are not supported. The result will be in SSA form, which might be good or bad depending on what you want.
There's a Python package called staticfg which does exactly this: generation of control-flow graphs from a piece of Python code.
For instance, putting the first quick-sort Python snippet from Rosetta Code in qsort.py, the following code generates its control-flow graph:
from staticfg import CFGBuilder
cfg = CFGBuilder().build_from_file('quick sort', 'qsort.py')
cfg.build_visual('qsort', 'png')
Note that it doesn't seem to understand more advanced control flow like comprehensions.
I found that py2cfg gives a better representation of the control-flow graph (CFG) than the one from staticfg.
https://gitlab.com/classroomcode/py2cfg
https://pypi.org/project/py2cfg/
Let's take this function in Python:
def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

fib_gen = fib()
for _ in range(10):
    next(fib_gen)
Image from StaticCFG:
Image from PY2CFG:
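If I remember its interface correctly, py2cfg can also be driven from the command line (a sketch; check the project pages above for the exact usage):
py2cfg fib_example.py
# writes fib_example_cfg.svg next to the input file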
http://pycallgraph.slowchop.com/ looks like what you need.
The Python trace module also has an option, --trackcalls, that can be an entry point for the call-tracing machinery in the stdlib.
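For example, a minimal invocation (it prints the caller/callee relationships after the program exits):
python -m trace --trackcalls qsort.py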
