How can I measure code coverage in a production system? - python

I would like to measure the coverage of my Python code which gets executed in the production system.
I want an answer to this question:
Which lines get executed often (hot spots) and which lines are never used (dead code)?
Of course this must not slow down my production site.
I am not talking about measuring the coverage of tests.

I assume you are not talking about test suite code coverage which the other answer is referring to. That is a job for CI indeed.
If you want to know which code paths are hit often in your production system, then you're going to have to do some instrumentation / profiling. This will have a cost. You cannot add measurements for free. You can do it cheaply though and typically you would only run it for short amounts of time, long enough until you have your data.
Python has cProfile to do full profiling, measuring call counts per function etc. This will give you the most accurate data but will likely have relatively high impact on performance.
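As a quick sketch of that approach, you can run cProfile programmatically around a suspect code path and print the hottest entries (the profiled function here is just a stand-in workload):

```python
import cProfile
import io
import pstats

def busy():
    # Stand-in workload; in production this would be a request handler
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
busy()
profiler.disable()

# Render the five most expensive entries by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats('cumulative').print_stats(5)
report = stream.getvalue()
print(report)
```

In a real deployment you would enable the profiler only for a sampled fraction of requests, since full instrumentation on every call is what makes cProfile expensive.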
Alternatively, you can do statistical profiling which basically means you sample the stack on a timer instead of instrumenting everything. This can be much cheaper, even with high sampling rate! The downside of course is a loss of precision.
Even though it is surprisingly easy to do in Python, this stuff is still a bit much to put into an answer here. There is an excellent blog post by the Nylas team on this exact topic though.
The sampler below was lifted from the Nylas blog with some tweaks. After you start it, it fires an interrupt every millisecond and records the current call stack:
import collections
import signal

class Sampler(object):
    def __init__(self, interval=0.001):
        self.stack_counts = collections.defaultdict(int)
        self.interval = interval

    def start(self):
        signal.signal(signal.SIGVTALRM, self._sample)
        signal.setitimer(signal.ITIMER_VIRTUAL, self.interval, 0)

    def _sample(self, signum, frame):
        stack = []
        while frame is not None:
            formatted_frame = '{}({})'.format(
                frame.f_code.co_name,
                frame.f_globals.get('__name__'))
            stack.append(formatted_frame)
            frame = frame.f_back
        formatted_stack = ';'.join(reversed(stack))
        self.stack_counts[formatted_stack] += 1
        signal.setitimer(signal.ITIMER_VIRTUAL, self.interval, 0)
You inspect stack_counts to see what your program has been up to. This data can be plotted as a flame graph, which makes it easy to see which code paths your program spends the most time in.
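To feed that data to Brendan Gregg's flamegraph.pl, the stack_counts dict just needs to be written out in the collapsed-stack format, one "stack count" pair per line (the stacks below are made-up examples):

```python
# Made-up sample of what Sampler.stack_counts might contain
stack_counts = {
    'main(__main__);handle_request(app)': 40,
    'main(__main__);render(app)': 10,
}

def to_collapsed(stack_counts):
    # flamegraph.pl expects "semicolon;separated;stack count" lines
    return '\n'.join('{} {}'.format(stack, count)
                     for stack, count in sorted(stack_counts.items()))

print(to_collapsed(stack_counts))
```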

If I understand correctly, you want to learn which parts of your application are used most often by users.
TL;DR:
Use one of the metrics frameworks for Python if you do not want to do it by hand. Some options are listed below:
DataDog
Prometheus
Prometheus Python Client
Splunk
It is usually done at the function level, and the right approach depends on the application:
If it is a desktop app with internet access:
You can create a simple database and record how many times your functions are called. To accomplish this, write a small helper and call it inside every function you want to track. Then define an asynchronous task to upload your data.
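A minimal sketch of that idea using a decorator as the helper (the upload step is left out; in practice a background task would periodically write call_counts to your database):

```python
import collections
import functools

call_counts = collections.Counter()

def track(func):
    # Count every call so the totals can be uploaded later
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        call_counts[func.__qualname__] += 1
        return func(*args, **kwargs)
    return wrapper

@track
def checkout():
    return 'done'

checkout()
checkout()
print(call_counts)  # Counter({'checkout': 2})
```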
If it is a web application:
You can track which functions are called from JS (mostly preferred for user-behaviour tracking) or from the web API. It is good practice to start from the outside and work inwards. First detect which endpoints are called frequently (if you are using a proxy like nginx, you can analyze the server logs to gather this information; it is the easiest and cleanest way). After that, add a logger to every other function you want to track and analyze your logs every week or month.
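A sketch of the log-analysis step, assuming nginx's default combined log format (the lines below are fabricated):

```python
import collections
import re

# Fabricated lines in nginx's combined log format
log_lines = [
    '1.2.3.4 - - [10/Oct/2023:13:55:36 +0000] "GET /api/users HTTP/1.1" 200 512',
    '1.2.3.4 - - [10/Oct/2023:13:55:37 +0000] "GET /api/users HTTP/1.1" 200 498',
    '5.6.7.8 - - [10/Oct/2023:13:55:38 +0000] "POST /api/orders HTTP/1.1" 201 64',
]

# Pull the method and path out of the quoted request line
request_re = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) ')

endpoint_counts = collections.Counter()
for line in log_lines:
    match = request_re.search(line)
    if match:
        endpoint_counts[match.group('path')] += 1

print(endpoint_counts.most_common())
```

The same counting works on a real access log by iterating over the open file instead of the in-memory list.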
But if you want to analyze your production code line by line (a very bad idea performance-wise), you can start your application under a profiler. Python already ships with one: cProfile.

Alternatively, create a text file and, in every method of your program, append a marker line such as "Method one executed". Run the web application thoroughly ten times or so, as a viewer would, and then write a small Python program that reads the file, counts occurrences of each marker (or pattern), and prints the totals.

Related

Looking for a way to check for unexpected windows during a for loop in python automated testing

I am running some automated tests on a POS application where a large number of sales are entered using a for loop. During the test there are times when the application window can lose focus; when this happens we get a pop-up window, and this causes the test to stop.
I currently have a lot of checks in the code that look for this window after multiple steps in the loop, but each one I add adds time to the sale and slows the test down. Is there a way in Python to constantly check for something like these windows?
To note I am using TestComplete and I have looked into the event handlers and it does not appear I will be able to use them due to how the application was developed.
I recommend you try to use an interrupt schedule. The basic idea is that you trigger a function (in this case your function checking for unexpected windows) whenever some counter reaches a value.
If you don't need to check at very specific and short time intervals, it is enough to use something like time.clock() (deprecated in Python 3; time.monotonic() is the modern equivalent). If you DO need the above, there are libraries for that, specifically the signal module or the more advanced sched, which are both part of the standard library (Documentation of signal, Documentation of sched).
There are also other (external, non-standard) modules that can work, but sched and/or signal should be enough here.
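As a sketch of the sched approach, a check function can reschedule itself at a fixed interval until a deadline; check_for_popup here is a placeholder for the real TestComplete window check:

```python
import sched
import time

checks_run = []

def check_for_popup(scheduler, interval, deadline):
    # Placeholder for the real "is there an unexpected window?" check
    checks_run.append(time.monotonic())
    if time.monotonic() + interval <= deadline:
        scheduler.enter(interval, 1, check_for_popup,
                        (scheduler, interval, deadline))

scheduler = sched.scheduler(time.monotonic, time.sleep)
deadline = time.monotonic() + 0.05
scheduler.enter(0, 1, check_for_popup, (scheduler, 0.01, deadline))
scheduler.run()  # blocks until no events remain
```

Note that scheduler.run() blocks the calling thread, so in a real test you would run it in a background thread alongside the sales loop.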
Feel free to ask if any further question should arise.
This SO post might also help: real time interrupts in python

Python minecraft pyglet glClear() skipping frames

I recently downloaded fogleman's excellent "Minecraft in 500 lines" demo from https://github.com/fogleman/Craft. I used the 2to3 tool and corrected some details by hand to make it runnable under python3. I am now wondering about a thing with the call of self.clear() in the render method. This is my modified rendering method that is called every frame by pyglet:
def on_draw(self):
    """ Called by pyglet to draw the canvas.
    """
    frameStart = time.time()
    self.clear()
    clearTime = time.time()
    self.set_3d()
    glColor3d(1, 1, 1)
    self.model.batch.draw()
    self.draw_focused_block()
    self.set_2d()
    self.draw_label()
    self.draw_reticle()
    renderTime = time.time()
    self.clearBuffer.append(str(clearTime - frameStart))
    self.renderBuffer.append(str(renderTime - clearTime))
As you can see, I took the execution times of self.clear() and the rest of the rendering method. The call of self.clear() calls this method of pyglet, that can be found at .../pyglet/window/__init__.py:
def clear(self):
    '''Clear the window.

    This is a convenience method for clearing the color and depth
    buffer. The window must be the active context (see `switch_to`).
    '''
    gl.glClear(gl.GL_COLOR_BUFFER_BIT | gl.GL_DEPTH_BUFFER_BIT)
So I basically do make a call to glClear().
I noticed some frame drops while testing the game (at 60 FPS), so I added the above code to measure the execution time of the commands, and especially that one of glClear(). I found out that the rendering itself never takes longer than 10 ms. But the duration of glClear() is a bit of a different story, here is the distribution for 3 measurements under different conditions:
[Graph: duration of glClear() under different conditions]
The magenta lines show the time limit of a frame. So everything behind the first line means there was a frame drop.
The execution time of glClear() seems to have some kind of "echo" after the first frame expires. Can you explain why? And how can I make the call faster?
Unfortunately I am not an OpenGL expert, so I am thankful for any advice. ;)
Your graph is wrong. Well, at least it's not a suitable graph for the purpose of measuring performance. Don't ever trust a gl* function to execute when you tell it to, and don't ever trust it to execute as fast as you'd expect it to.
Most gl* functions aren't executed right away, they couldn't be. Remember, we're dealing with the GPU, telling it to do stuff directly is slow. So, instead, we write a to-do list (a command queue) for the GPU, and dump it into VRAM when we really need the GPU to output something. This "dump" is part of a process called synchronisation, and we can trigger one with glFlush. Though, OpenGL is user friendly (compared to, say, Vulkan, at least), and as such it doesn't rely on us to explicitly flush the command queue. Many gl* functions, which exactly depending on your graphics driver, will implicitly synchronise the CPU and GPU state, which includes a flush.
Since glClear usually initiates a frame, it is possible that your driver thinks it'd be good to perform such an implicit synchronisation. As you might imagine, synchronisation is a very slow process and blocks CPU execution until it's finished.
And this is probably what's going on here. Part of the synchronisation process is to perform memory transactions (like glBufferData, glTexImage*), which are probably queued up until they're flushed with your glClear call. Which makes sense in your example; the spikes that we can observe are probably the frames after you've uploaded a lot of block data.
But bear in mind, this is just pure speculation, and I'm not a total expert on this sort of stuff either, so don't trust me on the exact details. This page on the OpenGL wiki is a good resource on this sort of thing.
Though one thing is certain, your glClear call does not take as long as your profiler says it does. You should be using a profiler dedicated to profiling graphics.

Is it possible to step backwards in pdb?

After I hit n to evaluate a line, I want to go back and then hit s to step into that function if it failed. Is this possible?
The docs say:
j(ump) lineno
Set the next line that will be executed. Only available in the bottom-most frame. This lets you jump back and execute code again, or jump forward to skip code that you don’t want to run.
The GNU debugger, gdb, supports this, but it is extremely slow, as it undoes one machine instruction at a time.
The Python debugger, pdb, has a jump command that takes you backwards in the code, but it does not reverse the state of the program.
For Python, the extended python debugger prototype, epdb, was created for this reason. Here is the thesis and here is the program and the code.
I used epdb as a starting point to create a live reverse debugger as part of my MSc degree. The thesis is available online: Combining reverse debugging and live programming towards visual thinking in computer programming. In chapter 1 and 2 I also cover most of the historical approaches to reverse debugging.
PyPy has started to implement RevDB, which supports reverse debugging.
It is (as of Feb 2017) still at an alpha stage, only supports Python 2.7, only works on Linux or OS X, and requires you to build Python yourself from a special revision. It's also very slow and uses a lot of RAM. To quote the Bitbucket page:
Note that the log file typically grows at a rate of 1-2 MB per second. Assuming size is not a problem, the limiting factor are:
Replaying time. If your recorded execution took more than a few minutes, replaying will be painfully slow. It sometimes needs to go over the whole log several times in a single session. If the bug occurs randomly but rarely, you should run recording for a few minutes, then kill the process and try again, repeatedly until you get the crash.
RAM usage for replaying. The RAM requirements are 10 or 15 times larger for replaying than for recording. If that is too much, you can try with a lower value for MAX_SUBPROCESSES in _revdb/process.py, but it will always be several times larger.
Details are on the PyPy blog and installation and usage instructions are on the RevDB bitbucket page.
Reverse debugging (returning to previously recorded application state or backwards single-stepping debugging) is generally an assembly or C level debugger feature. E.g. gdb can do it:
https://sourceware.org/gdb/wiki/ReverseDebug
Bidirectional (or reverse) debugging
Reverse debugging is utterly complex and may carry a performance penalty of 50,000×. It also requires extensive support from the debugging tools. The Python virtual machine does not provide reverse-debugging support.
If you are interactively evaluating Python code, I suggest trying IPython Notebook, which provides HTML-based interactive Python shells. You can easily write your code and mix and match the order. There is no pdb debugging support, though. There is ipdb, which provides better history and search facilities for entered debugging commands, but it doesn't do direct backwards jumps either, as far as I know.
Though you may not be able to reverse the code execution in time, the next best thing pdb has are the stack frame jumps.
Use w to see where you are in the stack (the bottom is the newest frame), and u(p) or d(own) to move between frames, e.g. to reach the frame whose call brought you into the current one.

Python: Run multiple instances at once with different value set for each instance

I'm starting up in Python and have done a lot of programming in the past in VB. Python seems much easier to work with and far more powerful. I'm in the process of ditching Windows altogether and have quite a few VB programs I've written and want to use them on Linux without having to touch anything that involves Windows.
I'm trying to take one of my VB programs and convert it to Python. I think I pretty much have.
One thing I never could find a way to do in VB was to use Program 1, a calling program, to call Program 2 and run it multiple times. I had a program that would search a website looking for newly updated material; everything was numbered (1234567890_1.mp3, for example). Not every value was used, and I would have to search to find which files existed and which didn't. Typically the site would run through around 100,000 possible files a day, with only 2-3 files actually being used each day.

I had the program set up to search 10,000 files and, if it found a file that existed, download it and then move to the next possible file and test it. I would run this program simultaneously 10 times, with each instance set up to search a separate 10,000-file block. I always wanted to set it up so I could have a calling program that would have the user set the Main Block (1234) and the Secondary Block (5), with the Secondary Block possibly being a range of values.

The calling program would then start up 10 separate programs (6, er, 0-9 in reality) and would use Main Block and Secondary Block as the values to set up the call for each of the 10 copies of Program 2. When each of the 10 programs got called, all running at the same time, they would be called with the appropriate search locations, so they would search the website to find what new files had been added throughout the previous day. It would only take 35-45 minutes to complete each day, versus multiple hours if I ran through everything in one long continuous program.
I think I could do this in Python using a .txt file that Program 1 writes and Program 2 reads. I'm not sure if I would run into problems with changing the value before Program 2 could read it and start using it. I think I would have to add a pause into the program to play it safe... I'm not really sure.
Is there another way I could pass a value from Program 1 to Program 2 to accomplish the task I'm looking to accomplish?
It seems simple enough to have a master class (a "block", as you would call it) that would contain all of the different threads/classes in a list. Communication-wise, there could simply be a structure like so:
Thread <--> Master <--> Thread
or
Thread <---> Thread <---> Thread
The latter would look more like a web structure.
The first option would probably be a bit more complicated since you have a "middle man". However, it does allow for mass communication. Also, all that would need to be passed to the other classes is the Master class, or a function that provides communication.
The second option allows for direct communication between threads. So if there are two threads that need to work together a lot, and you don't want other threads possibly reacting to the commands, then the second option is best. However, the list of classes would have to be passed to the function.
Either way you are going to need a Master thread at the center of communication (for simplicity's sake). Otherwise you could get into socket-and-file IPC, but the in-process approach is faster, more efficient, and less likely to cause headaches.
Hopefully you understood what I was saying. It is all theoretical, but I have used similar systems in my programs.
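A minimal sketch of the master-mediated option using the standard library's queue module; the master hands out work blocks and the workers report back (the "work" here is a placeholder multiplication):

```python
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def worker():
    # Each worker pulls blocks handed out by the master thread
    while True:
        block = tasks.get()
        if block is None:  # sentinel: master says we're done
            break
        results.put(block * 10)  # placeholder for real work

workers = [threading.Thread(target=worker) for _ in range(3)]
for t in workers:
    t.start()
for block in range(6):
    tasks.put(block)
for _ in workers:
    tasks.put(None)  # one sentinel per worker
for t in workers:
    t.join()

collected = sorted(results.get() for _ in range(6))
print(collected)  # [0, 10, 20, 30, 40, 50]
```

The queues take care of the locking, so no thread ever touches another thread's state directly.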
There are a bunch of ways to do this; probably the simplest is to use os.spawnv to run multiple instances of Program 2 and pass the appropriate values to each, e.g.:
import os
import sys

def main(urlfmt, start, end, threads):
    start, end, threads = int(start), int(end), int(threads)
    per_thread = (end - start + 1) // threads
    for i in range(threads):
        s = start + i * per_thread
        e = min(s + per_thread - 1, end)
        outfile = 'output{}.txt'.format(i + 1)
        # args[0] is conventionally the program name itself
        os.spawnv(os.P_NOWAIT, 'program2.exe',
                  ['program2.exe', urlfmt, str(s), str(e), outfile])

if __name__ == "__main__":
    if len(sys.argv) != 5:
        print('Usage: web_search.py urlformat start end threads')
    else:
        main(*sys.argv[1:])
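On current Python, the subprocess module is usually preferred over os.spawnv for this; a sketch of the same fan-out idea, with a python -c one-liner standing in for program2:

```python
import subprocess
import sys

# Launch several workers at once, each given its own argument
procs = [
    subprocess.Popen(
        [sys.executable, '-c', 'import sys; print(int(sys.argv[1]) * 2)', str(i)],
        stdout=subprocess.PIPE)
    for i in range(3)
]
# Wait for each child and collect its printed result
outputs = [int(p.communicate()[0]) for p in procs]
print(outputs)  # [0, 2, 4]
```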

GAE Backend fails to respond to start request

This is probably a truly basic thing that I'm simply having an odd time figuring out in a Python 2.5 app.
I have a process that will take roughly an hour to complete, so I made a backend. To that end, I have a backend.yaml that has something like the following:
- name: mybackend
  options: dynamic
  start: /path/to/script.py
(The script is just raw computation. There's no notion of an active web session anywhere.)
On toy data, this works just fine.
This used to be public, so I would navigate to the page, the script would start, and it would time out after about a minute (the HTTP timeout plus the 30 s shutdown grace period, I assume). I figured this was a browser issue, so I repeated the same thing with a cron job. No dice. I then switched to using a push queue and adding a targeted task, since on paper it looks like it would wait for 10 minutes. Same thing.
All three time out after about a minute, which means I'm not decoupling the request from the backend like I believe I am.
I'm assuming that I need to write a proper handler for the backend to do the work, but I don't exactly know how to write the handler/webapp2 route. Do I handle _ah/start/ or make a new endpoint for the backend? How do I handle the subdomain? It still seems like the wrong thing to do (I'm sticking a long process directly into a request, of sorts), but I'm at a loss otherwise.
So the root cause ended up being doing the following in the script itself:
models = MyModel.all()
for model in models:
    # Magic happens
I was basically taking for granted that the query would automatically batch my Query.all() over many entities, but it was dying at around the 1000th entity. I originally wrote that the script was computation-only because I completely ignored the fact that the reads can fail.
The actual solution for solving the problem we wanted ended up being "Use the map-reduce library", since we were trying to look at each model for analysis.
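The underlying fix generalizes: page through results in bounded batches instead of iterating one unbounded query (on App Engine that meant query cursors or the map-reduce library). A generic sketch of the batching idea, not the GAE API:

```python
def in_batches(items, batch_size):
    # Yield fixed-size chunks so no single read exceeds the limit
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # final partial chunk
        yield batch

sizes = [len(batch) for batch in in_batches(range(2500), 1000)]
print(sizes)  # [1000, 1000, 500]
```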
