I am developing a unit test for code that uses Google oauth2client.tools run_flow(). The problem is that this function calls Python's webbrowser.open(), which will eventually (correct me if I'm wrong) exit by calling sys.exit(). Therefore, even if I halt execution using threading.Event.wait(), the process eventually terminates without running the rest of the unit test code.
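To illustrate, the test I am aiming for looks roughly like this (make_flow and make_storage are placeholders for my own fixtures in my_module), with webbrowser.open patched out via unittest.mock so no real browser is launched:
import unittest
from unittest import mock
from oauth2client import tools
from my_module import make_flow, make_storage   # placeholders for my own helpers

class RunFlowTest(unittest.TestCase):
    @mock.patch("webbrowser.open")   # keep the test from opening a real browser
    def test_run_flow(self, mock_open):
        # run_flow() opens the authorization URL and, when authorization never
        # completes, ends up terminating the whole process via sys.exit()
        credentials = tools.run_flow(make_flow(), make_storage())
        self.assertTrue(mock_open.called)
        self.assertIsNotNone(credentials)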
I have read this SO answer, which addressed my previous problem, but I am still stuck on this webbrowser.open() issue.
Any idea on how to solve this?
I like using hypothesis for my unit tests. I also like using pdb for debugging when things go wrong. But trying to use these two together can be very annoying. If I set a breakpoint in a file that is run by hypothesis using pytest <PATH-TO-FILE> -s, it will stop at the breakpoint as expected, and I can do my analysis. But after I am done, I want to be able to exit out of the test. However, if I do ctrl+c from inside the breakpoint, the test doesn't quit; it simply goes to the next hypothesis test case, and I have to keep doing this until hypothesis is done generating all its test cases.
I usually end up opening the system monitor and killing the pytest process every time I want to quit the test.
I'm hoping there is a better way.
The issue can be reproduced by the following snippet -
import hypothesis
from hypothesis import strategies as st

@hypothesis.given(st.integers())
def test_some_function(some_arg):
    breakpoint()          # drops into pdb for every generated example
    print(some_arg)

test_some_function()      # @given supplies some_arg, so it is called with no arguments
I am using python 3.8 with hypothesis 5.37.0
This happens under Linux but not under Windows, and it's unclear whether that's a bug in Hypothesis, a bug in Pdb, or 'just' undesirable behaviour from a combination of features.
As a workaround, you can run import os; os._exit(0) from the debugger prompt to skip all cleanup logic and exit instantly.
A better, albeit somewhat more involved, solution is to disable multi-bug reporting and the shrinking phase when you use a debugger, so that Hypothesis stops generating examples immediately after the first failure. You can create a settings profile for use with the debugger, and then activate it via the --hypothesis-profile= argument to pytest.
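For example, a minimal sketch of such a profile, registered in conftest.py (the profile name "debug" is arbitrary):
# conftest.py
from hypothesis import settings, Phase

settings.register_profile(
    "debug",
    report_multiple_bugs=False,                             # stop at the first failing example
    phases=[Phase.explicit, Phase.reuse, Phase.generate],   # no shrinking phase
)
Then run pytest <PATH-TO-FILE> -s --hypothesis-profile=debug to activate it.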
I have a Python script that uses multiprocessing, specifically Pool().map() from the package multiprocessing.
When I run the script locally on my machine, I remember to guard my code with if __name__ == "__main__", and I run it simply by pressing run in my IDE.
It works, does everything I expect it to.
However, at work, we have this server (I believe it uses C#) which takes Python scripts and executes them. When I upload my script to this server, the multiprocessing part of the code fails.
It is hard for me to tell what's going on (I do not have access to the server, so I have limited information to work with, and really this problem is not my responsibility, so I am asking purely out of curiosity). However, it seems that the code does not throw an error, but rather gets stuck in an infinite loop of some sort, creating and ending new sessions over and over. No work happens inside these sessions; they just begin and end.
Moreover, the code never actually enters the multiprocessed part of the code (i.e. the functions that I map to the Pool), since if it did, it would fail (for debugging, I added a raise Exception as soon as the code enters the part that is meant to run in parallel, just to check whether it ever reaches it … but it never does).
Any clue what is going on, and how it is meant to be fixed?
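For reference, the structure of the script is roughly the following (work() and the input data are simplified placeholders for my real code):
from multiprocessing import Pool

def work(x):
    # placeholder for the real function that is mapped to the Pool;
    # for debugging I temporarily put a bare raise Exception at the top of it,
    # and that exception is never raised on the server
    return x * x

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(work, range(10))
    print(results)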
This question is not about how to use sys.exit (or raising SystemExit directly), but rather about why you would want to use it.
If a program terminates successfully, I see no point in explicitly exiting at the end.
If a program terminates with an error, just raise that error. Why would you need to explicitly exit the program or why would you need an exit code?
Letting the program exit with an Exception is not user-friendly. More exactly, it is perfectly fine when the user is a Python programmer, but if you provide a program to end users, they will expect nice error messages instead of a Python stack trace, which they will not understand.
In addition, if you write a GUI application (through tkinter or PyQt, for example), the traceback is likely to be lost, especially on Windows systems. In that case, you will set up error processing which provides the user with the relevant information and then terminates the application from inside the error-processing routine. sys.exit is appropriate in that use case.
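A minimal sketch of that pattern for a command-line program (the message text and exit code are arbitrary):
import sys

def main():
    ...  # real application logic, which may raise

if __name__ == "__main__":
    try:
        main()
    except Exception as exc:
        # show a readable message instead of a traceback, and report failure
        # to the calling environment with a nonzero exit code
        print(f"Error: {exc}", file=sys.stderr)
        sys.exit(1)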
Something I've noticed is that when there is an error in our script (regardless of the programming language), it often takes longer to "execute" and then output the error, compared to its execution time when there are no errors in our script.
Why does this happen? Shouldn't outputting an error take less time because the script is not being fully run? Or does the computer still attempt to fully run the script regardless of whether there is an error or not?
For example, I have a Python script that takes approximately 10 seconds to run if there are no errors. When there is an error, however, it takes an average of 15 seconds. I've noticed something similar in NodeJS, so I'm just assuming that this is the case for many programming languages? Apologies if this is a bad question - I'm relatively new to programming and still lack some fundamental understandings.
The program doesn't attempt to run the script fully in case of an error; execution is interrupted at the point where the error happens. That is the default behaviour, but you can always set up your own exception handlers in your scripts, which will execute some code of their own.
In any case, raising and handling (or logging) exceptions also requires some code execution (internal code of the programming language), so this also takes some time.
It's hard to tell why your script execution takes longer in case of an error without looking at your script, though. I personally have never noticed such differences in general, but maybe I just didn't pay attention...
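As a rough way to see what raising and handling an exception costs in isolation, something like this (the loop count is arbitrary):
import timeit

def ok():
    return 1 + 1

def fails():
    try:
        raise ValueError("boom")
    except ValueError:
        return None

# raising and catching adds measurable work, but on the order of
# microseconds per call rather than seconds
print("no error:", timeit.timeit(ok, number=1_000_000))
print("error:   ", timeit.timeit(fails, number=1_000_000))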
Is it possible to run a small set of code automatically after a script has run?
I am asking this because, for some reason, if I add this set of code into the main script, though it works, it displays a list of tab errors (the tab is already there, but it states that it cannot find it, or something of that sort).
I realized that after running my script, Maya seems to 'load' its own setup of refreshing, along with some plugins done by my company. As such, if I run the small set of code after my main script execution and the Maya/plugins 'refresher', it works with no problem. I would like to make the process as automated as possible, all within a script if that is possible...
So is it possible to do so? Something like a delayed sort of execution?
FYI, the main script execution time depends on the number of elements in the scene. The more there are, the longer it takes...
Maya has a command, maya.cmds.evalDeferred, that is meant for this purpose. It waits until no more Maya processing is pending and then evaluates its argument.
You can also use maya.cmds.scriptJob for the same purpose.
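A minimal sketch of both approaches (run_after_load is a placeholder for the small set of code you want to run afterwards):
import maya.cmds as cmds

def run_after_load():
    # the small set of code to run once Maya has finished its own refreshing
    print("post-load step done")

# evaluate once Maya is idle, i.e. no more processing is pending
cmds.evalDeferred(run_after_load)

# or: trigger on the next idle event and remove the job after it has run once
cmds.scriptJob(event=["idle", run_after_load], runOnce=True)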
Note: While eval is considered dangerous and insecure, in a Maya context it's really normal, mainly because everything in Maya is inherently insecure: nearly all GUI items are just eval commands that the user may modify. So the second you let anybody use your Maya shell, your security is breached.