Why does sys.stdin.close() work differently in IDLE vs script - python

try:
    sys.stdin.close()
except:
    pass
raise SystemExit(None)
The above appears to be the code behind the exit/quit function in the site package.
While researching it, I noticed that sys.stdin.close() seems to trigger a pop-up window in IDLE.
But when running scripts from elsewhere, such as cmd, sys.stdin.close() seemingly does nothing and the program is closed by raise SystemExit(None).
Why is this the case?
I tried to find the reason in the source code but could not, and I have searched everywhere I could think of without finding an answer.

IDLE catches your SystemExit and creates a pop-up to give you the opportunity to stop the executing script while IDLE continues to run. If you hit Escape or click the Cancel button, your script's execution will end and IDLE will continue. If you click OK, your script will exit and IDLE will also exit.
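To see why a surrounding shell can intercept this at all, here is a minimal sketch (not IDLE's actual code) showing that SystemExit is an ordinary exception an enclosing program can catch rather than letting the process end:

# Minimal sketch: SystemExit is a normal exception, so an enclosing
# shell (like IDLE's) can catch it instead of letting the process end.
try:
    raise SystemExit(None)   # what exit()/quit() ultimately raise
except SystemExit as e:
    print("caught SystemExit, code =", e.code)   # the shell decides what happens next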

Related

How to catch the stop button in PyCharm on Windows?

I want to create a program that handles the case where someone terminates the script by clicking the stop button in PyCharm. I tried
from signal import signal, SIGINT
from sys import exit

def handler(signal_received, frame):
    # Handle any cleanup here
    print('SIGINT or CTRL-C detected. Exiting gracefully')
    exit(0)

if __name__ == '__main__':
    signal(SIGINT, handler)
    print('Running. Press CTRL-C to exit.')
    while True:
        # Do nothing and hog CPU forever until SIGINT received.
        pass
from https://www.devdungeon.com/content/python-catch-sigint-ctrl-c.
I tried this on both Mac and Windows. On the Mac, PyCharm behaved as expected: when I click the stop button it catches the SIGINT. But on Windows, doing exactly the same thing just returns Process finished with exit code -1. Is there something I can change to make Windows behave like the Mac?
Any help is appreciated!
I don't think it's a strange question at all. On Unix systems, PyCharm sends a SIGTERM, waits one second, then sends a SIGKILL. On Windows, it does something else to end the process, something that seems untrappable. Even during development you need a way to cleanly shut down a process that uses native resources. In my case, there is a CAN controller that, if not shut down properly, can never be opened again. My workaround was to build a simple UI with a stop button that shuts the process down cleanly. The problem is that, out of habit from using PyCharm, GoLand, and IntelliJ, I just hit the red square button. Every time I do that I have to reboot the development system. So I think it is clearly also a development-time question.
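Building on that, a sketch that handles both signals on a Unix-like system might look like the following (assuming, as described above, that PyCharm's stop delivers SIGTERM before SIGKILL; this is not a fix for the Windows behaviour):

import signal
import sys
import time

def handler(signum, frame):
    # Release anything that must be shut down cleanly (files, devices, ...)
    print("Received signal", signum, "- cleaning up")
    sys.exit(0)

if __name__ == "__main__":
    signal.signal(signal.SIGINT, handler)    # Ctrl-C
    signal.signal(signal.SIGTERM, handler)   # what PyCharm's stop sends first on Unix
    print("Running. Stop me from PyCharm or with Ctrl-C.")
    while True:
        time.sleep(1)                        # idle without hogging the CPU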
This actually isn't a simple thing, because PyCharm sends SIGKILL with the stop button. Check the discussion here: https://youtrack.jetbrains.com/issue/PY-13316
There is a comment saying that you can enable "kill windows process softly", but it didn't work for me. What does work is enabling "Emulate terminal in output console" in the run/debug configuration, then pressing Ctrl-C with the console window selected.

Pycharm's "stop" does not run finally code

I am running a Python project in PyCharm. In the code we have a main try/except/finally block, e.g.
try:
    ...  # Some stuff like opening files and video streams
except SomePossibleExceptions:
    ...  # Handle possible exception
finally:
    ...  # Save, close and tidy up unfinished files / videos / output streams
If I run the program in the terminal, it reaches the finally block when I press our quit button or Ctrl-C and performs the required post-processing. However, after pressing "stop" in PyCharm's run tool it just quits and never reaches the finally block.
The obvious answer is to just run in the terminal, but is there a way to get PyCharm to run the finally block after pressing "stop" in the run tool?
No, using the red stop button terminates the process immediately.
I think you alluded to this, but the only workaround I have found is to edit the run configuration:
Run > Edit Configurations > tick the box for 'Emulate terminal in output console' or 'Run with Python Console'.
With either of those options ticked, the run window will accept Ctrl-C input, which allows your finally block to execute.
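As a quick sanity check of that setting, a minimal toy script (an assumption-level sketch, not the asker's project) should reach its finally block when interrupted with Ctrl-C in the emulated terminal:

import time

try:
    while True:
        time.sleep(1)              # stand-in for the real work
except KeyboardInterrupt:
    print("interrupted")           # Ctrl-C lands here in an emulated terminal
finally:
    print("cleanup runs here")     # and the finally block still executes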

How to hard suspend or pause a python script after it runs so it doesn’t force close upon completion?

Hi, I'm working on a Python script that involves a loop function. So far the loop is failing for some reason (although I kind of know why). The problem is that I have os.system('pause') and also input("prompt:") at the end of the code to pause all activity so I can read the error messages before the script completes and terminates, but the script still shuts down. I need a way to HARD pause or freeze it before the window closes abruptly. Need help and any further insight.
P.S. Let me know if you need any more info to better describe this problem.
I assume you are just double-clicking the icon in Windows Explorer. This has the disadvantage you are encountering here: the shell (terminal window) closes when the process finishes, so you can't tell what went wrong if it terminated due to an error.
A better method would be to use the command prompt. If you are not familiar with this, there are many tutorials online.
The reason this will help with your problem is that, once you have navigated to the script's containing directory, you can use python your_script.py (assuming python is in your PATH environment variable) to run the script within the same window.
Then, even if it fails, you can read the error messages as you will only be returned to the command line.
An alternative, hackier method would be to create a script called something like run_pythons.py which uses the subprocess module to call your actual script in the same window and then, no matter how it terminates, waits for your input before terminating itself so that you can read the error messages.
So something like:
import subprocess

# Run the chosen script in this same console window...
subprocess.call(('python', input('enter script name: ')))
# ...then keep the window open until the user presses Enter.
input('press ENTER to kill me')
I needed something like this at one point. I had a wrapper that loaded a bunch of modules and data and then waited for a prompt to run something. If I had a stupid mistake in a module, it would quit, and the time it spent loading all that data into memory (over a minute) would be wasted. I wanted a way to keep that data in memory even if I had an error in a module, so that I could edit the module and rerun the script.
To do this:
import traceback

while True:
    update = raw_input("Paused. Enter = start, 'your input' = update params, C-C = exit")
    if update:
        update = update.split()
        # irrelevant stuff used to parse my update
    # custom thing to reload all my modules
    fullReload()
    try:
        # my main script that needed all those modules and data loaded
        model_starter.main(stuff, stuff2)
    except Exception as e:
        print(e)
        traceback.print_exc()
        continue
    except KeyboardInterrupt:
        print("I think you hit C-C. Do it again to exit.")
        continue
    except:
        print("OSERROR? sys.exit()? who knows. C-C to exit.")
        continue
This kept all the data loaded that I had grabbed before my while loop started, and prevented exiting on errors. It also meant that I could still Ctrl-C to quit; I just had to do it from this wrapper instead of once it got to the main script.
Is this somewhat what you're looking for?
The answer is basically: you have to catch all your exceptions and have a way to restart your loop once you've figured out and fixed the issue.
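Stripped of the answerer's project-specific pieces, the same idea can be sketched like this, where run_main_task is a hypothetical stand-in for whatever your script actually does:

while True:
    try:
        run_main_task()            # hypothetical: your real work goes here
    except KeyboardInterrupt:
        break                      # Ctrl-C exits the wrapper itself
    except Exception as exc:
        print("Error:", exc)       # show the failure instead of letting the window close
    input("Fix the issue and press Enter to retry (Ctrl-C to quit)...")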

how to halt python program after pdb.set_trace()

When debugging scripts in Python (2.7, running on Linux) I occasionally inject pdb.set_trace() (note that I'm actually using ipdb), e.g.:
import ipdb as pdb

try:
    do_something()
    # I'd like to look at some local variables before running do_something_dangerous()
    pdb.set_trace()
except:
    pass
do_something_dangerous()
I typically run my script from the shell, e.g.
python my_script.py
Sometimes during my debugging session I realize that I don't want to run do_something_dangerous(). What's the easiest way to halt program execution so that do_something_dangerous() is not run and I can quit back to the shell?
As I understand it, pressing Ctrl-D (or issuing the debugger's quit command) will simply exit ipdb and the program will continue running (in my example above). Pressing Ctrl-C seems to raise a KeyboardInterrupt, but I've never understood the context in which it is raised.
I'm hoping for something like ctrl-q to simply take down the entire process, but I haven't been able to find anything.
I understand that my example is highly contrived, but my question is about how to abort execution from pdb when the code being debugged is set up to catch exceptions. It's not about how to restructure the above code so it works!
I found that Ctrl-Z to suspend the python/ipdb process, followed by kill %1 to terminate it, works well and is reasonably quick to type (with a bash alias k='kill %1'). I'm not sure whether there's anything cleaner or simpler, though.
From the module docs:
q(uit)
Quit from the debugger. The program being executed is aborted.
Specifically, this will cause the next debugger function that gets called to raise a BdbQuit exception.
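That also explains why q may not seem to do anything in the question's setup: BdbQuit is an ordinary exception, so the surrounding bare except swallows it. A minimal sketch with plain pdb (not the asker's actual code) to observe this:

import bdb
import pdb

try:
    pdb.set_trace()          # the (Pdb) prompt appears at the next line
    print("you typed 'c'")   # reached only if you continue; typing 'q' raises BdbQuit here
except bdb.BdbQuit:
    print("caught BdbQuit -- the program keeps running instead of aborting")

print("so the question's bare 'except: pass' swallows the quit in the same way")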

Unable to exit with ^C

I am using pytest to run tests and, during the execution of a test, I interrupted with Ctrl-C.
No matter how many times I press Ctrl-C to get out of the test session (I've also tried Ctrl-D to get out of the environment I'm using), my terminal prompt does not return.
I accidentally pressed F as well... test.py ^CF^C. Does the F have something to do with my being stuck in the captured stderr section and the prompt not returning?
Is there any logical explanation for why I'm stuck here, and if so, is there any way to exit this state without closing the window and force-quitting the session?
I would suggest trying Ctrl-Z. That should suspend it; you can then do kill %1 (or kill -9 %1) to kill it (assuming you don't have anything else running in the background).
What I'm guessing is happening (from personal experience) is that one of your tests is running in a try/except that catches all exceptions (including the KeyboardInterrupt that Ctrl-C triggers) inside a while loop and ignores the exception.
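For reference, a minimal sketch of the pattern being described (not the asker's actual test, just an illustration of why Ctrl-C appears to do nothing):

import time

while True:
    try:
        time.sleep(1)   # stand-in for whatever the test is doing
    except:             # a bare except also catches KeyboardInterrupt...
        pass            # ...and ignores it, so Ctrl-C never ends the loop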
