I have a bug in my program and want to investigate it with a debugger. In my IDE (WingIDE) I have debug functionality, but I cannot use it when the program is called from the shell. So I use the Python module pdb. My application is single threaded.
I have looked into Code is behaving differently in Release vs Debug Mode, but that seems to be a different issue.
I narrowed it down to the following code.
What I did:
I created a short method that is only called when the program is not run from the IDE.
def set_pdb_trace():
    run_in_ide = not sys.stdin.isatty()
    if not run_in_ide:
        import pdb; pdb.set_trace()  # use only in python interpreter
This works fine; I have used it in many situations.
I want to debug the following method:
import sys
import os
import subprocess32

def call_backported():
    command = 'lsb_release -r'
    timeout1 = 0.001  # deliberately too short, so the time-out will be enforced
    try:
        p = subprocess32.Popen(command, shell=True,
                               stdout=subprocess32.PIPE,
                               stderr=subprocess32.STDOUT)
        set_pdb_trace()
        tuple1 = p.communicate(input=b'exit %errorlevel%\r\n', timeout=timeout1)
        print('No time out')
        value = tuple1[0].decode('utf-8').strip()
        print('Value : ' + value)
    except subprocess32.TimeoutExpired as e:
        print('TimeoutExpired')
Explanation:
I want to call subprocess with a timeout. For Python 3.3+ this is built in, but my application has to be able to run on Python 2.7 as well, so I used https://pypi.python.org/pypi/subprocess32/3.2.6 as a backport.
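For reference, a minimal sketch of how such a version-conditional import might look (the alias name is my choice, not from the original code):

import sys

# Use the stdlib module on Python 3.3+, the backport elsewhere;
# both expose the same timeout-capable API.
if sys.version_info >= (3, 3):
    import subprocess as subprocess32
else:
    import subprocess32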
To read the returned value I used How to retrieve useful result from subprocess?
Without a timeout, or with the timeout set to e.g. 1 second, the method works as expected: the result value and 'No time out' are printed.
Now I want to enforce a timeout, so I set the timeout to the very short value 0.001. Only 'TimeoutExpired' should be printed.
I want to execute this in a shell.
When I first comment out the line set_pdb_trace(), 'TimeoutExpired' is printed, which is the expected behaviour.
Now I uncomment set_pdb_trace() and execute in a shell.
The debugger comes up, I press 'c' (continue), and 'No time out' plus the result is printed. This result differs from the run without the debugger. The generated output is:
bernard@bernard-vbox2:~/clones/it-should-work/unit_test$ python test_subprocess32.py
--Return--
> /home/bernard/clones/it-should-work/unit_test/test_subprocess32.py(22)set_pdb_trace()->None
-> import pdb; pdb.set_trace() # use only in python interpreter
(Pdb) c
No time out
Value : Release: 13.10
bernard@bernard-vbox2:~/clones/it-should-work/unit_test$
How is this possible? And how can I solve it?
You introduced a delay between opening the subprocess and writing to it.
When you create the Popen() object, the child process is started immediately. When you then call p.communicate() and try to write to it, the process is not quite ready yet to receive input, and that delay together with the time it takes to read the process output is longer than your 0.001 timeout.
When you insert the breakpoint, the process gets a chance to spin up; the lsb_release command doesn't wait for input and produces its output immediately. By the time p.communicate() is called there is no need to wait for the pipe anymore and the output is produced immediately.
If you put your breakpoint before the Popen() call, then hit c, you'll see the timeout trigger again.
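A minimal sketch of that race, using a plain sleep in place of the pause at the pdb prompt (assumes a Unix-like system with lsb_release available):

import time
import subprocess32

p = subprocess32.Popen('lsb_release -r', shell=True,
                       stdout=subprocess32.PIPE,
                       stderr=subprocess32.STDOUT)
time.sleep(0.5)  # stand-in for the pause at the pdb prompt
try:
    out, _ = p.communicate(timeout=0.001)
    # with the sleep, the output is usually buffered and ready in time
    print('No time out: ' + out.decode('utf-8').strip())
except subprocess32.TimeoutExpired:
    print('TimeoutExpired')  # this branch is hit when the sleep is removed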
Related
I'm working on a BCP wrapper method in Python, but have run into an issue invoking the command with subprocess.
As far as I can tell, the BCP command doesn't return any value or indication that it has completed beyond what it prints to the terminal window, which causes subprocess.call or subprocess.run to hang while they wait for a return.
subprocess.Popen allows a manual .terminate() method, but I'm having issues getting the table to write afterwards.
The bcp command works from the command line with no issues: it loads data from a source csv according to a .fmt file and writes an error log file. My script is able to dismount the file from the log path afterwards, so I would consider the command itself irrelevant; the question is really about the behaviour of the subprocess module.
This is what I'm trying at the moment:
process = subprocess.Popen(bcp_command)
try:
    path = Path(log_path)
    sleep_counter = 0
    while not path.is_file() and sleep_counter < 16:
        sleep(1)
        sleep_counter += 1
finally:
    process.terminate()
self.datacommand = datacommand
My idea was to check that the error log file had been written by the bcp command as a way to tell that the process had finished. With this change my script no longer freezes, and the files are apparently written successfully and dismounted later on in the script. The script also gets past the loop in well under the 15 seconds the sleep counter would allow.
When the process froze my Spyder shell (and IDLE too, so it's not the IDE), I could force-terminate it by closing the console itself, and it would at least write to the server.
However, it seems that by using .terminate() the command isn't actually writing anything to the server.
I checked whether a dumb 15-second time-out (it takes about 2 seconds to do the BCP with this data) would work as well, in case it was writing an error log before the load finished.
It still resulted in an empty table on the SQL server.
How can I get subprocess to execute a command without hanging?
Well, it seems to be a more general issue about calling helper functions with Popen, as seen here: https://github.com/dropbox/pyannotate/issues/67
I was able to fix the hanging issue by changing it to:
subprocess.Popen(bcp_command, close_fds=True)
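If the goal is simply to let bcp finish before moving on, a sketch along these lines might also work (Python 3; the 60-second limit is an arbitrary choice of mine, not from the original code):

import subprocess

process = subprocess.Popen(bcp_command, close_fds=True)
try:
    process.wait(timeout=60)   # block until bcp exits, up to a limit
except subprocess.TimeoutExpired:
    process.terminate()        # only kill the load if it really hung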
I am running an interactive Python program over ssh, but I'm not always technically interacting with it (its stdout and stdin are piped). I need to print a string that signals to pexpect on my local machine that a breakpoint has been hit and it should enter interactive mode.
On my local machine I have:
p: pexpect.spawn
i = p.expect([pexpect.EOF, '__INTERACT__'])
if i == 1:
    p.interact()
The reason I do this is that I cannot simply interact() the whole time the program is running; that interferes with stdout on the local process.
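For context, the local side looks roughly like this sketch (the ssh command and script name here are placeholders, not the real ones):

import pexpect

# Spawn the remote program over ssh; drop into interact() only when the
# remote side prints the sentinel string.
p = pexpect.spawn('ssh user@host python3 remote_script.py')  # placeholder command
while True:
    i = p.expect([pexpect.EOF, '__INTERACT__'])
    if i == 0:
        break        # remote program finished
    p.interact()     # hand the terminal to the user until they detach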
So in the remote process, I set a breakpoint with:
print('__INTERACT__')
breakpoint()
This works; however, I want single-line breakpoints. So I tried:
def remote_breakpoint():
    print('__INTERACT__')
    import pdb; pdb.set_trace()
sys.breakpointhook = remote_breakpoint
This allows me to just write breakpoint(). It also lets me disable the extra print statement when running locally:
def remote_breakpoint():
    if platform.system() != 'Darwin':
        print('__INTERACT__')
    import pdb; pdb.set_trace()
It works, but now pdb starts inside the remote_breakpoint function and I have to hit 'r' to get out every time. How can I tell pdb.set_trace() to start one frame up the stack, or make it wait to start the interactive prompt until remote_breakpoint has returned?
Even if I just call remote_breakpoint() directly (instead of going through sys.breakpointhook), I have the same problem.
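One workaround that should fit here, sketched on the assumption that CPython's sys._getframe is available: hand the caller's frame to a pdb.Pdb instance, so the prompt starts one level up instead of inside the hook.

import sys
import pdb
import platform

def remote_breakpoint():
    if platform.system() != 'Darwin':
        print('__INTERACT__')
    # Start debugging in the frame that called breakpoint(), not here.
    pdb.Pdb().set_trace(sys._getframe(1))

sys.breakpointhook = remote_breakpoint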
I'm writing some code in Python 3 on Windows that looks like this:
try:
    ...  # do something that takes a long time
         # (training a neural network in TensorFlow, as it happens)
except KeyboardInterrupt:
    print('^C')
    ...  # print a summary of results,
         # still useful even if the training was cut short early
This works perfectly if run directly from the console with python foo.py.
However, if the call to Python was made from within a batch file, it does all of the above but then still spams the console with the 'Terminate batch job (Y/N)?' prompt.
Is there a way to stop that happening? By fully eating the ^C within Python, jumping all the way out of the batch file or otherwise?
Use the break command (more info here) in the batch file, which will disable CTRL+C from halting the file.
EDIT: According to this site about the break command:
Newer versions of Windows (Windows ME, Windows 2000, Windows XP, and higher) only include this command for backward compatibility, and turning the break off has no effect.
I personally tested this and can confirm it. I will edit when I find a workaround.
EDIT #2: You could have a second batch script that runs start "" /b /wait cmd /c "yourfile.bat", although this is known to cause glitches with other nested batch files.
The flag that disables Ctrl+C is inherited by child processes, so Python will no longer raise a KeyboardInterrupt. Plus, we still have bugs in Python if reading from the console gets interrupted by Ctrl+C without getting a SIGINT from the CRT. The Python script should manually re-enable Ctrl+C via ctypes: import ctypes; kernel32 = ctypes.WinDLL('kernel32', use_last_error=True); success = kernel32.SetConsoleCtrlHandler(None, False)
EDIT #3: As pointed out by eryksun (in the comments), you can use ctypes to ENABLE it:
import ctypes

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
success = kernel32.SetConsoleCtrlHandler(None, False)
EDIT #4: I think I found it; try this (although it may not work). Can you use the threading module?
import time
from threading import Thread

def noInterrupt():
    for i in range(4):
        print(i)
        time.sleep(1)

a = Thread(target=noInterrupt)
a.start()
a.join()
print("done")
I'm sure I'm missing something simple, but when using the subprocess module there is a very significant wait (> 10 seconds) before the first subprocess starts. The second one starts shortly after the first. Is there any way to fix this? Code below:
EDIT: To add, HWAccess (in proc.py) links a dll. Could this have anything to do with it?
EDIT2: I've boiled the test down to starting a SINGLE subprocess, and it takes significantly longer to import HWAccess than if I just run proc.py directly from the cmd prompt. I don't see how this has anything to do with the dll specifically if it loads quickly from cmd but not as a subprocess launched through test.py.
test.py:
import subprocess
import os
import time

print 'STARTING'
proc0 = subprocess.Popen(['python', 'proc.py', '0'])
proc1 = subprocess.Popen(['python', 'proc.py', '1'])
while True:
    try:
        pass
    except KeyboardInterrupt:
        os._exit(0)
    except ValueError:
        pass
proc.py:
print 'Process starting...'
import HWAccess
print 'HWAccess imported...'
import sys
print 'sys imported...'
import time
print 'time imported...'
print 'hi from', sys.argv[1]
Edit: After putting the prints in, it takes around 5 s to reach the first 'Process starting...'; the second process prints 'Process starting...' immediately afterwards. Then there is a ~30 second pause to import HWAccess (a matter of seconds when run as an individual process); the second process then immediately prints that it too has imported HWAccess. From then on execution is fast. HWAccess links a .dll, so I'm wondering if two processes trying to import HWAccess result in some sort of race condition that takes a while to negotiate.
I am not sure if this is the right track, but I remember seeing such delays when starting a process way back (and not at all Python related), and it turned out they were related to some badly configured network settings on my computer. Upon subprocess start-up, it has to set up interprocess communication, and those settings might interfere.
I remember my problems were related to using a false hostname for the machine, which was not properly configured on the network. Can you check whether that is your case? If it is not a production machine, try not setting a hostname at all, leaving it as "localhost".
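A quick way to test that theory, assuming the delay is name-resolution related: time a lookup of the machine's own hostname and see whether it stalls.

import socket
import time

start = time.time()
# This stalls noticeably if the hostname is misconfigured on the network.
print(socket.gethostbyname(socket.gethostname()))
print('lookup took %.2f s' % (time.time() - start))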
I have a small script that launches a java program (a game server manager) and, every half hour, feeds it a command as if the user had typed it. However, after reading documentation and experimenting, I can't figure out how to get two things:
1) A version which allows the user to type commands into the terminal window, sending them to the server manager's input just as the "save-all" command is.
2) A version which remains running but sends any new input to the system itself, removing the need for a second terminal window. This one is actually half-happening right now: when something is typed there is no visual feedback, but once the program has ended it's clear the terminal received the input. For example, a listing of directory contents will appear if "dir" was typed while the program was running. This one is more for understanding than practicality.
Thanks for the help. Here's the script:
from time import sleep
import sys, os
import subprocess

# Launches the server with specified parameters, waits however
# long is specified in saveInterval, then saves the map.
# Edit the value after "saveInterval =" to the desired number of minutes.
# Default is 30.
saveInterval = 30

# Start the server. Substitute the launch command with whatever you please.
# (With shell=False the command must be a list, not a single string.)
p = subprocess.Popen(['java', '-Xmx1024M', '-Xms1024M', '-jar', 'minecraft_server.jar'],
                     shell=False,
                     stdin=subprocess.PIPE)

while True:
    sleep(saveInterval * 60)
    # Comment out these two lines if you want the save to happen silently.
    p.stdin.write("say Backing up map...\n")
    p.stdin.flush()
    # Stop all other saves to prevent corruption.
    p.stdin.write("save-off\n")
    p.stdin.flush()
    sleep(1)
    # Perform the save.
    p.stdin.write("save-all\n")
    p.stdin.flush()
    sleep(10)
    # Allow other saves again.
    p.stdin.write("save-on\n")
    p.stdin.flush()
Replace your sleep() with a call to select.select((sys.stdin,), (), (), saveInterval*60) -- it has the same timeout but also listens on stdin for user commands. When select says you have input, read a line from sys.stdin and feed it to your process; when select indicates a timeout, perform the "save" commands you're doing now, as sketched below.
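A rough sketch of that loop, assuming the same p and saveInterval as in the question's script (POSIX only, since select() on Windows does not accept ordinary file handles):

import sys
from select import select

while True:
    readable, _, _ = select([sys.stdin], [], [], saveInterval * 60)
    if readable:
        # The user typed a command: forward the line to the server.
        p.stdin.write(sys.stdin.readline())
        p.stdin.flush()
    else:
        # Timeout expired: run the periodic save.
        p.stdin.write("save-all\n")
        p.stdin.flush()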
It won't completely solve your problem, but you might find Python's cmd module useful. It's a way of easily implementing an extensible command line loop (often called a REPL).
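For instance, a minimal sketch of such a loop (assuming p is the server Popen object from the question; the class and command names are my own):

import cmd

class ServerShell(cmd.Cmd):
    prompt = '(server) '

    def default(self, line):
        # Forward any unrecognized command straight to the server's stdin.
        p.stdin.write(line + '\n')
        p.stdin.flush()

    def do_quit(self, line):
        """Exit the shell loop."""
        return True

ServerShell().cmdloop()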
You can run the program using screen; then you can send input to the specific screen session instead of to the program directly (if you are on Windows, just install cygwin).
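For example, a small helper along these lines (the session name is arbitrary; screen's "stuff" command types the given keystrokes into the session's input buffer):

import subprocess

def send_to_screen(session, command):
    # Equivalent to: screen -S <session> -X stuff '<command>\n'
    subprocess.call(['screen', '-S', session, '-X', 'stuff', command + '\n'])

# Assumes the manager was started detached with: screen -dmS mcserver python your_script.py
send_to_screen('mcserver', 'save-all')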