I am making a task scheduler with Python's pexpect.
It is implemented around a simple idea:
term = spawnu('tcsh')  # I need tcsh instead of the default bash
term.sendline('FIRST_TASK')
term.expect('MY_SHELL_PROMPT')  # Seeing the prompt again should mean the previous task has finished.
term.sendline('SECOND_TASK')
...(and so on)
But I found that pexpect.expect did not block at this line:
term.expect('MY_SHELL_PROMPT')  # Falls through before the previous task has finished.
It matches the pattern against output left over from the previous task, so I suspect pexpect.expect matched MY_SHELL_PROMPT before the child even started its job. I added some delay before matching, but the same thing happens even with a sleep before pexpect.expect:
time.sleep(2)  # delay for 2 seconds
term.expect('MY_SHELL_PROMPT')
Does anyone know how to debug this? Any help would be appreciated.
I think I found the answer myself.
pexpect does not distinguish the echoed command from the child's actual output.
So it is difficult to accomplish this with my previous approach.
I worked around it by saving a marker string in a text file.
The child can then feed that marker back by running 'cat' on the file.
For example:
#check_code.txt
----YOUR JOB IS DONE----
#In testPexpect.py
term.sendline('cat check_code.txt') # this prevents matching its echoed command
term.expect('----YOUR JOB IS DONE----') # blocks and matches successfully
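Putting the workaround together, a minimal sketch (assuming tcsh is on the PATH and check_code.txt contains only the marker line):
import pexpect

term = pexpect.spawnu('tcsh')

term.sendline('FIRST_TASK')
# The shell will not run this 'cat' until FIRST_TASK has finished,
# and the echoed command text does not contain the marker itself,
# so expect() really does block until the job is done.
term.sendline('cat check_code.txt')
term.expect('----YOUR JOB IS DONE----')

term.sendline('SECOND_TASK')
term.sendline('cat check_code.txt')
term.expect('----YOUR JOB IS DONE----')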
How can I read the output of a running console process? I found a snippet that shows how to do it for a newly started process, by using ReadFile() on a handle obtained via CreateProcess(), but my question is: how can I achieve this for an already running process? Thanks.
What I have tried is using OpenProcess() on the console app (I hardcoded the PID just to test) and then calling ReadFile() on that handle, but I get gibberish letters or nothing at all.
Edit: Here's the code I tried; the PID is hardcoded just for testing.
procedure TForm1.Button1Click(Sender: TObject);
var
  hConsoleProcess: THandle;
  Buffer: array[0..512] of AnsiChar;
  MyBuf: array[0..512] of AnsiChar;
  bytesReaded: DWORD;
begin
  hConsoleProcess := OpenProcess(PROCESS_ALL_ACCESS, False, 6956);
  ReadFile(hConsoleProcess, Buffer, SizeOf(Buffer), bytesReaded, nil);
  OemToCharA(Buffer, MyBuf);
  ShowMessage(string(MyBuf));
  // ShellExecute(Handle, 'open', 'cmd.exe', '/k ipconfig', nil, SW_SHOWNORMAL);
end;
It's unrealistic to expect to be able to do this. Perhaps it is possible to hack it, but no good will come of doing so. You could inject into the process, obtain its standard output handle with GetStdHandle, and read from that. But no good will come of that, as I said.
Why will no good come of this? Well, standard input/output is designed for a single reader and a single writer. If you have two readers, then one, or both, of them is going to miss some of the text. In fact I'd be surprised if two blocking synchronous calls to ReadFile were allowed by the system; I'd expect the second one to fail. [Rob's comment explains that this is allowed, but it's more like first come, first served.]
What you could perhaps do is to create a multi-casting program to listen to the output of the main program. Pipe the output of the main program into the multi-caster. Have the multi-caster echo to its standard output and to one or more other pipes.
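The multi-caster can be as small as a tee-style script. The sketch below is in Python only to keep it short (any language works), and the file name is illustrative: it copies everything it reads on stdin both to stdout and to a second destination.
# tee.py -- copy stdin to stdout and to a second destination
import sys

with open("copy.log", "wb") as extra:  # illustrative second destination; could be a pipe
    while True:
        chunk = sys.stdin.buffer.read(4096)
        if not chunk:
            break
        sys.stdout.buffer.write(chunk)
        sys.stdout.buffer.flush()
        extra.write(chunk)
        extra.flush()
You would then start the main program as main.exe | python tee.py and point any additional consumers at copy.log (or a named pipe) instead of at the process's own standard output.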
The bottom line here is that whatever your actual problem is, hooking up multiple readers to the standard out is not the solution.
The Task
I'm building a Python script whose purpose is to audit a number of .tex files. One step in the auditing process is to test whether each file compiles, by running the terminal command xelatex filename.tex on it.
These are the methods with which I'm testing whether a given file compiles:
def run_xelatex(self):
    """ Ronseal. """
    self.latex_process = Popen(["xelatex", "current.tex"], stdout=PIPE)
    lines = self.latex_process.stdout.readlines()
    for line in self.latex_process.stdout:
        self.screentext = self.screentext+line.decode("utf-8")+"\n"

def attempt_to_compile(self):
    """ Attempt to compile an article, and kill the process if
    necessary. """
    thread = Thread(target=self.run_xelatex())
    thread.start()
    thread.join(3)
    if thread.is_alive():
        self.latex_process.kill()
        thread.join()
        return False
    return True
In English: I create a thread, which in turn creates a process, which in turn tries to compile a given file. If the thread times out, then that file is marked as being uncompilable.
The Problem
The problem is that, if xelatex finds some bad syntax, it asks the user for manual input in order to resolve the issue. But then, for some reason, the thread does not time out when the process is waiting for user input. This means that, when I try to run the script, it stops in mid-flow at several points, until I mash the return key to get things going again. This is not ideal.
What I Want
An explanation of why a thread fails to time out when a process within it asks for user input.
A solution to the problem, either by forcing the thread to time out in the above circumstances, or by preventing xelatex from asking for user input.
Alternatively, an explanation for why what I'm trying to achieve is totally insane, and a suggestion for a better line of attack.
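On the second point above, a minimal sketch of one possible direction (this is not the original code, and the function name is illustrative), assuming Python 3.5+ and a TeX engine that accepts -interaction=nonstopmode: run the compiler non-interactively and let subprocess.run enforce the timeout instead of a manually joined thread.
import subprocess

def compiles_ok(path="current.tex", timeout=3):
    """Return True if xelatex exits cleanly within `timeout` seconds."""
    try:
        result = subprocess.run(
            ["xelatex", "-interaction=nonstopmode", path],
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            stdin=subprocess.DEVNULL,  # nothing to answer even if it prompts
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0
If the timeout expires, subprocess.run kills the child and raises TimeoutExpired, so there is no thread left hanging on user input.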
I've been struggling for many days now with a class PublicationSaver() that I wrote. It has a method for loading XML documents as strings (not shown here) and then passes each loaded string to self.savePublication(self, publication, myDirPath).
Every time I have used it, it crashed after about 25,000 strings. It saves the last string it crashes on, and I was able to parse that string separately, so I suppose the problem is not bad XML.
I asked here but got no answers.
I googled a lot and it seems that I'm not the only one having this problem: here
So, since I really need to complete this task, I thought: can I wrap everything in a Thread started from main, so that when the lxml parse throws an exception I catch it and send a result back to main to kill the thread and start it again?
# threading
result_q = Queue.Queue()

# Create the thread
xmlSplitter = XmlSplitter_Thread(result_q=result_q)
xmlSplitter.run(toSplit_DirPath, target_DirPath)
print "Hello !!!\n"

toSplitDirEmptyB = False
while not toSplitDirEmptyB:
    splitterAlive = True
    while splitterAlive:
        sleep(120)
        splitterAlive = result_q.get()
    xmlSplitter.join()
    print "*** KILLED XmlSplitter_Thread !!! ***\n"
    if not os.listdir(toSplit_DirPath):
        toSplitDirEmptyB = True
    else:
        xmlSplitter.run(toSplit_DirPath, target_DirPath)
Is this a valid approach? At the moment, when I run the code above, it is not working; I never get "Hello !!!" displayed, and the xmlSplitter just keeps going even when it starts to fail (there's an exception handler that keeps it going).
Probably the thread fails and is blocking on the join method; take a look here. Split the XML into chunks and try to parse each chunk to avoid memory errors.
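For example, one way to keep memory flat is lxml's iterparse, which parses the document incrementally instead of loading it all at once. A minimal sketch (the tag name and per-record handler are illustrative):
from lxml import etree

def process_records(path):
    # Stream <publication> elements one at a time instead of building the whole tree.
    for event, elem in etree.iterparse(path, events=("end",), tag="publication"):
        handle(elem)  # hypothetical per-record work, e.g. savePublication
        elem.clear()  # free this element's children
        while elem.getprevious() is not None:
            del elem.getparent()[0]  # drop already-processed siblings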
I am currently running a program which I expect to go on for an hour or two. I need to break out of the loop right now so that the rest of the program continues.
This is a part of the code:
from nltk.corpus import brown
from nltk import word_tokenize, sent_tokenize
from operator import itemgetter

sentences = []
try:
    for i in range(0, 55000):
        try:
            sentences.append(brown.sents()[i])
            print i
        except:
            break
except:
    pass
The loop is currently at around 30,000. I want to exit the loop and continue with the rest of the code (not shown here). Please suggest how to do this so that the program doesn't exit completely (not like a keyboard interrupt).
Since it is already running, you can't modify the code. Unless you invoked it under pdb, you can't break into the Python debugger to alter the condition to leave the loop and continue with the rest of the program. So none of the normal avenues are open to you.
There is one outside solution, which requires intimate knowledge of the Python interpreter and runtime. You can attach the gdb debugger to the Python process (or Visual Studio if you are on Windows). When you break in, examine the stack trace of the main thread; you will see a whole series of nested PyEval_* calls and so on. If you can figure out where the loop is in the stack trace, you will then need to find the counter variable (an integer wrapped in a PyObject), set it to a value large enough to trigger the end of the loop, and let the process continue. Not for the faint of heart! Some more info is here:
Tracing the Python stack in GDB
Realistically, you just need to decide if you either leave it alone to finish, or kill it and restart.
It's probably easiest to simply kill the process, modify your code so that the loop is interruptible (as #fedorSmirnov suggests) with the KeyboardInterrupt exception, and then start again. You will lose the processing time you have invested already, but consider it a sunk cost.
There's lots of useful information here on how to add support to your program for debugging the running process:
Showing the stack trace from a running Python application
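For future runs, a minimal sketch of that idea (it cannot be bolted onto the process that is already running): install a handler that dumps the current stack whenever the process receives SIGUSR1, e.g. via kill -USR1 <pid> from a shell. The handler name is illustrative.
import signal
import traceback

def dump_stack(signum, frame):
    # Print the stack of the frame that was interrupted by the signal.
    traceback.print_stack(frame)

# Register once at program start-up (Unix only).
signal.signal(signal.SIGUSR1, dump_stack)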
I think you could also put the for loop in a try block and catch the KeyboardInterrupt exception, then proceed with the rest of the program. With this approach, you should be able to break out of the loop by hitting Ctrl+C while staying inside your program. The code would look similar to this:
try:
    # your for loop
except KeyboardInterrupt:
    print "interrupted"
# rest of your program
You can save the data with pickle before the break statement. Next time, load the data and continue the loop from where you left off.
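A minimal sketch of that idea against the loop above (start_index and the file name are illustrative):
import pickle

try:
    for i in range(start_index, 55000):
        sentences.append(brown.sents()[i])
except KeyboardInterrupt:
    # Save progress before leaving the loop.
    with open("sentences.pkl", "wb") as f:
        pickle.dump({"next_index": i, "sentences": sentences}, f)

# On the next run, restore and carry on:
# with open("sentences.pkl", "rb") as f:
#     state = pickle.load(f)
# start_index, sentences = state["next_index"], state["sentences"]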
I have code that effectively runs forever. What I want is that if the user types Ctrl+C at the Unix command line, the program finishes the current loop iteration and then comes out of the loop. So I want it to break, but I want it to finish the current iteration first. Is using Ctrl+C OK, or should I look at a different input?
To do this correctly and exactly as you want it is a bit complicated.
Basically you want to trap the Ctrl-C, set up a flag, and continue until the start of the loop (or the end), where you check that flag. This can be done using the signal module. Fortunately, somebody has already done that and you can use the code in the example linked.
Edit: Based on your comment below, a typical usage of the class BreakHandler is:
ih = BreakHandler()
ih.enable()
for x in big_set:
    complex_operation_1()
    complex_operation_2()
    complex_operation_3()
    # Check whether there was a break.
    if ih.trapped:
        # Stop the loop.
        break
ih.disable()
# Back to usual operation
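For reference, a minimal sketch of what such a handler can look like using only the standard signal module (this is not the linked example's exact code):
import signal

class BreakHandler(object):
    """Record Ctrl+C in a flag instead of raising KeyboardInterrupt."""

    def __init__(self):
        self.trapped = False
        self._previous = None

    def _handler(self, signum, frame):
        self.trapped = True  # remember the interrupt; the loop checks this flag

    def enable(self):
        self.trapped = False
        self._previous = signal.signal(signal.SIGINT, self._handler)

    def disable(self):
        signal.signal(signal.SIGINT, self._previous)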