I'm currently passing some input to a process with pexpect with the following code:
p = pexpect.spawn('cat', timeout=5.0)
p.maxread = 5000
p.setecho(False) # prevent the process from echoing stdin back to us
INPUT_LEN = 1024
p.sendline('a'*INPUT_LEN)
print p.readline() # pexpect.TIMEOUT: Timeout exceeded in read_nonblocking().
When INPUT_LEN < 1024 everything works fine, but for >= 1024 characters the process does not receive the full input, and a pexpect.TIMEOUT error is raised on p.readline().
I've tried splitting my input into pieces smaller than 1024 characters, but this has the same issue:
p = pexpect.spawn('cat', timeout=5.0)
p.maxread = 5000
p.setecho(False)
INPUT_LEN = 1024
p.send('a'*1000)
p.sendline('a'*(INPUT_LEN-1000))
print p.readline() # pexpect.TIMEOUT: Timeout exceeded in read_nonblocking().
Does anyone know how to make pexpect work with inputs over 1024 characters? I tried looking at the source, but it just seems to be calling os.write(...).
(As a side note, I've noticed the same truncation error occurs when I run "cat" from a shell and attempt to paste in >=1024 characters with "Cmd+V". However, everything works fine if I run "pbpaste | cat" .)
Thanks!
Update:
The call to "os.write()" returns 1025, indicating a successful write, but os.read() returns "\x07" (the single character BEL), and then hangs on the next call, resulting in the timeout.
Dividing the os.write() call into two sub-1024 byte write()s, separated by a call to os.fsync(), does not change anything.
Your problem seems to be macOS related; take a look at "MacOSX 10.6.7 cuts off stdin at 1024 chars".
It basically says that 1024 is your tty buffer limit.
I'm not an expert on Mac OS, but maybe others can give you more information about this.
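If you want to confirm the limit from Python, the kernel's canonical line-length limit for the child's tty can be queried with os.fpathconf; a small sketch (child_fd is the pty file descriptor pexpect keeps, and PC_MAX_CANON may not be defined on every platform):
import os
import pexpect

p = pexpect.spawn('cat')
# MAX_CANON is the canonical-mode input line limit of this tty;
# it typically reports 1024 on macOS and 4096 on Linux.
print os.fpathconf(p.child_fd, 'PC_MAX_CANON')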
I realise it is very, very late, but I am posting a solution for anyone who stumbles onto this question with the very same problem (as I did earlier today).
Based on some of the answers/comments, I have written a pexpect-like package that uses stdin.write and stdout.read instead of whatever it is that pexpect uses. I have not had a chance to test it very thoroughly, but so far it has stood up to the challenge.
You can find the code here: https://github.com/tayyabt/tprocess
In my case (Debian Linux) the limit (4096 chars) was related to the terminal's canonical input processing mode. There are some comments about that in the pexpect documentation.
I solved my problem by turning off canon mode before sending my data:
p.sendline('stty -icanon')
p.sendline('a'*5000)
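The stty trick above assumes the line is delivered to a shell. If you spawn the target program directly, an alternative (just a sketch, not from the original answer) is to clear ICANON on the pty yourself through termios, the same mechanism pexpect's own setecho() uses:
import termios
import pexpect

p = pexpect.spawn('cat', timeout=5.0)
p.setecho(False)

# Fetch the pty's terminal attributes and clear ICANON in the local flags
# (index 3), so input is no longer line-buffered and the canonical
# line-length limit no longer applies.
attrs = termios.tcgetattr(p.child_fd)
attrs[3] &= ~termios.ICANON
termios.tcsetattr(p.child_fd, termios.TCSANOW, attrs)

p.sendline('a' * 5000)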
Hi, I have a problem that I cannot seem to find any solution for.
(Maybe I'm just horrible at phrasing searches correctly in English.)
I'm trying to execute a binary from python using pwntools and reading its output completely before sending some input myself.
The output from my binary is as follows:
Testmessage1
Testmessage2
Enter input: <binary expects me to input stuff here>
Where I would like to read the first line, the second line and the output part of the third line (with ':' being the last character).
The third line does not end with a newline and expects the user to type input directly after it. However, no matter what I try, I'm not able to read the text that the third line starts with.
My current way of trying to achieve this:
from pwn import *
io = process("./testbin")
print io.recvline()
print io.recvline()
print io.recvuntil(":", timeout=1) # this gets stuck if I don't use a timeout
...
# maybe sending data here
# io.send(....)
io.close()
Do I misunderstand something about stdin and stdout? Is the "Enter input:" part of the third line not part of the output that I should be able to receive before sending any input?
Thanks in advance
I finally figured it out.
I got the hint I needed from
https://github.com/zachriggle/pwntools-glibc-buffering/blob/master/demo.py
It seems that glibc is doing lots of buffering on its own when stdout is not a terminal.
When I manually make sure that pwntools uses a pseudoterminal for stdin and stdout, it works!
from pwn import *

pty = process.PTY
p = process("./testbin", stdin=pty, stdout=pty)
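Putting that together with the snippet from the question (assuming the same ./testbin binary), a minimal sketch looks like this:
from pwn import *

pty = process.PTY
io = process("./testbin", stdin=pty, stdout=pty)

print io.recvline()
print io.recvline()
print io.recvuntil(":")  # no longer blocks once the child sees a tty and stops full buffering
io.sendline("some input")
io.close()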
You can use the clean function which is more reliable and which can be used for remote connections: https://docs.pwntools.com/en/dev/tubes.html#pwnlib.tubes.tube.tube.clean
For example:
from pwn import *

def start():
    p = remote("0.0.0.0", 4000)
    return p

io = start()
io.send(b"YYYY")
io.clean()
io.send(b"ZZZ")
I'm writing a project using an STM32F407_VG board that uses an RS232 connection to send batches of data of different sizes (~400 bytes) on a serial port, and those data must be written to a file. On the desktop side I'm using a Python 3 script with pySerial 3.3.
I've tried reading a single byte at a time with ser.read(), but I think it's too slow because I'm losing some of the data. So I'm trying to send the size of the batch as an integer before the batch itself, in order to reduce the overhead, and to write the data to file in the interval between one batch and the next.
PROBLEM IS: ser.read(n) behaves in a very strange way, and 99% of the time it blocks when it's time to read the batch and does not return. Sometimes it reads the first batch and writes it to file successfully, but then blocks at the second loop iteration. It's strange because I can use ser.read(4) to get the batch size with no problem, and I use ser.readline() at the beginning of the script when listening for the start signal, but I cannot read the data.
I'm sure the data are there and well formed because I checked with a logic analyzer, and I've already tried enabling and disabling flow control and setting different baud rates on both the board and the script. I think it could be a configuration problem on the Python side, but I've run out of ideas.
PYTHON SCRIPT CODE - SNIPPET
import sys
import serial

ser = serial.Serial(str(sys.argv[1]),
                    int(sys.argv[2]),
                    stopbits=serial.STOPBITS_ONE,
                    parity=serial.PARITY_NONE,
                    bytesize=serial.EIGHTBITS,
                    timeout=None)
outputFile = open(sys.argv[3], "wb")

# wait for begin string
beginSignal = "ready"
word = ""
while word != beginSignal:
    word = ser.readline().decode()
    word = word.split("\n")[0]
print("Started receiving...")

while True:
    # read size of next batch
    nextBatchSize = ser.read(4)
    nextBatchSize = int.from_bytes(nextBatchSize, byteorder='little', signed=True)
    # reads the batch:
    # THIS IS THE ONE THAT CREATES PROBLEMS
    batch = ser.read(nextBatchSize)
    # write data to file
    outputFile.write(batch)
BOARD CODE - SNIPPET
// this function sends the size of the batch and the batch itself
void sendToSerial(unsigned char* mp3data, int size){
    // send actual size of the batch
    write(STDOUT_FILENO, &size, sizeof(int));
    // send the batch of data
    write(STDOUT_FILENO, mp3data, size);
}
Any idea? Thanks!
You might have already tried this, but print out nextBatchSize to confirm that it is what you expect, just in case the byte order is reversed. If this is wrong your Python code could be trying to read too many bytes, and would therefore block.
Also you can check ser.in_waiting to see how many bytes are available to be read from the input buffer before your read attempt:
print(ser.in_waiting)
batch = ser.read(nextBatchSize)
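With timeout=None, ser.read(nextBatchSize) blocks until every requested byte has arrived, so a wrong size means it never returns. One way to make such a failure visible is to open the port with a finite timeout and collect the batch in a loop; read_exact below is a hypothetical helper, not part of pySerial:
def read_exact(ser, n):
    """Read exactly n bytes, assuming the port was opened with a finite timeout."""
    buf = bytearray()
    while len(buf) < n:
        piece = ser.read(n - len(buf))
        if not piece:  # timeout expired with no new data
            raise TimeoutError("expected %d bytes, got %d so far" % (n, len(buf)))
        buf.extend(piece)
    return bytes(buf)

batch = read_exact(ser, nextBatchSize)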
You should also check the return value of write() in your C code: it is the number of bytes actually written, or -1 if there was an error. write() is not guaranteed to write all of the bytes in the given buffer, so some of them may remain unwritten.
Data loss suggests a flow control issue. You need to ensure that it is enabled at both ends. You've said that you tried it, but in your posted code it is not enabled. How are you opening and configuring the serial port at the board end?
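If the board side does drive RTS/CTS, enabling hardware flow control in the script is a one-argument change (a sketch; rtscts is a standard pySerial option, but it only helps if the board's UART is actually wired and configured for it):
ser = serial.Serial(str(sys.argv[1]),
                    int(sys.argv[2]),
                    stopbits=serial.STOPBITS_ONE,
                    parity=serial.PARITY_NONE,
                    bytesize=serial.EIGHTBITS,
                    rtscts=True,
                    timeout=2)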
I am using pexpect with Python to create a program that allows a user to interact with a FORTRAN program through a website. From the FORTRAN program I receive the error:
open: Permission denied apparent state: unit 4 named subsat.out.55 last format: list io lately writing sequential formatted external IO 55
when I attempt to:
p = pexpect.spawn(myFortranProgram,[],5)
p.logfile_read = sys.stdout
p.expect("(.*)")
p.sendline("55")
From what I understand, I am likely sending the 55 to the wrong input unit. How do I correctly send input to a FORTRAN program using pexpect in Python?
Thank You.
Edit: When p.sendline's parameter is empty (e.g. p.sendline()) or contains only spaces, the program proceeds as expected. When sending non-space values to a FORTRAN program, do I need to specify the input format somehow?
The pexpect module is something I'd not used before, but could be useful to me, so I tried this.
Edit:
I've not been able to duplicate the error you're reporting. Looking at this error leads me to believe that it has something to do with reading from a file, which may be a result of other issues. From what I've seen, this isn't what pexpect is designed to handle directly; however, you may be able to make it work with a pipe, like the example in my original answer, below.
I'm having no problem sending data to Fortran's I/O stream 5 (stdin). I created a Fortran program called regurgitate which issues a " Your entry? " prompt, then gets a line of input from the user on I/O stream 5, then prints it back out. The following code works with that program:
import pexpect
child = pexpect.spawn('./regurgitate')
child.setecho(False)
ndx = child.expect('.*Your entry?.*')
child.sendline('42')
child.expect([pexpect.EOF])
print child.before
child.close()
The output is simply:
42
Exactly what I expected. However, if my Fortran program says something different (such as "Your input?"), pexpect just hangs or times out.
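One way to keep the script from hanging when the prompt text does not match is to pass pexpect.TIMEOUT and pexpect.EOF to expect() alongside the pattern and branch on the returned index; a quick sketch (prompt text assumed to be the one regurgitate prints):
import pexpect

child = pexpect.spawn('./regurgitate')
ndx = child.expect([r'Your entry\?', pexpect.TIMEOUT, pexpect.EOF], timeout=5)
if ndx == 0:
    child.sendline('42')
else:
    print 'prompt never appeared (timeout or EOF); got: %r' % child.before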
Original suggestion:
Maybe this pexpect.run() sample will help you. At least it seems to run my regurgitate program (a simple Fortran program that accepts an input and then prints it out):
import pexpect
out = pexpect.run('/bin/bash -c "/bin/cat forty-two | ./regurgitate"')
print out
The output was:
Your entry?
42
Where regurgitate prints out a "Your entry?" prompt and the forty-two file contains "42" (without quotes in both cases).
I'm trying to write a secure file transfer program using Python and AES, and I've got a problem I don't totally understand. I send my file by splitting it into 1024-byte chunks and sending them over, but the server side that receives the data crashes (I use AES CBC, therefore my data length must be a multiple of 16 bytes) and the error I get says that it is not.
I printed the length of the data sent by the client on the client side and the length of the data received on the server, and it shows that the client is sending exactly 1024 bytes each time like it's supposed to, but the server side shows that at some point a received packet is less than 1024 bytes (for example 743 bytes).
I tried to put a time.sleep(0.5) between each socket send on the client side and it seems to work. Is it possible that it is some kind of socket buffer failure on the server side? That too much data is being sent too fast by the client and that it somehow breaks the socket buffer on the server side, so the data is corrupted or vanishes and the recv(1024) only receives a broken chunk? That's the only thing I could think of, but this may also be completely false; if anyone has an idea of why this is not working properly, it would be great ;)
Following my idea, I tried:
self.s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 32768000)
print socket.SO_RCVBUF
I tried to set a 32 MB buffer on the server side, but on Windows XP the print shows 4098 and on Linux it shows only 8. I don't know how to interpret this; the only thing I know is that it doesn't seem to have a 32 MB buffer, so the code doesn't work.
Well, it's been a really long post; I hope some of you had the courage to read it all the way to here! I'm totally lost, so if anyone has any idea about this please share it :D
Thanks to Faisal, my code is here:
Server side (count is my filesize/1024):
while 1:
    txt = self.s.recv(1024)
    if txt == " ":
        break
    txt = self.cipher.decrypt(txt)
    if countbis == count:
        txt = txt.rstrip()
    tfile.write(txt)
    countbis += 1
Client side:
while 1:
    txt = tfile.read(1024)
    if not txt:
        self.s.send(" ")
        break
    txt += ' ' * (-len(txt) % 16)
    txt = self.cipher.encrypt(txt)
    self.s.send(txt)
Thanks in advance,
Nolhian
Welcome to network programming! You've just fallen into the same mistaken assumption that everyone makes the first time through: that client sends and server receives should be symmetric. Unfortunately, this is not the case. The OS allows reception to occur in arbitrarily sized chunks. It's fairly easy to work around, though: just buffer your data until the amount you've read equals the amount you wish to receive. Something along the lines of this will do the trick:
buff = ''
while len(buff) < 1024:
    buff += s.recv(1024 - len(buff))
TCP is a stream protocol; it doesn't preserve message boundaries, as you have just discovered.
As others have pointed out, you're probably processing an incomplete message. You need either fixed-size messages or a delimiter (don't forget to escape your data!) so you know when a complete message has been received.
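For example, a common fixed-header scheme is to prefix every message with its length and then read until exactly that many bytes have arrived; a rough sketch (recv_exact and the other helpers are hypothetical, not part of the socket module):
import struct

def send_msg(sock, payload):
    # 4-byte big-endian length header followed by the payload itself
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def recv_exact(sock, n):
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError('socket closed before the full message arrived')
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack('>I', recv_exact(sock, 4))
    return recv_exact(sock, length)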
What TCP can guarantee is that all your data arrives, in the right order, at some point. (Unless something unexpected happens, in which case it won't arrive at all.) But it's very possible that the data you send will still arrive in chunks; much of that is down to limited send and receive buffers. What you should do is keep calling recv until you have enough data to process. You might also have to call send multiple times; use its return value to keep track of how much data has actually been sent/buffered so far.
When you do print socket.SO_RCVBUF, you actually print the symbolic SO_RCVBUF constant (except that Python doesn't really have constants), i.e. the value used to tell setsockopt which option you want to change. To get the current buffer size, you should instead call getsockopt.
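Something like this (a small sketch; the kernel may round or cap whatever value you request with setsockopt):
self.s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 32768000)
print self.s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)  # the size actually granted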
Not related to TCP (as that has been answered already), but appending to a string repeatedly will be rather inefficient if you're expecting to receive a lot. It might be better to append to a list and then turn the list into a string when you have finished receiving, using ''.join(list).
For many applications, the complexities of TCP are neatly abstracted by Python's asynchat module.
Here is a nice snippet of code that I wrote some time ago; it may not be the best, but it could be a good example of transferring big files over the local network. http://setahost.com/sending-files-in-local-network-with-python/
As mentioned above, TCP is a stream protocol.
You can try this code, where data is your original data; you can read it from a file or from user input.
Sender
import socket as s
import time as t

# addr and data are assumed to be defined elsewhere (server address and the bytes to send)
sock = s.socket(s.AF_INET, s.SOCK_STREAM)
sock.connect((addr, 5000))
sock.sendall(data)
finish = t.time()  # timestamp taken once the whole buffer has been handed to the kernel
Receiver
import socket as s

sock = s.socket(s.AF_INET, s.SOCK_STREAM)
sock.setsockopt(s.SOL_SOCKET, s.SO_REUSEADDR, 1)
sock.bind(("", 5000))
sock.listen(1)
conn, _ = sock.accept()
pack = []
while True:
    piece = conn.recv(8192)
    if not piece:
        break
    pack.append(piece.decode())
data = "".join(pack)  # reassemble the stream once the sender closes the connection
I am trying to use os.system() to call another program that takes an input and an output file. The command I use is ~250 characters due to the long folder names.
When I try to call the command, I'm getting an error: The input line is too long.
I'm guessing there's a 255-character limit (it's built on a C system call, but I couldn't find the limitations on that either).
I tried changing the directory with os.chdir() to reduce the folder trail lengths, but when I try using os.system() with "..\folder\filename" it apparently can't handle relative path names. Is there any way to get around this limit or get it to recognize relative paths?
Even though it's a good idea to use subprocess.Popen(), that does not solve this particular issue.
Your problem is not the 255-character limit; that was true in DOS times, later increased to 2048 for Windows NT/2000, and increased again to 8192 for Windows XP and later.
The real solution is to work around a very old bug in the Windows APIs: _popen() and _wpopen().
If you ever use quotes inside the command line, you have to wrap the entire command in quotes, or you will get the "The input line is too long" error message.
All Microsoft operating systems starting with Windows XP have had an 8192-character limit, which is now enough for any decent command-line usage, but they never fixed this bug.
To overcome it, just wrap your entire command in double quotes; if you want to know more, read the MSDN comment on _popen().
Be careful, because these work:
prog
"prog"
""prog" param"
""prog" "param""
But this will not work:
""prog param""
If you need a function that adds the quotes when they are needed, you can take the one from http://github.com/ssbarnea/tendo/blob/master/tendo/tee.py
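As an illustration of the pattern above (the paths here are made up), wrapping the whole thing in one extra pair of double quotes looks like this:
import os

# Both the program path and the argument contain spaces and are quoted,
# so the entire command gets one more pair of surrounding quotes.
cmd = '""C:\\Program Files\\MyTool\\prog.exe" "C:\\some long path\\input.txt""'
os.system(cmd)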
You should use the subprocess module instead. See this little doc for how to rewrite os.system calls to use subprocess.
You should use subprocess instead of os.system.
subprocess has the advantage of being able to change the directory for you:
import subprocess

my_cwd = r"..\folder"
my_process = subprocess.Popen(["command name", "option 1", "option 2"], cwd=my_cwd)
my_process.wait()  # wait for process to end
if my_process.returncode != 0:
    print "Something went wrong!"
The subprocess module contains some helper functions as well if the above looks a bit verbose.
Assuming you're using Windows (judging from the backslashes), you could write a .bat file from Python and then call os.system() on that. It's a hack.
Make sure that when you use '\' in your strings it is properly escaped.
Python treats '\' as the escape character, so in "..\folder\filename" each "\f" is interpreted as an escape sequence (a form-feed character) rather than a backslash followed by an f, which mangles the path.
You probably want to use
r"..\folder\filename"
or
"..\\folder\\filename"
I got the same message, which was strange because the command was not that long (130 characters) and it used to work; it just stopped working one day.
I just closed the command window, opened a new one, and it worked.
I had had the command window open for a long time (maybe months; it's a remote virtual machine).
I guess it's some Windows bug with a buffer or something.