pySerial Capturing a long response - python

Hi guys, I'm working on a script that will get data from a host over a serial port using the Data Communications Standard (developed by the Data Communication Standard Committee, Lens Processing Division of The Vision Council) and pass the data to the Modbus protocol so the device can perform its operations.
Since I don't physically have access to the host machine, I'm trying to develop a secondary script to emulate the host. I am currently at the stage where I need to read a lot of information from the serial port, and I only get part of the data. I was hoping to get the whole string sent by the send_job() function in my host emulator script.
Also, can any of you tell me whether this would be a good approach? The only thing the machine is supposed to do is grab two values from the host response and assign them to two Modbus holding registers.
NOTE: the initialization function is hard-coded because it will always be the same, and the actual response data does not matter except for the status. The job request is also hard-coded; I only pass the job number that I get from a Modbus holding register. The exact logic for how the host resolves this should not matter; I only need to send the job number scanned by the device in this format.
main script:
def request_job_modbus(job):
    data = F'[06][1c]req=33[0d][0a]job={job}[0d][0a][1e][1d]'.encode('ascii')
    writer(data)

def get_job_from_serial():
    response = serial_client.read_all()
    resp = response.decode()
    return resp
# TODO: SEND INIT SEQUENCE ONCE AND VERIFY IF REQUEST status=0
initiation_request()
init_response_status = get_init_status()
print('init method being active')
print(get_init_status())

while True:
    # TODO: get job request data
    job_serial = get_job_from_serial()
    print(job_serial)
host emulation script:
def send_job():
    job_response = '''[06][1c]ans=33[0d]job=30925[0d]status=0;"ok"[0d]do=l[0d]add=;2.50[0d]ar=1[0d]
bcerin=;3.93[0d]bcerup=;-2.97[0d]crib=;64.00[0d]do=l[0d]ellh=;64.00[0d]engmask=;613l[0d]
erdrin=;0.00[0d]erdrup=;10.00[0d]ernrin=;2.00[0d]ernrup=;-8.00[0d]ersgin=;0.00[0d]
ersgup=;4.00[0d]gax=;0.00[0d]gbasex=;-5.30[0d]gcrosx=;-7.96[0d]kprva=;275[0d]kprvm=;0.55[0d]
ldpath=\\uscqx-tcpmain-at\lds\iot\do\800468.sdf[0d]lmatid=;151[0d]lmatname=;f50[0d]
lnam=;vsp_basic_fh15[0d]sgerin=;0.00[0d]sgerup=;0.00[0d]sval=;5.18[0d]text_11=;[0d]
text_12=;[0d]tind=;1.53[0d][1e][1d]'''.encode('ascii')
    writer(job_response)
def get_init_request():
    req = p.readline()
    print(req)
    request = req.decode()[4:11]
    # print(request)
    if request == 'req=ini':
        print('request == req=ini??? <<<<<<< condition met, sending the response')
        send_init_response()
        send_job()

while True:
    # print(get_init_request())
    get_init_request()
What I get on screen, main script:
init method being active
bce
erd
condition was met init status=0
outside loop
ers
condition was met init status=0
inside while loop
trigger reset <<<--------------------
5782
`:lmatid=;151[0d]lmatname=;f50[0d]
lnam=;vsp_basic_fh15[0d]sgerin=;0.00[0d]sgerup=;0.00[0d]sval=;5.18[0d]text_11=;[0d]
text_12=;[0d]tind=;1.53[0d][1e][1d]
outside loop
condition was met init status=0
outside loop
What I get on screen, host emulation script:
b'[1c]req=ini[0d][0a][1e][1d]'
request == req=ini??? <<<<<<< condition met, sending the response
b''
b'[06][1c]req=33[0d][0a]job=5782[0d][0a][1e][1d]'
b''
b''
b''
b''
b''
b''

I suspect you're trying to write too much at once to a hardware buffer that is fairly small. Especially when dealing with low-power hardware, assuming you can stuff an entire message into a buffer is often incorrect. Even fully modern PCs sometimes have very small buffers for legacy hardware like serial ports. You may find, when you switch from development to the actual hardware, that the RTS and DTR lines need to be used to determine when to send or receive data. This is up to whoever designed the hardware, unfortunately, as those lines are often ignored as well.
I would try chunking your data transfer into smaller bits as a test to see if the whole message gets through. This is a quick and dirty first attempt that may have bugs, but it should get you down the right path:
def get_job_from_serial():
    response = b''  # buffer for response
    while True:
        # read any available data, or wait for the port timeout;
        # this technically could be reading 1 char at a time, but any
        # remotely modern PC should easily keep up with 9600 baud
        chunk = serial_client.read()
        if not chunk:  # an empty read means the timeout expired, which
            break      # probably means end of data
        response += chunk
        # you could also check the length of the buffer, if the message is
        # always a fixed length, to determine whether it has fully arrived
    return response

def writer(command):
    written = 0      # how many bytes have we actually written
    chunksize = 128  # the smaller you go, the less likely you are to
                     # overflow a buffer, but the slower you go
    while written < len(command):
        # you presumably might have to wait for p.dtr() == True or similar,
        # though it's just as likely not to have been implemented
        written += p.write(command[written:written + chunksize])
        p.flush()    # probably don't actually need this
P.S. I had to go to the source code for p.read_all (for some reason I couldn't find it online), and it does not do what I think you expect it to do. The exact code is:
def read_all(self):
    """\
    Read all bytes currently available in the buffer of the OS.
    """
    return self.read(self.in_waiting)
There is no concept of waiting for a complete message; it's just shorthand for grabbing everything currently available.
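Since the protocol in the question terminates every message with [1e][1d], another option (a sketch of mine, not tested against the real device) is to keep reading until that terminator shows up instead of relying on read_all(). The helper below takes a generic read_chunk callable so it can be exercised without a port; with pySerial you would pass serial_client.read, or simply use serial_client.read_until() if your version provides it:

```python
def read_message(read_chunk, terminator=b'[1e][1d]', max_bytes=65536):
    """Accumulate chunks until the terminator appears (or limits are hit)."""
    buf = b''
    while terminator not in buf and len(buf) < max_bytes:
        chunk = read_chunk()
        if not chunk:          # empty read: the port timed out
            break
        buf += chunk
    return buf

# Example with a stub that drips the message out 3 bytes at a time,
# the way a slow serial link would:
message = b'[06][1c]ans=33[0d]job=30925[0d]status=0;"ok"[0d][1e][1d]'
pieces = iter([message[i:i + 3] for i in range(0, len(message), 3)])
result = read_message(lambda: next(pieces, b''))
```

If your device sends the actual RS/GS control characters rather than the printable [1e][1d] stand-ins shown in the question, use terminator=b'\x1e\x1d' instead.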

Related

Python 2.7 Script works with breakpoint in Debug mode but not when Run

def mp_worker(row):
    ip = row[0]
    ip_address = ip
    tcp_port = 2112
    buffer_size = 1024
    # Read the reset message sent from the sign when a new connection is established
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        print('Connecting to terminal: {0}'.format(ip_address))
        s.connect((ip_address, tcp_port))
        # Putting a breakpoint on this call in debug makes the script work
        s.send(":08a8RV;")
        # data = recv_timeout(s)
        data = s.recv(buffer_size)
        strip = data.split("$", 1)[-1].rstrip()
        strip = strip[:-1]
        print(strip)
        termStat = [ip_address, strip]
        terminals.append(termStat)
    except Exception as exc:
        print("Exception connecting to: " + ip_address)
        print(exc)
The above code is the section of the script that is causing the problem. It's a pretty simple function that connects to a socket based on a passed in IP from a DB query and receives a response that indicates the hardware's firmware version.
Now, the issue is that when I run it in debug mode with a breakpoint on the socket call, I get the entire expected response from the hardware, but if I don't have a breakpoint in there, or I simply Run the script, it only returns part of the expected message. I tried putting a time.sleep() after the send to see if it would get the entire response, and I tried the commented-out recv_timeout() method, which uses a non-blocking socket and a timeout to try to get the entire response, both with exactly the same results.
As another note, this works in a script with everything in one main code block, but I need this part separated into a function so I can use it with the multiprocessing library. I've tried running it on both my local Windows 7 machine and on a Unix server with the same results.
I'll expand on and reiterate what I put into a comment a moment ago. I am still not entirely sure what is behind the different behavior in the two scenarios (apart from a timing guess, apparently disproved by the attempt to include sleep).
However, it's somewhat immaterial, as stream sockets do not guarantee that you get all the requested data at once, in the chunks as they were sent. That is up to the application to deal with. If the server closes the socket after the full response was sent, you could replace:
data = s.recv(buffer_size)
with calling recv() until zero bytes are received; this would be the equivalent of getting 0 (EOF) from the syscall:
data = ''
while True:
    received = s.recv(buffer_size)
    if len(received) == 0:
        break
    data += received
If that is not the case, you will have to rely on a fixed or known size (sent at the beginning) for the data you want to consider together, or deal with this at the protocol level (look for characters or sequences used to signal message boundaries).
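For the fixed/known-size case, the usual pattern is a small recvall-style helper that loops until exactly n bytes have arrived. A sketch (the name and stub are mine); it takes the recv function itself as a parameter so it works with any socket-like object and can be exercised without a live connection:

```python
def recvall(recv, n):
    """Call recv() repeatedly until exactly n bytes have been collected."""
    data = b''
    while len(data) < n:
        chunk = recv(n - len(data))
        if not chunk:  # peer closed before the full message arrived
            raise EOFError('socket closed with %d of %d bytes' % (len(data), n))
        data += chunk
    return data

# A stub standing in for s.recv that returns at most 4 bytes per call:
payload = b'firmware-version-string'
state = {'pos': 0}
def fake_recv(n):
    start = state['pos']
    state['pos'] = min(start + min(n, 4), len(payload))
    return payload[start:state['pos']]

result = recvall(fake_recv, len(payload))
```

With a real socket you would call recvall(s.recv, expected_length).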
I just recently found a solution, and thought I'd post it in case anyone else has this issue. I decided to try calling socket.recv() before calling socket.send(), and then calling socket.recv() again afterwards, and it seems to have fixed the issue; I couldn't really tell you why it works, though.
data = s.recv(buffer_size)
s.send(":08a8RV;")
data = s.recv(buffer_size)

Python's pyserial with interrupt mode

I have a device which works on serial communication. I am writing python code which will send some commands to get the data from the device.
There are three commands.
1. COMMAND - sop
Device does its internal calculation and sends the data below.
Response - "b'SOP,0,921,34,40,207,0,x9A\r\n'"
2. COMMAND - time
This gives date-time values which normally do not change until the device is restarted.
3. START - "\r\r" or (<cr><cr>)
This command puts the device into responsive mode, after which it responds to the above commands. It is basically pressing <enter> twice and only has to be done once at the start.
Now the problem I am facing is that the frequency of data received from the sop command is not fixed, so the data can arrive at any time. This command also cannot be stopped once started, so if I run another command like time and read the data, I do not receive the time values, and they are sometimes merged with the sop data. Below is the code I am using:
port = serial.Serial('/dev/ttyS0', 115200)  # Init serial port
port.write("\r\r".encode())                 # Send the start command
bytesToRead = port.in_waiting               # Check data byte size
res = port.read(bytesToRead)                # Read the data, normally a welcome msg
port.reset_input_buffer()                   # Clear the input serial buffer
port.reset_output_buffer()                  # Clear the output serial buffer
port.write("sop\r".encode())                # Send the command sop

while True:
    time.sleep(5)
    bytesToRead = port.in_waiting
    print(bytesToRead)
    res = port.read(bytesToRead)
    print(res)
    port.reset_input_buffer()
    port.write("time\r".encode())
    res = port.readline()
    print(res)
Using the above code, I sometimes do not receive the value of time after sending its command, or it is merged with the sop data. Also, with the sop command I receive a lot of data during the sleep(5), out of which I need to get the latest. If I do not include the sleep(5), I miss the sop data, and it is then received after the time command executes.
I was hoping someone could point me in the right direction on how to design this better. Also, I think this could easily be done using an interrupt handler, but I didn't find any code about pySerial interrupts. Can anyone please suggest some good code for using interrupts with pySerial?
Thanks
Instead of using time.sleep(), it's preferable to use serialport.in_waiting, which helps you check the number of bytes available in the receive buffer.
Read the data with the read function only once there is data in the receive buffer.
The following code sequence can then be used without any delay:
while True:
    bytesToRead = port.in_waiting
    print(bytesToRead)
    if bytesToRead > 0:
        res = port.read(bytesToRead)
        print(res)
        port.reset_input_buffer()
    # put some check or filter here, then write data on the serial port
    port.write("time\r".encode())
    res = port.readline()
    print(res)
I am taking a stab here: your time.sleep(5) might be too long. Have you tried making the sleep really short, for example time.sleep(0.3)? If the time data gets written back between the sop returns, you will catch it before it gets merged with sop, though I am making the assumption here that the device will send the time data back; otherwise there is nothing more you can do on the client side (the Python code). I do believe it won't hurt to shorten the sleep; the loop is just sitting there waiting (polling) for communication anyway.
Not having the same environment on my side makes it difficult to answer, because I can't test my answer, so I hope this helps.
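Another way to structure this (a sketch of mine, not tested against the real device): dedicate one thread to reading lines from the port and have it route the unsolicited SOP stream and the command replies into separate queues, so a time reply can never get mixed into sop data on the consumer side. The route_lines function below only assumes a readline-style callable, so with pySerial you would pass port.readline:

```python
import queue
import threading

def route_lines(readline, sop_q, reply_q):
    """Read lines forever and route them by prefix: SOP data vs. command replies."""
    while True:
        line = readline()
        if not line:               # empty read: timeout or port closed
            break
        if line.startswith(b'SOP,'):
            sop_q.put(line)        # unsolicited measurement stream
        else:
            reply_q.put(line)      # replies to commands like "time"

# Exercising it with canned lines instead of a live port:
lines = iter([b'SOP,0,921,34,40,207,0,x9A\r\n', b'2024-01-01 12:00:00\r\n', b''])
sop_q, reply_q = queue.Queue(), queue.Queue()
t = threading.Thread(target=route_lines, args=(lambda: next(lines), sop_q, reply_q))
t.start()
t.join()
```

The main loop can then block on reply_q.get(timeout=...) right after writing time\r, and separately drain sop_q, keeping only the newest entry.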

Socket issues in Python

I'm building a simple server-client app using sockets. Right now I am trying to get my client to print to the console only when it receives a specific message (actually, when it doesn't receive a specific message), but for some reason it goes through the other branch of my code every other time I run it, and it is really inconsistent: sometimes it works as it should, and then it randomly breaks for a couple of runs.
Here is the code on my client side:
def post_checker(client_socket):
    response = client_socket.recv(1024)
    # check if response is "NP" for a new post from another user
    if response == "NP":
        new_response = client_socket.recv(1024)
        print new_response
    else:  # print original message being sent
        print response
where post_checker is called in the main function simply as post_checker(client_socket). Basically, sometimes I get "NPray" printed to my console (when the client only expects to receive the username "ray"), and other times it prints correctly.
Here is the server code related to this:
for sublist in user_list:
    client_socket.send("NP")
    client_socket.send(sublist[1] + " ")
where user_list is a nested list and sublist[1] is the username I wish to print out on the client side.
What's going on here?
The nature of your problem is that TCP is a streaming protocol. The bufsize in recv(bufsize) is a maximum size. The recv function will always return data when available, even if not all of the bytes have been received.
See the documentation for details.
This causes problems when you've only sent half the bytes, but you've already started processing the data. I suggest you take a look at the "recvall" concept from this site or you can also consider using UDP sockets (which would solve this problem but may create a host of others as UDP is not a guaranteed protocol).
You may also want to let the Python standard library handle some of the underlying framework for you; consider using a SocketServer.
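As a rough illustration of the SocketServer route (the handler name and the newline framing are mine, not from the question): a StreamRequestHandler gives you file-like rfile/wfile wrappers, so you can frame each message with a newline instead of pairing raw send/recv calls and hoping the chunks line up:

```python
import socket
import socketserver
import threading

class PostHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # rfile.readline() blocks until a full newline-terminated message arrives
        name = self.rfile.readline().strip()
        self.wfile.write(b'NP ' + name + b'\n')  # one framed reply

server = socketserver.TCPServer(('127.0.0.1', 0), PostHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client sends one framed message and reads one framed reply:
with socket.create_connection(server.server_address) as conn:
    conn.sendall(b'ray\n')
    reply = conn.makefile('rb').readline().strip()
server.shutdown()
```

Because each message carries its own terminator, the "NP" marker can never run together with the username the way it does with two back-to-back send calls.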
buffer = []

def recv(sock):
    global buffer
    message = b""
    while True:
        if not (b"\r\n" in b"".join(buffer)):
            chunk = sock.recv(1024)
            if not chunk:
                break
            buffer.append(chunk)
        concat = b"".join(buffer)
        if b"\r\n" in concat:
            message = concat[:concat.index(b"\r\n")]
            concat = concat[concat.index(b"\r\n") + 2:]
            buffer = [concat]
            break
    return message

def send(sock, data):
    sock.send(data + b"\r\n")
I have tested this, and in my opinion it works perfectly.
My use case: I have two scripts that exchange data quickly, and one time or another a buffer receives more than it should, running messages together. With this function, anything extra that is received stays saved, and it keeps receiving until there is a newline in the data; then it joins the chunks, splits at the newline, saves the remainder, and returns the message cleanly separated.
(I translated this, so please excuse me if anything is wrong or misunderstood)

Can I avoid a threaded UDP socket in Python dropping data?

First off, I'm new to Python and learning on the job, so be gentle!
I'm trying to write a threaded Python app for Windows that reads data from a UDP socket (thread-1), writes it to file (thread-2), and displays the live data (thread-3) to a widget (gtk.Image using a gtk.gdk.pixbuf). I'm using queues for communicating data between threads.
My problem is that if I start only threads 1 and 3 (so skip the file writing for now), it seems that I lose some data after the first few samples. After this drop it looks fine. Even if I let thread 1 complete before running thread 3, this apparent drop is still there.
Apologies for the length of code snippet (I've removed the thread that writes to file), but I felt removing code would just prompt questions. Hope someone can shed some light :-)
import socket
import threading
import Queue
import numpy
import gtk
gtk.gdk.threads_init()
import gtk.glade
import pygtk

class readFromUDPSocket(threading.Thread):

    def __init__(self, socketUDP, readDataQueue, packetSize, numScans):
        threading.Thread.__init__(self)
        self.socketUDP = socketUDP
        self.readDataQueue = readDataQueue
        self.packetSize = packetSize
        self.numScans = numScans

    def run(self):
        for scan in range(1, self.numScans + 1):
            buffer = self.socketUDP.recv(self.packetSize)
            self.readDataQueue.put(buffer)
        self.socketUDP.close()
        print 'myServer finished!'

class displayWithGTK(threading.Thread):

    def __init__(self, displayDataQueue, image, viewArea):
        threading.Thread.__init__(self)
        self.displayDataQueue = displayDataQueue
        self.image = image
        self.viewWidth = viewArea[0]
        self.viewHeight = viewArea[1]
        self.displayData = numpy.zeros((self.viewHeight, self.viewWidth, 3), dtype=numpy.uint16)

    def run(self):
        scan = 0
        try:
            while True:
                if not scan % self.viewWidth: scan = 0
                buffer = self.displayDataQueue.get(timeout=0.1)
                self.displayData[:, scan, 0] = numpy.fromstring(buffer, dtype=numpy.uint16)
                self.displayData[:, scan, 1] = numpy.fromstring(buffer, dtype=numpy.uint16)
                self.displayData[:, scan, 2] = numpy.fromstring(buffer, dtype=numpy.uint16)
                gtk.gdk.threads_enter()
                self.myPixbuf = gtk.gdk.pixbuf_new_from_data(self.displayData.tostring(), gtk.gdk.COLORSPACE_RGB,
                                                             False, 8, self.viewWidth, self.viewHeight, self.viewWidth * 3)
                self.image.set_from_pixbuf(self.myPixbuf)
                self.image.show()
                gtk.gdk.threads_leave()
                scan += 1
        except Queue.Empty:
            print 'myDisplay finished!'

def quitGUI(obj):
    print 'Currently active threads: %s' % threading.enumerate()
    gtk.main_quit()

if __name__ == '__main__':
    # Create socket (IPv4 protocol, datagram (UDP)) and bind to address
    socketUDP = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    host = '192.168.1.5'
    port = 1024
    socketUDP.bind((host, port))
    # Data parameters
    samplesPerScan = 256
    packetsPerSecond = 1200
    packetSize = 512
    duration = 1  # For now, set a fixed duration to log data
    numScans = int(packetsPerSecond * duration)
    # Create array to store data
    data = numpy.zeros((samplesPerScan, numScans), dtype=numpy.uint16)
    # Create queue for displaying from
    readDataQueue = Queue.Queue(numScans)
    # Build GUI from Glade XML file
    builder = gtk.Builder()
    builder.add_from_file('GroundVue.glade')
    window = builder.get_object('mainwindow')
    window.connect('destroy', quitGUI)
    view = builder.get_object('viewport')
    image = gtk.Image()
    view.add(image)
    viewArea = (1200, samplesPerScan)
    # Instantiate & start threads
    myServer = readFromUDPSocket(socketUDP, readDataQueue, packetSize, numScans)
    myDisplay = displayWithGTK(readDataQueue, image, viewArea)
    myServer.start()
    myDisplay.start()
    gtk.gdk.threads_enter()
    gtk.main()
    gtk.gdk.threads_leave()
    print 'gtk.main finished!'
UDP doesn't verify the target received it (like TCP does) - you must implement retransmission and such in your applications if you want to ensure all of the data arrives. Do you control the sending UDP source?
UDP is, by definition, unreliable. You must not write programs that expect UDP datagrams to always get through.
Packets are dropped all the time in TCP too, but your program does not need to care, because TCP applications cannot process packets; the TCP stack shows your application a stream of bytes. There is a lot of machinery there to make sure that if you send bytes 'ABCD', you will see 'A' 'B' 'C' 'D' on the end. You may get any possible collection of packets, of course: 'ABC', 'D', or 'AB', CD', etc. Or you may just see 'ABC', and then nothing.
TCP isn't "reliable" because it can magically make your network cables never fail or break; the guarantee that it provides is that up until the point where the stream breaks, you will see everything in order. And after the stream breaks, you'll see nothing.
In UDP there is no such guarantee. If you send four UDP datagrams, 'AB', 'CD', 'EF' 'GH', you may receive all of them, or none of them, or half of them, or just one of them. You may receive them in any order. The only guarantee that UDP tries to provide is that you won't see a message with 'ABCD' in it, because those bytes are in different datagrams.
To sum up: this has nothing to do with Python, or threads, or GTK. It's just a basic fact of life on networks based in physical reality: sometimes the electrical characteristics of your wires are not conducive to getting your messages all the way across them.
You may be able to reduce the complexity of your program by using Twisted, specifically, the listenUDP API, because then you won't be needing to juggle threads or their interaction with GTK: you can just call methods directly on the widget in question from your datagramReceived method. But this won't fix your underlying problem: UDP just drops data sometimes, period. The real solution is to convince your data source to use TCP instead.
Firstly: can you set the receive buffer size for the socket? If so, set it to something very large, as this will let the UDP stack buffer more datagrams for you.
Secondly: if you can use asynchronous I/O, then post multiple recv calls at once (again, this allows the stack to service more datagrams before it starts to drop them).
Thirdly: you could try unrolling your loop a little and reading multiple datagrams before placing them in your queue; could the locking on the queue be causing the recv thread to run slowly?
Finally: the datagrams may be being dropped elsewhere on the network; there may be nothing you can do. That's the U in UDP...
Edit - Struck out listen/accept sentence, thanks Daniel, I was just coming to remove it when I saw your comment :)
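For the first point, the OS receive buffer can be requested via setsockopt before traffic starts. A sketch; the kernel may round, double, or cap the value you ask for, so read it back rather than assuming:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask for a ~4 MB receive buffer so bursts of datagrams queue up instead of dropping.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
# The OS reports what it actually granted (Linux typically doubles the request
# and caps it at net.core.rmem_max unless you have privileges).
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()
```

If granted comes back far below what you asked for, the cap is a system setting (net.core.rmem_max on Linux), not something the Python process can override.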
I'd suggest that this is a network programming issue rather than a Python one per se.
You've set a packets-per-second rate and a duration to define the number of recv calls you make to your UDP socket. I don't see a listen or accept call on the socket; I'll assume that recv handles that, as you say you receive some data. You've not mentioned how the data is generated.
You've defined how many reads you're expecting to make, so I'd assume the code makes that many receives before exiting. My conclusion would be that your recv packetSize is insufficient, and therefore one read isn't pulling an entire datagram, so the subsequent recv pulls the next part of the previous datagram.
Can't you look at the data you have received and determine what is missing? What data are you "losing"? How do you know it's lost?
Furthermore, you could use wireshark to verify that your host is actually receiving the data at the same time as verifying the size of the datagrams. Match the capture against the data your recv thread is providing.
Update
You say that you're losing data, but not what it is. I see two possibilities for data-loss:
Truncating packets
Dropping packets
You've said that the payload size is the same size as that which you are passing to recv, so I'll take it that you're not truncating.
So the factors for dropping packets are a combination of rate of receipt, rate of read-from-receive-buffer and receive-buffer size.
Your calls to Queue.put may be slowing down your rate of read.
So, first determine that you can read 1200 packets per second by modifying readFromUDPSocket to not Queue.put, but count the number of receives and report time taken.
Once you've determined that you can call recv fast enough, the next step is working out what is slowing you down. I suspect it may be your use of Queue; I suggest batching payloads in N-sized groups before placing them on the Queue so that you're not trying to call put 1200 times per second.
Seeing as you want to sustain a rate of 1200 reads per second I don't think you'll get very far by increasing the receive buffer on the socket.
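The batching idea might look something like the sketch below: collect N payloads locally and do one Queue.put per batch, cutting the queue's lock traffic by a factor of N. The recv function is a parameter (my choice, for testability), so the loop can be exercised with a stub instead of a socket:

```python
import queue

def read_batched(recv, out_q, packet_size=512, num_scans=1200, batch_size=32):
    """Receive num_scans datagrams, queueing them batch_size at a time."""
    batch = []
    for _ in range(num_scans):
        batch.append(recv(packet_size))
        if len(batch) == batch_size:
            out_q.put(batch)   # one lock acquisition per 32 packets, not per packet
            batch = []
    if batch:
        out_q.put(batch)       # flush the partial final batch

# A stub standing in for socketUDP.recv:
q = queue.Queue()
read_batched(lambda n: b'\x00' * n, q)
```

The display thread then does one get() per batch and iterates over the list it receives; the trade-off is up to batch_size packets of extra display latency.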
It seems that the problem is with the source. There are two issues:
Looking at Wireshark, the source is not consistently transmitting 1200 packets per second. Possibly, as Len pointed out, a problem with the outbound stack dropping data. BTW, the source is a programmable card with an Ethernet port connected to my machine.
The other issue is that after the first 15 packets or so of data there is always a drop. What I discovered is that if I recv 20 packets in the initialisation part of the readFromUDPSocket thread, I can then read the data fine, e.g.
class readFromUDPSocket(threading.Thread):

    def __init__(self, socketUDP, readDataQueue, packetSize, numScans):
        threading.Thread.__init__(self)
        self.socketUDP = socketUDP
        self.readDataQueue = readDataQueue
        self.packetSize = packetSize
        self.numScans = numScans
        for i in range(0, 20):
            buffer = self.socketUDP.recv(self.packetSize)

    def run(self):
        for scan in range(1, self.numScans + 1):
            buffer = self.socketUDP.recv(self.packetSize)
            self.readDataQueue.put(buffer)
        self.socketUDP.close()
        print 'myServer finished!'
Not sure what this points to! I think all of this rules out not being able to recv and put fast enough, though.

non-blocking read/log from an http stream

I have a client that connects to an HTTP stream and logs the text data it consumes.
I send the streaming server an HTTP GET request... The server replies and continuously publishes data... It will either publish text or send a ping (text) message regularly... and will never close the connection.
I need to read and log the data it consumes in a non-blocking manner.
I am doing something like this:
import urllib2

req = urllib2.urlopen(url)
for dat in req:
    with open('out.txt', 'a') as f:
        f.write(dat)
My questions are:
will this ever block when the stream is continuous?
how much data is read in each chunk and can it be specified/tuned?
is this the best way to read/log an http stream?
Hey, that's three questions in one! ;-)
It could block sometimes - even if your server is generating data quite quickly, network bottlenecks could in theory cause your reads to block.
Reading the URL data using "for dat in req" will mean reading a line at a time - not really useful if you're reading binary data such as an image. You get better control if you use
chunk = req.read(size)
which can of course block.
Whether it's the best way depends on specifics not available in your question. For example, if you need to run with no blocking calls whatever, you'll need to consider a framework like Twisted. If you don't want blocking to hold you up and don't want to use Twisted (which is a whole new paradigm compared to the blocking way of doing things), then you can spin up a thread to do the reading and writing to file, while your main thread goes on its merry way:
def func(req):
    # code the read from the URL stream and the write to file here
    ...

t = threading.Thread(target=func, args=(req,))
t.start()  # will execute func in a separate thread
...
t.join()   # will wait for the spawned thread to die
Obviously, I've omitted error checking/exception handling etc. but hopefully it's enough to give you the picture.
You're using too high-level an interface to have good control over issues such as blocking and buffering block sizes. If you're not willing to go all the way to an async interface (in which case Twisted, already suggested, is hard to beat!), why not httplib, which is after all in the standard library? The HTTPResponse instance's .read(amount) method is more likely to block for no longer than needed to read amount bytes than the similar method on the object returned by urlopen (although admittedly there are no documented specs about that in either module, hmmm...).
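In Python 3, httplib became http.client, and the same .read(amount) idea applies to any response-like object. A small generator (the name stream_chunks is mine) keeps the chunk-size decision in one place and can be exercised with an in-memory stream:

```python
import io

def stream_chunks(resp, chunk_size=4096):
    """Yield successive chunks from any object with a .read(n) method."""
    while True:
        chunk = resp.read(chunk_size)
        if not chunk:        # b'' means the stream is exhausted
            break
        yield chunk

# With a live connection you would pass http.client's response object;
# here an in-memory stream demonstrates the chunking:
fake = io.BytesIO(b'x' * 10000)
chunks = list(stream_chunks(fake, chunk_size=4096))
```

Each .read(4096) blocks only until up to 4096 bytes are available, so the logging loop wakes up at a predictable granularity instead of per line.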
Another option is to use the socket module directly. Establish a connection, send the HTTP request, set the socket to non-blocking mode, and then read the data with socket.recv(), handling 'Resource temporarily unavailable' exceptions (which mean there is nothing to read). A very rough example:
import socket, time

BUFSIZE = 1024

s = socket.socket()
s.connect(('localhost', 1234))
s.send('GET /path HTTP/1.0\n\n')
s.setblocking(False)

running = True
while running:
    try:
        print "Attempting to read from socket..."
        while True:
            data = s.recv(BUFSIZE)
            if len(data) == 0:  # remote end closed
                print "Remote end closed"
                running = False
                break
            print "Received %d bytes: %r" % (len(data), data)
    except socket.error, e:
        if e[0] != 11:  # Resource temporarily unavailable
            print e
            raise
    # perform other program tasks
    print "Sleeping..."
    time.sleep(1)
However, urllib.urlopen() has some benefits if the web server redirects, if you need URL-based basic authentication, etc. You could also make use of the select module, which will tell you when there is data to read.
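A sketch of the select approach: ask whether the socket is readable before calling recv, so the read never blocks. The helper name is mine, and a local socketpair stands in for the HTTP server:

```python
import select
import socket

def readable(sock, timeout=0.0):
    """True if recv() would return immediately (data waiting or peer closed)."""
    ready, _, _ = select.select([sock], [], [], timeout)
    return bool(ready)

# Demonstration with a connected local pair instead of a remote server:
a, b = socket.socketpair()
before = readable(a)              # nothing sent yet
b.sendall(b'ping')
after = readable(a, timeout=1.0)  # data is now buffered on a's side
a.close(); b.close()
```

The main loop can then interleave readable() checks with other work, recv'ing only when data is actually waiting.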
Yes, when you catch up with the server it will block until the server produces more data.
Each dat will be one line, including the newline on the end.
Twisted is a good option.
I would swap the with and the for around in your example: do you really want to open and close the file for every line that arrives?
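The swapped version would look like the sketch below. To keep it self-contained, a list stands in for the urllib2 response object, and it writes to a temp path rather than out.txt:

```python
import os
import tempfile

req = ['data line\n', 'ping\n', 'data line 2\n']  # stands in for urllib2.urlopen(url)
path = os.path.join(tempfile.mkdtemp(), 'out.txt')

with open(path, 'a') as f:  # opened once, before the loop
    for dat in req:
        f.write(dat)
        f.flush()           # keep the log current without reopening the file

with open(path) as f:
    logged = f.read()
```

The file handle stays open for the life of the stream, and the explicit flush() preserves the one property the open-per-line version had: each line reaches the log promptly.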
