Socket issues in Python

I'm building a simple server-client app using sockets. Right now, I am trying to get my client to print to the console only when it receives a specific message (actually, when it doesn't receive a specific message), but for some reason, every other time I run it, it goes through the other branch of my code. It's really inconsistent: sometimes it works as it should, and then it randomly breaks for a couple of uses.
Here is the code on my client side:
def post_checker(client_socket):
    response = client_socket.recv(1024)
    #check if response is "NP" for a new post from another user
    if response == "NP":
        new_response = client_socket.recv(1024)
        print new_response
    else: #print original message being sent
        print response
where post_checker is called in the main function simply as "post_checker(client_socket)". Basically, sometimes I get "NPray" printed to my console (when the client only expects to receive the username "ray"), and other times it prints correctly.
Here is the server code that corresponds to this:
for sublist in user_list:
    client_socket.send("NP")
    client_socket.send(sublist[1] + " ")
where user_list is a nested list and sublist[1] is the username I wish to print out on the client side.
What's going on here?

The nature of your problem is that TCP is a streaming protocol. The bufsize in recv(bufsize) is a maximum size: recv returns as soon as any data is available, even if not all of the bytes you expect have arrived.
See the documentation for details.
This causes problems when you've only sent half the bytes, but you've already started processing the data. I suggest you take a look at the "recvall" concept from this site or you can also consider using UDP sockets (which would solve this problem but may create a host of others as UDP is not a guaranteed protocol).
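For illustration, here is a minimal sketch of the recvall idea; it only helps if both sides already agree on how many bytes make up a message, otherwise you need a delimiter or a length prefix to frame your messages:
def recvall(sock, n):
    # Keep calling recv() until n bytes have arrived or the peer closes the connection.
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            break  # connection closed before the full message arrived
        data += chunk
    return data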
You may also want to let the Python standard library handle some of the underlying framework for you; consider using a SocketServer, as documented in the standard library.

buffer = []

def recv(sock):
    global buffer
    message = b""
    while True:
        if not (b"\r\n" in b"".join(buffer)):
            chunk = sock.recv(1024)
            if not chunk:
                break
            buffer.append(chunk)
        concat = b"".join(buffer)
        if (b"\r\n" in concat):
            message = concat[:concat.index(b"\r\n")]
            concat = concat[concat.index(b"\r\n") + 2:]
            buffer = [concat]
            break
    return message

def send(sock, data):
    sock.send(data + b"\r\n")
I have tested this, and in my opinion, it works perfectly.
My use case: I have two scripts that exchange data quickly. Every so often a receive buffer ends up with more than one message's worth of data. With this helper, whatever extra is received stays saved, and it keeps receiving until there is a newline within the buffered data; it then splits on that newline, saves the remainder for the next call, and returns the data cleanly separated.
(I translated this, so please excuse me if anything is wrong or misunderstood)
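For reference, here is a hypothetical way to exercise these helpers against a line-based peer (the host and port below are placeholders, not from the original post):
import socket

sock = socket.create_connection(("example.com", 12345))  # placeholder host/port
send(sock, b"hello")   # appends the b"\r\n" terminator
print(recv(sock))      # blocks until one full b"\r\n"-terminated message has arrived
sock.close()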

Related

PyQt readyRead: set text from serial to multiple labels

In PyQt5, I want to read my serial port after writing (requesting a value) to it. I've got it working using readyRead.connect(self.readingReady), but then I'm limited to outputting to only one text field.
The code for requesting parameters sends a string to the serial port. After that, I'm reading the serial port using the readingReady function and printing the result to a plainTextEdit form.
def read_configuration(self):
    if self.serial.isOpen():
        self.serial.write(f"?request1\n".encode())
        self.label_massGainOutput.setText(f"{self.serial.readAll().data().decode()}"[:-2])
        self.serial.write(f"?request2\n".encode())
        self.serial.readyRead.connect(self.readingReady)
        self.serial.write(f"?request3\n".encode())
        self.serial.readyRead.connect(self.readingReady)

def readingReady(self):
    data = self.serial.readAll()
    if len(data) > 0:
        self.plainTextEdit_commandOutput.appendPlainText(f"{data.data().decode()}"[:-2])
    else:
        self.serial.flush()
The problem I have is that I want every answer from the serial port to go to a different plainTextEdit form. The only solution I see now is to write a separate readingReady function for every request (and I have a lot! Only three are shown here). There must be a better way to do this. Maybe using arguments in the readingReady function? Or returning a value from the function that I can redirect to the correct form?
Without using the readyRead signal, all my values are one behind. So the first request prints nothing, the second prints the first etc. and the last is not printed out.
Does someone have a better way to implement this functionality?
QSerialPort has an asynchronous API (readyRead) and a synchronous API (waitForReadyRead). If you only read the configuration once on startup and UI freezing during this process is not critical for you, you can use the synchronous API.
serial.write(f"?request1\n".encode())
serial.waitForReadyRead()
res = serial.read(10)
serial.write(f"?request2\n".encode())
serial.waitForReadyRead()
res = serial.read(10)
This simplification assumes that each response arrives in one chunk and that the message size is at most 10 bytes, which is not guaranteed. Actual code should look something like this:
def isCompleteMessage(res):
    # your code here
    pass

serial.write(f"?request2\n".encode())
res = b''
while not isCompleteMessage(res):
    serial.waitForReadyRead()
    res += serial.read(10)
Alternatively, you can create a worker or thread, open the port and issue the requests in it synchronously, and deliver the responses to the application using signals: no freezes, clear code, and only a slightly more complicated system.
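A rough sketch of that last option (assumptions: PyQt5, an isCompleteMessage() helper like the one above, and a made-up list of request strings; in practice you would open the QSerialPort inside the worker so it lives in the worker's thread):
from PyQt5.QtCore import QThread, pyqtSignal

class SerialWorker(QThread):
    responseReady = pyqtSignal(str, bytes)  # emits (request, response) pairs

    def __init__(self, serial, requests, parent=None):
        super().__init__(parent)
        self.serial = serial        # an already-open QSerialPort
        self.requests = requests    # e.g. ["?request1\n", "?request2\n", "?request3\n"]

    def run(self):
        for req in self.requests:
            self.serial.write(req.encode())
            res = b''
            while not isCompleteMessage(res):      # same helper as above
                self.serial.waitForReadyRead(100)  # wait up to 100 ms for more data
                res += bytes(self.serial.readAll())
            self.responseReady.emit(req, res)

# In the GUI, connect responseReady to a slot that routes each response to the
# right widget (e.g. by looking at the request string), then call worker.start().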

pySerial Capturing a long response

Hi guys, I'm working on a script that will get data from a host using the Data Communications Standard (developed by the Data Communication Standard Committee, Lens Processing Division of The Vision Council) over a serial port, and pass the data on via the Modbus protocol so the device can perform its operations.
Since I don't physically have access to the host machine, I'm trying to develop a secondary script to emulate the host. I am currently at the stage where I need to read a lot of information from the serial port, and I only get part of the data. I was hoping to get the whole string sent by the send_job() function in my host emulator script.
Also, can any of you tell me if this would be a good approach? The only thing the machine is supposed to do is grab two values from the host response and assign them to two Modbus holding registers.
NOTE: the initialization function is hard-coded because it will always be the same, and the actual response data does not matter except for the status. The job request is also hard-coded; I only pass in the job number that I get from a Modbus holding register. The exact logic of how the host resolves this should not matter, I only need to send the job number scanned from the device in this format.
main script:
def request_job_modbus(job):
    data = F'[06][1c]req=33[0d][0a]job={job}[0d][0a][1e][1d]'.encode('ascii')
    writer(data)

def get_job_from_serial():
    response = serial_client.read_all()
    resp = response.decode()
    return resp

# TODO : SEND INIT SEQUENCE ONCE AND VERIFY IF REQUEST status=0
initiation_request()
init_response_status = get_init_status()
print('init method being active')
print(get_init_status())

while True:
    # TODO: get job request data
    job_serial = get_job_from_serial()
    print(job_serial)
host emulation script:
def send_job():
    job_response = '''[06][1c]ans=33[0d]job=30925[0d]status=0;"ok"[0d]do=l[0d]add=;2.50[0d]ar=1[0d]
bcerin=;3.93[0d]bcerup=;-2.97[0d]crib=;64.00[0d]do=l[0d]ellh=;64.00[0d]engmask=;613l[0d]
erdrin=;0.00[0d]erdrup=;10.00[0d]ernrin=;2.00[0d]ernrup=;-8.00[0d]ersgin=;0.00[0d]
ersgup=;4.00[0d]gax=;0.00[0d]gbasex=;-5.30[0d]gcrosx=;-7.96[0d]kprva=;275[0d]kprvm=;0.55[0d]
ldpath=\\uscqx-tcpmain-at\lds\iot\do\800468.sdf[0d]lmatid=;151[0d]lmatname=;f50[0d]
lnam=;vsp_basic_fh15[0d]sgerin=;0.00[0d]sgerup=;0.00[0d]sval=;5.18[0d]text_11=;[0d]
text_12=;[0d]tind=;1.53[0d][1e][1d]'''.encode('ascii')
    writer(job_response)

def get_init_request():
    req = p.readline()
    print(req)
    request = req.decode()[4:11]
    # print(request)
    if request == 'req=ini':
        print('request == req=ini??? <<<<<<< condition met, sending the response')
        send_init_response()
        send_job()

while True:
    # print(get_init_request())
    get_init_request()
What I get on screen from the main script:
init method being active
bce
erd
condition was met init status=0
outside loop
ers
condition was met init status=0
inside while loop
trigger reset <<<--------------------
5782
`:lmatid=;151[0d]lmatname=;f50[0d]
lnam=;vsp_basic_fh15[0d]sgerin=;0.00[0d]sgerup=;0.00[0d]sval=;5.18[0d]text_11=;[0d]
text_12=;[0d]tind=;1.53[0d][1e][1d]
outside loop
condition was met init status=0
outside loop
What I get on screen from the host emulation script:
b'[1c]req=ini[0d][0a][1e][1d]'
request == req=ini??? <<<<<<< condition met, sending the response
b''
b'[06][1c]req=33[0d][0a]job=5782[0d][0a][1e][1d]'
b''
b''
b''
b''
b''
b''
I suspect you're trying to write too much at once to a hardware buffer that is fairly small. Especially when dealing with low-power hardware, assuming you can stuff an entire message into a buffer is often not correct. Even fully modern PCs sometimes have very small buffers for legacy hardware like serial ports. You may find when you switch from development to actual hardware that the RTS and DTR lines need to be used to determine when to send or receive data. This is unfortunately up to whoever designed the hardware, as these lines are often simply ignored.
I would try chunking your data transfer into smaller bits as a test to see if the whole message gets through. This is a quick and dirty first attempt that may have bugs, but it should get you down the right path:
def get_job_from_serial():
    response = b''  # buffer for response
    while True:
        try:
            response += serial_client.read()  # read any available data or wait for timeout
            # this technically could only be reading 1 char at a time, but any
            # remotely modern pc should easily keep up with 9600 baud
        except serial.SerialTimeoutException:  # timeout probably means end of data
            # you could also presumably check the length of the buffer if it's always
            # a fixed length to determine if the entire message has been sent yet.
            break
    return response

def writer(command):
    written = 0  # how many bytes have we actually written
    chunksize = 128  # the smaller you go, the less likely to overflow
                     # a buffer, but the slower you go.
    while written < len(command):
        # you presumably might have to wait for p.dtr() == True or similar
        # though it's just as likely to not have been implemented.
        written += p.write(command[written:written+chunksize])
    p.flush()  # probably don't actually need this
P.S. I had to go to the source code for p.read_all (for some reason I couldn't find it online), and it does not do what I think you expect it to do. The exact code for it is:
def read_all(self):
    """\
    Read all bytes currently available in the buffer of the OS.
    """
    return self.read(self.in_waiting)
There is no concept of waiting for a complete message; it is just shorthand for grabbing everything currently available.
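Since every message in this exchange appears to end with the literal [1e][1d] marker, another option is to keep reading until that terminator shows up. A sketch, assuming serial_client has a read timeout configured and that the marker really is sent as those bracketed characters:
TERMINATOR = b'[1e][1d]'  # end-of-message marker used in the job strings above

def get_job_from_serial():
    response = b''
    while TERMINATOR not in response:
        chunk = serial_client.read(64)  # returns b'' once the read timeout expires
        if not chunk:
            break  # timed out rather than looping forever
        response += chunk
    return response.decode()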

Python 2.7 Script works with breakpoint in Debug mode but not when Run

def mp_worker(row):
    ip = row[0]
    ip_address = ip
    tcp_port = 2112
    buffer_size = 1024
    # Read the reset message sent from the sign when a new connection is established
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        print('Connecting to terminal: {0}'.format(ip_address))
        s.connect((ip_address, tcp_port))
        # Putting a breakpoint on this call in debug makes the script work
        s.send(":08a8RV;")
        #data = recv_timeout(s)
        data = s.recv(buffer_size)
        strip = data.split("$", 1)[-1].rstrip()
        strip = strip[:-1]
        print(strip)
        termStat = [ip_address, strip]
        terminals.append(termStat)
    except Exception as exc:
        print("Exception connecting to: " + ip_address)
        print(exc)
The above code is the section of the script that is causing the problem. It's a pretty simple function that connects to a socket based on a passed in IP from a DB query and receives a response that indicates the hardware's firmware version.
Now, the issue is that when I run it in debug with a breakpoint on the socket call, I get the entire expected response from the hardware, but if I don't have a breakpoint in there, or I just Run the script, I only get part of the expected message. I tried putting a time.sleep() after the send to see if it would get the entire response, and I tried using the commented-out recv_timeout() method, which uses a non-blocking socket and a timeout to try to get an entire response, both with the exact same results.
As another note, this works in a script with everything in one main code block, but I need this part separated into a function so I can use it with the multiprocessing library. I've tried running it on both my local Windows 7 machine and on a Unix server with the same results.
I'll expand and reiterate on what I put into a comment a moment ago. I am still not entirely sure what is behind the different behavior in either scenario (apart from a timing guess, apparently disproved by the attempt to include sleep).
However, it's somewhat immaterial, as stream sockets do not guarantee that you get all the requested data at once, or in chunks of the size requested. This is up to the application to deal with. If the server closes the socket after the full response has been sent, you could replace:
data = s.recv(buffer_size)
with calls to recv() until zero bytes are received; this is the equivalent of getting 0 (EOF) from the syscall:
data = ''
while True:
    received = s.recv(buffer_size)
    if len(received) == 0:
        break
    data += received
If that is not the case, you would have to rely on a fixed or known size (sent at the beginning) for the data you want to consider together, or deal with this at the protocol level (look for characters or sequences used to signal message boundaries).
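For example, if the terminal's reply really does end with a ';' (a guess based on the strip[:-1] in your code), a small helper along those lines could look like this:
def recv_until(sock, terminator=";", buffer_size=1024):
    # Keep reading until the terminator appears or the peer closes the connection.
    data = ""
    while terminator not in data:
        chunk = sock.recv(buffer_size)
        if not chunk:
            break
        data += chunk
    return data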
I just recently found a solution, and thought I'd post it in case anyone else has this issue. I decided to try calling socket.recv() before calling socket.send() and then calling socket.recv() again afterwards, and it seems to have fixed the issue; I couldn't really tell you why it works, though.
data = s.recv(buffer_size)
s.send(":08a8RV;")
data = s.recv(buffer_size)

telnetlib - interactive console vs script

The problem is, my script doesn't work (it prints an empty line), but the same code works in the Python interactive console.
import telnetlib
tn = telnetlib.Telnet("killermud.pl", 4000)
data = tn.read_very_eager()
data = data.decode()
print(data)
tn.close()
What is the reason of such behavior?
I just took a look at the documentation for the read_very_eager method, which says:
Read all data available already queued or on the socket,
without blocking.
It is likely that at the time you call this method there is no data "already available or queued on the socket", so you're getting nothing back. You probably want to use something like the read_until method, which will read data until it finds a specific string. For example:
data = tn.read_until('Podaj swoje imie')
According to the telnetlib documentation, Telnet.read_very_eager() raises EOFError if the connection is closed and no cooked data is available; otherwise it returns '' if no cooked data is available, and it does not block unless in the midst of an IAC sequence.
So if data == "" evaluates to true, it simply means that no cooked data was available yet.
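Putting that together, a sketch of the script using read_until (note that on Python 3 telnetlib expects bytes, and the prompt text here is only a guess at what the MUD sends first):
import telnetlib

tn = telnetlib.Telnet("killermud.pl", 4000)
# Block until the expected prompt arrives (or give up after 10 seconds),
# instead of grabbing whatever happens to be queued already.
data = tn.read_until(b"Podaj swoje imie", timeout=10)
print(data.decode())
tn.close()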

python socketserver occasionally stops sending (and receiving?) messages

I've been experiencing a problem with a socketserver I wrote where the socketserver will seem to stop sending and receiving data on one of the ports it uses (while the other port continues to handle data just fine). Interestingly, after waiting a minute (or up to an hour or so), the socketserver will start sending and receiving messages again without any observable intervention.
I am using the Eventlet socketing framework, Python 2.7, everything running on an Ubuntu AWS instance with external apps opening persistent connections to the socketserver.
From some reading I've been doing, it looks like I may not be implementing my socket server correctly.
According to http://docs.python.org/howto/sockets.html:
A fundamental truth of sockets: messages must either be fixed length (yuck), or be delimited (shrug), or indicate how long they are (much better), or end by shutting down the connection.
I am not entirely sure whether I am using a fixed-length message here (or am I?).
This is how I am receiving my data:
def socket_handler(sock, socket_type):
    logg(1, "socket_handler:initializing")
    while True:
        recv = sock.recv(1024)
        if not recv:
            logg(1, "didn't receive anything")
            break
        if len(recv) > 5:
            logg(1, "socket handler: %s" % recv)
            plug_id, phone_sid, recv_json = parse_json(recv)
            send = 1
            if "success" in recv_json and recv_json["success"] == "true" and socket_type == "plug":
                send = 0
            if send == 1:
                send_wrapper(sock, message_relayer(recv, socket_type))
        else:
            logg(2, 'socket_handler:Ignoring received input: ' + str(recv))
    logg(1, 'Closing socket handle: [%s]' % str(sock))
    sock.shutdown(socket.SHUT_RDWR)
    sock.close()
"sock" is a socket object returned by the listener.accept() function.
The socket_handler function is called like so:
new_connection, address = listener.accept()
...<code omitted>...
pool.spawn_n(socket_handler, new_connection, socket_type)
Does my implementation look incorrect to anyone? Am I basically implementing a fixed length conversation protocol? What can I do to help investigate the issue or make my code more robust?
Thanks in advance,
T
You might be having buffering-related problems if you're requesting more bytes at the server (1024) than you're actually sending from the client.
To fix the problem, what is usually done is to encode the length of the message first, and then the message itself. This way, the receiver can read the length field (which is of known size) and then read the rest of the message based on the decoded length.
Note: The length field is usually as many bytes long as you need in your protocol. Some protocols are 4-byte aligned and use a 32-bit field for this, but if you find that 1 or 2 bytes are enough, you can use that. The point here is that both client and server know the size of this field.
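A minimal sketch of that idea in Python 3, assuming a 4-byte big-endian length prefix (the helper names are just for illustration):
import struct

def send_msg(sock, payload):
    # Prefix each message with its length packed as a 4-byte big-endian integer.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    # Loop over recv() until exactly n bytes have been collected.
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        data += chunk
    return data

def recv_msg(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)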
