Is this a multi-threading race condition problem? - python

In a python3 tkinter project, I am trying to read a continuous stream of data from a serial port (just an Arduino sending a millisecond value over USB).
The code which reads the serial data runs in a separate thread so as to decouple it from the GUI loop.
I need to be able to connect and disconnect from the serial port, which is done from the GUI.
Everything works up until I disconnect from the serial port, when I get the following error.
I was expecting that once serialConnection.close() is called in the main code, the serialStream function would just run 'pass' (line 14) until the connection is opened again; the error suggests it is still running line 12.
Is this a race condition, I wonder, and how do I fix it?
Exception in thread Thread-1 (serialStream):
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/soon/Python/mwe_threaded_serial.py", line 12, in serialStream
    rawReading = str(serialConnection.readline())
  File "/home/soon/.local/lib/python3.10/site-packages/serial/serialposix.py", line 575, in read
    buf = os.read(self.fd, size - len(read))
TypeError: 'NoneType' object cannot be interpreted as an integer
This is from a minimal working example, which looks like this:
import serial
import threading
import time

# Change to correct serial port on your system
serialPort = "/dev/ttyACM0"
serialConnection = serial.Serial()

def serialStream():
    while True:
        if (serialConnection.is_open):
            rawReading = str(serialConnection.readline())
            print(rawReading)
        else:
            pass

def connect_serial():
    global serialConnection, serialPort
    # In case serial connection is already open
    if (serialConnection.is_open):
        serialConnection.close()
        time.sleep(1)
    serialConnection = serial.Serial(
        port=serialPort,
        baudrate=9600,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        bytesize=serial.EIGHTBITS,
        timeout=1)
    time.sleep(1)
    if (not serialConnection.is_open):
        print("Connection failed")
    else:
        print("Connection established")

thread = threading.Thread(target=serialStream)
thread.daemon = True
thread.start()

connect_serial()
time.sleep(5)
serialConnection.close()
time.sleep(5)
connect_serial()
If anyone needs example Arduino code that sends a millis() reading:
void setup() {
  // initialize serial communication at 9600 bits per second:
  Serial.begin(9600);
}

void loop() {
  Serial.println(millis());
  delay(1); // delay in between reads for stability
}
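A note on the race itself: between the is_open check and the readline() call, the main thread's close() can run and clear the port's file descriptor, which matches the TypeError above. One sketch of a fix (my own illustration against the MWE above, untested on real hardware; disconnect_serial is a hypothetical helper) is to guard the check-and-read and the close() with one shared lock, and to sleep briefly instead of spinning:

import threading
import time

serialLock = threading.Lock()  # shared by the reader thread and the GUI/main code

def serialStream():
    while True:
        with serialLock:
            if serialConnection.is_open:
                rawReading = str(serialConnection.readline())
                print(rawReading)
        time.sleep(0.01)  # avoid busy-waiting while disconnected

def disconnect_serial():
    # close() can no longer run between the is_open check and
    # readline() above, because both sides take the same lock
    with serialLock:
        serialConnection.close()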

Related

Socket Error in Python: BlockingIOError: [Errno 35] Resource temporarily unavailable

So here is what's going wrong.
I am trying to implement a simple web server in Python using socket. The single-threaded version runs well, but when I try to use multiple threads with non-blocking mode, it fails with errors. I searched on Stack Overflow and Google, but found no answers.
Here is my code:
# coding:utf-8
import errno
import socket
import threading
import time

EOL1 = b'\n\n'
EOL2 = b'\n\r\n'
body = '''<h1>Hello, world!</h1> - from {thread_name}'''
response_params = [
    'HTTP/1.0 200 OK',
    'Date: Mon, 01 Jan 2022 01:01:01 GMT',
    'Content-Type: text/plain; charset=utf-8',
    'Content-Length: {length}\r\n',
    body,
]
response = '\r\n'.join(response_params)

def handle_connection(conn, addr):
    # print(conn, addr)
    # time.sleep(60)
    request = b""
    while EOL1 not in request and EOL2 not in request:
        request += conn.recv(1024)  # ERROR HERE!
    print(request)
    current_thread = threading.currentThread()
    content_length = len(body.format(thread_name=current_thread.name).encode())
    print(current_thread.name)
    conn.send(response.format(thread_name=current_thread.name,
                              length=content_length).encode())
    conn.close()

def main():
    serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    serversocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    serversocket.bind(('127.0.0.1', 8000))
    serversocket.listen(10)
    print('http://127.0.0.1:8000')
    serversocket.setblocking(0)
    try:
        i = 0
        while True:
            try:
                conn, address = serversocket.accept()
            except socket.error as e:
                if e.args[0] != errno.EAGAIN:
                    raise
                continue
            i += 1
            print(i)
            t = threading.Thread(target=handle_connection, args=(conn, address),
                                 name='thread-%s' % i)
            t.start()
    finally:
        serversocket.close()

if __name__ == '__main__':
    main()
The error message is here:
1
Exception in thread thread-1:
2
Traceback (most recent call last):
File "/Users/tdeveloper/opt/anaconda3/lib/python3.9/threading.py", line 973, in _bootstrap_inner
Exception in thread thread-2:
Traceback (most recent call last):
File "/Users/tdeveloper/opt/anaconda3/lib/python3.9/threading.py", line 973, in _bootstrap_inner
self.run()
self.run()
File "/Users/tdeveloper/opt/anaconda3/lib/python3.9/threading.py", line 910, in run
File "/Users/tdeveloper/opt/anaconda3/lib/python3.9/threading.py", line 910, in run
self._target(*self._args, **self._kwargs)
File "/Users/tdeveloper/Development/Learning/Python_Simple_WSGI/socket/thread_socketserver.py", line 26, in handle_connection
self._target(*self._args, **self._kwargs)
File "/Users/tdeveloper/Development/Learning/Python_Simple_WSGI/socket/thread_socketserver.py", line 26, in handle_connection
request += conn.recv(1024)
BlockingIOError: [Errno 35] Resource temporarily unavailable
request += conn.recv(1024)
BlockingIOError: [Errno 35] Resource temporarily unavailable
This is apparently an issue with the macOS implementation of accept being different from that on other platforms with respect to inheritance of the non-blocking flag. It has nothing to do with threading per se.
Here's a trimmed-down single-threaded test program that demonstrates the behavior.
#!/usr/bin/env python3
import select
import socket
ssocket = socket.socket()
ssocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
ssocket.bind(('127.0.0.1', 7000))
ssocket.listen(10)
ssocket.setblocking(0) # <<<<<<<===========
select.select([ssocket], [], [])
csocket, caddr = ssocket.accept()
csocket.recv(10)
If you run this on Linux and connect to it with nc localhost 7000, the csocket.recv blocks as you're expecting. Run the same program on macOS and the recv immediately triggers the BlockingIOError you're seeing.
Looking at the manual page accept(2) on macOS shows:
[...] creates a new socket with the same properties of socket
In this case, the non-blocking file descriptor flag (O_NONBLOCK) is being "inherited" by the new socket. So if you don't want it, you'll need to disable it on the accepted socket with conn.setblocking(1). Apparently this behavior is due to macOS being descended from the BSD flavor of Unix.
All of that being said, you have no need to disable blocking anyway unless there is more to your actual program than shown. I.e. if your main thread is doing nothing but accepting a connection and then spinning off a separate thread to handle the connection, there's no reason not to let the main thread just block in accept. If you allow the listening socket to remain in blocking mode, the accepted sockets should also be in blocking mode. (By the way, as is, you're wasting a ton of CPU time in that main thread loop: calling accept, trapping the exception, then doing continue to start the loop over.)
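Put concretely, a minimal reshaping of the question's main loop along those lines might look like this (a sketch reusing handle_connection from the question, not a drop-in tested replacement):

import socket
import threading

serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
serversocket.bind(('127.0.0.1', 8000))
serversocket.listen(10)
# no setblocking(0): the main thread simply blocks in accept()

i = 0
while True:
    conn, address = serversocket.accept()
    conn.setblocking(1)  # only needed if the listener stays non-blocking (macOS)
    i += 1
    t = threading.Thread(target=handle_connection, args=(conn, address),
                         name='thread-%s' % i)
    t.start()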
(For clarity, my specs: python 3.7.3 downloaded from https://www.python.org/ftp/python/3.7.3/python-3.7.3-macosx10.9.pkg running on MacOS Catalina 10.15.7)

Could not use os.fork() to bind several processes to one socket server when using asyncio

We all know that using asyncio substantially improves the performance of a socket server, and obviously things get even more awesome if we could take advantage of all the cores in our CPU (maybe via the multiprocessing module or os.fork() etc.).
I'm now trying to build a multicore socket server demo, with an asynchronous socket server listening on each core and all binding to one port, simply by creating an async server and then using os.fork(), letting the processes work competitively.
However, the code that runs fine on a single core runs into trouble when I try to fork. It seems like there's some problem with registering the same file descriptors from different processes in the epoll selector module.
I'm showing some code below; can anyone help me out?
Here's simple, logically clear code for an echo server using asyncio:
import os
import asyncio  # ,uvloop
from socket import *

# handler sends back the incoming message directly
async def handler(loop, client):
    with client:
        while True:
            data = await loop.sock_recv(client, 64)
            if not data:
                break
            await loop.sock_sendall(client, data)

# create tcp server
async def create_server(loop):
    sock = socket(AF_INET, SOCK_STREAM)
    sock.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    sock.bind(('', 25000))
    sock.listen()
    sock.setblocking(False)
    return sock

# whenever we accept a request, create a handler task in the event loop
async def serving(loop, sock):
    while True:
        client, addr = await loop.sock_accept(sock)
        loop.create_task(handler(loop, client))

loop = asyncio.get_event_loop()
sock = loop.run_until_complete(create_server(loop))
loop.create_task(serving(loop, sock))
loop.run_forever()
It works fine until I try to fork, after the socket is bound and before the server starts serving. (The same logic works fine in synchronous, threading-based code.)
When I try this:
loop = asyncio.get_event_loop()
sock = loop.run_until_complete(create_server(loop))

from multiprocessing import cpu_count
for num in range(cpu_count() - 1):
    pid = os.fork()
    if pid <= 0:  # fork as many processes as I have CPU cores
        break

loop.create_task(serving(loop, sock))
loop.run_forever()
Theoretically the forked processes are bound to the same socket and run in the same event loop, so they should just work fine, right?
However I'm getting these error messages:
Task exception was never retrieved
future: <Task finished coro=<serving() done, defined at /home/new/LinuxDemo/temp1.py:21> exception=FileExistsError(17, 'File exists')>
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/asyncio/selector_events.py", line 262, in _add_reader
    key = self._selector.get_key(fd)
  File "/usr/local/lib/python3.7/selectors.py", line 192, in get_key
    raise KeyError("{!r} is not registered".format(fileobj)) from None
KeyError: '6 is not registered'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/test/temp1.py", line 23, in serving
    client, addr = await loop.sock_accept(sock)
  File "/usr/local/lib/python3.7/asyncio/selector_events.py", line 525, in sock_accept
    self._sock_accept(fut, False, sock)
  File "/usr/local/lib/python3.7/asyncio/selector_events.py", line 538, in _sock_accept
    self.add_reader(fd, self._sock_accept, fut, True, sock)
  File "/usr/local/lib/python3.7/asyncio/selector_events.py", line 335, in add_reader
    return self._add_reader(fd, callback, *args)
  File "/usr/local/lib/python3.7/asyncio/selector_events.py", line 265, in _add_reader
    (handle, None))
  File "/usr/local/lib/python3.7/selectors.py", line 359, in register
    self._selector.register(key.fd, poller_events)
FileExistsError: [Errno 17] File exists
Python version is 3.7.3.
I'm totally confused about what's going on. Could anybody help? Thanks!
According to the tracker issue, it is not supported to fork an existing asyncio event loop and attempt to use it from multiple processes. However, according to Yury's comment on the same issue, multi-processing can be implemented by forking before starting a loop, therefore running fully independent asyncio loops in each child.
Your code actually confirms this possibility: while create_server is async def, it doesn't await anything, nor does it use the loop argument. So we can implement Yury's approach by making create_server a regular function, removing the loop argument, calling it before os.fork(), and only running event loops after forking:
import os, asyncio, socket, multiprocessing

async def handler(loop, client):
    with client:
        while True:
            data = await loop.sock_recv(client, 64)
            if not data:
                break
            await loop.sock_sendall(client, data)

# create tcp server
def create_server():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('', 25000))
    sock.listen()
    sock.setblocking(False)
    return sock

# whenever we accept a request, create a handler task in the event loop
async def serving(loop, sock):
    while True:
        client, addr = await loop.sock_accept(sock)
        loop.create_task(handler(loop, client))

sock = create_server()

for num in range(multiprocessing.cpu_count() - 1):
    pid = os.fork()
    if pid <= 0:  # fork as many processes as I have CPU cores
        break

loop = asyncio.get_event_loop()
loop.create_task(serving(loop, sock))
loop.run_forever()
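To sanity-check the forked server, a tiny client like this (my addition, not part of the original answer) can be pointed at port 25000:

import socket

# connect, send one message, and expect it echoed back
with socket.create_connection(('127.0.0.1', 25000)) as conn:
    conn.sendall(b'hello')
    print(conn.recv(64))  # should print b'hello'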

Python multiprocessing Manager OSError "Only one usage of each socket address"

I am creating a communication platform in python (3.4.4) and using the multiprocessing.managers.BaseManager class. I have isolated the problem to the code below.
The intention is to have a ROVManager(role='server') instance running in one process on the main computer and providing read/write capabilities to the system dictionary for multiple ROVManager(role='client') instances running on the same computer and a ROV (remotely operated vehicle) connected to the same network. This way, multiple clients/processes can do different tasks like reading sensor values, moving motors, printing, logging etc, all using the same dictionary. start_reader() below is one of those clients.
Code
from multiprocessing.managers import BaseManager
import multiprocessing as mp
import sys

class ROVManager(BaseManager):
    def __init__(self, role, address, port, authkey=b'abc'):
        super(ROVManager, self).__init__(address=(address, port),
                                         authkey=authkey)
        if role == 'server':
            self.system = {'shutdown': False}
            self.register('system', callable=lambda: self.system)
            server = self.get_server()
            server.serve_forever()
        elif role == 'client':
            self.register('system')
            self.connect()

def start_server(server_ip, port_var):
    print('starting server')
    ROVManager(role='server', address=server_ip, port=port_var)

def start_reader(server_ip, port_var):
    print('starting reader')
    mgr = ROVManager(role='client', address=server_ip, port=port_var)
    i = 0
    while not mgr.system().get('shutdown'):
        sys.stdout.write('\rTotal while loops: {}'.format(i))
        i += 1

if __name__ == '__main__':
    server_p = mp.Process(target=start_server, args=('0.0.0.0', 5050))
    reader_p = mp.Process(target=start_reader, args=('127.0.0.1', 5050))
    server_p.start()
    reader_p.start()
    while True:
        # Check system status, restart processes etc here
        pass
Error
This results in the following output and error:
starting server
starting reader
Total while loops: 15151
Process Process-2:
Traceback (most recent call last):
  File "c:\python34\Lib\multiprocessing\process.py", line 254, in _bootstrap
    self.run()
  File "c:\python34\Lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "C:\git\eduROV\error_test.py", line 29, in start_reader
    while not mgr.system().get('shutdown'):
  File "c:\python34\Lib\multiprocessing\managers.py", line 640, in temp
    token, exp = self._create(typeid, *args, **kwds)
  File "c:\python34\Lib\multiprocessing\managers.py", line 532, in _create
    conn = self._Client(self._address, authkey=self._authkey)
  File "c:\python34\Lib\multiprocessing\connection.py", line 496, in Client
    c = SocketClient(address)
  File "c:\python34\Lib\multiprocessing\connection.py", line 629, in SocketClient
    s.connect(address)
OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
My research
Total while loops are usually in the range 15000-16000. From my understanding it seems like a socket is created and terminated each time mgr.system().get('shutdown') is called. Windows then runs out of available sockets. I can't seem to find a way to set socket.SO_REUSEADDR.
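One experiment that might confirm this (just a sketch of the idea, not tested): fetch the system proxy once and reuse it, so the reader keeps a single connection open instead of creating a new one per call:

def start_reader(server_ip, port_var):
    print('starting reader')
    mgr = ROVManager(role='client', address=server_ip, port=port_var)
    system = mgr.system()  # create the proxy (and its connection) once
    i = 0
    while not system.get('shutdown'):
        sys.stdout.write('\rTotal while loops: {}'.format(i))
        i += 1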
Is there a way of solving this, or aren't Managers made for this kind of communication? Thanks :)
As the error suggests, Only one usage of each socket address: in general, you can/should bind only a single process to a socket (unless you design your application accordingly, by passing the SO_REUSEADDR option while creating the socket). These lines
server_p = mp.Process(target=start_server, args=('0.0.0.0', 5050))
reader_p = mp.Process(target=start_reader, args=('127.0.0.1', 5050))
create two processes on the same port 5050, hence the error.
You can refer here to learn how to use SO_REUSEADDR and its implications, but I am quoting the main part, which should get you going:
The second socket calls setsockopt with the optname parameter set to
SO_REUSEADDR and the optval parameter set to a boolean value of TRUE
before calling bind on the same port as the original socket. Once the
second socket has successfully bound, the behavior for all sockets
bound to that port is indeterminate. For example, if all of the
sockets on the same port provide TCP service, any incoming TCP
connection requests over the port cannot be guaranteed to be handled
by the correct socket — the behavior is non-deterministic.
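For completeness, setting the option looks like this in Python (a sketch; mind the quoted caveat about indeterminate behavior before relying on it):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# must be set before bind() to have any effect
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('0.0.0.0', 5050))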

Python P2P messaging

So the goal of this code is to have a point-to-point connection. One client will host the connection and the other will just connect, and they should be able to talk back and forth freely. I am trying to write the code myself, but I'm new to socket programming and threading. I don't really want to use a library that does all of the networking for me just yet. Any ideas to push me in the right direction? Can I have two threads communicate on the same port? I appreciate the input.
To test this you would have to run two instances: the first terminal will take an input (choose "S" for server), and in the second, type anything (or nothing) to act as the client side. I am testing this code to incorporate it into a larger program I am working on, so the finished product will be more user friendly!
I'm running into the following errors:
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "test.py", line 11, in recvthread
    data = client.recv(size)
  File "/usr/lib/python2.7/socket.py", line 174, in _dummy
    raise error(EBADF, 'Bad file descriptor')
error: [Errno 9] Bad file descriptor
Here is the code trying to incorporate threading:
import socket
import threading
from threading import Thread
import select
import sys

def recvthread(mssg):
    print mssg
    if (mssg == 1):
        while True:
            data = client.recv(size)
            print "[Other]:" + data
    if (mssg == 2):
        while True:
            data = s.recv(size)
            print "[Other]:" + data

def sendthread(mssg):
    print mssg
    if (mssg == 1):
        while True:
            data = raw_input("[ME]>")
            client.send(data)
    if (mssg == 2):
        while True:
            data = raw_input("[ME]>")
            s.send(data)

host = 'localhost'
port = 2000
size = 1024
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
x = raw_input()
test = 'S'

if (x == test):
    s.bind((host, port))
    s.listen(5)
    client, address = s.accept()
    t1 = Thread(target=recvthread, args=(1,))
    t2 = Thread(target=sendthread, args=(1,))
    t1.start()
    t2.start()
    client.close()
else:
    s.connect((host, port))
    t1 = Thread(target=recvthread, args=(2,))
    t2 = Thread(target=sendthread, args=(2,))
    t1.start()
    t2.start()
    s.close()
The issue was me not completely understanding the threading. If I wanted the code to wait for the threads to finish, I needed to use join(). Rookie mistake... Thanks for the help!
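For future readers, the corrected tail of the server branch would look roughly like this (a sketch based on the fix described above):

t1.start()
t2.start()
# wait for both threads to finish before tearing down the socket;
# closing it immediately is what produced the bad file descriptor
t1.join()
t2.join()
client.close()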

PySerial write() instant timeout

EDIT
I found out what the problem was and have answered my own question.
Original question below this line
I have a serial bridge between COM4 and COM5 implemented in software (specifically, HDD's Free Virtual Serial Configuration Utility).
I have two different python scripts starting up in two different instances of PowerShell, the receiver first:
import serial
receive = serial.Serial(port = 'COM5', baudrate = 9600)
text = receive.read(100)
receive.close()
print text
And then the sender:
import serial
send = serial.Serial(port = 'COM4', baudrate = 9600, timeout = 0)
send.write("Hello")
send.close()
When starting the sender script, the receiver script gets the sent message (so communication is clearly established), but the sender script immediately ends with an error:
Traceback (most recent call last):
  File ".\sending.py", line 3, in <module>
    send.writelines("Hello")
  File "C:\Python27\lib\site-packages\serial\serialwin32.py", line 270, in write
    raise writeTimeoutError
serial.serialutil.SerialTimeoutException: Write timeout
I get the same error when I change the sender script to
send = serial.Serial(port = 'COM4', baudrate = 9600)
So my question is: what exactly is timing out? How do I prevent that from happening? I mean, the data IS being sent, so I could probably just put the whole thing in a try/except (and do nothing) block, but that seems like a bad solution in the long run.
The clue is in the error message[1]
File "C:\Python27\lib\site-packages\serial\serialwin32.py", line 270, in write
raise writeTimeoutError
so we open that file and find:
if self._writeTimeout != 0:  # if blocking (None) or w/ write timeout (>0)
    # Wait for the write to complete.
    #~ win32.WaitForSingleObject(self._overlappedWrite.hEvent, win32.INFINITE)
    err = win32.GetOverlappedResult(self.hComPort, self._overlappedWrite, ctypes.byref(n), True)
    if n.value != len(data):
        raise writeTimeoutError
Read that first conditional again:
if self._writeTimeout != 0:
so let us rewrite our code from before
send = serial.Serial(port = 'COM4', baudrate = 9600, timeout = 0)
becomes
send = serial.Serial(port = 'COM4', baudrate = 9600, writeTimeout = 0)
and et voilà: no exception.
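(One caveat from my side: in pyserial 3.x the constructor keyword is spelled write_timeout; writeTimeout is the older 2.x spelling used here.)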
[1] Well Designed Error Messages? That's new!
The problem may be that the interface tries to comply with the RTS, CTS, DSR, or DTR signals. It is possible that if they are not properly virtually connected, they can mysteriously affect communication through a timeout.
I would also recommend looking at the configuration of the virtual serial bridge used.
One solution may be to ignore their influence by passing rtscts=False and/or dsrdtr=False when opening the serial port in Python.
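In code, that would look something like this (a sketch against the sender script above, not a verified fix):

import serial

# open the sender port while ignoring hardware flow-control signals
send = serial.Serial(port='COM4', baudrate=9600,
                     writeTimeout=0, rtscts=False, dsrdtr=False)
send.write("Hello")
send.close()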
As an alternative solution for sniffing communication I could use hub4com, where I used the parameter --octs=off, for example in this way (the virtual ports had to be created correctly beforehand):
hub4com --baud=115200 --route=0:All --route=1:All --route=2:All --no-default-fc-route=All:All --octs=off \\.\COM1 \\.\CNCB0 \\.\CNCB1
