I am creating a communication platform in Python (3.4.4) using the multiprocessing.managers.BaseManager class. I have isolated the problem to the code below.
The intention is to have a ROVManager(role='server') instance running in one process on the main computer, providing read/write access to the system dictionary for multiple ROVManager(role='client') instances running on the same computer and on a ROV (remotely operated vehicle) connected to the same network. This way, multiple clients/processes can perform different tasks such as reading sensor values, moving motors, printing and logging, all using the same dictionary. start_reader() below is one of those clients.
Code
from multiprocessing.managers import BaseManager
import multiprocessing as mp
import sys
class ROVManager(BaseManager):
    def __init__(self, role, address, port, authkey=b'abc'):
        super(ROVManager, self).__init__(address=(address, port),
                                         authkey=authkey)
        if role == 'server':
            self.system = {'shutdown': False}
            self.register('system', callable=lambda: self.system)
            server = self.get_server()
            server.serve_forever()
        elif role == 'client':
            self.register('system')
            self.connect()
def start_server(server_ip, port_var):
    print('starting server')
    ROVManager(role='server', address=server_ip, port=port_var)

def start_reader(server_ip, port_var):
    print('starting reader')
    mgr = ROVManager(role='client', address=server_ip, port=port_var)
    i = 0
    while not mgr.system().get('shutdown'):
        sys.stdout.write('\rTotal while loops: {}'.format(i))
        i += 1
if __name__ == '__main__':
    server_p = mp.Process(target=start_server, args=('0.0.0.0', 5050))
    reader_p = mp.Process(target=start_reader, args=('127.0.0.1', 5050))
    server_p.start()
    reader_p.start()
    while True:
        # Check system status, restart processes etc. here
        pass
Error
This results in the following output and error:
starting server
starting reader
Total while loops: 15151
Process Process-2:
Traceback (most recent call last):
File "c:\python34\Lib\multiprocessing\process.py", line 254, in _bootstrap
self.run()
File "c:\python34\Lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "C:\git\eduROV\error_test.py", line 29, in start_reader
while not mgr.system().get('shutdown'):
File "c:\python34\Lib\multiprocessing\managers.py", line 640, in temp
token, exp = self._create(typeid, *args, **kwds)
File "c:\python34\Lib\multiprocessing\managers.py", line 532, in _create
conn = self._Client(self._address, authkey=self._authkey)
File "c:\python34\Lib\multiprocessing\connection.py", line 496, in Client
c = SocketClient(address)
File "c:\python34\Lib\multiprocessing\connection.py", line 629, in SocketClient
s.connect(address)
OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
My research
Total while loops is usually in the range 15000-16000. From my understanding, it seems a socket is created and terminated each time mgr.system().get('shutdown') is called, and Windows then runs out of available sockets. I can't seem to find a way to set socket.SO_REUSEADDR.
Is there a way of solving this, or aren't Managers made for this kind of communication? Thanks :)
As the error suggests, "Only one usage of each socket address": in general, you can/should bind only a single process to a given socket address (unless you design your application accordingly, by passing the SO_REUSEADDR option while creating the socket). These lines
server_p = mp.Process(target=start_server, args=('0.0.0.0', 5050))
reader_p = mp.Process(target=start_reader, args=('127.0.0.1', 5050))
create two processes on the same port 5050, hence the error.
You can refer here to learn how to use SO_REUSEADDR and its implications, but I am quoting the main part, which should get you going:
The second socket calls setsockopt with the optname parameter set to
SO_REUSEADDR and the optval parameter set to a boolean value of TRUE
before calling bind on the same port as the original socket. Once the
second socket has successfully bound, the behavior for all sockets
bound to that port is indeterminate. For example, if all of the
sockets on the same port provide TCP service, any incoming TCP
connection requests over the port cannot be guaranteed to be handled
by the correct socket — the behavior is non-deterministic.
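For illustration, setting the option from Python looks like this (a minimal sketch; binding to port 0 lets the OS pick a free port, so the example is safe to run):

```python
import socket

# SO_REUSEADDR must be set before bind() for it to have any effect.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('127.0.0.1', 0))  # port 0: let the OS choose a free port

# A non-zero value confirms the option is enabled on this socket.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR))
sock.close()
```

Note that the quoted warning applies: once two sockets bind the same port this way, which one receives an incoming connection is indeterminate.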
Related
In my code I create threads which call publish.single multiple times over an MQTT connection. However, this error is raised and I cannot understand or find its origin. The only place it mentions my code is line 75, in send_on_sensor.
Exception in thread Thread-639:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 917, in _bootstrap_inner
self.run()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "/Users//PycharmProjects//V3_multiTops/mt_GenPub.py", line 75, in send_on_sensor
publish.single(topic, payload, hostname=hostname)
File "/Users//PycharmProjects//venv/lib/python3.7/site-packages/paho/mqtt/publish.py", line 223, in single
protocol, transport)
File "/Users//PycharmProjects//venv/lib/python3.7/site-packages/paho/mqtt/publish.py", line 159, in multiple
client.connect(hostname, port, keepalive)
File "/Users//PycharmProjects//venv/lib/python3.7/site-packages/paho/mqtt/client.py", line 839, in connect
return self.reconnect()
File "/Users//PycharmProjects//venv/lib/python3.7/site-packages/paho/mqtt/client.py", line 962, in reconnect
sock = socket.create_connection((self._host, self._port), source_address=(self._bind_address, 0))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py", line 727, in create_connection
raise err
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py", line 716, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 61] Connection refused
This is the code part mentioned. The line 75 in question is the one with time.sleep(delay). This method is called on a new thread whenever a new set of data (as a queue of points) is to be sent.
def send_on_sensor(q, topic, delay):
    while not q.empty():
        payload = json.dumps(q.get())
        publish.single(topic, payload, hostname=hostname)
        time.sleep(delay)
I get the feeling I am doing something that is not "threadsafe". Also, this issue occurs especially when the delay is a short interval (< 1 sec). From my output I can see that the next set of data (100 points) starts sending in a new thread before the first one has finished sending. I can fix that, and also this error, by increasing the time interval between two sets of data. E.g. if I determine the delay between sets using the relation set_delay = 400 * point_delay, I can safely use a delay of 0.1 secs. However, the same relation won't work for smaller delays, so this solution really does not satisfy me.
EDIT
This is the method which creates the threads:
def send_dataset(data, labels, secs=0):
    qs = []
    for i in range(8):
        qs.append(queue.Queue())
    for value in data:
        msg = {
            "key": value,
        }
        # c is set accordingly
        qs[c].put(msg)
    for q in qs:
        topic = sensors[qs.index(q)]
        t = threading.Thread(target=send_on_sensor, args=(q, topic, secs))
        t.start()
        time.sleep(secs)
And this is where I start all methods off:
output_interval = 0.01
while True:
    X, y = give_dataset()
    send_dataset(X, y, output_interval)
    time.sleep(output_interval * 2000)
Even though you added extra code, it doesn't reveal much. However, I have experience with something similar happening to me. I was building a heavily threaded app with MQTT, and it's quite safe. Not totally, but it is.
The reason you get the error when lowering the delay is that you have ONE client. When publishing a message (I can't be sure, because I don't see your code), you connect, send the message and disconnect. Since you are threading this process, you most probably send one message (still in progress) while you are about to publish a new one in a new thread. However, the first thread finishes and disconnects the client. The new thread then tries to publish, but can't, because the previous thread disconnected the client.
Solution:
1) Don't disconnect the client upon publishing.
2) Risky, and you need more code: for every publish, create a new client, but be sure to handle this correctly. That means: create a client, publish and disconnect, again and again, but make sure you close the connections correctly and delete the clients, so you don't store dead clients.
3) A refinement of 2): try to make a function that does it all (create a client, connect, publish) and dies at the end. If you thread such a function, I guess you will not have to take care of the problems arising in solution 2).
Update:
In case your problem is something else, I still think it's not because of the threads themselves, but because multiple threads are trying to control something that should be controlled by only one thread, like the client object.
Update: template code
Be aware that this is my old code and I don't use it anymore, because my applications need a particular thread approach and so on, so I rewrite it for each application individually. But this one works like a charm for non-threaded apps, and possibly for threaded ones too. It can publish only with qos=0.
import paho.mqtt.client as mqtt
import json

# Define Variables
MQTT_BROKER = ""
MQTT_PORT = 1883
MQTT_KEEPALIVE_INTERVAL = 5
MQTT_TOPIC = ""

class pub:
    def __init__(self, MQTT_BROKER, MQTT_PORT, MQTT_KEEPALIVE_INTERVAL, MQTT_TOPIC, transport=''):
        self.MQTT_TOPIC = MQTT_TOPIC
        self.MQTT_BROKER = MQTT_BROKER
        self.MQTT_PORT = MQTT_PORT
        self.MQTT_KEEPALIVE_INTERVAL = MQTT_KEEPALIVE_INTERVAL
        # Initiate MQTT Client
        if transport == 'websockets':
            self.mqttc = mqtt.Client(transport='websockets')
        else:
            self.mqttc = mqtt.Client()
        # Register Event Handlers
        self.mqttc.on_publish = self.on_publish
        self.mqttc.on_connect = self.on_connect
        self.connect()

    # Define on_connect event Handler
    def on_connect(self, mosq, obj, rc):
        print("mqtt.thingstud.io")

    # Define on_publish event Handler
    def on_publish(self, client, userdata, mid):
        print("Message Published...")

    def publish(self, MQTT_MSG):
        MQTT_MSG = json.dumps(MQTT_MSG)
        # Publish message to MQTT Topic
        self.mqttc.publish(self.MQTT_TOPIC, MQTT_MSG)

    def connect(self):
        self.mqttc.connect(self.MQTT_BROKER, self.MQTT_PORT, self.MQTT_KEEPALIVE_INTERVAL)

    # Disconnect from MQTT_Broker
    def disconnect(self):
        self.mqttc.disconnect()

p = pub(MQTT_BROKER, MQTT_PORT, MQTT_KEEPALIVE_INTERVAL, MQTT_TOPIC)
p.publish('some messages')
p.publish('more messages')
Note that on object creation I connect automatically, but I don't disconnect. That is something you have to do manually.
I suggest you create as many pub objects as you have sensors and publish with them.
I have recently been introduced to the threading module in Python, so I decided to play around with it. I opened a Python socket server on port 7000:
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('127.0.0.1', 7000))
s.listen(1)
c, a = s.accept()
and made my client try connecting to every port from 1 to 65535 until it establishes a connection on port 7000. Obviously this would take very long, so I multi-threaded it:
import threading
import socket
import sys

host = '127.0.0.1'
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

def conn(port):
    try:
        s.connect((host, port))
        print 'Connected'
        sys.exit(1)
    except:
        pass

for i in range(65535):
    t = threading.Thread(target=conn, args=(i,))
    t.start()
When the client connects, it's supposed to print the message 'Connected'. However, when debugging I noticed some very strange behavior. Sometimes the program would print that it connected; other times it would fail to print that it was connected to the server and would just terminate without printing anything.
It's obviously a problem with the threads, as when I make the client connect to port 7000 only, it works 100% of the time. However, threading it through all 65535 ports causes the client to sometimes not print anything. What is the reason for this, and how can I prevent or circumvent it?
Edit:
I realized that making it try to connect to a smaller number of ports (ports 1-10 and port 7000) gives it a higher chance of printing 'Connected'.
If connect() fails, consider the state of the socket as unspecified. Portable applications should close the socket and create a new one for reconnecting.
>>> import socket
>>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> s.connect(('127.0.0.1', 6999))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 61] Connection refused
>>>
>>> s.connect(('127.0.0.1', 7000))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 22] Invalid argument
>>>
>>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> s.connect(('127.0.0.1', 7000))
# Connect success.
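Applied to the scanner, that advice means creating a brand-new socket for every attempt and always closing it afterwards. A minimal sketch (the function name is ours):

```python
import socket

def try_port(host, port, timeout=0.5):
    # A socket whose connect() failed is in an unspecified state,
    # so use a fresh socket per attempt and always close it.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True
    except socket.error:
        return False
    finally:
        s.close()
```

Each call owns its socket, so a refused connection on one port cannot poison the attempt on the next.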
65535 is a huge number.
Any performance gain you might get will be dwarfed by this number of threads. The OS has to plan processor time for each of these threads, and it takes time to switch between them. In the worst case (and 65k threads is pretty much that case), all the OS does is switch threads, with little real work in between. 2-8 threads (or just one thread per physical core) would be much more performant.
Also, make sure you wait until your threads exit, and don't silence errors with except: pass. I bet there are a lot of interesting things happening there. At least [selectively] log these exceptions somewhere.
Edit. Use join in order to make sure that all spawned threads exit before the main thread:
threads = [threading.Thread(target=conn, args=(i,)) for i in range(8)]
for thread in threads:
    thread.start()
# do whatever
for thread in threads:
    thread.join()
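Putting both points together (a small fixed pool, a fresh socket per attempt, and joining every thread), one possible sketch looks like this; the names are ours:

```python
import socket
import threading
try:
    import queue           # Python 3
except ImportError:
    import Queue as queue  # Python 2

def scan(host, ports, num_workers=8):
    # Distribute the ports over a small fixed pool of worker threads
    # instead of spawning one thread per port.
    work = queue.Queue()
    for port in ports:
        work.put(port)
    open_ports = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                port = work.get_nowait()
            except queue.Empty:
                return  # no work left: let the thread exit
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.5)
            try:
                s.connect((host, port))
                with lock:
                    open_ports.append(port)
            except socket.error:
                pass  # closed or filtered port; a real tool would log this
            finally:
                s.close()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait for all workers before reporting results
    return sorted(open_ports)
```

Because every result is appended under a lock and every worker is joined, the returned list is complete and the program cannot terminate before the scan finishes.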
I am trying to run a pyro4 server with a custom event loop on a Raspberry Pi running Raspbian 8 (jessie). When I create a nameserver using the hostname obtained from socket.gethostname(), specifically 'raspberrypi', my client script cannot find the nameserver. When I use 'localhost' as the hostname, my client script is able to find the nameserver. In /etc/hosts, 'raspberrypi' is bound to 127.0.1.1, while 'localhost' is obviously bound to 127.0.0.1. I had thought that both of these addresses were bound to the loopback interface, so I don't understand why one should work and not the other.
For what it's worth, after some digging in the pyro4 code, it looks like at l.463 of Pyro4.naming.py the call to proxy.ping() fails with 127.0.1.1 but not with 127.0.0.1, and this is ultimately what triggers the failure with the former address. Not being an expert in Pyro, it isn't clear to me whether this behavior is expected. Any thoughts? I assume this must be a common problem, because most (all?) flavors of Debian include separate lines in /etc/hosts for these two addresses.
I have attached code below that reproduces the problem. This is basically just a slightly modified version of the "eventloop" example that ships with pyro.
server.py:
import socket
import select
import sys
import Pyro4.core
import Pyro4.naming
import MotorControl

Pyro4.config.SERVERTYPE = "thread"
hostname = socket.gethostname()

print("initializing services... servertype=%s" % Pyro4.config.SERVERTYPE)
# start a name server with broadcast server as well
nameserverUri, nameserverDaemon, broadcastServer = Pyro4.naming.startNS(host=hostname)
pyrodaemon = Pyro4.core.Daemon(host=hostname)
motorcontroller = MotorControl.MotorControl()
serveruri = pyrodaemon.register(motorcontroller)
nameserverDaemon.nameserver.register("example.embedded.server", serveruri)

# below is our custom event loop.
while True:
    nameserverSockets = set(nameserverDaemon.sockets)
    pyroSockets = set(pyrodaemon.sockets)
    rs = []
    rs.extend(nameserverSockets)
    rs.extend(pyroSockets)
    rs, _, _ = select.select(rs, [], [], 0.001)
    eventsForNameserver = []
    eventsForDaemon = []
    for s in rs:
        if s in nameserverSockets:
            eventsForNameserver.append(s)
        elif s in pyroSockets:
            eventsForDaemon.append(s)
    if eventsForNameserver:
        nameserverDaemon.events(eventsForNameserver)
    if eventsForDaemon:
        pyrodaemon.events(eventsForDaemon)
    motorcontroller.increment_count()

nameserverDaemon.close()
broadcastServer.close()
pyrodaemon.close()
client.py:
from __future__ import print_function
import Pyro4

proxy = Pyro4.core.Proxy("PYRONAME:example.embedded.server")
print("count = %d" % proxy.get_count())
MotorControl.py:
class MotorControl(object):
    def __init__(self):
        self.switches = 0

    def get_count(self):
        return self.switches

    def increment_count(self):
        self.switches = self.switches + 1
error:
Traceback (most recent call last):
File "pyroclient.py", line 5, in <module>
print("count = %d" % proxy.get_count())
File "/usr/local/lib/python2.7/dist-packages/Pyro4/core.py", line 248, in __getattr__
self._pyroGetMetadata()
File "/usr/local/lib/python2.7/dist-packages/Pyro4/core.py", line 548, in _pyroGetMetadata
self.__pyroCreateConnection()
File "/usr/local/lib/python2.7/dist-packages/Pyro4/core.py", line 456, in __pyroCreateConnection
uri = resolve(self._pyroUri, self._pyroHmacKey)
File "/usr/local/lib/python2.7/dist-packages/Pyro4/naming.py", line 548, in resolve
nameserver = locateNS(uri.host, uri.port, hmac_key=hmac_key)
File "/usr/local/lib/python2.7/dist-packages/Pyro4/naming.py", line 528, in locateNS
raise e
Pyro4.errors.NamingError: Failed to locate the nameserver
Pyro's name server lookup relies on two things:
broadcast lookup
direct lookup by hostname/ip-address
The first is not available when you bind the name server on the loopback adapter (loopback doesn't support broadcast sockets). So we're left with the second one.
The answer to your question is then simple: the direct lookup is done on the value of the NS_HOST config item, which is by default set to 'localhost'. As localhost resolves to 127.0.0.1, it will never connect to 127.0.1.1.
Suggestion: bind the name server on 0.0.0.0 or "" (empty hostname); it should then be able to start a broadcast responder as well, and your clients won't have any problem locating it.
Alternatively, simply set NS_HOST to 127.0.1.1 (or the hostname of your box) for your clients, and they should be able to locate the name server as well, provided it's bound on 127.0.1.1.
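In code, the two options look roughly like this (a sketch, assuming Pyro4 is installed; pick one side or the other):

```python
import Pyro4
import Pyro4.naming

# Option 1, server side: bind the name server on all interfaces so
# the broadcast responder can start and clients can discover it.
uri, ns_daemon, broadcast_server = Pyro4.naming.startNS(host="0.0.0.0")

# Option 2, client side: tell lookups to use the box's address
# instead of the default 'localhost' (which resolves to 127.0.0.1).
Pyro4.config.NS_HOST = "127.0.1.1"
```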
I wrote a simple program in Python with asyncore and threading. I want to implement an asynchronous client without blocking anything, as in this question:
How to handle asyncore within a class in python, without blocking anything?
Here is my code:
import socket, threading, time, asyncore

class Client(asyncore.dispatcher):
    def __init__(self, host, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, port))

mysocket = Client("", 8888)
onethread = threading.Thread(target=asyncore.loop)
onethread.start()
# time.sleep(5)
mysocket.send("asfas\n")
input("End")
Now an exception will be thrown in send("asfas\n"), because I didn't open any server.
I thought the exception in the send function would call the handle_error function and not affect the main program, but most of the time it crashes the whole program, and sometimes it works! And if I uncomment the time.sleep(5), it only crashes the thread. Why does it behave like this? Could I write a program that won't crash the whole program and doesn't use time.sleep()? Thanks!
Error message:
Traceback (most recent call last):
File "thread.py", line 13, in <module>
mysocket.send("asfas\n")
File "/usr/lib/python2.7/asyncore.py", line 374, in send
result = self.socket.send(data)
socket.error: [Errno 111] Connection refused
First of all, I would suggest not using the old asyncore module, but looking into more modern and more efficient solutions: gevent, or the asyncio module (Python 3.4), which has been backported somehow to Python 2.
If you want to use asyncore, then you have to know:
be careful with sockets created in one thread (the main thread, in your case) and dispatched by another thread (managed by "onethread", in your case); sockets cannot be shared like this between threads, as they are not threadsafe objects by themselves
for the same reason, you can't use the global map created by default in the asyncore module; you have to create one map per thread
when connecting to a server, the connection may not be immediate, and you have to wait for it to be established (hence your "sleep 5"). When using asyncore, handle_write is called when the socket is ready to send data.
Here is a newer version of your code, hopefully it fixes those issues:
import socket, threading, time, asyncore

class Client(threading.Thread, asyncore.dispatcher):
    def __init__(self, host, port):
        threading.Thread.__init__(self)
        self.daemon = True
        self._thread_sockets = dict()
        asyncore.dispatcher.__init__(self, map=self._thread_sockets)
        self.host = host
        self.port = port
        self.output_buffer = []
        self.start()

    def run(self):
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((self.host, self.port))
        asyncore.loop(map=self._thread_sockets)

    def send(self, data):
        self.output_buffer.append(data)

    def handle_write(self):
        all_data = "".join(self.output_buffer)
        bytes_sent = self.socket.send(all_data)
        remaining_data = all_data[bytes_sent:]
        self.output_buffer = [remaining_data]

mysocket = Client("", 8888)
mysocket.send("asfas\n")
If you have only one socket per thread (i.e. a dispatcher map of size 1), there is no point in using asyncore at all. Just use a normal, blocking socket in your threads. The benefit of async I/O comes with a large number of sockets.
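A minimal sketch of that simpler approach, assuming a plain send-only client (the function name is ours):

```python
import socket
import threading

def send_in_thread(host, port, data):
    # One blocking socket, owned entirely by one thread: no shared
    # state, no asyncore map, no readiness callbacks needed.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((host, port))  # blocks until connected (or refused)
        s.sendall(data)
    finally:
        s.close()

# Usage: each connection lives on its own thread.
# t = threading.Thread(target=send_in_thread, args=("127.0.0.1", 8888, b"asfas\n"))
# t.start()
# t.join()
```

Because connect() blocks until the connection is established, there is no window where send can run before the socket is ready, which is exactly the race the "sleep 5" was papering over.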
EDIT: answer has been edited following comments.
I am trying to run a simple server at startup. The OS I am using is Debian 6.0. I added a line to my .profile to run the Python script: python /root/desktopnavserver2.py. The computer boots and logs in; however, I get the error below. The script runs fine when I don't add the line to Debian's .profile and just run the script myself in a console. Any help?
Error Trace:
Traceback (most recent call last):
File "/root/Desktop/navserver2.py", line 39, in <module>
server = SocketServer.TCPServer((HOST, PORT), MyTCPHandler)
File "/usr/lib/python2.6/SocketServer.py", line 402, in __init__
self.server_bind()
File "/usr/lib/python2.6/SocketServer.py", line 413, in server_bind
self.socket.bind(self.server_address)
File "<string>", line 1, in bind
error: [Errno 98] Address already in use
Source:
#!/usr/bin/python
import SocketServer
import serial

com2 = serial.Serial(
    port=1,
    parity=serial.PARITY_NONE,
    bytesize=serial.EIGHTBITS,
    stopbits=serial.STOPBITS_ONE,
    timeout=3,
    xonxoff=0,
    rtscts=0,
    baudrate=9600
)

class MyTCPHandler(SocketServer.BaseRequestHandler):
    """
    The RequestHandler class for our server.

    It is instantiated once per connection to the server, and must
    override the handle() method to implement communication to the
    client.
    """
    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).strip()
        # print "%s wrote:" % self.client_address[0]
        # print self.data
        # just send back the same data, but upper-cased
        self.request.sendall(self.data.upper())
        com2.write(self.data)

if __name__ == "__main__":
    HOST, PORT = "192.168.0.200", 14052  # change to 192.168.0.200
    # Create the server, binding to localhost on port 9999
    server = SocketServer.TCPServer((HOST, PORT), MyTCPHandler)
    # Activate the server; this will keep running until you
    # interrupt the program with Ctrl-C
    server.serve_forever()
The line error: [Errno 98] Address already in use explains it: find out what is already listening on that port when your program starts (hint: is the .py being run twice?).
I believe that .profile is executed by every instance of a shell that identifies itself as a login shell, and you probably have more than one such shell.
Putting a single-instance script in your .profile is a bad idea anyway: it means you can only have one login session, and subsequent sessions will cause this error.
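One common way to make such a script safe to launch from .profile is a "lock port" guard at the top: the first instance binds a spare port and holds it; any later instance fails to bind and exits quietly. A sketch (the function name and port are ours, and arbitrary):

```python
import socket
import sys

def ensure_single_instance(lock_port):
    # The first instance binds the port and holds it for its lifetime;
    # any second instance gets 'Address already in use' here and exits
    # before it ever touches the real server port.
    guard = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        guard.bind(('127.0.0.1', lock_port))
    except socket.error:
        sys.exit(0)  # another instance is already running
    return guard  # keep the returned socket alive to hold the lock
```

Called once at the top of navserver2.py (before creating the TCPServer), this turns the crash into a clean no-op when a second login shell re-runs the script.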