I know that Django Channels can be used to build a WebSocket server, but not a client. So I used the websockets library to relay incoming WebSocket messages to my Django server like this:
import asyncio
import websockets

async def relay():
    source_server = 'ws://source.example/ws/message'  # This is an external server
    target_server = 'ws://target.example/ws/message'  # This is my Django server
    async for target in websockets.connect(target_server):
        try:
            async for source in websockets.connect(source_server):
                try:
                    while True:
                        try:
                            message = await source.recv()
                            await target.send(message)
                            # log message
                        except websockets.ConnectionClosed as e:
                            # lost source server or target server or both
                            raise e
                        except Exception as e:
                            # did not lose servers
                            continue
                except websockets.ConnectionClosed as e:
                    # lost source server or target server or both
                    if target.close_code is not None:
                        # lost target server and need to start from the outer for loop
                        # (to get a new target websocket connection)
                        await source.close()
                        raise e
                    # lost source server and will continue the inner for loop
                    # (to get a new source websocket connection)
                    continue
                except Exception as e:
                    # did not lose any server and will continue the inner for loop
                    # (to get a new source websocket connection)
                    continue
        except websockets.ConnectionClosed as e:
            # lost target server and will continue the outer for loop
            # (to get a new target websocket connection)
            continue
        except Exception as e:
            # did not lose any server and will start the outer for loop
            # (to get a new target websocket connection)
            continue

asyncio.run(relay())
Understandably, this is not the most efficient implementation, but it is what I could come up with.
I run this code as a Docker container (let's call it the relay container) alongside my Django Docker containers (built from the same Docker image as Django, of course).
Here are my questions:
Is there a way to make Django act as a WebSocket client? (I want to save the extra container used for the relay.) For your information, I run a Django container (using Daphne) plus two Celery containers (one for beat and one for a worker).
If I bring down the relay container, it takes a long time (five to ten seconds) to go down, and the exit code indicates Out Of Memory. What causes this? How can I shut down the container gracefully?
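For the graceful-shutdown part, what I have in mind is installing a SIGTERM handler that cancels the relay task, so that docker stop does not have to wait and then kill the process. Is something like this rough, untested sketch (reusing the relay() coroutine above) the right direction?

    import asyncio
    import signal

    async def main():
        loop = asyncio.get_running_loop()
        relay_task = asyncio.create_task(relay())
        # cancel the relay task when Docker sends SIGTERM on "docker stop"
        loop.add_signal_handler(signal.SIGTERM, relay_task.cancel)
        try:
            await relay_task
        except asyncio.CancelledError:
            pass  # normal shutdown path

    asyncio.run(main())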
Thanks.
I have an Azure timer function that runs every minute and triggers a socket that gets data from a website. I don't want to establish a new connection every time the timer runs the function. So, is there a way in Python to check whether a socket is already open to a particular website on a particular port?
Or, is there a way to re-use a socket in time-triggered applications?
import logging
import socket
import time

# (this block lives inside a function that returns the connected socket; server_address is defined elsewhere)

# Open socket
try:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(20)  # 20 sec timeout
    if is_socket_open(sock):
        logging.info("Socket is already open")
    else:
        logging.info("No socket was open. Opening a new one...")
        sock.connect(server_address)
        sock.settimeout(None)
        logging.info(f"Connected to {sock}")
    return sock
except socket.gaierror as e:
    logging.exception(f"Error connecting to remote server {e}")
    time.sleep(20)
except socket.error as e:
    logging.exception(f"Connection error {e}")
    time.sleep(20)
except Exception as e:
    logging.exception(f"An exception occurred: {e}")
    time.sleep(20)
def is_socket_open(sock: socket.socket) -> bool:
    try:
        # this will try to read bytes without blocking and also without removing them from the buffer (peek only)
        data = sock.recv(16, socket.MSG_PEEK)
        if len(data) == 0:
            return True
    except socket.timeout:
        return False  # socket is not connected yet, therefore receiving timed out
    except BlockingIOError:
        return False  # socket is open and reading from it would block
    except ConnectionResetError:
        return True  # socket was closed for some other reason
    except Exception as e:
        logging.exception(f"unexpected exception when checking if a socket is closed: {e}")
        return False
    return False
So this entire process runs every minute.
You can always use global variables to reuse objects in future invocations. The following example was copied from Google Cloud Platform documentation, but you can apply the same concept to your Azure Function:
# Global (instance-wide) scope
# This computation runs at instance cold-start
instance_var = heavy_computation()

def scope_demo(request):
    # Per-function scope
    # This computation runs every time this function is called
    function_var = light_computation()
    return 'Instance: {}; function: {}'.format(instance_var, function_var)
In your case, you can declare sock as a global variable and reuse it in future warm-start invocations. You should also increase the timeout to above 60 seconds, given that you're triggering your Azure Function every minute.
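Applied to your case, a rough sketch might look something like the following (the function name, timer parameter, and SERVER_ADDRESS are placeholders, and this is untested against your actual setup):

    import logging
    import socket

    SERVER_ADDRESS = ("example.com", 12345)  # placeholder: your real host and port

    # Global (instance-wide) scope: survives warm invocations of the same instance
    sock = None

    def get_socket():
        """Return the cached socket, opening a new one only when needed."""
        global sock
        if sock is None:
            logging.info("No cached socket. Opening a new one...")
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(90)  # longer than the 60-second trigger interval
            sock.connect(SERVER_ADDRESS)
        return sock

    def main(mytimer) -> None:
        # Per-invocation scope: runs every minute
        global sock
        try:
            s = get_socket()
            # ... read your data from s here ...
        except socket.error:
            # the cached connection went stale; drop it so the next run reconnects
            logging.exception("Socket failed, discarding cached connection")
            sock = None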
However, keep in mind that there is no guarantee that the state of the function will be preserved across invocations. For instance, in auto-scaling situations, a new socket would be opened.
Microsoft Azure also says the following regarding client connections:
To avoid holding more connections than necessary, reuse client instances rather than creating new ones with each function invocation. We recommend reusing client connections for any language that you might write your function in.
See also:
Manage connections in Azure Functions
I have a socket listener set up. It works great and keeps the connection open, non-blocking, all that. From time to time a file will show up that I need to hand back to the client. That works too, but it only sends the data in the file if the client sends a character first. I need it to send the data as soon as the file shows up, not wait. I am coming from PHP and know what I am doing there; Python is new to me, so there are some nuances I don't understand about this code.
while True:
    try:
        # I want this bit here to fire without waiting for the client to send anything
        # right now it works except the client has to send a character first
        # check for stuff to send back
        for fname in os.listdir('data/%s/in' % dirname):
            print(fname)
            f = open('data/%s/in/%s' % (dirname, fname), "r")
            client.send(f.readline())

        data = client.recv(size)
        if data:
            bucket = bucket + data
        else:
            raise error('Client disconnected')
    except Exception as e:
        client.close()
        print(e)
        return False
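What I think I need is a way to check for incoming data without blocking on recv, so the file check runs even when the client is silent. Is something like this select-based sketch (untested, using the same client, size, dirname, and bucket as above) the right idea?

    import os
    import select

    while True:
        try:
            # push any queued files to the client right away, without waiting for input
            for fname in os.listdir('data/%s/in' % dirname):
                path = 'data/%s/in/%s' % (dirname, fname)
                with open(path, "r") as f:
                    client.send(f.readline().encode())
                os.remove(path)  # so the same file is not sent again on the next pass

            # only call recv if the client actually sent something (wait up to 1 second)
            readable, _, _ = select.select([client], [], [], 1.0)
            if readable:
                data = client.recv(size)
                if data:
                    bucket = bucket + data
                else:
                    raise Exception('Client disconnected')
        except Exception as e:
            client.close()
            print(e)
            return False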
I have code that starts a while loop and runs until I hard kill it with Control-C. I'd like to make this a bit nicer by adding some way to communicate with the code to make it stop a bit more gracefully. Eventually, I'd like to control this from a PyQt application with a start/stop button and a pause/resume button. How would I add some hooks into the code to allow for this sort of control?
The current code looks like this:
import socket

import xmltodict

def handle_notifications(dao_notifications):
    # fetch notifications
    generator = notifications_generator()
    while True:
        try:
            # store received notifications into the database
            for notification in next(generator):
                dao_notifications.insert(notification)
        except StopIteration:
            continue

def notifications_generator():
    # create a socket to listen for notification events
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sockt:
        # bind the listener port of the local host to the socket instance
        sockt.bind((_LOCAL_IP_ADDRESS, _LISTENER_PORT))
        # start the socket listening
        sockt.listen()
        # continually receive notifications and yield
        while True:
            # accept a communication connection on the socket
            connection, connection_address = sockt.accept()
            with connection:
                # receive bytes of data from the socket, decode as a Unicode string
                xml = connection.recv(20480).decode("utf-8")
                # only try to yield values if we've actually received data
                if len(xml) > 0:
                    # parse the XML into a dictionary
                    notifications_soap = xmltodict.parse(xml)
                    # yield the notification messages as an iterable
                    notifications = \
                        notifications_soap["SOAP-ENV:Envelope"]["SOAP-ENV:Body"]["wsnt:Notify"]["wsnt:NotificationMessage"]
                    yield notifications
Perhaps this is a use case for signal handling? For example, I could write a handler for SIGINT to pause/suspend execution (keep sleeping until another signal arrives to resume) and one for SIGTERM to clean up gracefully before exit; the PyQt application would then issue the appropriate signals to control execution. Is there a good, simple example of this?
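Something along these lines is roughly what I picture for handle_notifications (an untested sketch with module-level flags; I realize the blocking accept/recv calls mean the flags only get checked between notifications, which may be part of what I need help with):

    import signal
    import time

    paused = False
    stop_requested = False

    def handle_sigint(signum, frame):
        # SIGINT toggles pause/resume
        global paused
        paused = not paused

    def handle_sigterm(signum, frame):
        # SIGTERM requests a graceful stop
        global stop_requested
        stop_requested = True

    signal.signal(signal.SIGINT, handle_sigint)
    signal.signal(signal.SIGTERM, handle_sigterm)

    def handle_notifications(dao_notifications):
        generator = notifications_generator()
        while not stop_requested:
            while paused and not stop_requested:
                time.sleep(0.5)  # suspended until another SIGINT resumes us
            try:
                for notification in next(generator):
                    dao_notifications.insert(notification)
            except StopIteration:
                continue
        # clean up here before exit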
I've run in to a strange problem in a multiplayer online game I'm developing.
When the user clicks "Accept Quest" on the client, it performs the following action:
packet = "A:io-QS#"
tcpClient.send(packet.encode('utf-8'))
On the server, there is a thread created for each client that handles sending/receiving data:
while (client[self.id].authenticated == True):
    try:
        data = (self.connection.recv(1024)).decode('utf-8')
        client[self.id].lastPacketTime = time.time()
        client[self.id].processData(data)
    except:
        print("Client disconnected due to data receive error")
        client[self.id].saveDataToDatabase()
        client[self.id].authenticated = False
        client[self.id].loggedIn = False
If the server receives the packet "A:io-QS#", it throws an exception and disconnects the client. I modified the client code mentioned above to send the packet "M:w#" (a directional movement packet) instead, and it doesn't throw an exception; the problem occurs only when the packet is "A:io-QS#".
The packet size isn't a concern (a much larger packet containing login credentials passes through this server loop just fine).
I tried commenting out the "client[self.id].processData(data)" line and the exception still occurs (but only with the packet 'A:io-QS#').
The server throws an exception after receiving the data but before acting upon it, so it's not a logic error.
I'm at a bit of a loss, does anybody see anything I'm missing or have any recommendations on how I could test this issue further?
Thanks!
I suggest using sys.exc_info() in the except block on the server to find out more about the exception.
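For example, your receive loop could report the exception before tearing the client down; a quick sketch using the names from your loop:

    import sys
    import traceback

    while (client[self.id].authenticated == True):
        try:
            data = (self.connection.recv(1024)).decode('utf-8')
            client[self.id].lastPacketTime = time.time()
            client[self.id].processData(data)
        except:
            exc_type, exc_value, exc_tb = sys.exc_info()
            print("Client disconnected due to data receive error")
            print("Exception type:", exc_type, "value:", exc_value)
            traceback.print_tb(exc_tb)
            client[self.id].saveDataToDatabase()
            client[self.id].authenticated = False
            client[self.id].loggedIn = False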
I have a Python program where I use a server socket to send data. There is a class with some threading methods. Each method checks a queue and, if the queue is not empty, sends the data over the server socket. The queues are filled with what clients send to the server (the server is listening for incoming requests). Sending is done with a method call:
def send(self, data):
    self.sqn += 1
    try:
        self.clisock.send(data)
    except Exception, e:
        print 'Send packet failed with error: ' + e.message
When the program starts, the sending rate is around 500, but after a while it drops suddenly to 30 with this exception:
Send packet failed with error: <class 'socket.error'>>>[Errno 32] Broken pipe
I don't know what causes the rate to drop. Any ideas?
That error comes from your send function trying to write to a socket that was closed on the other side. If that is intended, then catch the exception like this:
import errno, socket

try:
    self.clisock.send(data)
except socket.error, err:
    if err[0] == errno.EPIPE:
        pass  # do something
    else:
        pass  # do something else
If this isn't intended behavior on the part of the client then you'll have to update your post with the corresponding client code.