I am using pyGTK and a Python gRPC server, both in a very basic setup.
I just create a gtk.Window(), show() it and run a gtk.main() loop.
My server starts like this:
def startServing():
    global server
    print("Starting server...")
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    laserClient_pb2_grpc.add_LComServicer_to_server(LCom(), server)
    server.add_insecure_port('[::]:50053')  # [::] is the same as 0.0.0.0
    server.start()
    print("Server is running...")
So I call:
try:
    startServing()
    gtk.main()
except KeyboardInterrupt:
    server.stop(0)
This creates the window correctly, but I never receive a request from my Java client. (The Java client is not the problem.)
I read a lot on the internet and I do not understand all of pyGTK's thread handling, but I tried gtk.gdk.threads_init() right before startServing() and then I did receive the requests. However, I receive just one request per second, whereas my client sends a request every 50ms. If I delete gtk.main() and add a while loop:
while True:
    time.sleep(60)
...I receive requests nearly every 50 to 100ms. This is the expected behaviour!
However, my window won't get updated, since there is no gtk.main() loop anymore. I even tried adding:
while True:
    while gtk.events_pending():
        gtk.main_iteration()
    time.sleep(0.05)
But this gives, again, just 1 request per second.
I have no idea what I should do now. I really want to use gRPC and pyGTK together in the same program.
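For what it's worth, one commonly suggested arrangement for PyGTK 2.x (a hedged sketch, not a verified fix for this exact setup) is to initialise threading with gobject.threads_init() before any worker threads exist, so that gtk.main() releases the GIL instead of starving the gRPC thread pool:

import gobject
import gtk

gobject.threads_init()      # let Python threads run while gtk.main() blocks
gtk.gdk.threads_init()      # only needed if worker threads touch the GUI

startServing()              # starts grpc.server() with its ThreadPoolExecutor

try:
    gtk.main()
except KeyboardInterrupt:
    server.stop(0)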
I'm implementing a bi-directional ping-pong demo app between an Electron app and a Python backend.
This is the code for the Python part, which causes the problem:
import sys
import zerorpc
import time
from multiprocessing import Process


def ping_response():
    print("Sleeping")
    time.sleep(5)
    c = zerorpc.Client()
    c.connect("tcp://127.0.0.1:4243")
    print("sending pong")
    c.pong()


class Api(object):
    def echo(self, text):
        """echo any text"""
        return text

    def ping(self):
        p = Process(target=ping_response, args=())
        p.start()
        print("got ping")
        return


def parse_port():
    port = 4242
    try:
        port = int(sys.argv[1])
    except Exception as e:
        pass
    return '{}'.format(port)


def main():
    addr = 'tcp://127.0.0.1:' + parse_port()
    s = zerorpc.Server(Api())
    s.bind(addr)
    print('start running on {}'.format(addr))
    s.run()


if __name__ == '__main__':
    main()
Each time ping() is called from the JavaScript side, it starts a new process that simulates some work (sleeping for 5 seconds) and replies by calling pong on the Node.js server to indicate that the work is done.
The issue is that the pong() request never reaches the JavaScript side. If, instead of spawning a new process, I create a new thread using _thread and execute the same code as in ping_response(), the pong request does arrive on the JavaScript side. Also, if I manually run the bash command zerorpc tcp://localhost:4243 pong, I can see that the pong request is received by the Node.js script, so the server on the JavaScript side works fine.
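(For reference, a sketch of the thread-based variant described as working; the exact code is not shown in the post, so this is a reconstruction of the description:)

import _thread

class Api(object):
    # only ping() is shown; echo() stays as in the original
    def ping(self):
        # same work, but in a thread of the same process instead of a new process
        _thread.start_new_thread(ping_response, ())
        print("got ping")
        return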
What happens with the zerorpc client when I create a new process, so that it never manages to send the request?
Thank you.
EDIT
It seems to get stuck in c.pong().
Try using gipc.start_process() from the gipc module (available via pip) instead of multiprocessing.Process(). It creates a fresh gevent context for the child, whereas multiprocessing would accidentally inherit the parent's.
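A minimal sketch of that suggestion (assuming gipc's start_process() signature; gipc is installed with pip install gipc, and only the changed ping() method is shown):

import gipc

class Api(object):
    def ping(self):
        # the child starts with a fresh gevent context instead of inheriting
        # the parent's zerorpc/gevent state, which the answer above says is
        # what makes c.pong() hang
        gipc.start_process(target=ping_response, args=())
        print("got ping")
        return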
I have an app, similar to a chat room, written in Python, that intends to do the following things:
1. Prompt the user to input a websocket server address.
2. Then create a websocket client that connects to the server and sends/receives messages. Disable the ability to create another websocket client.
3. After receiving "close" from the server (NOT a close frame), the client should drop the connection and re-enable the app to create a client. Go back to 1.
4. If the user exits the app, exit the websocket client if one is running.
My approach is to use the main thread to deal with user input. When the user hits enter, a thread is created for the WebSocket client using Autobahn's Twisted module, and a Queue is passed to it. The thread checks whether the reactor is running and starts it if it's not.
I override the onMessage method to put a closing flag into the Queue when "close" is received. The main thread keeps checking the Queue until it receives the flag, then goes back to the start. The code looks like the following.
Main thread.
def main_thread():
    while True:
        text = raw_input("Input server url or exit")
        if text == "exit":
            if myreactor:
                myreactor.stop()
            break
        msgq = Queue.Queue()
        threading.Thread(target=wsthread, args=(text, msgq)).start()
        is_close = False
        while True:
            if msgq.empty() is False:
                msg = msgq.get()
                if msg == "close":
                    is_close = True
                else:
                    print msg
            if is_close:
                break
        print 'Websocket client closed!'
Factory and Protocol.
class MyProtocol(WebSocketClientProtocol):
    def onMessage(self, payload, isBinary):
        msg = payload.decode('utf-8')
        self.factory.queue.put(msg)
        if msg == 'close':
            self.dropConnection(abort=True)


class WebSocketClientFactoryWithQ(WebSocketClientFactory):
    def __init__(self, *args, **kwargs):
        self.queue = kwargs.pop('queue', None)
        WebSocketClientFactory.__init__(self, *args, **kwargs)
Client thread.
def wsthread(url, q):
    global myreactor
    factory = WebSocketClientFactoryWithQ(url=url, queue=q)
    factory.protocol = MyProtocol
    connectWS(factory)
    if myreactor is None:
        myreactor = reactor
        reactor.run()
    print 'Done'
Now I have a problem. It seems that my client thread never stops. Even after I receive "close", it still seems to be running, and every time I try to create a new client, it creates a new thread. I understand that the first thread won't stop, since reactor.run() runs forever, but from the second thread on it should be non-blocking, since I'm not starting the reactor again. How can I change that?
EDIT:
I ended up solving it by:
1. Adding stopFactory() after disconnecting.
2. Calling protocol functions with reactor.callFromThread() (see the sketch after this list).
3. Starting the reactor in the first thread, putting clients in other threads, and using reactor.callInThread() to create them.
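A rough sketch of the reactor.callFromThread() pattern mentioned in point 2 (assuming the same classes and imports as the code above; the URL and queue are placeholders):

from twisted.internet import reactor

def open_connection(url, q):
    # runs in the reactor thread, so it is safe to touch Twisted APIs here
    factory = WebSocketClientFactoryWithQ(url=url, queue=q)
    factory.protocol = MyProtocol
    connectWS(factory)

# called from any other (non-reactor) thread:
reactor.callFromThread(open_connection, "ws://127.0.0.1:9000", msgq)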
Your main_thread creates new threads running wsthread. wsthread uses Twisted APIs. The first wsthread becomes the reactor thread. All subsequent threads are different and it is undefined what happens if you use a Twisted API from them.
You should almost certainly remove the use of threads from your application. For dealing with console input in a Twisted-based application, take a look at twisted.conch.stdio (not the best documented part of Twisted, alas, but just what you want).
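As a rough illustration of the thread-free direction (this sketch uses twisted.internet.stdio with a LineReceiver, a simpler relative of the twisted.conch.stdio suggestion above; the class and message names are placeholders):

from twisted.internet import reactor, stdio
from twisted.protocols import basic

class Console(basic.LineReceiver):
    delimiter = '\n'   # terminal input is newline-terminated

    def lineReceived(self, line):
        if line == 'exit':
            reactor.stop()
        else:
            # connect a new WebSocket client here, inside the reactor
            print 'connecting to %s' % line

stdio.StandardIO(Console())
reactor.run()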
In my Tornado app, in some situations clients disconnect from the server, but my current code doesn't detect that the client has disconnected. I currently use ping to find out whether a client has disconnected.
Here is my ping/pong code:
from threading import Timer


class SocketHandler(websocket.WebSocketHandler):
    def __init__(self, application, request, **kwargs):
        # some code here
        Timer(5.0, self.do_ping).start()

    def do_ping(self):
        try:
            self.ping_counter += 1
            self.ping("")
            if self.ping_counter > 2:
                self.close()
            Timer(60, self.do_ping).start()
        except WebSocketClosedError:
            pass

    def on_pong(self, data):
        self.ping_counter = 0
Now I want to set SO_RCVTIMEO in Tornado instead of using the ping/pong method, something like this:
sock.setsockopt(socket.SO_RCVTIMEO)
Is it possible to set SO_RCVTIMEO in Tornado to close clients from the server after a specific timeout?
SO_RCVTIMEO does not do anything in an asynchronous framework like Tornado. You probably want to wrap your reads in tornado.gen.with_timeout. You'll still need to use pings to test the connection and make sure it is still working; if the connection is idle there are few guarantees about how long it will take for the system to notice. (TCP keepalives are a possibility, but these are not configurable on all platforms and generally use very long timeouts).
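A minimal, illustrative sketch of gen.with_timeout (the coroutine name and the future passed in are placeholders, not part of the question's code):

import datetime
from tornado import gen

@gen.coroutine
def read_with_deadline(some_read_future):
    try:
        # raises gen.TimeoutError if the future is not resolved within 30 seconds
        result = yield gen.with_timeout(datetime.timedelta(seconds=30),
                                        some_read_future)
        raise gen.Return(result)
    except gen.TimeoutError:
        # treat the peer as gone and close the connection here
        raise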
This is a simple example script from dev.deluge-torrent.org for interacting with the Deluge API.
Nothing happens after reactor.run() and I don't get the "Connection was successful" message; it just hangs forever.
I ran this on my Ubuntu machine, where it works fine, but I couldn't get it to work on my Windows machine, where I really want to put it to use.
from deluge.ui.client import client
# Import the reactor module from Twisted - this is for our mainloop
from twisted.internet import reactor
# Set up the logger to print out errors
from deluge.log import setupLogger
setupLogger()

# Connect to a daemon running on the localhost
# We get a Deferred object from this method and we use this to know if and when
# the connection succeeded or failed.
d = client.connect()

# We create a callback function to be called upon a successful connection
def on_connect_success(result):
    print "Connection was successful!"
    print "result:", result
    # Disconnect from the daemon once we successfully connect
    client.disconnect()
    # Stop the twisted main loop and exit
    reactor.stop()

# We add the callback to the Deferred object we got from connect()
d.addCallback(on_connect_success)

# We create another callback function to be called when an error is encountered
def on_connect_fail(result):
    print "Connection failed!"
    print "result:", result

# We add the callback (in this case it's an errback, for error)
d.addErrback(on_connect_fail)

# Run the twisted main loop to make everything go
reactor.run()
I have no idea how to go about debugging this issue. I'm very new to Twisted, and from what I understand it's a huge library.
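One hedged way to get more information when neither callback ever fires (this is not part of the original script, and the 10-second limit is an arbitrary choice) is to turn on Twisted's own logging and give up after a fixed delay, both set up before calling reactor.run():

import sys
from twisted.python import log

log.startLogging(sys.stdout)    # print Twisted/Deluge log messages to stdout

def give_up():
    print "Still not connected after 10 seconds, stopping the reactor"
    reactor.stop()

reactor.callLater(10, give_up)  # schedule this before reactor.run()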
I have a small Python webserver script for hosting my own website, complete with request handling and error returning. This script worked perfectly on my PC, but when I tried it on my Raspberry Pi, it would not restart every 3 minutes (the server would crash after 15 minutes, so restarting every 3 minutes seemed good).
So I rewrote my server script so that it checks things like whether it is booting up for the first time or restarting. I'll just show you the code.
# Handler class above here
...
...


class Server:
    global server_class, server_adress, httpd
    server_class = HTTPServer
    server_adress = ('localhost', 8080)
    httpd = server_class(server_adress, Handler)

    def __init__(self):
        self.status = False
        self.process()

    def process(self):
        print(self.status)
        process = threading.Timer(10, self.process)
        process.start()
        if self.status == True:
            httpd.socket.close()
            self.main()
        if self.status == False:
            self.main()

    def main(self):
        try:
            if self.status == False:
                print("Server online!")
                self.status = True
                httpd.serve_forever()
            if self.status == True:
                print("Server restarted!")
                httpd.serve_forever()
        except KeyboardInterrupt:
            print("Server shutting down...")
            httpd.socket.close()


if __name__ == "__main__":
    instance = Server()
After ten seconds of running (and it works: I can access my website at http://localhost:8080/index.html), it keeps giving the following error every ten seconds:
File "C:\Users\myname\Dropbox\Python\Webserver\html\server.py", line 187, in main httpd.serve_forever()
File "C:\Python33\lib\socketserver.py", line 237, in serve_forever poll_interval)
File "C:\Python33\lib\socketserver.py", line 155, in _eintr_retry return func(*args)
ValueError: file descriptor cannot be a negative integer (-1)
Basically, how do I fix this? I could just use a simple function with a threading timer to restart the function that runs the server, but somehow that doesn't work on my Raspberry Pi, even though it does on Windows.
EDIT:
I should also note that the first time I start the script, I can access the website and it's fast. After 10 seconds (after the server restarts), I can still access it, but it is very slow. After another 10 seconds I am no longer able to access my website.
The problem you get happens because you access the underlying socket of the server directly. Closing the socket is effectively like unplugging your network connection. The actual server that is sitting on top of the socket remains unaware of the fact that the socket was closed, and tries to continue to serve. As the socket was closed, there is no longer a file descriptor available (this is the error you get).
So instead of cutting the server off its connection, you should tell the server to actually shut down gracefully. This allows it to finish any ongoing connections and safely release everything it might do in the background. You can do that using the shutdown method. Executing that will internally tell the server to remember to shut down the next time the loop within serve_forever occurs.
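As a rough illustration of that idea (the background thread here is an assumption made for the sketch, because shutdown() has to be called from a different thread than the one blocked in serve_forever()):

import threading
import time

# run the blocking serve_forever() in a worker thread
t = threading.Thread(target=httpd.serve_forever)
t.start()

try:
    while True:
        time.sleep(1)      # keep the main thread free to receive Ctrl+C
except KeyboardInterrupt:
    print("Server shutting down...")
    httpd.shutdown()       # makes serve_forever() return after its next poll
    httpd.server_close()   # then release the listening socket
    t.join()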
If I remember correctly, serve_forever is a blocking call, meaning that execution does not continue past it until the server stops. So the simplest way to make a server restart itself would be a single main thread doing this:
while True:
    httpd.serve_forever()
So whenever the server stops—for whatever reason—it immediately starts again. Of course here you would now add some status variable (instead of True) which allows you to actually turn off the server. For example in the body of a KeyboardInterrupt catch, you would first set that variable to False and then shut down the server using httpd.shutdown().