Python: Accepting Raw Input While Another Thread Checks For Messages - python

I am trying to write a basic chat client in Python for a project and have completed the task. However, when I handed it in, they asked if I could get it to accept user input while checking for messages (an extra, unmarked task for people who finish early).
I assume this has something to do with threading, so I tried creating one thread to accept user input and another to check for messages; however, it appears that raw_input stops the other thread.
How would I do this in Python? Perhaps I have misunderstood how threading works? - Python Noob
Second try:
#Update last connection
s[user] = str(time.time())

#Start chat server
class chatServer(threading.Thread):
    def __init__(self, channel):
        self.channel = channel
        self.lastMessage = ""
        threading.Thread.__init__(self)  # Pass to thread constructor

    def messageOut(self):
        while 1:
            print "Asking for input"
            message = raw_input("Message: ")
            s[self.channel] = message
            time.sleep(1)

    def messageIn(self):
        while 1:
            print "Checking for new messages"
            if s[self.channel] != self.lastMessage:
                print s[self.channel]
                self.lastMessage = s[self.channel]
            time.sleep(1)

print "Welcome " + user + " type to send a message"
chatServer("channel1").messageIn()
chatServer("channel1").messageOut()
First try:
#Start chat server
class chatServer(threading.Thread):
    def __init__(self, user, channel, server):
        self.channel = channel
        self.lastMessage = ""
        self.user = user
        self.s = server
        threading.Thread.__init__(self)  # Pass to thread constructor

    def start(self):
        print "Welcome " + self.user + " type to send a message"
        self.messageIn()
        self.messageOut()

    def messageOut(self):
        while 1:
            message = raw_input("Message: ")
            s['message'] = message
            time.sleep(1)

    def messageIn(self):
        while 1:
            print "Checking for new messages"
            if s[self.channel] != self.lastMessage:
                print s[self.channel]
                lastMessage = s[self.channel]
            time.sleep(1)

chatServer(user, "channel1", server).start()
Many thanks for your time.
P.S. server is a simple class that gets/puts the messages it is given.
P.P.S. This is not homework; it's more for my personal interest.

Not a real answer to your question, but as an aside you may want to look at eventlet.
It gives you coroutines, which will let you handle the kind of things you want to do, in a way that's very easy to read/understand and (IMHO) very Pythonic.
Here's a great video to get you started: PyCon 2010: Eventlet: Asynchronous I/O with a synchronous interface
The main project website: http://eventlet.net/
A chat example using telnet: http://eventlet.net/doc/examples.html#multi-user-chat-server
Hope it helps, and that you get a real answer to your question too.

Actually, you are creating only one thread, and reading and posting each message sequentially.
You have to create two threads that read and write messages independently of each other. The real problem then becomes synchronization between the two and the sharing of common resources.
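To illustrate: the sketch below (in Python 3 syntax, with a Queue standing in for raw_input so it can run unattended) starts messageIn and messageOut as two genuinely concurrent threads, with a Lock guarding the shared store. The class and attribute names are illustrative, not the original code:

```python
import threading
import time
import queue

class ChatClient:
    """Two independent threads: one sends outgoing messages,
    one polls the shared store for incoming ones."""

    def __init__(self, channel, store):
        self.channel = channel
        self.store = store              # shared message store (stands in for the server)
        self.lock = threading.Lock()    # guards access to the shared store
        self.outgoing = queue.Queue()   # stands in for raw_input in this sketch
        self.received = []
        self.running = True

    def message_out(self):
        while self.running:
            try:
                message = self.outgoing.get(timeout=0.1)
            except queue.Empty:
                continue                # nothing typed yet; keep checking
            with self.lock:
                self.store[self.channel] = message

    def message_in(self):
        last = ""
        while self.running:
            with self.lock:
                current = self.store.get(self.channel, "")
            if current != last:         # only report new messages
                self.received.append(current)
                last = current
            time.sleep(0.05)

    def start(self):
        # Two threads: neither blocks the other.
        self.t_out = threading.Thread(target=self.message_out)
        self.t_in = threading.Thread(target=self.message_in)
        self.t_out.start()
        self.t_in.start()

    def stop(self):
        self.running = False
        self.t_out.join()
        self.t_in.join()

store = {}
client = ChatClient("channel1", store)
client.start()
client.outgoing.put("hello")   # simulated user input
time.sleep(0.3)
client.stop()
print(client.received)         # the polling thread saw the message the sender wrote
```

The key difference from the question's code is that both loops are passed to Thread objects and started, rather than being called directly (which runs them in the main thread, one after the other, so the second never executes).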

Related

Kafka producer in another Process doesn't seem to work in python

I'm running a microservice in Python which implements a handler for each kind of message.
class MsFeatureLandmark(BaseMicroservice):
    def __init__(self):
        self.config = safe_load(open(sys.argv[1]))
        client = MongoClient(self.config.get('mongo').get('connection_url'))
        database = client[self.config.get('mongo').get('mongo_db')]
        self.dict = {
            MessageType.create_model.name: ComputeLocDescHandler(self.config),
            MessageType.detect_all.name: ExtractFeatureHandler(self.config, database),
            MessageType.detect_landmark.name: ExtractFeatureHandler(self.config, database)
        }
        super().__init__(self.dict, self.config.get('kafka'))

    def on_message_received(self, generic_message):
        # self.dict.get(generic_message.metadata_type).handle(generic_message.message)
        p = Process(target=self.dict.get(generic_message.metadata_type).handle,
                    args=(generic_message.message,))
        p.daemon = True
        p.start()

MsFeatureLandmark().run()
On each message I receive, I launch the corresponding .handle() method.
At the end of the computation (which involves TensorFlow), I use methods inherited from the superclass to send a message using Kafka:
def write_message(self, message):
    if self.is_prod_init:
        output_topic = self.config.get('kafka').get('output_topic')
        cl.logging.info("Sending on " + output_topic + " message: " + str(message))
        self.producer.send(output_topic, message)
    else:
        raise ValueError('Producer is not initialized.')

def init_producer(self):
    self.producer = KafkaProducer(bootstrap_servers=self.config.get('kafka').get('bootstrap_servers'),
                                  value_serializer=lambda m: json.dumps(m).encode('utf-8'))
    self.is_prod_init = True
    self.producer.flush()
When I run all of this code synchronously (without using Process(target=...)), everything works correctly, but the producer times out because processing the data takes too long.
If I run it using Process, I get no error messages, but the producer doesn't seem to produce any message, for whatever reason.
What am I missing?
EDIT: For some reason, the last time I ran this microservice an exception occurred, and the consumer (which runs in another process) read the message successfully. This made me wonder: if I raise an exception, will the message be sent? Yes.
self.write_message(message)
raise Exception('spakkiggnustel')
Adding this last line "solved" the problem. Why? I'm more confused than I was before.
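A likely explanation (an assumption on my part, not confirmed in the question): KafkaProducer.send() is asynchronous and only enqueues the record in an internal buffer; a background thread transmits it later. A daemon Process is terminated as soon as its target returns or the parent exits, so it can die before the buffer is flushed; the exception merely delayed the exit long enough for delivery. The hedged fix is to call producer.flush() after send() inside the child, and/or join the process in the parent. The exit race can be sketched with a Queue standing in for Kafka:

```python
import multiprocessing as mp
import time

def handler(q):
    # Stands in for handle(): long computation, then a send at the end.
    time.sleep(0.2)     # the TensorFlow step in the question
    q.put("result")     # like producer.send() followed by producer.flush()

q = mp.Queue()
p = mp.Process(target=handler, args=(q,))
p.daemon = True         # a daemon process is terminated when the parent exits
p.start()
p.join()                # wait for the handler to finish; without this (or a
                        # flush inside the child), the parent may exit first
                        # and the daemon is killed before the message leaves
print(q.get(timeout=2))
```

In the real service, joining every handler process would reintroduce the blocking behavior, so flushing inside the child (and only marking the process as daemon if losing in-flight work on shutdown is acceptable) is probably the safer shape.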

variables inside BaseHTTPRequestHandler Python

I am creating a chatbot using Python and the MS Bot Builder SDK for Python.
The bot is an HTTPServer using a handler. What I want are variables to help me keep track of the conversation, for example a message counter. But I can't get it to work: each time the bot receives a request (me sending something), it's as if another handler is created, because the number of messages is always 1. I'm not sure what is being called on each request.
Here is the (important) code:
class BotRequestHandler(BaseHTTPRequestHandler):
    count = 0

    @staticmethod
    def __create_reply_activity(request_activity, text):
        # not important

    def __handle_conversation_update_activity(self, activity):
        # not important

    def __handle_message_activity(self, activity):
        self.count += 1  ############## INCREMENTATION ##############
        self.send_response(200)
        self.end_headers()
        credentials = MicrosoftAppCredentials(APP_ID, APP_PASSWORD)
        connector = ConnectorClient(credentials, base_url=activity.service_url)
        reply = BotRequestHandler.__create_reply_activity(activity, '(%d) You said: %s' % (self.count, activity.text))
        connector.conversations.send_to_conversation(reply.conversation.id, reply)

    def __handle_authentication(self, activity):
        # not important

    def __unhandled_activity(self):
        # not important

    def do_POST(self):
        body = self.rfile.read(int(self.headers['Content-Length']))
        data = json.loads(str(body, 'utf-8'))
        activity = Activity.deserialize(data)
        if not self.__handle_authentication(activity):
            return
        if activity.type == ActivityTypes.conversation_update.value:
            self.__handle_conversation_update_activity(activity)
        elif activity.type == ActivityTypes.message.value:
            self.__handle_message_activity(activity)
        else:
            self.__unhandled_activity()

class BotServer(HTTPServer):
    def __init__(self):
        super().__init__(('localhost', 9000), BotRequestHandler)

    def _run(self):
        try:
            print('Started http server')
            self.serve_forever()
        except KeyboardInterrupt:
            print('^C received, shutting down server')
            self.socket.close()

server = BotServer()
server._run()
What I get if I enter the message 'a' 4 times is '(1) You said: a' 4 times.
I tried overriding the init method of BaseHTTPRequestHandler, but it didn't work.
For those who know: the thing is, with the Python SDK we don't have Waterfall dialogs like in Node.js (or I didn't find how that works; if someone knows, just tell me), and here I need to keep track of a lot of things from the user, so I need variables. And I really want to use Python because I need some ML and other modules in Python.
Thank you for your help.
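The behavior described is expected: http.server constructs a fresh BaseHTTPRequestHandler instance for every request, so `self.count += 1` reads the class attribute but writes a brand-new instance attribute that is thrown away with the handler. Writing back to the class (or keeping state on the long-lived server object, reachable as `self.server` inside the handler) survives across requests. A minimal sketch of the mechanism, with a plain class standing in for the handler (an illustration, not the Bot Builder API):

```python
class Handler:
    count = 0  # class attribute, shared across all instances

    def handle(self):
        # `self.count += 1` would read the class attribute but write a NEW
        # instance attribute, so every fresh handler would report 1.
        type(self).count += 1   # write back to the class instead
        return self.count

# Each request creates a new handler instance, as BaseHTTPRequestHandler does:
results = [Handler().handle() for _ in range(4)]
print(results)  # [1, 2, 3, 4]
```

Note that with a threading or forking server a shared class attribute would also need locking; per-conversation state is usually better kept in a dict on the server instance, keyed by conversation id.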

program finished after send an email

The Email class is tested and is able to send an email when valid credentials are used. The problem arises when I use multiple Twisted protocols together, for example Twisted Mail plus Twisted DNS or Twisted IRC.
The code should run endlessly, and when an event is triggered I want to receive an email reporting the issue (such as DNS could not resolve a valid domain, the DNS service is down, etc.). But as soon as an email is sent, the program exits (return code 0), so the Email class must contain some piece of code I have misused. I already checked the API but found no clue about what I'm missing.
The class that I'm using currently to send an email:
class Email:
    def __init__(self):
        threading.Thread.__init__(self)
        self.smtp_server = "SMTP"
        self.user_name = "MAIL@DOMAIN"
        self.user_password = "MAIL_PASSWORD"
        self.portTLS = 587
        self.portSSL = 465

    def sendEmail(self, m):
        contextFactory = ClientContextFactory()
        contextFactory.method = SSLv3_METHOD
        resultDeferred = Deferred()
        senderFactory = ESMTPSenderFactory(
            self.user_name,
            self.user_password,
            self.user_name,
            m.to,
            m.text,
            resultDeferred,
            contextFactory=contextFactory)
        reactor.connectTCP(self.smtp_server, self.portTLS, senderFactory)
        resultDeferred.addCallbacks(self.cbSentMessage, self.ebSentMessage)
        return resultDeferred

    def cbSentMessage(self, result):
        print "Message sent"
        reactor.stop()

    def ebSentMessage(self, err):
        err.printTraceback()
        reactor.stop()
You are calling reactor.stop to stop your program after resultDeferred fires. If you stop doing that, your program will no longer exit.
(Also, you should get rid of the call to threading.Thread.__init__; it is unnecessary and almost certainly causing other bugs.)
Yes, user Glyph was right; now I feel like a fool for having asked the question :'''(
The solution was to remove the reactor.stop() calls from the callback functions, which now read:
def cbSentMessage(self, result):
    print "Message sent"
In the other one it is not necessary, since that function is only called when an error is triggered; however, I changed it anyway:
def ebSentMessage(self, err):
    err.printTraceback()

Kombu-python - force blocking/synchronous behavior (or processing a message only when the previous finished)

I have Kombu processing a RabbitMQ queue and calling Django functions/management commands etc. My problem is that I have an absolute requirement for correct order of execution: the handler for message 3 can never run before the handlers for messages 1 and 2 have finished. I need to ensure Kombu doesn't process another message before I finish processing the previous one.
Consider this base class
class UpdaterMixin(object):
    # binding management commands to event names
    # override in subclass
    event_handlers = {}
    app_name = ''  # override in subclass

    def __init__(self):
        if not self.app_name or len(self.event_handlers) == 0:
            print('app_name or event_handlers arent implemented')
            raise NotImplementedError()
        else:
            self.connection_url = settings.BROKER_URL
            self.exchange_name = settings.BUS_SETTINGS['exchange_name']
            self.exchange_type = settings.BUS_SETTINGS['exchange_type']
            self.routing_key = settings.ROUTING_KEYS[self.app_name]

    def start_listener(self):
        logger.info('started %s updater listener' % self.app_name)
        with Connection(self.connection_url) as connection:
            exchange = Exchange(self.exchange_name, self.exchange_type, durable=True)
            queue = Queue('%s_updater' % self.app_name, exchange=exchange, routing_key=self.routing_key)
            with connection.Consumer(queue, callbacks=[self.process_message]) as consumer:
                while True:
                    logger.info('Consuming events')
                    connection.drain_events()

    def process_message(self, body, message):
        logger.info('data received: %s' % body)
        handler = self.event_handlers[body['event']]
        logger.info('Executing management command: %s' % str(handler))
        data = json.dumps(body)
        call_command(handler, data, verbosity=3, interactive=False)
        message.ack()
Is there a way to force Kombu into this kind of behavior? I don't care whether the lock consists of not draining another event until processing is done, not running another process_message until the previous one has finished, or any other method, as long as execution order is strictly maintained.
I'd be glad for any help with this.
Just figured out that since this consumer runs in a single thread, the code is blocking/synchronous by default unless I explicitly rewrite it to be asynchronous: drain_events dispatches one message at a time, and process_message runs to completion (including the ack) before the next message is handled. Posting this in case anyone bumps into the same question.

How to implement a two way jsonrpc + twisted server/client

Hello, I am working on developing an RPC server based on Twisted to serve several microcontrollers which make RPC calls to a Twisted JSON-RPC server. But the application also requires the server to send information to each micro at any time, so the question is: what would be a good practice to prevent the response to a remote JSON-RPC call made by a micro from being confused with a JSON-RPC request the server makes on behalf of a user?
The consequence I'm seeing now is that the micros receive bad information, because they don't know whether the netstring/JSON string coming from the socket is the response to a previous request or a new request from the server.
Here is my code:
from twisted.internet import reactor
from txjsonrpc.netstring import jsonrpc
import weakref

creds = {'user1': 'pass1', 'user2': 'pass2', 'user3': 'pass3'}

class arduinoRPC(jsonrpc.JSONRPC):
    def connectionMade(self):
        pass

    def jsonrpc_identify(self, username, password, mac):
        """ Each client must be authenticated just after connecting, by calling this rpc """
        if creds.has_key(username):
            if creds[username] == password:
                authenticated = True
            else:
                authenticated = False
        else:
            authenticated = False
        if authenticated:
            self.factory.clients.append(self)
            self.factory.references[mac] = weakref.ref(self)
            return {'results': 'Authenticated as %s' % username, 'error': None}
        else:
            self.transport.loseConnection()

    def jsonrpc_sync_acq(self, data, f):
        """Save into django table data acquired from sensors and send ack to gateway"""
        if not (self in self.factory.clients):
            self.transport.loseConnection()
        print f
        return {'results': 'synced %s records' % len(data), 'error': 'null'}

    def connectionLost(self, reason):
        """ mac address is searched and all references in self.factory.clients are erased """
        for mac in self.factory.references.keys():
            if self.factory.references[mac]() == self:
                print 'Connection closed - Mac address: %s' % mac
                del self.factory.references[mac]
                self.factory.clients.remove(self)

class rpcfactory(jsonrpc.RPCFactory):
    protocol = arduinoRPC

    def __init__(self, maxLength=1024):
        self.maxLength = maxLength
        self.subHandlers = {}
        self.clients = []
        self.references = {}

""" Asynchronous remote calling to micros, simulating random calling from server """
import threading, time, random, netstring, json

class asyncGatewayCalls(threading.Thread):
    def __init__(self, rpcfactory):
        threading.Thread.__init__(self)
        self.rpcfactory = rpcfactory
        """identifiers of each micro/client connected"""
        self.remoteMacList = ['12:23:23:23:23:23:23', '167:67:67:67:67:67:67', '90:90:90:90:90:90:90']

    def run(self):
        while True:
            time.sleep(10)
            while True:
                """ call to any of three potential micros connected """
                mac = self.remoteMacList[random.randrange(0, len(self.remoteMacList))]
                if self.rpcfactory.references.has_key(mac):
                    print 'Calling %s' % mac
                    proto = self.rpcfactory.references[mac]()
                    """ requesting echo from selected micro """
                    dataToSend = netstring.encode(json.dumps({'method': 'echo_from_micro', 'params': ['plop']}))
                    proto.transport.write(dataToSend)
                    break

factory = rpcfactory(arduinoRPC)
"""start thread caller"""
r = asyncGatewayCalls(factory)
r.start()
reactor.listenTCP(7080, factory)
print "Micros remote RPC server started"
reactor.run()
You need to add enough information to each message so that the recipient can determine how to interpret it. Your requirements sound very similar to those of AMP, so you could either use AMP instead or use the same structure as AMP to identify your messages. Specifically:
In requests, put a particular key - for example, AMP uses "_ask" to identify requests. It also gives these a unique value, which further identifies that request for the lifetime of the connection.
In responses, put a different key - for example, AMP uses "_answer" for this. The value matches up with the value from the "_ask" key in the request the response is for.
Using an approach like this, you just have to look to see whether there is an "_ask" key or an "_answer" key to determine if you've received a new request or a response to a previous request.
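The scheme above can be sketched in a few lines. The "_ask"/"_answer" key names follow AMP's convention as described, but the helper functions and the counter here are illustrative, not part of txjsonrpc or AMP's actual API:

```python
import json
import itertools

# Connection-unique ids for outgoing requests (assumption: one counter
# per connection is enough for this sketch).
_counter = itertools.count(1)

def make_request(method, params):
    # Outgoing requests carry an "_ask" tag with a unique id.
    return json.dumps({"_ask": next(_counter), "method": method, "params": params})

def make_response(request, result):
    # Responses echo the request's id under "_answer".
    req = json.loads(request)
    return json.dumps({"_answer": req["_ask"], "result": result})

def classify(raw):
    # The receiver only needs to check which key is present to know
    # whether it got a new request or the answer to an earlier one.
    msg = json.loads(raw)
    if "_ask" in msg:
        return "request", msg["_ask"]
    if "_answer" in msg:
        return "response", msg["_answer"]
    return "unknown", None

req = make_request("echo_from_micro", ["plop"])
resp = make_response(req, "plop")
print(classify(req))   # ('request', 1)
print(classify(resp))  # ('response', 1)
```

On the micro side, the id also lets you match each "_answer" back to the specific outstanding "_ask" it belongs to, so several requests can be in flight at once.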
On a separate topic, your asyncGatewayCalls class shouldn't be thread-based. There's no apparent reason for it to use threads, and by doing so it is also misusing Twisted APIs in a way which will lead to undefined behavior. Most Twisted APIs can only be used in the thread in which you called reactor.run. The only exception is reactor.callFromThread, which you can use to send a message to the reactor thread from any other thread. asyncGatewayCalls tries to write to a transport, though, which will lead to buffer corruption or arbitrary delays in the data being sent, or perhaps worse things. Instead, you can write asyncGatewayCalls like this:
from twisted.internet.task import LoopingCall

class asyncGatewayCalls(object):
    def __init__(self, rpcfactory):
        self.rpcfactory = rpcfactory
        self.remoteMacList = [...]

    def run(self):
        self._call = LoopingCall(self._pokeMicro)
        return self._call.start(10)

    def _pokeMicro(self):
        while True:
            mac = self.remoteMacList[...]
            if mac in self.rpcfactory.references:
                proto = ...
                dataToSend = ...
                proto.transport.write(dataToSend)
                break

factory = ...
r = asyncGatewayCalls(factory)
r.run()
reactor.listenTCP(7080, factory)
reactor.run()
This gives you a single-threaded solution which should have the same behavior as you intended for the original asyncGatewayCalls class. Instead of sleeping in a loop in a thread in order to schedule the calls, though, it uses the reactor's scheduling APIs (via the higher-level LoopingCall class, which schedules things to be called repeatedly) to make sure _pokeMicro gets called every ten seconds.
