I have a thread in Python which handles receiving OSC packets. I need to retrieve the data from OSC in my main function. How can I get data from the thread, outside of the thread?
Here's the code to demonstrate my issue. I tried with a class, but I still get "data is not defined":
import OSC
import threading
import atexit

#------OSC Server-------------------------------------#
receive_address = '127.0.0.1', 7402

# OSC Server. there are three different types of server.
s = OSC.ThreadingOSCServer(receive_address)

# this registers a 'default' handler (for unmatched messages)
s.addDefaultHandlers()

class receive:
    def printing_handler(addr, tags, data, source):
        if addr == '/data':
            self.data = data.pop(0)
        s.addMsgHandler("/data", printing_handler)
        return data

    def main(self):
        # Start OSCServer
        # Main function... I need to retrieve 'data' from the OSC THREAD here
        print "Starting OSCServer"
        st = threading.Thread(target=s.serve_forever)
        st.start()

reception = receive()
reception.main()
plouf = data.reception()
print plouf
Thanks in advance.
Use a Queue from the standard library, or use a global variable.
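A minimal sketch of the Queue approach, reusing the pyOSC setup from the question (the queue name q is illustrative):

import OSC
import Queue              # Python 2 stdlib; 'queue' on Python 3
import threading

q = Queue.Queue()         # thread-safe channel from the handler thread to main

def printing_handler(addr, tags, data, source):
    if addr == '/data':
        q.put(data[0])    # hand the value over instead of storing it on self

s = OSC.ThreadingOSCServer(('127.0.0.1', 7402))
s.addDefaultHandlers()
s.addMsgHandler("/data", printing_handler)

st = threading.Thread(target=s.serve_forever)
st.start()

while True:
    plouf = q.get()       # blocks until the handler puts a value
    print plouf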
I am trying to call a subprocess inside a subprocess in order to send information using ZMQ to a Unity application. When I call socket.recv() or time.sleep, it blocks the parent process (which is itself a child process of the main process).
import json
import zmq
import cv2                                   # used by cv2.VideoCapture below
from multiprocessing import Process
import multiprocessing as mp
from absl import app, flags, logging
from absl.flags import FLAGS

def send_unity_data(arg):
    context = zmq.Context()
    socket = context.socket(zmq.ROUTER)
    socket.bind("tcp://*:8080")
    while True:
        if(arg.poll()):
            message=arg.recv()
            x = { "x":str(message[0]), "y":str(message[1])}
            app_json = json.dumps(x)
            socket.send_string(app_json)
            message = socket.recv()
            print("Received request: %s" % message)

def streaming(detection,args):
    try:
        vid = cv2.VideoCapture(int(FLAGS.video))
    except:
        vid = cv2.VideoCapture(FLAGS.video)
    receiver1 , sender1 = mp.Pipe()
    b_proc3 = Process(target=send_unity_data, args=[receiver1])
    b_proc3.start()
    while(True):
        ...

def Main(_argv):
    receiver , sender = mp.Pipe()
    b_proc = Process(target=streaming, args=[receiver,FLAGS])
    b_proc.start()
    while(True):
        ...
I want to send positional coordinates, calculated by the streaming process, to a Unity application; if someone has a better way to do it, I can change my code as well.
Avoid any non-deterministically long blocking state in video streaming
Without deeper analysis, your code uses blocking-mode operations, which will block whenever there are no messages yet in the Context() instance's receiving queue and the code submits a call to the socket.recv() method, as in the message = socket.recv() SLOC above.
Designing multi-layer / multi-process coordination means avoiding each and every potential blocking point - ZeroMQ has .poll() methods for non-blocking or deterministic (max-latency-budget consolidated, MUX-ed) priority polling, in the style of (mainloop-alike) "Controller" policies.
Feel free to read more details about how to best use the ZeroMQ Hierarchy for your projects.
Where does the code block? Let's review the as-is state:
The multiprocessing module has different defaults and exhibits different behaviour than the tools based on the ZeroMQ messaging/signalling infrastructure. Best use ZeroMQ on both sides - there is no need to rely on a second layer of multiprocessing.Pipe tools for delivering content into ZeroMQ's operating realms. ZeroMQ's stack-less transport classes, such as inproc:// and ipc:// and even the cluster-wide tipc://, deliver far better performance, as they may enjoy zero-copy, ultimately shaving off processing latency.
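As an illustrative sketch of that point (not the poster's code; all names here are assumptions), two threads in one process can exchange messages over inproc:// with a PAIR socket pair instead of a multiprocessing.Pipe:

import threading
import zmq

ctx = zmq.Context.instance()

def worker():
    s = ctx.socket(zmq.PAIR)
    s.connect("inproc://coords")            # stack-less in-process transport
    print(s.recv_json())                    # receives the dict sent below

main_sock = ctx.socket(zmq.PAIR)
main_sock.bind("inproc://coords")           # bind before the peer connects
t = threading.Thread(target=worker)
t.start()
main_sock.send_json({"x": "1", "y": "2"})   # replaces sender.send(...) over a Pipe
t.join()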
Anyway, avoid all blocking forms of the methods being called, and design the code so that it does not depend on (not-yet, or even never) delivered messages:
def send_unity_data( arg ):                             ### an ad-hoc called
    context = zmq.Context()                             ### 1st: spends time to instantiate a .Context()
    socket = context.socket( zmq.ROUTER )               ### 2nd: spends time to instantiate a .socket()
    socket.bind("tcp://*:8080")                         ### 3rd: spends time to ask/acquire O/S port(s)
    while True:                                         ### INFINITE-LOOP----------------------------------------
        if ( arg.poll()                                 ### ?-?-?-?-?-?-? MAY BLOCK depending on arg's .poll()-method
             ):                                         ###               IF .poll()-ed:
            message = arg.recv()                        ### ?-?-?-?-?-?-? MAY BLOCK till a call to arg.recv() is finished
            x = { "x": str( message[0] ),               ### try to assemble a dict{}
                  "y": str( message[1] )                ###     based on both message structure
                  }                                     ###     and content
            app_json = json.dumps( x )                  ### try to JSON-ify the dict{}
            socket.send_string( app_json )              ### x-x-x-x-x-x-x WILL BLOCK till a socket.send_string() is finished
            message = socket.recv()                     ### x-x-x-x-x-x-x WILL BLOCK till a socket.recv() is ever finished
            print( "Received request: %s"               ### try to print
                   % message                            ###     a representation of a <message>
                   )                                    ###     on CLI
        ####################################################### This is
        # SPIN/LOOP OTHERWISE                           ### hasty to O/S resources, wasting GIL-lock latency masking
        ########################################################################################################
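A minimal non-blocking rework along those lines might look like the following sketch. It swaps the ROUTER for a REP socket to keep the send/recv pairing simple, and the 10 ms timeout is an illustrative latency budget, not a recommendation:

import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:8080")

poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)

while True:
    events = dict(poller.poll(timeout=10))  # wait at most 10 ms, never forever
    if socket in events:
        message = socket.recv()             # guaranteed not to block now
        socket.send_string("ack")           # REP must answer every request
    # ... other, non-blocking work happens on every pass of the loop ...

Note that a multiprocessing.Pipe Connection.poll() with no timeout already returns immediately, so the Pipe side of the original code need not block either.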
I am attempting to create a script that uses a value in a remote database to create a Wake-on-LAN (WOL) packet.
I need to pass the value of a callback parameter (something that I honestly don't know what to call) to a variable. I am unable to just assign it to the variable, and I can't figure out how to import it from somewhere else. I need "payload", which is printed and returned under "def message", to be saved to the variable "data".
Below is my code, and I will link the MQTT code that this relies on.
# Import standard python modules.
import sys

# Import Adafruit IO MQTT client.
from Adafruit_IO import MQTTClient
from Adafruit_IO import *

# Set to your Adafruit IO key & username below.
ADAFRUIT_IO_KEY = 'where the api key goes'
ADAFRUIT_IO_USERNAME = 'my username'  # See https://accounts.adafruit.com
                                      # to find your username.

# Set to the ID of the feed to subscribe to for updates.
FEED_ID = 'test1'

# Define callback functions which will be called when certain events happen.
def connected(client):
    # Connected function will be called when the client is connected to Adafruit IO.
    # This is a good place to subscribe to feed changes. The client parameter
    # passed to this function is the Adafruit IO MQTT client so you can make
    # calls against it easily.
    print ('Connected to Adafruit IO! Listening for {0} changes...').format(FEED_ID)
    # Subscribe to changes on a feed named DemoFeed.
    client.subscribe(FEED_ID)

def disconnected(client):
    # Disconnected function will be called when the client disconnects.
    print ('Disconnected from Adafruit IO!')
    sys.exit(1)

def message(client, feed_id, payload):
    # Message function will be called when a subscribed feed has a new value.
    # The feed_id parameter identifies the feed, and the payload parameter has
    print ('Feed {0} received new value: {1}').format(FEED_ID, payload)
    return (payload == payload)

data = message(x, x, x)

# Create an MQTT client instance.
client = MQTTClient(ADAFRUIT_IO_USERNAME, ADAFRUIT_IO_KEY)

# Setup the callback functions defined above.
client.on_connect = connected
client.on_disconnect = disconnected
client.on_message = message

# Connect to the Adafruit IO server.
client.connect()

# Start a message loop that blocks forever waiting for MQTT messages to be
# received. Note there are other options for running the event loop like doing
# so in a background thread--see the mqtt_client.py example to learn more.
if data == '1':
    print('Latest value from Test: {0}'.format(data.value))
    wol.send_magic_packet('my mac addy')
    time.sleep(3)
    # Send a value to the feed 'Test'.
    aio.send('test1', 0)
    print ("worked this time")

client.loop_blocking()
Here is the link to the other code it relies on https://github.com/adafruit/io-client-python/blob/master/Adafruit_IO/mqtt_client.py
First of all, you don't need to import the same module twice:
from Adafruit_IO import MQTTClient # this imports part MQTTClient
from Adafruit_IO import * # this imports everything, including MQTTClient again
As for your problem, you've returned (payload == payload), which will always be True. I'm not sure what you're trying to do here, but it should look something like this:
def message(client, feed_id, payload):
    ...
    print ('Feed {0} received new value: {1}').format(FEED_ID, payload)
    return payload == payload_from_database  # what you return will be saved as the variable data

data = message("Steve the happy client", 85, "£100")  # here, data will be whatever the return line evaluates to
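If the goal is simply to get each new payload out of the callback and into the main flow, a common pattern is a queue (a sketch only, assuming the same MQTTClient setup as in the question and the poster's wol helper):

import Queue                       # Python 2 stdlib; 'queue' on Python 3

updates = Queue.Queue()            # carries values out of the MQTT callback

def message(client, feed_id, payload):
    # Called from the MQTT loop whenever the subscribed feed changes.
    updates.put(payload)

client.on_message = message
client.connect()
client.loop_background()           # run the MQTT loop in a background thread

while True:
    data = updates.get()           # blocks here until a new value arrives
    if data == '1':
        wol.send_magic_packet('my mac addy')   # the poster's WOL call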
I have some code that uses the requests module to communicate with a logging API. However, requests itself, through urllib3, does logging. Naturally, I need to disable logging so that requests to the logging API don't cause an infinite loop of logs. So, in the module I do the logging calls in, I do logging.getLogger("requests").setLevel(logging.CRITICAL) to mute routine request logs.
However, this code is intended to load and run arbitrary user code. Since the python logging module apparently uses global state to manage settings for a given logger, I am worried the user's code might turn logging back on and cause problems, for instance if they naively use the requests module in their code without realizing I have disabled logging for it for a reason.
How can I disable logging for the requests module when it is executed from the context of my code, but not affect the state of the logger for the module from the perspective of the user? Some sort of context manager that silences calls to logging for code within the manager would be ideal. Being able to load the requests module with a unique __name__ so the logger uses a different name could also work, though it's a bit convoluted. I can't find a way to do either of these things, though.
Regrettably, the solution will need to handle multiple threads, so procedurally turning logging off, running the API call, and then turning it back on will not work, as global state is mutated.
I think I've got a solution for you:
The logging module is built to be thread-safe:
The logging module is intended to be thread-safe without any special work needing to be done by its clients. It achieves this through using threading locks; there is one lock to serialize access to the module's shared data, and each handler also creates a lock to serialize access to its underlying I/O.
Fortunately, it exposes the second lock mentioned through a public API: Handler.acquire() lets you acquire a lock for a particular log handler (and Handler.release() releases it again). Acquiring that lock will block all other threads that try to log a record that would be handled by this handler until the lock is released.
This allows you to manipulate the handler's state in a thread-safe way. The caveat is this: Because it's intended as a lock around the I/O operations of the handler, the lock will only be acquired in emit(). So only once a record makes it through filters and log levels and would be emitted by a particular handler will the lock be acquired. That's why I had to subclass a handler and create the SilencableHandler.
So the idea is this:
Get the topmost logger for the requests module and stop propagation for it
Create your custom SilencableHandler and add it to the requests logger
Use the Silenced context manager to selectively silence the SilencableHandler
main.py
from Queue import Queue
from threading import Thread
from usercode import fetch_url
import logging
import requests
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

class SilencableHandler(logging.StreamHandler):

    def __init__(self, *args, **kwargs):
        self.silenced = False
        return super(SilencableHandler, self).__init__(*args, **kwargs)

    def emit(self, record):
        if not self.silenced:
            super(SilencableHandler, self).emit(record)

requests_logger = logging.getLogger('requests')
requests_logger.propagate = False
requests_handler = SilencableHandler()
requests_logger.addHandler(requests_handler)

class Silenced(object):

    def __init__(self, handler):
        self.handler = handler

    def __enter__(self):
        log.info("Silencing requests logger...")
        self.handler.acquire()
        self.handler.silenced = True
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.handler.silenced = False
        self.handler.release()
        log.info("Requests logger unsilenced.")

NUM_THREADS = 2
queue = Queue()

URLS = [
    'http://www.stackoverflow.com',
    'http://www.stackexchange.com',
    'http://www.serverfault.com',
    'http://www.superuser.com',
    'http://travel.stackexchange.com',
]

for i in range(NUM_THREADS):
    worker = Thread(target=fetch_url, args=(i, queue,))
    worker.setDaemon(True)
    worker.start()

for url in URLS:
    queue.put(url)

log.info('Starting long API request...')

with Silenced(requests_handler):
    time.sleep(5)
    requests.get('http://www.example.org/api')
    time.sleep(5)

log.info('Done with long API request.')

queue.join()
usercode.py
import logging
import requests
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

def fetch_url(i, q):
    while True:
        url = q.get()
        response = requests.get(url)
        logging.info("{}: {}".format(response.status_code, url))
        time.sleep(i + 2)
        q.task_done()
Example output:
(Notice how the call to http://www.example.org/api isn't logged, and all threads that try to log requests are blocked for the first 10 seconds).
INFO:__main__:Starting long API request...
INFO:__main__:Silencing requests logger...
INFO:__main__:Requests logger unsilenced.
INFO:__main__:Done with long API request.
Starting new HTTP connection (1): www.stackoverflow.com
Starting new HTTP connection (1): www.stackexchange.com
Starting new HTTP connection (1): stackexchange.com
Starting new HTTP connection (1): stackoverflow.com
INFO:root:200: http://www.stackexchange.com
INFO:root:200: http://www.stackoverflow.com
Starting new HTTP connection (1): www.serverfault.com
Starting new HTTP connection (1): serverfault.com
INFO:root:200: http://www.serverfault.com
Starting new HTTP connection (1): www.superuser.com
Starting new HTTP connection (1): superuser.com
INFO:root:200: http://www.superuser.com
Starting new HTTP connection (1): travel.stackexchange.com
INFO:root:200: http://travel.stackexchange.com
Threading code is based on Doug Hellmann's articles on threading and queues.
I have what I would think is a pretty common use case for Gevent. I need a UDP server that listens for requests, and based on the request submits a POST to an external web service. The external web service essentially only allows one request at a time.
I would like to have an asynchronous UDP server so that data can be immediately retrieved and stored so that I don't miss any requests (this part is easy with the DatagramServer gevent provides). Then I need some way to send requests to the external web service serially, but in such a way that it doesn't ruin the async of the UDP server.
I first tried monkey patching everything and what I ended up with was a quick solution, but one in which my requests to the external web service were not rate limited in any way and which resulted in errors.
It seems like what I need is a single non-blocking worker to send requests to the external web service in serial while the UDP server adds tasks to the queue from which the non-blocking worker is working.
What I need is information on running a gevent server with additional greenlets for other tasks (especially with a queue). I've been using the serve_forever function of the DatagramServer and think that I'll need to use the start method instead, but haven't found much information on how it would fit together.
Thanks,
EDIT
The answer worked very well. I've adapted the UDP server example code with the answer from @mguijarr to produce a working example for my use case:
from __future__ import print_function
from gevent.server import DatagramServer
import gevent.queue
import gevent.monkey
import urllib

gevent.monkey.patch_all()

n = 0

def process_request(q):
    while True:
        request = q.get()
        print(request)
        print(urllib.urlopen('https://test.com').read())

class EchoServer(DatagramServer):
    __q = gevent.queue.Queue()
    __request_processing_greenlet = gevent.spawn(process_request, __q)

    def handle(self, data, address):
        print('%s: got %r' % (address[0], data))
        global n
        n += 1
        print(n)
        self.__q.put(n)
        self.socket.sendto('Received %s bytes' % len(data), address)

if __name__ == '__main__':
    print('Receiving datagrams on :9000')
    EchoServer(':9000').serve_forever()
Here is how I would do it:
Write a function taking a "queue" object as argument; this function will continuously process items from the queue. Each item is supposed to be a request for the web service.
This function could be a module-level function, not part of your DatagramServer instance:
def process_requests(q):
    while True:
        request = q.get()
        # do your magic with 'request'
        ...
In your DatagramServer, make the function run within a greenlet (like a background task):
self.__q = gevent.queue.Queue()
self.__request_processing_greenlet = gevent.spawn(process_requests, self.__q)
When you receive a UDP request in your DatagramServer instance, push the request to the queue:
self.__q.put(request)
This should do what you want. You still call 'serve_forever' on DatagramServer, no problem.
Hello, I am working on developing an RPC server based on Twisted to serve several microcontrollers which make RPC calls to a Twisted JSON-RPC server. But the application also requires that the server send information to each micro at any time, so the question is what would be good practice to prevent the response from a remote JSON-RPC call made by a micro from being confused with a server JSON-RPC request made for a user.
The consequence I am having now is that the micros are receiving bad information, because they don't know whether the netstring/JSON string coming from the socket is the response to a previous request or a new request from the server.
Here is my code:
from twisted.internet import reactor
from txjsonrpc.netstring import jsonrpc
import weakref

creds = {'user1':'pass1','user2':'pass2','user3':'pass3'}

class arduinoRPC(jsonrpc.JSONRPC):

    def connectionMade(self):
        pass

    def jsonrpc_identify(self, username, password, mac):
        """ Each client must be authenticated just after being connected, by calling this rpc """
        if creds.has_key(username):
            if creds[username] == password:
                authenticated = True
            else:
                authenticated = False
        else:
            authenticated = False

        if authenticated:
            self.factory.clients.append(self)
            self.factory.references[mac] = weakref.ref(self)
            return {'results':'Authenticated as %s'%username,'error':None}
        else:
            self.transport.loseConnection()

    def jsonrpc_sync_acq(self, data, f):
        """Save into django table data acquired from sensors and send ack to gateway"""
        if not (self in self.factory.clients):
            self.transport.loseConnection()
        print f
        return {'results':'synced %s records'%len(data),'error':'null'}

    def connectionLost(self, reason):
        """ mac address is searched and all references to self.factory.clients are erased """
        for mac in self.factory.references.keys():
            if self.factory.references[mac]() == self:
                print 'Connection closed - Mac address: %s'%mac
                del self.factory.references[mac]
                self.factory.clients.remove(self)

class rpcfactory(jsonrpc.RPCFactory):
    protocol = arduinoRPC

    def __init__(self, maxLength=1024):
        self.maxLength = maxLength
        self.subHandlers = {}
        self.clients = []
        self.references = {}

""" Asynchronous remote calling to micros, simulating random calling from server """
import threading, time, random, netstring, json

class asyncGatewayCalls(threading.Thread):

    def __init__(self, rpcfactory):
        threading.Thread.__init__(self)
        self.rpcfactory = rpcfactory
        """identifiers of each micro/client connected"""
        self.remoteMacList = ['12:23:23:23:23:23:23','167:67:67:67:67:67:67','90:90:90:90:90:90:90']

    def run(self):
        while True:
            time.sleep(10)
            while True:
                """ call to any of three potential micros connected """
                mac = self.remoteMacList[random.randrange(0,len(self.remoteMacList))]
                if self.rpcfactory.references.has_key(mac):
                    print 'Calling %s'%mac
                    proto = self.rpcfactory.references[mac]()
                    """ requesting echo from selected micro"""
                    dataToSend = netstring.encode(json.dumps({'method':'echo_from_micro','params':['plop']}))
                    proto.transport.write(dataToSend)
                    break

factory = rpcfactory(arduinoRPC)

"""start thread caller"""
r = asyncGatewayCalls(factory)
r.start()

reactor.listenTCP(7080, factory)
print "Micros remote RPC server started"
reactor.run()
You need to add enough information to each message so that the recipient can determine how to interpret it. Your requirements sound very similar to those of AMP, so you could either use AMP instead or use the same structure as AMP to identify your messages. Specifically:
In requests, put a particular key - for example, AMP uses "_ask" to identify requests. It also gives these a unique value, which further identifies that request for the lifetime of the connection.
In responses, put a different key - for example, AMP uses "_answer" for this. The value matches up with the value from the "_ask" key in the request the response is for.
Using an approach like this, you just have to look to see whether there is an "_ask" key or an "_answer" key to determine if you've received a new request or a response to a previous request.
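A rough sketch of that tagging scheme over the existing netstring/JSON transport (the key names are borrowed from AMP; the counter, the _pending table, and handle_request are illustrative):

import itertools, json, netstring

_next_id = itertools.count(1)
_pending = {}                     # maps an '_ask' id to the callback awaiting its answer

def send_request(transport, method, params, on_answer):
    ask = str(next(_next_id))     # unique per request for this connection
    _pending[ask] = on_answer
    msg = {'_ask': ask, 'method': method, 'params': params}
    transport.write(netstring.encode(json.dumps(msg)))

def send_answer(transport, ask, result):
    msg = {'_answer': ask, 'result': result}
    transport.write(netstring.encode(json.dumps(msg)))

def dispatch(raw):
    msg = json.loads(raw)
    if '_answer' in msg:          # a reply to one of our earlier requests
        _pending.pop(msg['_answer'])(msg['result'])
    elif '_ask' in msg:           # a brand-new request from the peer
        handle_request(msg)       # hypothetical handler for incoming requests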
On a separate topic, your asyncGatewayCalls class shouldn't be thread-based. There's no apparent reason for it to use threads, and by doing so it is also misusing Twisted APIs in a way which will lead to undefined behavior. Most Twisted APIs can only be used in the thread in which you called reactor.run. The only exception is reactor.callFromThread, which you can use to send a message to the reactor thread from any other thread. asyncGatewayCalls tries to write to a transport, though, which will lead to buffer corruption or arbitrary delays in the data being sent, or perhaps worse things. Instead, you can write asyncGatewayCalls like this:
from twisted.internet.task import LoopingCall

class asyncGatewayCalls(object):
    def __init__(self, rpcfactory):
        self.rpcfactory = rpcfactory
        self.remoteMacList = [...]

    def run(self):
        self._call = LoopingCall(self._pokeMicro)
        return self._call.start(10)

    def _pokeMicro(self):
        while True:
            mac = self.remoteMacList[...]
            if mac in self.rpcfactory.references:
                proto = ...
                dataToSend = ...
                proto.transport.write(dataToSend)
                break

factory = ...
r = asyncGatewayCalls(factory)
r.run()
reactor.listenTCP(7080, factory)
reactor.run()
This gives you a single-threaded solution which should have the same behavior as you intended for the original asyncGatewayCalls class. Instead of sleeping in a loop in a thread in order to schedule the calls, though, it uses the reactor's scheduling APIs (via the higher-level LoopingCall class, which schedules things to be called repeatedly) to make sure _pokeMicro gets called every ten seconds.