Autocompletion of function parameters in (I)Python for XMLRPC-Client

I am using xmlrpclib to interactively remote-control some lab equipment with IPython. I love IPython's autocompletion features and I would also like to have them via xmlrpclib. So far I have managed to accomplish method-name completion and method help with the following approach.
A little test server simulating a motorized stage (this is only useful, if you want to test my client code):
import time  # needed for StageSimulation
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler

class StageSimulation:
    """ A very simple simulation of a motorized linear stage """
    v = 5         # mm/s -- speed of stage
    goalPos = 0   # mm   -- goal position in mm
    goalTime = 0  # sec  -- time when goal position should be reached

    def getPos(self):
        """ Return actual position of stage """
        delta_t = self.goalTime - time.time()  # remaining moving time
        if delta_t <= 0:  # stage is not moving
            return self.goalPos
        else:             # stage is moving
            return self.goalPos - self.v*delta_t

    def move(self, goalPos, v=5):
        """ Move stage to position ``goalPos`` with speed ``v`` """
        p0 = self.getPos()
        delta_p = goalPos - p0
        if v*delta_p < 0:  # sign of v wrong
            v *= -1
        self.goalTime = time.time() + delta_p/v
        self.goalPos, self.v = goalPos, v

# Restrict to a particular path (see the Python docs)
class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/RPC2',)

if __name__ == '__main__':
    """ Instantiate server """
    host, hport = "localhost", 8787
    LogXMLRPCRequests = False
    server = SimpleXMLRPCServer((host, hport), allow_none=True,
                                requestHandler=RequestHandler)
    server.register_introspection_functions()
    StS = StageSimulation()
    server.register_instance(StS)
    try:
        server.serve_forever()
    except KeyboardInterrupt:
        print("Terminated server.")
My client instantiates an object in which all known methods are registered:
import xmlrpclib

class XMLRPCClientObject(object):
    """XMLRPC client which allows tab completion on server instances.

    This is achieved by reading all method names from the server and
    using them to generate local wrappers to them.
    """
    def __init__(self, url):
        """ Connect to server at ``url`` and generate methods """
        self.SP = xmlrpclib.ServerProxy(url)
        self.generateMethods()

    def generateMethods(self):
        """ Read names of server methods and use them for local wrappers """
        SP = self.SP
        for n in SP.system.listMethods():
            f = getattr(SP, n)
            f.__doc__ = SP.system.methodHelp(n)  # add doc string
            f.__name__ = n  # needed to make help() work
            setattr(self, n, f)  # register as local method

if __name__ == "__main__":
    """ main function connects to test server """
    S = XMLRPCClientObject("http://localhost:8787")
In addition to the method name completion, I would also like to have parameter name completion as in S.move(goal<TAB>. An approach would be to utilize xmlrpc.ServerProxy.system.methodSignature(), but system_methodSignature() is not supported by SimpleXMLRPCServer. Does anybody have an idea how to retrieve the signatures of the server methods?

I tend to think that Python's inspect module can help; it provides the basic building blocks for the features you want.
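As a sketch of that direction (untested against the original setup; `method_parameters` is a name I made up, not part of the standard introspection API): the server could expose a small function that uses `inspect` to report each registered method's parameter names and defaults, which the client can then use to build wrappers with real signatures. The extraction itself needs no network:

```python
import inspect

class StageSimulation:
    # stand-in for the instance registered on the server
    def move(self, goalPos, v=5):
        """ Move stage to position ``goalPos`` with speed ``v`` """

def method_parameters(instance, name):
    """Return (name, default) pairs for a registered method's parameters."""
    func = getattr(instance, name)  # bound method, so ``self`` is excluded
    sig = inspect.signature(func)
    return [(p.name,
             None if p.default is inspect.Parameter.empty else p.default)
            for p in sig.parameters.values()]

print(method_parameters(StageSimulation(), "move"))  # [('goalPos', None), ('v', 5)]
```

On the server you could register it next to the introspection functions, e.g. `server.register_function(lambda n: method_parameters(StS, n), 'system.methodParameters')` (a hypothetical name), and on the client attach a matching signature to each generated wrapper so IPython can complete parameter names. Note that `inspect.signature` is Python 3; on Python 2 the rough equivalent is `inspect.getargspec`.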


Reading memory with Pymem (MultiThreading problem?)

Hi, I am currently working on a reinforcement-learning project using Python. The problem is that I need to read a memory address to access game variables, for example the speed of the car or the track progress. Everything was working fine, and I also tried creating two threads that simultaneously access the memory address. But when my PPO agent worker creates a new environment (which contains the Speedometer class) and tries calling return_speed_mph(), the program gives an error:
raise pymem.exception.MemoryReadError(address, struct.calcsize('i'), e.error_code)
pymem.exception.MemoryReadError: Could not read memory at: 9521188, length: 4 - GetLastError: 6
The Speedometer class looks like this:
from pymem import *
from pymem.process import *
from multiprocessing import Lock, Value

class Speedometer:
    mem: Pymem
    module: pymem.process
    shared_speed: Value
    lock: Lock
    offsets = [0xC, 0xC, 0x38, 0xC, 0x54]

    def __init__(self):
        self.mem = Pymem("speed.exe")  # must exist before its handle is used below
        self.module = module_from_name(self.mem.process_handle, "speed.exe").lpBaseOfDll
        self.lock = Lock()
        self.shared_speed = Value('i', 0)

    def return_speed_mph(self):
        with self.lock:
            result = self.mem.read_int(
                self.get_pointer_address(self.module + 0x00514824, self.offsets))
            return result

    def get_pointer_address(self, base, offsets):
        addr = self.mem.read_int(base)
        for offset in offsets:
            if offset != offsets[-1]:
                addr = self.mem.read_int(addr + offset)
        addr = addr + offsets[-1]
        return addr
I temporarily fixed the problem by moving the initialization of mem and module directly into the return_speed_mph() function, but I think that is memory-inefficient:
def return_speed_mph(self):
    with self.lock:
        self.mem = Pymem("speed.exe")
        self.module = module_from_name(self.mem.process_handle, "speed.exe").lpBaseOfDll
        result = self.mem.read_int(
            self.get_pointer_address(self.module + 0x00514824, self.offsets))
        return result
Is there any way to fix it somehow without moving the initialization of the module and mem directly into the return_speed_mph() function?
Thanks in advance. :D
(The Library I am using is called Pymem)
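One thing worth checking (an educated guess, not verified against Pymem internals): GetLastError 6 is ERROR_INVALID_HANDLE on Windows, and the multiprocessing primitives suggest the environment may be constructed in a different process than the one that opened the handle. A process handle opened in the parent is not valid in a child, which would explain why re-opening inside return_speed_mph() "fixes" it. A cheaper middle ground is to re-open lazily, once per process, instead of once per call. A sketch (`LazyHandle` and `open_handle` are hypothetical names; `open_handle` stands in for `Pymem("speed.exe")`):

```python
import os

class LazyHandle:
    """Cache a per-process resource, re-opening it the first time
    each process touches it instead of on every call."""
    def __init__(self, open_handle):
        self._open = open_handle
        self._pid = None
        self._handle = None

    def get(self):
        pid = os.getpid()
        if self._handle is None or self._pid != pid:
            self._handle = self._open()  # (re)open in this process
            self._pid = pid
        return self._handle
```

return_speed_mph() would then call handle.get() instead of touching self.mem directly, paying the re-open cost only the first time each process uses it.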

How to avoid classmethod side effect using Celery?

I am running a class-based app using Celery, but I am noticing that when two processes run simultaneously, certain staticmethods in the class are not acting independently. Here is the app invocation:
import os
from PriceOptimization.celery import app
from .Tasks_Sim.sim import Sim, final_report

@app.task(name='Simulations.tasks.scoring')
def simulation(clients, deciles):
    s = Sim(**sim_params)
    market_by_year = s.control_flow(my_save_path)
    report = final_report(market_by_year)
    return report
Within my Sim app, I have a classmethod that creates ids for my instances as follows:
class Company:
    company_id = 0

    @classmethod
    def set_company_no(cls):
        cls.company_id += 1
        return cls.company_id - 1

    def __init__(self, companies, year):
        self._company_id = Company.set_company_no()
        self._company_year = year
Usually the first task instantiated completes successfully, but on the next invocation I get a list index out of range error, which suggests that my workers are not independent and that my company_id counter does not start from zero on the next invocation. How can I prevent this side effect and have each run of the app be independent?
For now, I have elected to make my process run sequentially using a redis lock:
import time

from settings import REDIS_INSTANCE

REDIS_LOCK_KEY = 'ABC'

@app.task(name='Simulations.tasks.scoring')
def simulation(clients, deciles):
    timeout = (60 * 5)
    have_lock = False
    my_lock = REDIS_INSTANCE.lock(REDIS_LOCK_KEY, timeout=timeout)
    while not have_lock:
        have_lock = my_lock.acquire(blocking=False)
        if have_lock:
            print('unique process commencing...')
            s = Sim(**sim_params)
            market_by_year = s.control_flow(my_save_path)
            report = final_report(market_by_year)
        else:
            print('waiting for lock to commence...')
            time.sleep(10)
    my_lock.release()
    return report
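The underlying issue is that Company.company_id lives on the class object, which persists for the lifetime of a worker process, so tasks that reuse the same process keep incrementing it; a lock only serializes access, it does not reset the counter. A sketch of an alternative (these Sim and Company classes are simplified stand-ins for the real ones) that scopes the counter to a single simulation run:

```python
import itertools

class Company:
    def __init__(self, company_id, year):
        self._company_id = company_id
        self._company_year = year

class Sim:
    """Each Sim instance owns its own id counter, so two tasks running
    in the same worker process cannot interfere with each other."""
    def __init__(self):
        self._ids = itertools.count(0)

    def new_company(self, year):
        return Company(next(self._ids), year)

print(Sim().new_company(2020)._company_id)  # 0, for every fresh Sim
```

With per-instance state, each task's `Sim(**sim_params)` starts counting from zero regardless of what other tasks in the worker have done.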

How to set up bottlepy with threading for processing in background

I'm trying to set up a web service that processes a rendering in the background, which takes about a minute. While the rendering is in progress I want the server to be able to handle requests in parallel, returning Id {} not found or the result if there is one.
The tutorials I found mainly handle simple requests without much processing (http://bottlepy.org/docs/dev/async.html, using sleep to emulate processing), so I'm not quite sure how to implement threading: should the bottlepy routes be set up in a thread?
From http://bottlepy.org/docs/dev/tutorial_app.html#server-setup I know that the default server of bottlepy is single-threaded, so I tried switching to another server (PasteServer).
from bottle import Bottle, run, PasteServer
from service import startWithDirectArgs, default_out, default_out_dir
import threading

class BakingThread(threading.Thread):
    # lock = threading.Lock()
    isProcessRunning = False
    resultDict = {}
    currentId = 0

    def __init__(self, bakingId: str, args):
        super().__init__()
        self.bakingId = bakingId
        self.args = args

    def run(self):
        # with BakingThread.lock:
        if BakingThread.isProcessRunning:
            return False
        BakingThread.isProcessRunning = True
        print("\033[1;32;49m" +
              "Starting baking process with id {}".format(self.bakingId) +
              "\033[0;37;49m")
        result = startWithDirectArgs(self.args)
        # result = calculatePi(100_0000_00)
        BakingThread.resultDict[self.bakingId] = str(result)
        BakingThread.isProcessRunning = False
        print("\033[1;32;49m" +
              "Finished baking process with id {}".format(self.bakingId) +
              "\033[0;37;49m")
        return result

def getUniqueId() -> str:
    BakingThread.currentId += 1
    return str(BakingThread.currentId)

def calculatePi(n: int) -> float:
    halfPi = 1.0
    zaehler = 2.0  # numerator
    nenner = 1.0   # denominator
    for i in range(n):
        halfPi *= zaehler / nenner
        if i % 2:
            zaehler += 2.0
        else:
            nenner += 2.0
    return 2.0 * halfPi

app = Bottle()

@app.route("/bakeFile/<fileParam>")
def bakeFile(fileParam: str):
    # args = {"file": fileParam, "out": default_out_dir + default_out}
    args = {
        "file": "build/igmodels/AOMaps/Scene.igxc",  # fileParam,
        "out": default_out_dir + default_out
    }
    print(args)
    cid = getUniqueId()
    bt = BakingThread(cid, args)
    bt.start()
    bt.join()

@app.route("/bakeUrl/<urlParam>")
def bakeUrl(urlParam: str):
    args = {"url": urlParam, "out": default_out_dir + default_out}
    print(args)
    cid = getUniqueId()
    bt = BakingThread(cid, args)
    bt.start()
    bt.join()

@app.route("/pullState/<bakingId>")
def pullState(bakingId: str):
    print("\033[1;33;49m" + "pullState id {}".format(BakingThread.currentId) +
          "\033[0;37;49m")
    result = BakingThread.resultDict.get(bakingId,
                                         "Id {} not found".format(bakingId))
    return result

app.run(host="localhost", port=8080, debug=True, server=PasteServer)
I expect to be able to run http://localhost:8080/bakeFile/3dGeometryFileName and, while the rendering is running, I expect calling http://localhost:8080/pullState/1 to respond with Id 1 not found. After the rendering is done, the same call should return a result.
Edit: The rendering process is implemented in C++ and bound with PyBind11. The Global Interpreter Lock (GIL) prevented concurrent execution of the rendering and the web serving, so I added py::gil_scoped_release release; before and py::gil_scoped_acquire acquire; after the expensive calculations in the C++ code. In the code above I added a snippet that calculates pi directly in Python, without C++/PyBind11, so that the BottlePy developer could point me to that GIL issue. (Thx Marcel)
Solved it. The fix (releasing the GIL around the expensive C++ calls) is described in the edit above.
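For pure-Python workloads, the start-and-poll pattern itself can be sketched without Bottle or a hand-rolled Thread subclass, using concurrent.futures (`JobStore` is a made-up name; in the app above, submit would be called from the bake routes and poll from pullState):

```python
import concurrent.futures
import itertools

class JobStore:
    """Start jobs in a thread pool and let callers poll for results by id."""
    def __init__(self):
        self._pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
        self._jobs = {}
        self._ids = itertools.count(1)

    def submit(self, fn, *args):
        job_id = str(next(self._ids))
        self._jobs[job_id] = self._pool.submit(fn, *args)
        return job_id  # return immediately; do NOT join/block here

    def poll(self, job_id):
        fut = self._jobs.get(job_id)
        if fut is None:
            return "Id {} not found".format(job_id)
        if not fut.done():
            return "still baking"
        return fut.result()
```

The crucial difference from the code above is that submit does not join the worker, so the route returns at once; the bt.join() in bakeFile otherwise blocks the request until baking finishes. This only buys parallelism while the workload actually releases the GIL (I/O, or C++ code wrapped in gil_scoped_release, as in the edit).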

Twisted: cache list of instances

I am currently working through the Twisted developer guides and would like some information/guidance on extending the use case of one of the provided examples, namely caching instances of objects:
The example code is:
cache_classes.py
from twisted.spread import pb

class MasterDuckPond(pb.Cacheable):
    def __init__(self, ducks):
        self.observers = []
        self.ducks = ducks

    def count(self):
        print "I have [%d] ducks" % len(self.ducks)

    def addDuck(self, duck):
        self.ducks.append(duck)
        for o in self.observers: o.callRemote('addDuck', duck)

    def removeDuck(self, duck):
        self.ducks.remove(duck)
        for o in self.observers: o.callRemote('removeDuck', duck)

    def getStateToCacheAndObserveFor(self, perspective, observer):
        self.observers.append(observer)
        # you should ignore pb.Cacheable-specific state, like self.observers
        return self.ducks  # in this case, just a list of ducks

    def stoppedObserving(self, perspective, observer):
        self.observers.remove(observer)

class SlaveDuckPond(pb.RemoteCache):
    # This is a cache of a remote MasterDuckPond
    def count(self):
        return len(self.cacheducks)

    def getDucks(self):
        return self.cacheducks

    def setCopyableState(self, state):
        print " cache - sitting, er, setting ducks"
        self.cacheducks = state

    def observe_addDuck(self, newDuck):
        print " cache - addDuck"
        self.cacheducks.append(newDuck)

    def observe_removeDuck(self, deadDuck):
        print " cache - removeDuck"
        self.cacheducks.remove(deadDuck)

pb.setUnjellyableForClass(MasterDuckPond, SlaveDuckPond)
cache_sender.py
#!/usr/bin/env python
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
from twisted.spread import pb, jelly
from twisted.python import log
from twisted.internet import reactor
from cache_classes import MasterDuckPond

class Sender:
    def __init__(self, pond):
        self.pond = pond

    def phase1(self, remote):
        self.remote = remote
        d = remote.callRemote("takePond", self.pond)
        d.addCallback(self.phase2).addErrback(log.err)

    def phase2(self, response):
        self.pond.addDuck("ugly duckling")
        self.pond.count()
        reactor.callLater(1, self.phase3)

    def phase3(self):
        d = self.remote.callRemote("checkDucks")
        d.addCallback(self.phase4).addErrback(log.err)

    def phase4(self, dummy):
        self.pond.removeDuck("one duck")
        self.pond.count()
        self.remote.callRemote("checkDucks")
        d = self.remote.callRemote("ignorePond")
        d.addCallback(self.phase5)

    def phase5(self, dummy):
        d = self.remote.callRemote("shutdown")
        d.addCallback(self.phase6)

    def phase6(self, dummy):
        reactor.stop()

def main():
    master = MasterDuckPond(["one duck", "two duck"])
    master.count()
    sender = Sender(master)
    factory = pb.PBClientFactory()
    reactor.connectTCP("localhost", 8800, factory)
    deferred = factory.getRootObject()
    deferred.addCallback(sender.phase1)
    reactor.run()

if __name__ == '__main__':
    main()
cache_receiver.py:
#!/usr/bin/env python
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
from twisted.application import service, internet
from twisted.internet import reactor
from twisted.spread import pb
import cache_classes

class Receiver(pb.Root):
    def remote_takePond(self, pond):
        self.pond = pond
        print "got pond:", pond  # a DuckPondCache
        self.remote_checkDucks()

    def remote_checkDucks(self):
        print "[%d] ducks: " % self.pond.count(), self.pond.getDucks()

    def remote_ignorePond(self):
        # stop watching the pond
        print "dropping pond"
        # gc causes __del__ causes 'decache' msg causes stoppedObserving
        self.pond = None

    def remote_shutdown(self):
        reactor.stop()

application = service.Application("copy_receiver")
internet.TCPServer(8800, pb.PBServerFactory(Receiver())).setServiceParent(
    service.IServiceCollection(application))
This example seems pretty straightforward, the MasterDuckPond is controlled by the sending side, and the SlaveDuckPond is a cache that tracks changes to the master.
However, how would I go about updating/caching an entire list of instanced objects?
Don't use PB. The protocol is overly complicated for at least 99% of use-cases (meaning you will have to work a lot harder than necessary to understand, implement, and maintain your project). There are no other implementations of it and there probably never will be (which means you're stuck with Python and Twisted - which is not to say those things are bad, but there are a lot of other things out there that may also be good). The level of maintenance is minimal (so if you find bugs, there's a small chance someone will help you get them fixed but that's about all you can expect - and you may not even get that if the bugs aren't trivial).
Give HTTP a try. It can do a lot. If you combine it with a data format like Cap'n Proto, CBOR, or even JSON, it can do even more.
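As a sketch of that direction (stdlib-only; `Duck`, `dump_pond`, and `load_pond` are invented names): serialize the whole list of instances on the server side and rebuild it on the client, instead of relying on PB's jelly machinery. Re-fetching the list, either by polling or on a change notification, then plays the role of the observe_* callbacks:

```python
import json

class Duck:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight

    def to_dict(self):
        return {"name": self.name, "weight": self.weight}

    @classmethod
    def from_dict(cls, d):
        return cls(d["name"], d["weight"])

def dump_pond(ducks):
    """Server side: the body of an HTTP response for the whole list."""
    return json.dumps([d.to_dict() for d in ducks])

def load_pond(body):
    """Client side: rebuild the cached list of instances from the body."""
    return [Duck.from_dict(d) for d in json.loads(body)]
```

The explicit to_dict/from_dict pair is more typing than PB's automatic caching, but it works with any HTTP client in any language, which addresses the lock-in concern above.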

Python threading for wheel encoders on a Robot

I'm writing the code for a robot which my college is entering into a competition. I'm currently trying to build some wheel encoders using reflectance sensors. I realised a while back that I would probably need to use threading to achieve this, seeing as the robot needs to monitor both the left and right encoders at the same time. The code below is what I have so far:
from __future__ import division
import threading
import time
from sr import *

R = Robot()

class Encoder(threading.Thread):
    def __init__(self, motor, pin, div=16):
        self.motor = motor
        self.pin = pin
        self.div = div
        self.count = 0
        threading.Thread.__init__(self)

    def run(self):
        while True:
            wait_for(R.io[0].input[self.pin].query.d)
            self.count += 1

    def rotations(self, angle, start_speed=50):
        seg = 360/self.div
        startcount = self.count
        current_dist = angle  # distance away from target
        R.motors[self.motor].target = start_speed
        while current_dist > 360:
            newcount = self.count - startcount
            current_dist = angle - newcount*seg
        R.motors[self.motor].target = 50
        while abs(current_dist) > seg/2:
            newcount = self.count - startcount
            current_dist = angle - newcount*seg
            current_speed = start_speed * current_dist / 360
            if current_speed < 5:
                R.motors[self.motor].target = 5
            else:
                R.motors[self.motor].target = current_speed
        R.motors[self.motor].target = 0

WheelLeft = Encoder(0, 0)
WheelLeft.start()
WheelRight = Encoder(1, 3)
WheelRight.start()
WheelRight.rotations(720)
WheelLeft.rotations(720)
The sr module is provided by Southampton University, who are running the competition. It allows us to interact with the robot's hardware.
Now, the threads which get created seem to allow the two reflectance sensors to be monitored separately. This bit of code: R.io[0].input[self.pin].query.d works out whether the value coming from the reflectance sensor has changed. The 'rotations' method turns the wheel through a certain angle by constantly checking how many degrees the wheel has already turned through, and slowing it down as it reaches the end. I would like both wheels to start turning when I run the program, and then slow down and stop when they have gone through 2 rotations. Currently though, when I run the program, one wheel starts turning and slows down and stops, followed by the other wheel. It seems to me like the 'rotations' method is not running in a thread, like the 'run' method is. Is it only the code under the 'run' method that runs in a thread, or is it the whole class?
If it helps, I've been following this tutorial: http://www.devshed.com/c/a/Python/Basic-Threading-in-Python/1/
Also, I would like to know why it is possible to start a thread with just Encoder(0,0).start(). Why do you not have to create an object from the class first (e.g. thread = Encoder(0,0)) for a new thread to be created?
Sorry if the terminology I've used isn't up to scratch; as you can probably tell, I'm quite new to threading, and to programming in general.
Encoder(0,0).start() is a call to the method that starts the thread. In turn, this method calls your run implementation, which doesn't use the rotations method. If you want it to, you have to call rotations from the while loop of run.
With Thread = Encoder(0,0).start() you store the value returned by that call (which is None), but to obtain it the new thread has to be started first anyway.
The run method is the thread of execution.
If you want something else to happen in that thread, you have to call it from Encoder.run().
Oh, and Encoder(0,0).start() does create an object. Just because you didn't bind that object to a local variable doesn't mean it doesn't exist. If it didn't exist you couldn't call its start method.
You have to be very careful about its lifetime though, without a local variable keeping it alive.
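A minimal demonstration of that point (a sketch, independent of the sr module): only the body of run() executes in the new thread; any other method runs in whichever thread calls it:

```python
import threading

class Worker(threading.Thread):
    def __init__(self):
        super(Worker, self).__init__()
        self.run_thread = None

    def run(self):
        # executes in the new thread started by .start()
        self.run_thread = threading.current_thread().name

    def other(self):
        # executes in the *calling* thread, not the worker thread
        return threading.current_thread().name

w = Worker()
w.start()
w.join()
print(w.run_thread)  # e.g. "Thread-1"
print(w.other())     # "MainThread" (when called from the main thread)
```

This is exactly why rotations() above runs sequentially in the main thread: it is an ordinary method call, so only the encoder-counting loop in run() is concurrent.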
You can extend SR's Poll class so that it can be used in a wait_for:
import threading

import poll

class Encoder(poll.Poll):
    def __init__(self, motor, pin, div=16):
        self.motor = motor
        self.pin = pin
        self.div = div
        self.count = 0
        self.target_reached = True  # no motion in progress yet
        # kick off a thread to count the encoder ticks
        self.counter_thread = threading.Thread(target=self._update_count)
        self.counter_thread.start()

    def _update_count(self):
        while True:
            wait_for(R.io[0].input[self.pin].query.d)
            self.count += 1

    def rotations(self, angle, start_speed=50):
        if not self.target_reached:
            raise Exception("Last motion still in progress!")
        self.target_reached = False
        # kick off a thread to control the speed
        self.angle_thread = threading.Thread(
            target=self._update_speeds,
            args=(angle, start_speed)
        )
        self.angle_thread.start()

    def _update_speeds(self, angle, start_speed):
        # control the motor speed as before
        ...
        # let things know we're done
        self.target_reached = True

    # implement poll methods
    def eval(self):
        return (self.target_reached, None)
Which then lets you do:
wheelLeft = Encoder(0,0)
wheelRight = Encoder(1,3)
wheelRight.rotations(720)
wheelLeft.rotations(720)
wait_for(wheelRight & wheelLeft)
Note that an encoder isn't itself a thread: it's a "has a" relationship, not an "is a" relationship.
