I am trying to write a script that will start a new IPython engine.
Using some code from the IPython source, I have:
[engines.py]
def make_engine(**kwargs):
    from IPython.parallel.apps import ipengineapp as app
    app.launch_new_instance(**kwargs)

if __name__ == '__main__':
    make_engine(file='./profiles/security/ipcontroller-engine.json',
                config='./profiles/e2.py')
If I run this with python engines.py at the command line, I run into a configuration problem. The traceback is:
Traceback (most recent call last):
File "engines.py", line 30, in <module>
make_engine(file='./profiles/security/ipcontroller-engine.json', config='./profiles/e2.py')
File "engines.py", line 20, in make_engine
app.launch_new_instance(**kwargs)
File "/Users/martin/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 562, in launch_instance
app = cls.instance(**kwargs)
File "/Users/martin/anaconda/lib/python2.7/site-packages/IPython/config/configurable.py", line 354, in instance
inst = cls(*args, **kwargs)
File "<string>", line 2, in __init__
File "/Users/martin/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 94, in catch_config_error
app.print_help()
File "/Users/martin/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 346, in print_help
self.print_options()
File "/Users/martin/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 317, in print_options
self.print_alias_help()
File "/Users/martin/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 281, in print_alias_help
cls = classdict[classname]
KeyError: 'BaseIPythonApplication'
If I do a super ugly hack like the following, it works:
def make_engine():
    from IPython.parallel.apps import ipengineapp as app
    app.launch_new_instance()

if __name__ == '__main__':
    import sys
    # OUCH this is ugly! Overwrite sys.argv before the app parses it.
    sys.argv = ['engines.py',
                '--file=./profiles/security/ipcontroller-engine.json',
                '--config=./profiles/e2.py']
    make_engine()
Why can't I pass keyword arguments to the launch_new_instance method?
What are the right keyword arguments?
Where is the right entry point for passing in my configuration options?
Thanks,
Martin
The way to instantiate a new ipengine through the IPEngineApp API is:
def make_engine():
    from IPython.parallel.apps.ipengineapp import IPEngineApp

    lines1 = "a_command()"
    app1 = IPEngineApp()
    app1.url_file = './profiles/security/ipcontroller-engine.json'
    app1.cluster_id = 'e2'
    app1.startup_command = lines1
    app1.init_engine()
    app1.start()
However, this starts the new ipengine in the current process and takes over the script's execution, so there is no way to start multiple engines from the same script with this method.
Thus I had to fall back on the subprocess module to spawn the additional ipengines:
import subprocess
import os

pids = []
for num in range(1, 3):
    args = ["ipengine",
            "--config", os.path.abspath("./profiles/e%d.py" % num),
            "--file", os.path.abspath("./profiles/security/ipcontroller-engine.json")]
    pid = subprocess.Popen(args).pid
    pids.append(pid)
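If you need to shut those engines down later, it may be more convenient to keep the Popen objects themselves rather than just the pids. A minimal sketch, using the same (hypothetical) profile paths as above:

import os
import subprocess

engines = []
for num in range(1, 3):
    args = ["ipengine",
            "--config", os.path.abspath("./profiles/e%d.py" % num),
            "--file", os.path.abspath("./profiles/security/ipcontroller-engine.json")]
    engines.append(subprocess.Popen(args))

# ... use the cluster ...

for proc in engines:
    proc.terminate()  # ask each engine process to exit
    proc.wait()       # reap the child so it does not linger as a zombie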
Related
I had a function for making a logger proxy that could be safely passed to multiprocessing workers and log back to the main logger with essentially the following code:
import logging
from multiprocessing.managers import BaseManager

class SimpleGenerator:
    def __init__(self, obj): self._obj = obj
    def __call__(self): return self._obj

def get_logger_proxy(logger):
    class LoggerManager(BaseManager): pass
    logger_generator = SimpleGenerator(logger)
    LoggerManager.register('logger', callable=logger_generator)
    logger_manager = LoggerManager()
    logger_manager.start()
    logger_proxy = logger_manager.logger()
    return logger_proxy

logger = logging.getLogger('test')
logger_proxy = get_logger_proxy(logger)
This worked great on python 2.7 through 3.7. I could pass the resulting logger_proxy to workers and they would log information, which would then be properly sent back to the main logger.
However, on python 3.8.2 (and 3.8.0) I get the following:
Traceback (most recent call last):
File "junk.py", line 20, in <module>
logger_proxy = get_logger_proxy(logger)
File "junk.py", line 13, in get_logger_proxy
logger_manager.start()
File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/managers.py", line 579, in start
self._process.start()
File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/context.py", line 283, in _Popen
return Popen(process_obj)
File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/anaconda3/envs/py3.8/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_logger_proxy.<locals>.LoggerManager'
So it seems that they changed something about ForkingPickler that makes it unable to handle the LoggerManager class defined locally inside my get_logger_proxy function.
My question is, how can I fix this to work in Python 3.8? Or is there a better way to get a logger proxy that will work in Python 3.8 the way this one did for previous versions?
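One possible workaround (a sketch only, under the assumption that the failure comes from Python 3.8 switching the default start method to spawn on macOS: spawn has to pickle the manager class, which fails for a class defined inside a function): force the fork start method for the manager's server process and leave the rest of the code unchanged.

import logging
import multiprocessing
from multiprocessing.managers import BaseManager

class SimpleGenerator:
    def __init__(self, obj): self._obj = obj
    def __call__(self): return self._obj

def get_logger_proxy(logger):
    class LoggerManager(BaseManager): pass
    logger_generator = SimpleGenerator(logger)
    LoggerManager.register('logger', callable=logger_generator)
    # Explicitly request the fork context so the server process inherits the
    # local class and the configured logger instead of pickling them.
    logger_manager = LoggerManager(ctx=multiprocessing.get_context('fork'))
    logger_manager.start()
    return logger_manager.logger()

logger = logging.getLogger('test')
logger_proxy = get_logger_proxy(logger)

Alternatively, defining LoggerManager at module level (so it can be pickled by reference) also avoids the error, though with spawn the server process no longer inherits handlers configured in the main process.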
I'm trying to run these lines of code in Atom with Python 3.6:
from pycall import CallFile, Call, Application
import sys

def call():
    c = Call('SIP/200')
    a = Application('Playback', 'hello-world')
    cf = CallFile(c, a)
    cf.spool()

if __name__ == '__main__':
    call()
But I receive this error:
Traceback (most recent call last):
File "/home/pd/gits/voiphone/main.py", line 12, in <module>
call()
File "/home/pd/gits/voiphone/main.py", line 9, in call
cf.spool()
File "/home/pd/telephonerelayEnv/lib/python3.6/site-packages/pycall/callfile.py", line 135, in spool
self.writefile()
File "/home/pd/telephonerelayEnv/lib/python3.6/site-packages/pycall/callfile.py", line 123, in writefile
f.write(self.contents)
File "/home/pd/telephonerelayEnv/lib/python3.6/site-packages/pycall/callfile.py", line 118, in contents
return '\n'.join(self.buildfile())
File "/home/pd/telephonerelayEnv/lib/python3.6/site-packages/pycall/callfile.py", line 100, in buildfile
raise ValidationError
pycall.errors.ValidationError
I would appreciate your help in solving this problem. Thank you in advance.
Looking at the source code for the validity check, it appears that the only check that could be catching you out is the one that verifies the spool directory. By default this is set to /var/spool/asterisk/outgoing, but it can be changed when you create the call file:
cf = CallFile(c, a, spool_dir='/my/asterisk/spool/outgoing')
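Applied to the call() function from the question, that looks roughly like this (a sketch; /my/asterisk/spool/outgoing is a placeholder, so point spool_dir at a directory Asterisk actually watches and that your user can write to):

from pycall import CallFile, Call, Application

def call():
    c = Call('SIP/200')
    a = Application('Playback', 'hello-world')
    # Placeholder path: use your Asterisk outgoing spool directory here.
    cf = CallFile(c, a, spool_dir='/my/asterisk/spool/outgoing')
    cf.spool()

if __name__ == '__main__':
    call()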
So I am trying to follow an online tutorial on how to work with web.py. I got it installed, but unfortunately this piece of code yields a nasty error.
My code:
import web

urls = (
    '/(.*)', 'index'
)

app = web.application(urls, globals())

class index:
    def GET(self, name):
        return "Hello", name, '. How are you today?'

if __name__ == "__main__":
    app.run()
MY ERROR:
C:\Users\User\AppData\Local\Programs\Python\Python36-32\python.exe C:/Users/User/PycharmProjects/Webprojects/main.py
Traceback (most recent call last):
File "C:/Users/User/PycharmProjects/Webprojects/main.py", line 15, in <module>
app.run()
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\site-packages\web\application.py", line 312, in run
return wsgi.runwsgi(self.wsgifunc(*middleware))
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\site-packages\web\wsgi.py", line 59, in runwsgi
return httpserver.runsimple(func, server_addr)
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\site-packages\web\httpserver.py", line 154, in runsimple
func = LogMiddleware(func)
File "C:\Users\User\AppData\Local\Programs\Python\Python36-32\lib\site-packages\web\httpserver.py", line 296, in __init__
from BaseHTTPServer import BaseHTTPRequestHandler
ModuleNotFoundError: No module named 'BaseHTTPServer'
Process finished with exit code 1
That import line won't work in Python 3, which moved BaseHTTPServer to http.server.
In your specific case, you should update web.py to a current version that works with Python 3.
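For reference, a version-agnostic form of that import looks like the sketch below; newer web.py releases do the equivalent internally, so upgrading the package is usually all you need.

try:
    # Python 3: the module was merged into http.server
    from http.server import BaseHTTPRequestHandler
except ImportError:
    # Python 2 fallback
    from BaseHTTPServer import BaseHTTPRequestHandler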
I am using multiprocessing in Python 3.4.3 to speed up my code. I have a problem with getting back my results. I have tried the following simple code, which works just fine.
import numpy
from multiprocessing import Pool
from functools import partial
from OpenDutchWordnet import Wn_grid_parser, le, les, synset, relation

def funct(arg1, value):
    return arg1 * value

if __name__ == '__main__':
    #------FOR TESTING-------
    t = [1, 2, 3, 4]
    arg1 = 4
    pool = Pool(processes=1)
    func = partial(funct, arg1)
    print("func: ", func)
    m4 = pool.map(func, t)
    print(m4)
    #------/FOR TESTING-------
Of course, I would like to run more than one process. The code I would actually like to run is the following:
import numpy
from multiprocessing import Pool
from functools import partial
from OpenDutchWordnet import Wn_grid_parser, le, les, synset, relation

def funct2(arg1, value):
    return arg1.get_relations(value)

if __name__ == '__main__':
    myparser = Wn_grid_parser(Wn_grid_parser.odwn)
    l_sensesofwoord = myparser.lemma_get_generator("man")
    sense = l_sensesofwoord[0]
    synsetid_sense = sense.get_synset_id()
    t = ["has_hyperonym", "has_holonym"]
    arg1 = myparser.synsets_find_synset(synsetid_sense)
    pool = Pool(processes=2)
    f = partial(funct2, arg1)
    print("f is: ", f)
    m1 = pool.map(f, t)
When running this code, I get the following error message:
f is: functools.partial(<function funct2 at 0x00000000046D5378>, <synset.Synset object at 0x000000005011DDA0>)
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Python34\lib\multiprocessing\pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "C:\Python34\lib\multiprocessing\pool.py", line 44, in mapstar
return list(map(*args))
File "C:\Users\UTRSB\AppData\Local\Continuum\Anaconda3\mycode\multi.py", line 14, in funct2
return numpy.asarray(arg1.get_relations(value))
File "C:\Python34\lib\site-packages\OpenDutchWordnet\synset.py", line 98, in get_relations
for relation_el in self.synset_el.iterfind(xml_query)]
File "C:\Python34\lib\site-packages\OpenDutchWordnet\synset.py", line 97, in <listcomp>
return [Relation(relation_el)
File "C:\Python34\lib\site-packages\lxml\_elementpath.py", line 156, in select
for elem in result:
File "C:\Python34\lib\site-packages\lxml\_elementpath.py", line 88, in select
for elem in result:
File "C:\Python34\lib\site-packages\lxml\_elementpath.py", line 89, in select
for e in elem.iterchildren(tag):
File "lxml.etree.pyx", line 1363, in lxml.etree._Element.iterchildren (src\lxml\lxml.etree.c:50501)
File "lxml.etree.pyx", line 2730, in lxml.etree.ElementChildIterator.__cinit__ (src\lxml\lxml.etree.c:66739)
File "apihelpers.pxi", line 24, in lxml.etree._assertValidNode (src\lxml\lxml.etree.c:14133)
AssertionError: invalid Element proxy at 53353160
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:/Users/UTRSB/AppData/Local/Continuum/Anaconda3/mycode/multi.py", line 52, in <module>
m1=pool.map(f,t)
File "C:\Python34\lib\multiprocessing\pool.py", line 260, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "C:\Python34\lib\multiprocessing\pool.py", line 599, in get
raise self._value
AssertionError: invalid Element proxy at 53353160
I have also tried another way: result = pool.apply_async(geefAlleGloss, [p]). This works just fine, but when I use answer = result.get() to obtain the results, I end up with the same error.
I think the error is somewhere in the map function. At first I thought it had something to do with the modules I import from OpenDutchWordnet. But since the partial function works, the error should be caused by the get() and/or map() function.
I would appreciate any help.
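For what it is worth, one possible direction (a sketch only, untested, assuming the real problem is that the lxml-backed Synset held inside the partial cannot safely cross the process boundary): pass only plain strings to the workers and have each worker build its own parser and re-resolve the synset. Whatever the worker returns must itself be picklable, so this sketch returns only a count.

from multiprocessing import Pool
from OpenDutchWordnet import Wn_grid_parser

_synset = None  # per-worker synset, created in init_worker

def init_worker(synset_id):
    # Each worker process builds its own parser and looks the synset up again,
    # so no lxml-backed object is ever pickled and shipped between processes.
    global _synset
    parser = Wn_grid_parser(Wn_grid_parser.odwn)
    _synset = parser.synsets_find_synset(synset_id)

def count_relations(relation_name):
    # Return a plain int; the Relation objects themselves wrap lxml elements
    # and would hit the same problem on the way back to the main process.
    return len(_synset.get_relations(relation_name))

if __name__ == '__main__':
    myparser = Wn_grid_parser(Wn_grid_parser.odwn)
    sense = myparser.lemma_get_generator("man")[0]
    synset_id = sense.get_synset_id()

    t = ["has_hyperonym", "has_holonym"]
    with Pool(processes=2, initializer=init_worker, initargs=(synset_id,)) as pool:
        print(pool.map(count_relations, t))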
I am trying to make a very simple application that allows people to define their own little Python scripts within the application. I want to execute the code in a new process to make it easy to kill later. Unfortunately, Python keeps giving me the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 540, in runfile
execfile(filename, namespace)
File "/home/skylion/Documents/python_exec test.py", line 19, in <module>
code_process = Process(target=exec_, args=(user_input_code))
File "/usr/lib/python2.7/multiprocessing/process.py", line 104, in __init__
self._args = tuple(args)
TypeError: 'code' object is not iterable
My code is posted below
user_input_string = '''
import os
world_name='world'
robot_name='default_body + os.path.sep'
joint_names=['hingejoint0', 'hingejoint1', 'hingejoint2', 'hingejoint3', 'hingejoint4', 'hingejoint5', 'hingejoint6', 'hingejoint7', 'hingejoint8']
print(joint_names)
'''
def exec_(arg):
    exec(arg)
user_input_code = compile(user_input_string, 'user_defined', 'exec')
from multiprocessing import Process
code_process = Process(target=exec_, args=(user_input_code))
code_process.start()
What am I missing? Is there something wrong with my user_input_string? With my compile options? Any help would be appreciated.
I believe args must be a tuple. To create a single-element tuple, add a comma like so: args=(user_input_code,)
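Applied to the snippet from the question, the fix is just that trailing comma; without it, args=(user_input_code) is only a parenthesized expression, so Process receives the code object itself and tries to iterate over it. A minimal sketch:

from multiprocessing import Process

# The trailing comma makes this a one-element tuple rather than a plain
# parenthesized expression.
code_process = Process(target=exec_, args=(user_input_code,))
code_process.start()
code_process.join()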