I'm getting some very strange behaviour in a Tornado application.
I'm running a lot of processes, each of which has its own HTTP server running on a different port.
I added a new process to the system: another TCPServer class that listens on an entirely different port and doesn't interact with the other processes at all.
I bring the new server up as follows:
def runSimService(port):
    sim = SimService()
    sim.listen(port)
    currentIOLoop = tornado.ioloop.IOLoop.current()
    currentIOLoop.start()
class SimService(TCPServer):
    def __init__(self, host='localhost', motorport=27017):
        TCPServer.__init__(self)
        self.log = logging.getLogger("tornado.access")
        # Needs to contain a User class log.
        self.con = motor.MotorClient(host, motorport)
        self.db = self.con.pDB
        self.col = self.db.pCol
Basically, that's the only code I left in while debugging. The crash I get isn't a normal Python exception crash, which is worrying me.
I'm developing on a Mac at the moment. Can someone please explain whether this crash is something wrong with my code, or whether something else is happening here?
UPDATE:
OK, this is really bizarre: it only seems to happen when I import the following:
from nltk.stem.snowball import SnowballStemmer
OR
from nltk import word_tokenize, pos_tag
or nltk in general...
Could there be some weird interaction between libraries happening? I'm stuck
Code to create processes
if __name__ == '__main__':
    AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient", max_clients=2000)
    processes = []
    processes.append(Process(target=runSimServer, args=(...,)))
    processes.append(Process(target=runServer, args=(...)))
    processes.append(Process(target=runServer, args=(...)))
    processes.append(Process(target=runServer, args=(...)))
    processes.append(Process(target=runServer, args=(...)))
    processes.append(Process(target=runServer, args=(...)))
    # write pids to pid/ directory for use with the shutdown program
    with open("pid/" + __file__.split(".")[0] + ".pid", "w") as f:
        for p in processes:
            p.start()
            f.write(str(p.pid) + "\n")
Thanks
The key part of that crash message is "multi-threaded process forked". If you are going to use both threads and processes, you must fork all your processes before creating any threads. It looks like nltk is creating some threads when you import it. If you are also using multiple processes (it doesn't look like you are from the code you quoted, but that is obviously incomplete), you must not import nltk until after all processes have been started.
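As a minimal sketch of that ordering (using the runSimService/runServer targets from the code above; the port value here is only a placeholder for the elided arguments), delay the import until after every Process has been started, or import nltk only inside the child that needs it:
import multiprocessing

if __name__ == '__main__':
    # Placeholder port; substitute whatever arguments your real script uses.
    processes = [multiprocessing.Process(target=runSimService, args=(8888,))]
    for p in processes:
        p.start()

    # Only now, after all the forks have happened, pull in nltk (or put this
    # import at the top of the one child target function that actually needs it).
    from nltk.stem.snowball import SnowballStemmer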
Related
I was working on a specific type of application in Dash which required the action triggered by pressing the button to be performed in a separate process. This process, in turn, was parallelizable, and in some cases spawned child processes for efficient computation. The configuration given in this case makes the child processes get stuck. The code below reproduces the situation:
import multiprocessing
import time
import dash
from dash import html
from dash.dependencies import Input, Output

app = dash.Dash(__name__)
app.layout = html.Div([
    html.Button(id='refresh-button', children='Button'),
    html.Div(id='dynamic-container1')
])

def run_function(i):
    print('hello')
    time.sleep(15)
    print(f'hello world {i}')

def run_process():
    num = 1
    print('hello world 00000')
    process = multiprocessing.Process(target=run_function, args=(num,))
    process.start()
    process.join()
    print('hello world')

@app.callback(Output('dynamic-container1', 'children'), Input('refresh-button', 'n_clicks'))
def refresh_state(click):
    if click == 0 or click is None:
        return None
    p = multiprocessing.Process(target=run_process)
    p.start()
    p.join()
    return None

if __name__ == '__main__':
    app.run_server(debug=True)
The output of this application when pressing the button is always the following:
Connected to pydev debugger (build 172.3968.37)
Dash is running on http://127.0.0.1:8050/
* Serving Flask app 'main' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
pydev debugger: process 6884 is connecting
hello world 00000
which means that the first process, running run_process(), was launched; however, the child process running run_function(i) was never even started. I tried to find an explanation in popular books on multiprocessing in Python and any guidance on this "chaining" of processes, but to no avail. From my understanding, the new child process running run_function(i) should occupy a separate core (if there is any free core) and not depend on the resources consumed by the parent process run_process(). Could you please explain the mechanics of this to me? I suspect that in this code run_function(i) might be coerced into consuming the same resources as run_process() does, so the system is basically preventing the new process from starting, but I would like to confirm this with more expert users of Python.
I used Python 3.7 and PyCharm Community 2017.2.3 on Windows 7 to reproduce this example.
I cannot reproduce your error, and I think this is because you are using Windows (I am on Linux with Python 3.9), so I can't find the error for you, but maybe I can give you some hints:
First: To find the error, try to reduce the code to the core of the problem (you can remove the whole Dash stuff to check whether it is the cause). In my tests the results were the same with and without Dash.
Second: Windows and Linux handle multiprocessing a bit differently.
Windows spawns the process:
The parent process starts a fresh python interpreter process. The child process will only inherit those resources necessary to run the process object’s run() method. In particular, unnecessary file descriptors and handles from the parent process will not be inherited. Starting a process using this method is rather slow compared to using fork or forkserver.
Unix systems fork the process:
The parent process uses os.fork() to fork the Python interpreter. The child process, when it begins, is effectively identical to the parent process. All resources of the parent are inherited by the child process. Note that safely forking a multithreaded process is problematic.
On Unix systems you can also use spawn (multiprocessing.set_start_method("spawn")); even with this I can't reproduce your error, but with this fork/spawn example I wanted to make clear that things are sometimes a bit different between Windows and Linux, even if the same packages are used. I think your understanding of multiprocessing is correct. (Maybe this site helps too.)
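For reference, a minimal sketch of forcing the spawn start method on Linux (set_start_method must be called at most once, inside the __main__ guard):
import multiprocessing

def worker(i):
    print(f'hello world {i}')

if __name__ == '__main__':
    # Force the Windows-like spawn behaviour on Linux.
    multiprocessing.set_start_method('spawn')
    p = multiprocessing.Process(target=worker, args=(1,))
    p.start()
    p.join()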
Third: There are some programming guidelines in the docs you should be aware of. Maybe they will help, too. And in general, the multiprocessing package does not work well with a lot of interactive Python shells (like IDLE or PyCharm); this may be better in newer versions. Maybe you should try it from a terminal to check whether that changes something.
I hope this helps a little bit.
I am really new to Python's multiprocessing, and I have only some notions about async calls, yield, etc., the most basic stuff. I came across this snippet, in which multiprocessing.Process wraps around tornado.ioloop.IOLoop.instance:
# Set up the tornado web app
app = make_app(predicted_model_queue)
app.listen(8080)
server_process = Process(target=tornado.ioloop.IOLoop.instance().start)
# Start up the server to expose the metrics.
server_process.start()
It intends to start a Tornado server as server_process, but the code does not work. I got this error:
OSError: [Errno 9] Bad file descriptor
I have no experience with either library and have no idea how to fix it. Can anyone please help me?
This is an unusual pattern - if you're writing a new app, I wouldn't recommend copying it.
If you're just trying to run an app that does this (looks like it came from here), the problem is that IOLoops cannot safely cross process boundaries (on some platforms it can sometimes work, but not always). To rewrite this code to correctly create the app and IOLoop in the child process, you could do this:
def run_server():
    app = make_app(predicted_model_queue)
    app.listen(8080)
    tornado.ioloop.IOLoop.current().start()
server_process = Process(target=run_server)
server_process.start()
This way only the predicted_model_queue is shared between the two processes.
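If you want to keep that sharing explicit, one option (a sketch; make_app and predicted_model_queue are the names from your snippet, not something I can see the definitions of) is to create the queue in the parent and pass it to the child as an argument:
import multiprocessing
import tornado.ioloop

def run_server(queue):
    app = make_app(queue)  # make_app as defined elsewhere in your project
    app.listen(8080)
    tornado.ioloop.IOLoop.current().start()

if __name__ == '__main__':
    predicted_model_queue = multiprocessing.Queue()  # created before the child starts
    server_process = multiprocessing.Process(target=run_server,
                                             args=(predicted_model_queue,))
    server_process.start()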
I have a simple Twisted web server application serving my math requests. Everything works fine (I've hidden the big pieces of code that are not related to my question):
# import section ...

class PlsPage(Resource):
    isLeaf = True

    def render_POST(self, request):
        reactor.callLater(0, self._delayedRender, request)
        return NOT_DONE_YET

    def _delayedRender(self, request):
        # some actions before
        crossval_scores = cross_validation.cross_val_score(pls1, X, y=numpy.asarray(Y), scoring=my_custom_scorer, cv=KFold(700, n_folds=700))
        # some actions after
        request.finish()

reactor.listenTCP(12000, server.Site(PlsPage()))
reactor.run()
When I try to speed up the cross_validation calculation by setting n_jobs, for example to 3:
crossval_scores = cross_validation.cross_val_score(pls1, X, y=numpy.asarray(Y), scoring=my_custom_scorer, cv=KFold(700, n_folds=700), n_jobs=3)
and after that I got exactly 3 exceptions:
twisted.internet.error.CannotListenError: Couldn't listen on any:12000: [Errno 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted.
For some reason I can't call cross_val_score with n_jobs > 1 inside _delayedRender.
Here is the traceback of the exception; for some reason reactor.listenTCP is trying to start 3 times too.
Any ideas how to get it work?
UPD1: I created a file PLS.py and moved all the code there, except the last 2 lines:
from twisted.web import server
from twisted.internet import reactor, threads
import PLS
reactor.listenTCP(12000, server.Site(PLS.PlsPage()))
reactor.run()
But the problem still persists. I also found that it occurs only on Windows; my Linux machine runs this script fine.
scikit_learn apparently uses the multiprocessing module in order to achieve concurrency. The multiprocessing module transmits data between processes using pickle, which, among other... idiosyncratic problems that it causes, will cause some of the modules imported in your parent process to be imported in your worker processes.
Your PLS_web.py "module", however, is not actually a module, it's a script; since you have put reactor.listenTCP and reactor.run at the bottom of it, it actually does stuff when you import it rather than just loading its code.
This particular error happens because your web server is being run 4 times (once for the controller process and once for each of the three jobs); each of the 3 runs beyond the first hits an error because the first server is already listening on port 12000.
You should move the reactor.run/reactor.listenTCP lines elsewhere, into a top-level script. A good rule of thumb is that these lines should never appear in the same file as a class or def statement; define your code in one place and start it up in another. Once you've moved them to a file that doesn't get imported (you might even want to put them in a file whose name isn't a legal module identifier, like run-my-server.py), then multiprocessing might be able to import all the code it needs and do its job.
Better yet, don't write those lines at all: write a Twisted application plugin and run your program with twistd. If you don't have to put the reactor.run statement anywhere, you can't put it in the wrong place :).
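As a sketch of that idea in its simplest form, a .tac file run with twistd (assuming the PLS module from the update above; the file name here is just an example, and a full twistd plugin is a slightly longer variant of the same thing):
# pls-server.tac -- run with:  twistd -ny pls-server.tac
from twisted.application import internet, service
from twisted.web import server
import PLS

# twistd looks for a module-level variable named "application".
application = service.Application("pls-server")
site = server.Site(PLS.PlsPage())
internet.TCPServer(12000, site).setServiceParent(application)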
I'm trying to write a program in Python. What I want is a script that immediately returns a friendly message to the user, but spawns a long subprocess in the background that works with several different files and writes them to a granddaddy file. I've done several tutorials on threading and processing, but no matter what I try, the program waits and waits until the subprocess is done before it displays the aforementioned friendly message to the user. Here's what I've tried:
Threading example:
#!/usr/local/bin/python
import cgi, cgitb
import time
import threading

class TestThread(threading.Thread):
    def __init__(self):
        super(TestThread, self).__init__()

    def run(self):
        time.sleep(5)
        fileHand = open('../Documents/writable/output.txt', 'w')
        fileHand.write('Big String Goes Here.')
        fileHand.close()

print 'Starting Program'
thread1 = TestThread()
#thread1.daemon = True
thread1.start()
I've read these SO posts on multithreading:
How to use threading in Python?
running multiple threads in python, simultaneously - is it possible?
How do threads work in Python, and what are common Python-threading specific pitfalls?
The last of these says that running threads concurrently in Python is actually not possible. Fair enough. Most of those posts also mention the multiprocessing module, so I've read up on that, and it seems fairly straightforward. Here are some of the resources I've found:
How to run two functions simultaneously
Python Multiprocessing Documentation Example
https://docs.python.org/2/library/multiprocessing.html
So here’s the same example translated to multiprocessing:
#!/usr/local/bin/python
import time
from multiprocessing import Process, Pipe

def f():
    time.sleep(5)
    fileHand = open('../Documents/writable/output.txt', 'w')
    fileHand.write('Big String Goes Here.')
    fileHand.close()

if __name__ == '__main__':
    print 'Starting Program'
    p = Process(target=f)
    p.start()
What I want is for these programs to immediately print ‘Starting Program’ (in the web browser) and then, a few seconds later, for a text file to show up in a directory to which I've given write privileges. However, what actually happens is that they're both unresponsive for 5 seconds, and then they print ‘Starting Program’ and create the text file at the same time. I know that my goal is possible because I've done it in PHP, using this trick:
//PHP
exec("php child_script.php > /dev/null &");
And I figured it would be possible in Python. Please let me know if I’m missing something obvious or if I’m thinking about this in the completely wrong way. Thanks for your time!
(System information: Python 2.7.6, Mac OSX Mavericks. Python installed with homebrew. My Python scripts are running as CGI executables in Apache 2.2.26)
OK, I think I found the answer. Part of it was my own misunderstanding: a Python script can't simply return a message to a client-side (AJAX) program while still executing a big process. The very act of responding to the client means that the program has finished, threads and all. The solution, then, is to use the Python version of this PHP trick:
//PHP
exec("php child_script.php > /dev/null &");
And in Python:
#Python
subprocess.call(" python worker.py > /dev/null &", shell=True)
It starts an entirely new process outside the current one, and it will continue after the current one has ended. I'm going to stick with Python because at least we're using a civilized API function to start the worker script instead of the exec function, which always made me uncomfortable.
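If shell=True also makes you uncomfortable, a variant of the same trick without the shell (a sketch; worker.py is the same hypothetical worker script as above) is to use Popen directly and discard the output yourself:
import os
import subprocess

# Launch the worker without waiting for it; redirecting output to devnull
# keeps Apache from holding the CGI response open on the child's pipes.
with open(os.devnull, 'wb') as devnull:
    subprocess.Popen(['python', 'worker.py'], stdout=devnull, stderr=devnull)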
I am writing some multiprocessing code (Python 2.6.4, WinXP) that spawns processes to run background tasks. In playing around with some trivial examples, I am running into an issue where my code just continuously spawns new processes, even though I only tell it to spawn a fixed number.
The program itself runs fine, but if I look in Windows TaskManager, I keep seeing new 'python.exe' processes appear. They just keep spawning more and more as the program runs (eventually starving my machine).
For example,
I would expect the code below to launch 2 python.exe processes: the first being the program itself, and the second being the child process it spawns. Any idea what I am doing wrong?
import time
import multiprocessing

class Agent(multiprocessing.Process):
    def __init__(self, i):
        multiprocessing.Process.__init__(self)
        self.i = i

    def run(self):
        while True:
            print 'hello from %i' % self.i
            time.sleep(1)

agent = Agent(1)
agent.start()
It looks like you didn't carefully follow the guidelines in the documentation, specifically this section where it talks about "Safe importing of main module".
You need to protect your launch code with an if __name__ == '__main__': block or you'll get what you're getting, I believe.
I believe it comes down to the multiprocessing module not being able to use os.fork() as it does on Linux, where an already-running process is basically cloned in memory. On Windows (which has no such fork()) it must start a new Python interpreter and tell it to import your main module, then execute the start/run method once that's done. If you have code at "module level", unprotected by the name check, then during that import it starts the whole sequence over again, ad infinitum.
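Concretely, the fix is just to guard the two lines at the bottom of the script in the question (a minimal sketch of the corrected ending):
if __name__ == '__main__':
    # The re-imported copy of this module in each spawned child skips this
    # block, so only the original process ever creates and starts an Agent.
    agent = Agent(1)
    agent.start()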
When I run this in Linux with python2.6, I see a maximum of 4 python2.6 processes and I can't guarantee that they're all from this process. They're definitely not filling up the machine.
Need new python version? Linux/Windows difference?
I don't see anything wrong with that. Works fine on Ubuntu 9.10 (Python 2.6.4).
Are you sure you don't have cron or something starting multiple copies of your script? Or that the spawned script is not calling anything that would start a new instance, for example as a side effect of import if your code runs directly on import?