I have an application that uses PyQt4 and python-twisted to maintain a connection to another program. I am using "qt4reactor.py" as found here. This is all packaged up using py2exe. The application works wonderfully for 99% of users, but one user has reported that networking is failing completely on his Windows system. No other users report the issue, and I cannot replicate it on my own Windows VM. The user reports no abnormal configuration.
The debugging logs show that the reactor.connectTCP() call is executing immediately, even though the reactor hasn't been started yet! There's no mistaking run order because this is a single-threaded process with 60 sec of computation and multiple log messages between this line and when the reactor is supposed to start.
There's a lot of code, so I am only putting in pseudo-code, hoping that there is a general solution for this issue. I will link to the actual code below it.
import qt4reactor
qt4reactor.install()
# Start setting up main window
# ...
from twisted.internet import reactor
# Separate listener for detecting/processing multiple instances
self.InstanceListener = ListenerFactory(...)
reactor.listenTCP(LISTEN_PORT, self.InstanceListener)
# The active/main connection
self.NetworkingFactory = ClientFactory(...)
reactor.connectTCP(ACTIVE_IP, ACTIVE_PORT, self.NetworkingFactory)
# Finish setting up main window
# ...
from twisted.internet import reactor
reactor.runReturn()
The code is nested throughout the Armory project files. ArmoryQt.py (containing the above code) and armoryengine.py (containing the ReconnectingClientFactory subclass used for this connection).
So, the reactor.connectTCP() call executes immediately. The client code executes the send command and then connectionLost() is called right away. It does not appear to try to reconnect, and it doesn't raise any errors; connectionLost() is the only indication that something went wrong. Even more mysteriously, the app receives messages from the remote node later on and even processes them! But it believes it's not connected (and the handshake never finished, so the remote node shouldn't be sending messages at all, though that might be a bug/oversight in that program).
What on earth is going on!? How could the reactor get started before I tell it to start? I searched the code and found no other code that (I believe) could start the reactor.
The API that you're looking for is twisted.internet.reactor.callWhenRunning.
However, it wouldn't hurt to have less than 60 seconds of computation at startup, either :). Perhaps you should spread that out, or delegate it to a thread, if it's relatively independent?
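For example, using the names from the pseudo-code above, the connection can be queued so it is only attempted once the reactor is actually running (a sketch, untested against the real project):

from twisted.internet import reactor

# Queue the connection instead of making it immediately; it will only be
# attempted after reactor.runReturn() (or reactor.run()) has started the reactor.
reactor.callWhenRunning(
    reactor.connectTCP, ACTIVE_IP, ACTIVE_PORT, self.NetworkingFactory)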
I'm building photovoltaic motorized solar trackers. They're controlled by Raspberry Pis running a Python script. The RPis are connected to my public OpenVPN server for remote control and continuous software development. That's working fine. Recently a passionate customer asked me for some sort of telemetry data for his tracker - let's say its current orientation, measured wind speed, etc. Being new to Python, I'm really struggling with this part.
I've decided to use the socket approach from guides like this. The Python script listens on a socket, and my OpenVPN server, which is also a web server, connects to it using PHP's fsockopen. Python sends the telemetry data, PHP makes it user friendly and displays it on the web. Everything so far works; however, I don't know how to design my Python script around it.
The problem is that my script has to run continuously, and socket.accept() halts its execution while waiting for a connection. I didn't find any obvious solution on the web. Would multi-threading work for this? It sounds a bit like overkill.
Is there a way to run the socket listening asynchronously? Like, for example, the pigpio callbacks which I'm using abundantly?
Or alternatively, is there a better way to accomplish my goal?
I tried remotely accessing a status file that my script maintains, but that proved to be extremely involved to set up and prone to errors while the file was being written.
I also tried running a second script. The problem is that it then has no access to the relevant data, or it needs to read the aforementioned status file, which leads to the same problems as above.
Relevant bit of code is literally only this:
# Main loop
try:
    while True:
        # Telemetry
        conn, addr = S.accept()
        conn.send(data.encode())
        conn.close()
Best regards.
For a simple case like this I would probably just wrap the socket code into a separate thread.
With multithreading in Python, the Global Interpreter Lock (GIL) means that only one thread executes Python bytecode at a time, so you don't really need to add any further locks around the data if you're just reading the values and don't care whether they're also being updated at the same time.
Your code would essentially read something like:
from threading import Thread

def handle_telemetry_requests():
    # Main loop
    try:
        while True:
            # Telemetry
            conn, addr = S.accept()
            conn.send(data.encode())
            conn.close()
    except:
        # Error handling here (this will cause thread to exit if any error occurs)
        pass

socket_thread = Thread(target=handle_telemetry_requests)
socket_thread.daemon = True
socket_thread.start()
Setting the daemon flag means that when the main application ends, the thread will also be terminated.
Python does provide the asyncio module - which may provide the callbacks you're looking for (though I don't have any experience with this).
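If you do go that route, a minimal untested sketch of the callback-style asyncio server might look like this (HOST, PORT and data are placeholders for your own values):

import asyncio

async def handle_telemetry(reader, writer):
    # Send the current telemetry snapshot and close the connection
    writer.write(data.encode())
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle_telemetry, HOST, PORT)
    async with server:
        await server.serve_forever()

asyncio.run(main())

Note that asyncio.run() takes over the main thread, so the tracker's main loop would itself have to become an asyncio task for this to fit.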
Other options are to run a Flask server in the Python app, which will handle the sockets for you, so you can just code the endpoints that serve the data. Or think about using an MQTT broker - the current data can be published to it, and other apps can subscribe to updates.
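A rough sketch of the Flask option (untested; the /telemetry endpoint, port 8080 and the data variable are only placeholders). Since app.run() blocks, in the tracker script it would still live in its own thread:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/telemetry')
def telemetry():
    # data stands in for whatever telemetry the tracker script maintains
    return jsonify(data)

app.run(host='0.0.0.0', port=8080)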
I have a task where I need to run some Python file (call it app.py) that brings up a server (using Flask). This is done in the run_tests function.
Then, I want to query this server for some test inputs that I have. This is done in the function get_sentences_and_test (I do not put its code here for simplicity of the question; it waits for the server to be up, using sleep instructions, and then queries it).
I use the Python multiprocessing package for the process, and subprocess for the call.
My program has a very simple structure, like:
from multiprocessing import Process
import subprocess

def run_tests():
    subprocess.call(['python3', path_to_app])  # path_to_app points at app.py

if __name__ == '__main__':
    api_proc = Process(target=run_tests)
    api_proc.start()
    get_sentences_and_test(api_proc)
    api_proc.terminate()
My problem is that this code works OK and does what it is supposed to do. However, the port that the subprocess call in run_tests opens when the server is up and running is not released once the program is done, and I have to kill the process occupying it manually.
I want to know:
How can I kill the process that occupies this port?
What is the best practice for doing this? This should be a day-to-day problem for people working with services and multiprocessing/threading, yet I didn't find a simple solution or many sources on this issue.
I am writing an SSH server with Twisted Conch, but I have encountered a difficult problem. Assume that user A and user B both log in to the Twisted SSH server with the ssh command. Then user A tails or cats a large file (greater than 100 MB) on the server, which pushes a lot of output through the Twisted SSH server and drives the Python SSH process (twisted.conch) CPU usage very high (greater than 95%, or even 100%). User B is then blocked and gets no response for a long time. Is there a way to pause user A's session (for, say, 0.5 seconds) when user A is generating a large amount of output through the Twisted SSH server, so that the other connected users are not blocked?
import sys
import checkers
from twisted.python import components, log, logfile
from twisted.cred import portal
from twisted.internet import reactor
from twisted.conch.ssh import factory, keys, session, filetransfer
from twisted.conch.unix import UnixSSHRealm, SSHSessionForUnixConchUser, UnixConchUser
import keyvalue
if __name__ == "__main__":
    sshFactory = factory.SSHFactory()
    sshFactory.portal = portal.Portal(UnixSSHRealm())
    sshFactory.portal.registerChecker(checkers.UsernamePasswordChecker())
    sshFactory.publicKeys = {
        'ssh-rsa': keys.Key.fromString(keyvalue.publicKey)}
    sshFactory.privateKeys = {
        'ssh-rsa': keys.Key.fromString(keyvalue.privateKey)}
    components.registerAdapter(
        SSHSessionForUnixConchUser, UnixConchUser, session.ISession)
    log.startLogging(sys.stdout)
    reactor.listenTCP(2222, sshFactory)
    reactor.run()
This is effectively a bug in Twisted. One user using the server should not generate so much load that it's unresponsive to everyone else.
However, it's not an easy one to fix. There are a couple of solutions.
First, before you do anything else, you should ensure your code is using PyPy, which may give you all the additional performance you need to support more users. Even if it isn't sufficient, it should be helpful in combination with these other solutions.
One is that you can run this code in multiple processes, using a strategy like this, which will allow you to preemptively run the process on multiple cores. Of course, that doesn't let you do much concurrently inside one process.
Another option is that you could use twisted.protocols.htb on sshFactory to rate-limit traffic and ensure it is processed fairly between competing connections.
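A rough, untested sketch of what wiring htb into the factory might look like (the bucket sizes are arbitrary example values, not recommendations):

from twisted.protocols import htb

# One bucket for the whole server, plus per-host buckets so a single
# client cannot monopolise the available throughput.
serverFilter = htb.HierarchicalBucketFilter()
serverBucket = htb.Bucket()
serverBucket.maxburst = 100000   # arbitrary example value (bytes)
serverBucket.rate = 50000        # bytes per second for the whole server
serverFilter.buckets[None] = serverBucket

class PerHostBucket(htb.Bucket):
    maxburst = 20000
    rate = 10000

hostFilter = htb.FilterByHost(serverFilter)
hostFilter.bucketFactory = PerHostBucket

# Wrap the factory's protocol so each connection's traffic is shaped.
sshFactory.protocol = htb.ShapedProtocolFactory(sshFactory.protocol, hostFilter)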
Please share any progress that you make on this, as I'm sure it would be interesting to other Twisted users!
I have a simple Twisted web server application serving my math requests. Everything is working fine (I hide big code pieces which are not related to my question):
# import section ...

class PlsPage(Resource):
    isLeaf = True

    def render_POST(self, request):
        reactor.callLater(0, self._delayedRender, request)
        return NOT_DONE_YET

    def _delayedRender(self, request):
        # some actions before
        crossval_scores = cross_validation.cross_val_score(
            pls1, X, y=numpy.asarray(Y), scoring=my_custom_scorer,
            cv=KFold(700, n_folds=700))
        # some actions after
        request.finish()

reactor.listenTCP(12000, server.Site(PlsPage()))
reactor.run()
I then try to speed up the cross_validation calculation by setting n_jobs to, for example, 3:
crossval_scores = cross_validation.cross_val_score(pls1, X, y=numpy.asarray(Y), scoring=my_custom_scorer, cv=KFold(700, n_folds=700), n_jobs=3)
and after that I get exactly 3 exceptions:
twisted.internet.error.CannotListenError: Couldn't listen on any:12000: [Errno 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted.
For some reason I can't call cross_val_score with n_jobs > 1 inside _delayedRender.
Here is a traceback of the exception; for some reason reactor.listenTCP is trying to start 3 times too.
Any ideas how to get it work?
UPD1. I created a file PLS.py and moved all the code there, except the last 2 lines:
from twisted.web import server
from twisted.internet import reactor, threads
import PLS
reactor.listenTCP(12000, server.Site(PLS.PlsPage()))
reactor.run()
But the problem still persists. I also found that this problem occurs only on Windows; my Linux machine runs these scripts fine.
scikit_learn apparently uses the multiprocessing module in order to achieve concurrency. The multiprocessing module transmits data between processes using pickle, which, among other... idiosyncratic problems that it causes, will cause some of the modules imported in your parent process to be imported in your worker processes.
Your PLS_web.py "module", however, is not actually a module, it's a script; since you have put reactor.listenTCP and reactor.run at the bottom of it, it actually does stuff when you import it rather than just loading its code.
This particular error occurs because your web server is being run 4 times (once for the controller process and once for each of the three jobs); each of the 3 runs beyond the first fails because the first server is already listening on port 12000.
You should move the reactor.run/reactor.listenTCP lines out into a top-level script. A good rule of thumb is that these lines should never appear in the same file as a class or def statement; define your code in one place and start it up in another. Once you've moved them to a file that doesn't get imported (you might even want to put them in a file whose name isn't a legal module identifier, like run-my-server.py), then multiprocessing might be able to import all the code it needs and do its job.
Better yet, don't write those lines at all: write a twisted application plugin and run your program with twistd. If you don't have to put the reactor.run statement anywhere, you can't put it in the wrong place :).
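For example, one simple version of that (a sketch, not from the original answer) is a .tac application file run with twistd -ny pls.tac, assuming PLS.py only defines PlsPage and no longer starts the reactor itself:

# pls.tac
from twisted.application import service, internet
from twisted.web import server
import PLS

application = service.Application("pls")
site = server.Site(PLS.PlsPage())
internet.TCPServer(12000, site).setServiceParent(application)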
I am running an automated test using an Android emulator driving an app with a Monkey script written in Python.
The script copies files onto the emulator, clicks buttons in the app and reacts depending on the activities that the software triggers during its operation. The script is supposed to run the cycle a few thousand times, so I have this in a loop that runs the adb tool to copy the files, starts the activities and checks how the software is reacting by calling the getProperty method on the device with the parameter 'am.current.comp.class'.
So here is a very simplified version of my script:
for target in targets:
    androidSDK.copyFile(emulatorName, target, '/mnt/sdcard')
    # Runs the component
    device.startActivity(component='com.myPackage/com.myPackage.myactivity')
    while 1:
        if device.getProperty('am.current.comp.class') == 'com.myPackage.anotheractivity':
            time.sleep(1)  # to allow the screen to display the new activity before I click on it
            device.touch(100, 100, 'DOWN_AND_UP')
            # Log the result of the operation somewhere
            break
        time.sleep(0.1)
(androidSDK is a small class I've written that wraps some utility functions to copy and delete files using the adb tool).
On occasion the script crashes with one of a number of exceptions, for instance (I am leaving out the full stack trace):
[com.android.chimpchat.adb.AdbChimpDevice]com.android.ddmlib.ShellCommandUnresponsiveException
or
[com.android.chimpchat.adb.AdbChimpDevice] Unable to get variable: am.current.comp.class
[com.android.chimpchat.adb.AdbChimpDevice]java.net.SocketException: Software caused connection abort: socket write error
I have read that sometimes the socket connection to the device becomes unstable and may need a restart (adb start-server and adb kill-server come in useful).
The problem I'm having is that the tools are throwing Java exceptions (Monkey runs in Jython), but I am not sure how those can be trapped from within my Python script. I would like to be able to determine the exact cause of the failure inside the script and recover the situation so I can carry on with my iterations (re-establish the connection, for instance; would re-initialising my device with another call to MonkeyRunner.waitForConnection be enough?).
Any ideas?
Many thanks,
Alberto
EDIT. I thought I'd mention that I have discovered that it is possible to catch Java-specific exceptions in a Jython script, should anyone need this:
from java.net import SocketException
...
try:
    ...
except SocketException:
    ...
It is possible to catch Java-specific exceptions in a Jython script:
from java.net import SocketException
...
try:
    ...
except SocketException:
    ...
(Taken from OP's edit to his question)
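Building on that, a minimal untested sketch of catching the exception and re-attaching to the device; MonkeyRunner.waitForConnection, the 30-second timeout and the assumption that emulatorName is the device id are only illustrative:

from com.android.monkeyrunner import MonkeyRunner
from java.net import SocketException

try:
    current = device.getProperty('am.current.comp.class')
except SocketException:
    # The connection to the emulator dropped; try to re-attach and carry on
    device = MonkeyRunner.waitForConnection(30, emulatorName)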
This worked for me:
device.shell('exit')  # Exit the shell