Calling methods from an already running Python script

I'm new to Python and have been struggling with this for quite a while. I have a program that logs into a server and then pings it every 10 seconds to see whether its status has changed. In the same script I have a function which sends a message to the server.
A simple example of the send method:
def send(self, message):
    url = "https://testserver.com/socket?message=%s" % message
    req = urllib2.Request(url, None, None)
    response = urllib2.urlopen(req).read()
    print response
Would it be possible for me to call this method from another script while this one is running, i.e. using the same session? It seems that when I run a second script which calls this function, it creates a new instance instead of using the already running instance, so it throws my exception saying I am not connected to the server.
Sorry for the noob question. I have been googling for a while but I can't seem to find the answer. I have read the following, but they didn't solve the problem:
Python call function within class
Python code to get current function into a variable?
Hi @nFreeze, thanks for the reply. I have tried to use ZeroRPC, but every time I run the script/example you gave (obviously edited) I run into this error:
Traceback (most recent call last):
File "C:\Users\dwake\Desktop\Web Projects\test.py", line 1, in <module>
import zerorpc
File "C:\Python27\lib\site-packages\zerorpc\__init__.py", line 27, in <module>
from .context import *
File "C:\Python27\lib\site-packages\zerorpc\context.py", line 29, in <module>
import gevent_zmq as zmq
File "C:\Python27\lib\site-packages\zerorpc\gevent_zmq.py", line 33, in <module>
import gevent.event
File "C:\Python27\lib\site-packages\gevent\__init__.py", line 48, in <module>
from gevent.greenlet import Greenlet, joinall, killall
File "C:\Python27\lib\site-packages\gevent\greenlet.py", line 6, in <module>
from gevent.hub import greenlet, getcurrent, get_hub, GreenletExit, Waiter
File "C:\Python27\lib\site-packages\gevent\hub.py", line 30, in <module>
greenlet = __import__('greenlet').greenlet
ImportError: No module named greenlet
This happens even though I have installed gevent. I'm not sure how to fix it; I have been googling for a good hour now.

What you're looking for is called an RPC server. It allows external clients to execute exposed functions in your app. Luckily Python has many RPC options; ZeroRPC is probably my favorite, as it is easy to use and supports node.js. Here is an example of how to expose your send method using ZeroRPC:
In your app (server)
import zerorpc
import urllib2

class HelloRPC(object):
    def send(self, message):
        url = "https://testserver.com/socket?message=%s" % message
        req = urllib2.Request(url, None, None)
        response = urllib2.urlopen(req).read()
        return response

s = zerorpc.Server(HelloRPC())
s.bind("tcp://0.0.0.0:4242")
s.run()
In the other app (client)
import zerorpc
c = zerorpc.Client()
c.connect("tcp://127.0.0.1:4242")
print c.send("RPC TEST!")
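Since run() blocks, and your main script also needs to keep its 10-second status check going, one option is to move that loop into a greenlet (ZeroRPC is built on gevent). A minimal sketch, assuming your existing ping logic can live in a function like ping_loop (a made-up name):
import gevent
import zerorpc

def ping_loop():
    while True:
        # your existing status check against the server would go here
        print("pinging...")
        gevent.sleep(10)

rpc = zerorpc.Server(HelloRPC())
rpc.bind("tcp://0.0.0.0:4242")
gevent.spawn(ping_loop)  # keep the 10-second check running concurrently
rpc.run()                # serve RPC calls from the same process/session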

The simplest way is to use UNIX signals. You won't need any third-party libraries.
# your-daemon.py
import signal
from time import sleep

def main():
    while True:
        print "Do some job..."
        sleep(5)

def send():
    print "Send your data"

def onusr1(*args):
    send()

if __name__ == '__main__':
    signal.signal(signal.SIGUSR1, onusr1)
    main()
Run in terminal:
$ pgrep -f your-daemon.py | xargs kill -SIGUSR1
Of course, this only works on the local machine. You also can't pass any arguments to the send function, and if you want to have many handlers, use RPC as advised above.
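If you would rather trigger it from another Python script instead of the shell, the same signal can be sent with os.kill. A minimal sketch, assuming the daemon writes its PID to a pidfile (the /tmp/your-daemon.pid path is made up for the example):
# send-signal.py (hypothetical trigger script)
import os
import signal

with open("/tmp/your-daemon.pid") as f:  # assumes the daemon wrote its PID here
    pid = int(f.read().strip())

os.kill(pid, signal.SIGUSR1)  # invokes onusr1() in the running daemon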

Related

Why does my asyncio event loop die when I kill a forked subprocess (PTY)?

I'm trying to create software that spawns bash shells and makes them controllable via websockets.
It's based on fastapi and fastapi_socketio on the server side and socket.io + JS on the client side.
I have to admit that I am an absolute noob when it comes to asyncio. I can use it when I control it myself, but I am not familiar with managing event loops etc. that come from other modules.
To start a PTY I use the fork() method from the pty module like in figure "1 - Forking a PTY" (the submitted command is "/bin/bash"):
It actually works pretty well. The client_sid is the socket.io session id of the client and I can seamlessly control multiple terminals via xtermjs from my web UI.
I have one problem though. When I type "exit" into xterm.js, I expect the child process to exit and free the file descriptor. This should be detected by the fstat call in the method shown in figure "2 - The method sending the PTY's STDOUT/ERR to the remote socket", which should then exit and close the websocket connection.
What happens instead is that the web terminal receives multiple exceptions in a very fast manner (figure "3 - The error displayed to the client") and when I try to shut down uvicorn with CTRL+C I get the error from figure "4 - The error displayed when I try to shutdown uvicorn with CTRL+C".
I'd really appreciate any help with this topic because I just don't have deep enough knowledge of asynchronous python (and probably the OS/PTYs) yet.
To me it feels like the child process forked from my main process is somehow interacting with the asyncio loops, but I really don't know how. Is the child process perhaps inheriting the asyncio loop and killing it when it dies? Does that make any sense?
The only solution that comes to my mind is detecting the "kill" command issued from the web UI, but that would miss e.g. a kill signal sent to the PTY subprocess, and it's not really clean.
Thanks in advance.
1 - Forking a PTY
async def pty_handle_pty_config(self, sio: AsyncServer, client_sid: str, message: dict):
    if not client_sid in self.clients or self.clients[client_sid] is None:
        await self.disconnect_client(sio=sio, client_sid=client_sid)
        return
    if not isinstance(message, dict) or not 'command' in message or not isinstance(message['command'], str):
        await self.disconnect_client(sio=sio, client_sid=client_sid)
        return
    child_pid, fd = fork()  # pty.fork()
    if child_pid == 0:
        subproc_run(message['command'])  # subprocess.run()
    else:
        self.ptys[client_sid] = {
            'fd': fd
        }
        self.set_winsize(client_sid, 50, 50)
        await sio.emit('pty_begin', data=dict(state='success'), namespace='/pty', room=client_sid)
        sio.start_background_task(
            target=self.pty_read_and_forward,
            sio=sio,
            client_sid=client_sid,
            client_data=self.clients[client_sid]
        )
2 - The method sending the PTY's STDOUT/ERR to the remote socket
async def pty_read_and_forward(self, sio: AsyncServer, client_sid: str, client_data: dict):
    log = get_logger()
    max_read_bytes = 1024 * 20
    loop = get_event_loop()
    while True:
        try:
            await async_sleep(.05)  # asyncio.sleep
            timeout_sec = 0
            (data_ready, _, _) = await loop.run_in_executor(None, select, [self.ptys[client_sid]['fd']], [], [], timeout_sec)
            if data_ready:
                output = await loop.run_in_executor(None, os_read, self.ptys[client_sid]['fd'], max_read_bytes)  # os.read
                try:
                    fstat(self.ptys[client_sid]['fd'])  # os.fstat
                except OSError as exc:
                    log.error(exc)
                    break
                await sio.emit(
                    event='pty_out',
                    data=dict(
                        output=output.decode('utf-8', errors='ignore')
                    ),
                    namespace='/pty',
                    room=client_sid
                )
        except Exception as exc:
            if not client_sid in self.clients:
                log.info(f'PTY session closed [sid={client_sid};user={client_data["username"]}]')
            else:
                log.warn(f'PTY session closed unexpectedly [sid={client_sid};user={client_data["username"]}] - {excstr(exc)}')
            break
3 - The error displayed to the client
asyncio.exceptions.CancelledError
Process SpawnProcess-2:
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.10/dist-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
target(sockets=sockets)
File "/usr/local/lib/python3.10/dist-packages/uvicorn/server.py", line 60, in run
return asyncio.run(self.serve(sockets=sockets))
File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "/usr/local/lib/python3.10/dist-packages/uvicorn/server.py", line 80, in serve
await self.main_loop()
File "/usr/local/lib/python3.10/dist-packages/uvicorn/server.py", line 221, in main_loop
await asyncio.sleep(0.1)
File "/usr/lib/python3.10/asyncio/tasks.py", line 599, in sleep
loop = events.get_running_loop()
RuntimeError: no running event loop
4 - The error displayed when I try to shutdown uvicorn with CTRL+C
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/unix_events.py", line 42, in _sighandler_noop
def _sighandler_noop(signum, frame):
BlockingIOError: [Errno 11] Resource temporarily unavailable
Luckily, I got the problem fixed by invoking os.execvpe() instead of subprocess.run(). This approach replaces the child process with a completely new process image while staying connected to the file descriptor the parent process reads from.
I didn't really understand the consequences of calling fork() and then continuing to run Python code in the child while asyncio is used in the parent process. It's a bad approach because asyncio can't separate the child's event loop from the parent's: both "share" their environment thanks to fork(), so when the child's loop dies, the parent's stops working too.
See this question for more details: mixing asyncio with fork in python: bad idea?
It works like this:
child_pid, fd = fork()
if child_pid == 0:
    # This is the forked process. Replace it with the command
    # the user wants to run.
    env = os.environ.copy()
    execvpe(message['command'], [message['command']], env)
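One detail worth noting: execvpe() only returns if it fails, so a common defensive tweak (a sketch, not part of the original code) is to make sure a failed exec can never fall back into the parent's asyncio code paths:
child_pid, fd = fork()
if child_pid == 0:
    env = os.environ.copy()
    try:
        execvpe(message['command'], [message['command']], env)
    finally:
        # Only reached if execvpe() failed; never let the forked child
        # continue executing the parent's (asyncio) code.
        os._exit(1)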

Requests.get() gets stuck on connect

I'm trying to make a simple GET request using requests:
import requests

def main():
    content = requests.get("https://google.com")
    print(content.status_code)

if __name__ == "__main__":
    main()
I'm running this on Linux, version 17.10.
Python version: either 2.7 or 3.6 (tried both).
The code gets stuck while running; it doesn't time out or anything.
After I stop it, the call stack shows it is stuck at:
File "/usr/lib/python2.7/socket.py", line 228, in meth return getattr(self._sock,name)(*args)
I just ran your code in a Python console and it returned 200. I am running Python 3.6.7 on Ubuntu 18.04.
It may be that your computer cannot connect to google.com for a very long time, so you should pass a timeout parameter along with a try/except.
Use the following code:
import requests

def main():
    success = False
    while not success:
        try:
            content = requests.get("https://google.com", timeout=5)
            success = True
        except:
            pass
    print(content.status_code)

if __name__ == "__main__":
    main()
If there is a temporary problem with your network connection, this snippet keeps retrying until it gets a response.
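If transient network failures are the concern, a bounded retry is usually safer than an unbounded loop with a bare except. A minimal sketch using requests' retry support via urllib3 (import paths assume a reasonably recent requests/urllib3):
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def main():
    session = requests.Session()
    retries = Retry(total=3, backoff_factor=1)  # up to 3 retries with backoff
    session.mount("https://", HTTPAdapter(max_retries=retries))
    content = session.get("https://google.com", timeout=5)
    print(content.status_code)

if __name__ == "__main__":
    main()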

Using gevent.queue.Queue.get(): gevent.hub.LoopExit: 'This operation would block forever'

I've been trying to integrate event streaming into my Flask application for the past few days, with good results in my local testing but somewhat worse ones when running the application with uWSGI on my server. My code is basically built upon the example from Flask. I'm using Python 3.4.2.
The problem
When the app runs on my uWSGI server, it raises gevent.hub.LoopExit: 'This operation would block forever' whenever a client tries connecting to the /streaming endpoint. My assumption is that this is caused by calling get() on an empty queue indefinitely.
Full traceback:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/werkzeug/wsgi.py", line 691, in __next__
return self._next()
File "/usr/lib/python3/dist-packages/werkzeug/wrappers.py", line 81, in _iter_encoded
for item in iterable:
File "./voting/__init__.py", line 49, in gen
result = queue.get(block=True)
File "/usr/local/lib/python3.4/dist-packages/gevent/queue.py", line 284, in get
return self.__get_or_peek(self._get, block, timeout)
File "/usr/local/lib/python3.4/dist-packages/gevent/queue.py", line 261, in __get_or_peek
result = waiter.get()
File "/usr/local/lib/python3.4/dist-packages/gevent/hub.py", line 878, in get
return self.hub.switch()
File "/usr/local/lib/python3.4/dist-packages/gevent/hub.py", line 609, in switch
return greenlet.switch(self)
gevent.hub.LoopExit: ('This operation would block forever', <Hub at 0x7f717f40f5a0 epoll default pending=0 ref=0 fileno=6>)
My code
The /streaming endpoint:
@app.route("/streaming", methods=["GET", "OPTIONS"])
def streaming():
    def gen():
        queue = Queue()
        subscriptions.add_subscription(session_id, queue)
        try:
            while True:
                result = queue.get()  # Where the exception is raised
                ev = ServerSentEvent(json.dumps(result["data"]), result["type"])
                yield ev.encode()
        except GeneratorExit:  # TODO Need a better method to detect disconnecting
            subscriptions.remove_subscription(session_id, queue)
    return Response(gen(), mimetype="text/event-stream")
Adding an event to the queue:
def notify():
    msg = {"type": "users", "data": db_get_all_registered(session_id)}
    subscriptions.add_item(session_id, msg)  # Adds the item to the relevant queues.

gevent.spawn(notify)
As previously said, it runs fine locally with werkzeug:
from app import app
from gevent.wsgi import WSGIServer
from werkzeug.debug import DebuggedApplication
a = DebuggedApplication(app, evalex=True)
server = WSGIServer(("", 5000), a)
server.serve_forever()
What I've tried
Monkey-patching with monkey.patch_all().
Switching from Queue to JoinableQueue.
gevent.sleep(0) in combination with Queue.get().
That exception basically means that there are no other greenlets running in that loop/thread to switch to. So when the greenlet goes to block (queue.get()), the hub has nowhere else to go, nothing else to do.
The same code would work in gevent's WSGIServer because the server itself is a greenlet that's running the socket.accept loop, so there's always another greenlet to switch to. But apparently uwsgi doesn't work that way.
The way to fix this is to arrange for there to be other greenlets running. For example, instead of spawning a greenlet to notify on demand, arrange for such a greenlet to already be running and blocking on its own queue.
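A minimal sketch of that arrangement, reusing the names from the question (subscriptions, db_get_all_registered): spawn one long-lived notifier greenlet at startup that blocks on its own queue, and have handlers just put work onto it:
import gevent
from gevent.queue import Queue

notify_queue = Queue()

def notifier():
    # Long-lived greenlet: always present, so the hub has somewhere to
    # switch to when a request greenlet blocks on its SSE queue.
    while True:
        session_id = notify_queue.get()
        msg = {"type": "users", "data": db_get_all_registered(session_id)}
        subscriptions.add_item(session_id, msg)

gevent.spawn(notifier)  # start once, at application startup

def notify(session_id):
    notify_queue.put(session_id)  # never blocks the caller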

Flask mail giving Pickling errors with celery

I'm trying to use Celery (and RabbitMQ) to send emails asynchronously with Flask-Mail. Initially I had an issue with render_template from Flask breaking Celery - Flask-Mail breaks Celery (the Celery task would still execute successfully, but no emails were being sent). While I was trying to fix that issue (which is still not fixed!), I stumbled upon another problem: this pickling error, which is due to a thread lock. I noticed that the problem started when I changed the way I called the Celery task (from delay to apply_async). Since then I've tried reverting my changes, but I still can't get rid of the error. Any help regarding either of the issues would be highly appreciated.
The traceback:
File "/Users/.../python2.7/site-packages/celery/app/amqp.py", line 250, in publish_task
**kwargs
File "/Users/.../lib/python2.7/site-packages/kombu/messaging.py", line 157, in publish
compression, headers)
File "/Users/.../lib/python2.7/site-packages/kombu/messaging.py", line 233, in _prepare
body) = encode(body, serializer=serializer)
File "/Users/.../lib/python2.7/site-packages/kombu/serialization.py", line 170, in encode
payload = encoder(data)
File "/Users/.../lib/python2.7/site-packages/kombu/serialization.py", line 356, in dumps
return dumper(obj, protocol=pickle_protocol)
PicklingError: Can't pickle <type 'thread.lock'>: attribute lookup thread.lock failed
tasks.py
from __future__ import absolute_import
from flask import render_template
from flask.ext.mail import Message
from celery import Celery

celery = Celery('tasks',
                broker='amqp://tester:testing@localhost:5672/test_host')

@celery.task(name="send_async_email")
def send_auth_email(app, nickname, email):
    with app.test_request_context("/"):
        recipients = []
        recipients.append(email)
        subject = render_template("subject.txt")
        msg = Message(subject, recipients=recipients)
        msg.html = render_template("test.html", name=nickname)
        app.mail.send(msg)
In the test case I just call:
send_auth_email.delay(test_app, nick, email)
FYI: the API works perfectly fine if I don't use Celery (i.e. synchronously). Thanks in advance!
When you invoke send_auth_email.delay(test_app, nick, email), all function arguments are sent to the task queue. To do so, Celery pickles them.
Short answer: test_app, being a Flask application, uses some magic internally and cannot be pickled. See the docs for details on what can and can't be pickled.
One solution is to pass only the plain arguments actually needed (in your case that seems to be just the nickname and email) and re-instantiate test_app inside send_auth_email.
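A minimal sketch of that approach (create_app is an assumed application factory; the exact module layout is made up): pass only picklable strings to the task and build the app and Mail instance inside the worker:
# tasks.py (sketch)
from celery import Celery
from flask import render_template
from flask.ext.mail import Mail, Message

celery = Celery('tasks',
                broker='amqp://tester:testing@localhost:5672/test_host')

@celery.task(name="send_async_email")
def send_auth_email(nickname, email):  # only plain strings cross the broker
    from myapp import create_app       # assumed application factory
    app = create_app()
    mail = Mail(app)
    with app.test_request_context("/"):
        subject = render_template("subject.txt")
        msg = Message(subject, recipients=[email])
        msg.html = render_template("test.html", name=nickname)
        mail.send(msg)
The caller then becomes send_auth_email.delay(nick, email), with no application object in the arguments.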

python nose and twisted

I am writing a test for a function that downloads data from a URL with Twisted (I know about twisted.web.client.getPage, but this one adds some extra functionality). Either way, I want to use nosetests, since I am using it throughout the project and it doesn't seem appropriate to use Twisted Trial only for this particular test.
So what I am trying to do is something like:
from nose.twistedtools import deferred

@deferred()
def test_download(self):
    url = 'http://localhost:8000'
    d = getPage(url)
    def callback(data):
        assert len(data) != 0
    d.addCallback(callback)
    return d
A test server listens on localhost:8000. The issue is that I always get twisted.internet.error.DNSLookupError:
DNSLookupError: DNS lookup failed: address 'localhost:8000' not found: [Errno -5] No address associated with hostname.
Is there a way I can fix this? Does anyone actually use nose.twistedtools?
Update: A more complete traceback
Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/nose-0.11.2-py2.6.egg/nose/twistedtools.py", line 138, in errback
failure.raiseException()
File "/usr/local/lib/python2.6/dist-packages/Twisted-9.0.0-py2.6-linux-x86_64.egg/twisted/python/failure.py", line 326, in raiseException
raise self.type, self.value, self.tb
DNSLookupError: DNS lookup failed: address 'localhost:8000' not found: [Errno -5] No address associated with hostname.
Update 2
My bad, it seems in the implementation of getPage, I was doing something like:
obj = urlparse.urlparse(url)
netloc = obj.netloc
and passing netloc to the factory when I should've passed netloc.split(':')[0].
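For illustration, urlparse already separates the host and port, so splitting netloc by hand isn't necessary (a small sketch of the difference):
import urlparse

obj = urlparse.urlparse('http://localhost:8000')
print(obj.netloc)    # 'localhost:8000' - host and port together
print(obj.hostname)  # 'localhost'      - what the DNS lookup needs
print(obj.port)      # 8000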
Are you sure your getPage function is parsing the URL correctly? The error message suggests that it is using the hostname and port together when doing the DNS lookup.
You say your getPage is similar to twisted.web.client.getPage, but that works fine for me when I use it in this complete script:
#!/usr/bin/env python
from nose.twistedtools import deferred
from twisted.web import client
import nose

@deferred()
def test_download():
    url = 'http://localhost:8000'
    d = client.getPage(url)
    def callback(data):
        assert len(data) != 0
    d.addCallback(callback)
    return d

if __name__ == "__main__":
    args = ['--verbosity=2', __file__]
    nose.run(argv=args)
While running a simple http server in my home directory:
$ python -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...
The nose test gives the following output:
.
----------------------------------------------------------------------
Ran 1 test in 0.019s
OK
