Ruby on Rails hangs during HTTP request from Python script

I'm writing a web application in Ruby on Rails where users can write Python code in a web editor and execute it in a Docker environment on the server. I've written a simple Python script that creates a Docker container:
import docker
import sys

if __name__ == "__main__":
    if len(sys.argv) == 2:
        token = sys.argv[1]
        client = docker.from_env()
        res = client.containers.run('openql', 'python3 /home/pythonwrapper.py ' + token)
    else:
        print("Requires one parameter")
As you can see, it creates a Docker container using the image openql and executes a simple Python script inside it. If a user presses the execute button in the web editor, Ruby on Rails runs this script with the command: system("python","script.py","<TOKEN>"). So far so good; this all works fine.
However, there is a problem when pythonwrapper.py runs inside the Docker container. I'm using Python's requests library to fetch the files written by the user so they can be executed inside the container. The code looks like this:
import requests

# Request all the available assets; this does not download the content of the files.
# Downloading the content of the files is done in a later request.
url = rails_url + "allAssets/" + token
res = requests.get(url)

# Convert the response bytes into a string
content = str(res.content, 'utf8')
Looks pretty simple, but the whole Ruby on Rails web server hangs during this request. The strange thing is that it all works fine if I first execute this script manually from the console after restarting the server.
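One detail worth noting: by default requests.get waits indefinitely, so the wrapper itself will also just sit there if Rails never answers. A small sketch of the same request with a timeout (using the url built above); this only surfaces the hang instead of blocking forever, it does not fix the underlying cause:

import requests

try:
    # give up after 10 seconds instead of blocking forever if Rails never responds
    res = requests.get(url, timeout=10)
    content = str(res.content, 'utf8')
except requests.exceptions.Timeout:
    print("Rails did not answer within 10 seconds")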
The only thing I get from the Rails console is this:
Started GET "/allAssets/123" for 10.0.2.15 at 2017-08-02 10:24:59 +0200
When I quit the web server for a restart, Ruby on Rails shows the following logs:
[screenshot of the console output]
And then nothing. Does anyone know what could be the problem?

You should be running the container in the background, I guess.
res = client.containers.run('openql','python3 /home/pythonwrapper.py '+token, detach=True)
This will make sure that your server is not stuck waiting for the container to finish the command it is executing.
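If the output of the wrapper is still needed, it can be collected later from the detached container. A minimal sketch using the Docker SDK, with the same image and command as in the question (token is assumed to come from sys.argv as in the original script):

import docker
import sys

token = sys.argv[1]  # same token the Rails app passes to script.py
client = docker.from_env()

# detach=True returns immediately with a Container object instead of blocking
container = client.containers.run(
    'openql',
    'python3 /home/pythonwrapper.py ' + token,
    detach=True,
)

# later, if the wrapper's output is needed:
result = container.wait()           # blocks until the container exits
output = container.logs().decode()  # stdout/stderr of pythonwrapper.py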

Related

InactiveRpcChannel Error when running a GRPC server as a subprocess daemon in flask on cloud run

I'm deploying a Flask app on Cloud Run using gunicorn. My setup is kind of unusual, so bear with me: my app is written in Python, and part of it is a gRPC server binary embedded in the Python package, originally written in C++. This gRPC server is started only once (by checking the process name using psutil) by the factory method for the gRPC channel stub.
The problem is the following:
While testing locally everything runs smoothly: the Flask app gets a request, proxies it to the gRPC subprocess, waits for a response, serializes it, and then sends it back to the client.
However, on Cloud Run I'm facing thousands of InactiveRpcError exceptions.
What I suspect is that Cloud Run is killing the workers, and the worker is coincidentally the parent process of the gRPC server subprocess.
However, I tried to add a retry to re-connect, but it doesn't work.
This led me to another suspicion: the Cloud Run runtime might be suspending the process without killing it, since the connect function should otherwise start another process.
Finally, I tried to create an init process using bash and run the grpc-server as a background process, as detailed in the Ahmed-tb blog post here, but still without any success.
Code for the gRPC server initializer:
def connect(timeout=TIMEOUT_SEC):
    # @retry(retry_on_exception=grpc.FutureTimeoutError, stop_max_attempt_number=5)
    def create_channel():
        ch = grpc.insecure_channel(f'localhost:{CHANNEL_PORT}')
        grpc.channel_ready_future(ch).result(timeout=timeout)
        return ch

    executable_path = os.path.join(
        os.path.dirname(pb2.__file__),
        EXECUTABLE_NAME
    )
    if EXECUTABLE_NAME not in [p.name() for p in psutil.process_iter()]:
        subprocess.Popen([executable_path], shell=True, stdout=subprocess.PIPE)

    ch = create_channel()
    client = svc.LocalServiceStub(ch)
    return client
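For reference, the retry mentioned in the question (the commented-out retry decorator) could also be written as an explicit loop around the channel setup. A rough sketch only; the attempt count, sleep, and port are placeholder values:

import time
import grpc

def create_channel_with_retry(port, attempts=5, timeout=5):
    # try a few times to open a channel and wait until it is ready
    for _ in range(attempts):
        channel = grpc.insecure_channel(f'localhost:{port}')
        try:
            grpc.channel_ready_future(channel).result(timeout=timeout)
            return channel
        except grpc.FutureTimeoutError:
            channel.close()
            time.sleep(1)  # give the subprocess a moment before retrying
    raise RuntimeError('gRPC server did not become ready')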

How to have a Python Script run constantly in the background connected to a Node.JS server

Currently I have a Node.js server that needs to run a Python script. Part of this script imports some data from a server, which takes time; that is detrimental to the experience, as this Python script needs to be run frequently.
What I want is for the Node.js server to run this Python script when the server starts, then keep it running constantly in the background, with the ability to (from the Node.js server) call a Python function that returns data.
So far I have this on the Node.js server, which runs a Python script and outputs the response. This is repeated every time data needs to be retrieved:
const util = require('util');
const { PythonShell } = require('python-shell');

var options = {
    mode: 'text',
    args: [sentenceToUse]
};

const readPy = util.promisify(PythonShell.run);
const test = await readPy("spacyTest.py", options);

var response = test.toString();
response = response.replace(/'/g, '"');
response = JSON.parse(response);
return response;
How can I keep this running in the background without restarting the python script every time data is needed?
It seems you need to change the Python script itself so it keeps running and responds to requests from its parent. If the Python script currently runs and exits, there's nothing you can do from Node.js to change that; you need to change the Python script.
Here are some options:
1. The Python script can regularly send data back to its parent on stdout, and you can read that data from Node.js as it arrives.
2. You can put a little local HTTP server or socket.io server in the Python script on a known port and have your Node.js parent communicate with that to make requests and get responses.
3. The python-shell module also shows you how to communicate between the Node.js parent and the Python script in its documentation, so that is an option too.
Since options 1 and 3 are built-in already, I would probably start with one of those until you find some reason they aren't sufficient.
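As an illustration of option 1 (and of the message passing python-shell supports), the Python side can stay alive and answer one request per line on stdin/stdout. A sketch only; the JSON request/response shape here is made up for the example:

import json
import sys

# long-running loop: read one JSON request per line from stdin,
# process it, and write one JSON response per line to stdout
for line in sys.stdin:
    request = json.loads(line)
    sentence = request.get('sentence', '')
    result = {'length': len(sentence)}   # placeholder for the real spaCy work
    sys.stdout.write(json.dumps(result) + '\n')
    sys.stdout.flush()                   # flush so Node sees the reply immediately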

Flask library is no longer accessible from http even though the py is still alive and working

I'm using the Flask library with Python to create a web interface.
The .py script is supposed to run indefinitely (it gets data every 30 minutes from the DB, does some operations, and serves the dataframe as JSON on a certain route, myroute).
I'm using:
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5055, debug=True)
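For context, the setup described above presumably looks roughly like this; a sketch only, where fetch_from_db stands in for the real DB query and dataframe work:

import threading
import time

from flask import Flask, jsonify

app = Flask(__name__)
latest = {}  # most recently fetched data


def fetch_from_db():
    # stand-in for the real DB query / dataframe processing
    return {'fetched_at': time.time()}


def refresh_forever():
    while True:
        latest['data'] = fetch_from_db()
        time.sleep(30 * 60)  # every 30 minutes, as in the question


@app.route('/myroute')
def myroute():
    return jsonify(latest)


if __name__ == '__main__':
    threading.Thread(target=refresh_forever, daemon=True).start()
    app.run(host='0.0.0.0', port=5055, debug=True)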
The problem is that after a while, if I open http://myapp:5055/myroute, I get ERR_CONNECTION_REFUSED.
Running lsof -i tcp:5055 shows that no app is using this port.
However, the logs I have in the .py code indicate that it is working fine (it's fetching data and adjusting it).
Any clue on how to debug further?

Use value from javascript on html page to execute python script

I have a problem. I have a project that involves sending a value from a web page to a web server and using that value to generate a voltage by commanding a digital-to-analog converter. For the server side I am using a Python script that works very well, and I have created a simple web page in which I can enter the wanted value. But the link between them is missing. I am trying to understand CGI scripts so I can use them to pass the value from the web page to the Python script, but with no luck. Does anyone have any other ideas, or can anyone explain CGI for beginners? Thank you.
Here is a simple Python script that pipes the output of a locally executed command (dir on a Windows computer in this case) via a web request (using the excellent web.py library):
import web
from subprocess import check_output

urls = (
    '/', 'index'
)
app = web.application(urls, globals())

class index:
    def GET(self):
        return '<pre>' + check_output('dir', shell=True).decode() + '</pre>'

if __name__ == "__main__":
    app.run()
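To get the value entered on the page across to the Python side, the same web.py pattern can read a query-string parameter sent by the page's JavaScript (for example via a request to /?value=3.3). A sketch; the parameter name value and the DAC call are placeholders:

import web

urls = ('/', 'index')
app = web.application(urls, globals())

class index:
    def GET(self):
        # read ?value=... from the query string sent by the page's JavaScript
        params = web.input(value='0')
        voltage = float(params.value)
        # placeholder: this is where the DAC would actually be commanded
        return "Would set the DAC output to %.2f V" % voltage

if __name__ == "__main__":
    app.run()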

Calling a python web server within a script

I would like to have a web server displaying the status of two of my Python scripts.
These scripts listen for incoming data on a specific port. What I would like is that when the script is running, the web server returns an HTTP 200, and when the script is not running, a 500. I have had a look at CherryPy and other such Python web servers, but I could not get them to run first and then, while the web server is running, continue with the rest of my code. I would like it so that if the script crashes, so does the web server. Alternatively, the web server could display a blank web page with just a 1 in the HTML if the script is running, or a 0 if it is not.
Any suggestions?
Thanks in advance.
Actually, I was just answering a question moderately similar to this one. The idea would be to run script A and have it spin off two threads running the scripts you intend, and then just have a web page do a check:
import threading
import cherrypy
from cherrypy import expose

class thread1(threading.Thread):
    def run(self):
        # code for script 1 goes here
        pass

class thread2(threading.Thread):
    def run(self):
        # code for script 2 goes here
        pass

t1 = thread1()
t2 = thread2()
t1.start()
t2.start()

# wrap the exposed check in a CherryPy application and serve it
class Root:
    @expose
    def check(self):
        if t1.is_alive() and t2.is_alive():
            return "1"
        return "0"

cherrypy.quickstart(Root())
I would advise you to put either nginx or Apache in front of this as a reverse proxy.
Now there are two ways this will show you that one of the scripts stopped: it will show a 1 if both are running fine, a 0 if one or both stopped but the rest of the script kept running, or nginx/Apache will give you a 500 error saying that the backend server (i.e. CherryPy) crashed, which means the entire script stopped working.
I would break it apart further:
- script A, on port a
- script B, on port b
- web script C, which checks on A and B (by making simple requests to them) and returns the results in a machine-friendly format, i.e. JSON or XML (see the sketch below)
- web page D, which calls C and formats the results for people, i.e. an HTML table
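A rough sketch of what web script C could look like, using only the standard library and assuming A and B listen on made-up ports; it returns JSON, with a 200 when both scripts accept a connection and a 500 otherwise:

import json
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

# placeholder ports for wherever scripts A and B actually listen
TARGETS = {'A': ('localhost', 8001), 'B': ('localhost', 8002)}


def is_listening(host, port):
    # True if something accepts a TCP connection on host:port
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False


class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status = {name: is_listening(*addr) for name, addr in TARGETS.items()}
        body = json.dumps(status).encode()
        self.send_response(200 if all(status.values()) else 500)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)


if __name__ == '__main__':
    HTTPServer(('', 8080), StatusHandler).serve_forever()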
There are existing programs which do this - Nagios springs to mind.
