How do I start python XMLRPC server in the background? - python

I wrote a Python XML-RPC server for my web application. The problem is that whenever I start the server from a shell and then exit, the XML-RPC server stops as well. I tried executing the server script from another file, thinking it would continue running in the background, but that didn't work. Here's the code used to start the server.
import SimpleXMLRPCServer

host = 'localhost'
port = 8000
server = SimpleXMLRPCServer.SimpleXMLRPCServer((host, port))
server.register_function(getList)
server.serve_forever()
In the shell I just run python MyXmlrpcServer.py to start the server.
What do I do to be able to start a server and keep it running?

@warwaruk makes a useful suggestion; Twisted XML-RPC is simple and robust. However, if you simply want to run and manage a Python process in the background, take a look at Supervisord. It is a simple process-management system.
$ pip install supervisor
$ echo_supervisord_conf > /etc/supervisord.conf
Edit that config file to add a definition of your process thus...
[program:mycoolproc]
directory=/path/to/my/script/dir
command=python MyXmlrpcServer.py
Start supervisord and start your process
$ supervisord
$ supervisorctl start mycoolproc
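If you want the process to come back automatically after a crash or a supervisord restart, the program section can be extended with a couple of standard Supervisor options; the log paths below are only placeholders:
[program:mycoolproc]
directory=/path/to/my/script/dir
command=python MyXmlrpcServer.py
autostart=true
autorestart=true
stdout_logfile=/var/log/mycoolproc.out.log
stderr_logfile=/var/log/mycoolproc.err.log
You can then check on the process with supervisorctl status mycoolproc.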

Better to use Twisted to create the XML-RPC server. That way you don't need to write your own server, it is very flexible, and you can run it in the background using twistd:
#!/usr/bin/env python
import time, datetime, os, sys
from twisted.web import xmlrpc, server
from twisted.internet import reactor

class Worker(xmlrpc.XMLRPC):
    def xmlrpc_test(self):
        print 'test called!'

port = 1235
r = Worker(allowNone=True)

if __name__ == '__main__':
    print 'Listening on port', port
    reactor.listenTCP(port, server.Site(r))
    reactor.run()
else:
    # run the worker as a twistd service application: twistd -y xmlrpc_server.py --no_save
    from twisted.application import service, internet
    application = service.Application('xmlrpc_server')
    internet.TCPServer(port, server.Site(r)).setServiceParent(application)
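To sanity-check the server from another shell, a minimal client could look like the sketch below (Python 2, to match the server code above; the test method name maps to xmlrpc_test on the Worker):
import xmlrpclib

proxy = xmlrpclib.ServerProxy('http://localhost:1235/')
proxy.test()  # calls Worker.xmlrpc_test on the server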

Related

Trying to make a simple python web server and its not starting

I've been trying to make a web server and I have the code that should be able to get it running, but when I go into the Command Prompt and type python app.py it doesn't run when it should. This is the code that I have:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def main():
    return "Welcome to my Flask page"

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=80)
The server won't run on port 80, it will run on the default port (5000). If you run the server and navigate to HTTP://0.0.0.0:5000/, you should see your / response. See Why can't I change the host and port that my Flask app runs on?.
To change the port Flask runs on, you can specify it in the command line:
flask run -h localhost -p 3000
Here, I run the server on localhost:3000. If you try to run the server on port 80, you will get a permission denied error since any port under 1024 needs root privileges (as m1ghtfr3e said in their answer).
Also, this is a great tutorial I recommend to anyone learning Flask: https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world
I think the problem is port 80.
Which OS are you using?
Ports under 1024 need root privileges; it is also possible that it is not working because some other service (like Apache) is already running on that port.
So either fixing privileges or services or changing the port should make it run.
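If port 80 really is required and you don't want to run the interpreter as root, one Linux-only option is to grant the Python binary the capability to bind privileged ports. This is a sketch and assumes a system-wide python3; note that it affects every script run with that interpreter:
$ sudo setcap 'cap_net_bind_service=+ep' $(readlink -f $(which python3))
Otherwise the simplest fix is to pick any port of 1024 or above, as shown earlier.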

Running python-socketio with SystemD to play sounds at background

My SystemD service file looks like this:
[Unit]
Description=XXX
After=sound.target network.target
Wants=sound.target
[Service]
ExecStart=/usr/bin/python3 -u raspberry.py
WorkingDirectory=/home/pi/Desktop
Restart=always
User=pi
PrivateTmp=true
[Install]
Alias=XXX
WantedBy=multi-user.target
The Python script is a classic python-socketio client which should listen for events like "listen" and "play". The main part of the code looks like this:
import subprocess

import socketio

HOST = "https://XXX.ngrok.io"
sio = socketio.Client(engineio_logger=True)

...

@sio.on('play')
def play(data):
    print("play")
    subprocess.call(["espeak", "'Not working'"])

if __name__ == '__main__':
    subprocess.call(["espeak", "'Initialized'"])
    sio.connect(HOST)
    sio.wait()
When I set up the service to run at boot, the first call to espeak is executed and the socket connection with my server is established, but if I then send an event (through my server), the second call to espeak does not work (there is no sound). If I look at the logs with journalctl -u XXX I can see that the function is called, because the print statement is executed.
What comes to mind is that it is because the subprocess call runs from a different thread, but I am not sure.. any ideas?
The solution is related to my other question on the Raspberry Pi forum. The main problem was the inability of root to play sounds. While debugging, I found that because of User=pi the service is started as user pi, but when I call subprocess.call in the @sio.on('play') handler it runs as user root. This happened only in the @sio.on('play') handler; if I did the same thing in the if __name__ == '__main__': part, the call ran as user pi. I still don't know why this happened, but the solution was to not use the AIY HAT version of Raspbian and instead use classic Raspbian Stretch Lite.
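If you run into something similar, a quick way to confirm which user the event handler's subprocess actually runs as is to log it from inside the handler. This is only a debugging sketch, not part of the original service:
import getpass
import os
import subprocess

import socketio

sio = socketio.Client()

@sio.on('play')
def play(data):
    # log the effective user as seen by Python and by a child process
    print("uid:", os.getuid(), "user:", getpass.getuser())
    subprocess.call(["whoami"])
Comparing that output in journalctl with what you get from the if __name__ == '__main__': block makes a user mismatch easy to spot.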

Prometheus python client error Address already in use

I'm running a web app with address 127.0.0.1:5000 and am using the python client library for Prometheus. I use start_http_server(8000) from the example in their docs to expose the metrics on that port. The application runs, but I get [Errno 48] Address already in use and the localhost:8000 doesn't connect to anything when I try hitting it.
If I can't start two servers from one web app, then what port should I pass into start_http_server() in order to expose the metrics?
There is nothing already running on either port before I start the app.
Some other process is using port 8000. To kill the process that is running on port 8000, first find its process id (PID):
lsof -i :8000
This will show you the processes running on the port 8000 like this:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python3 21271 hashed 3u IPv4 1430288 0t0 TCP *:8000 (LISTEN)
You can kill the process using the kill command like this:
sudo kill -9 21271
Recheck if the process is killed using the same command
lsof -i :8000
There should be nothing on the stdout.
When Flask's debug mode is set to True, the code reloads after the Flask server is up, and the Prometheus server tries to bind its port a second time.
Set the Flask app's debug argument to False to solve it.
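If you would rather keep debug=True, a common workaround is to start the metrics server only in the reloader's child process, which Werkzeug marks with the WERKZEUG_RUN_MAIN environment variable. A minimal sketch, assuming the standard Flask/Werkzeug reloader:
import os

from prometheus_client import start_http_server

# The reloader parent imports the module too; only bind port 8000 in the
# child process that actually serves requests.
if os.environ.get("WERKZEUG_RUN_MAIN") == "true":
    start_http_server(8000)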
This is mainly because you are starting the server on port 8000 again.
To resolve this I created a function which creates the server only after making sure the port can be used.
You can look at https://github.com/prometheus/client_python/issues/155, where the same case is addressed.
Port 8000 does not need to have a web server running on it for it to be already in use. Use your OS command line to find the process that is using up the port then kill it. If a service is also running that causes it to get spawned again, disable that process.
A simpler solution would be to use another port instead of 8000.
EDIT: Looks like it is a bug in Prometheus. Github Issue
You cannot run two HTTP servers on the same thread.
I don't know why, but the implementation of prometheus_client doesn't run the server on a separate thread.
My solution is:
import logging.config
import os

import connexion
from multiprocessing.pool import ThreadPool
from prometheus_client import start_http_server

app = connexion.App(__name__, specification_dir='./')
app.add_api('swagger.yml')

# If we're running in stand-alone mode, run the application
if __name__ == '__main__':
    pool = ThreadPool(1)
    pool.apply_async(start_http_server, (8000,))  # start prometheus in a different thread
    app.run(host='0.0.0.0', port=5000, debug=True)  # start my server
Maybe your port 8000 is occupied. You can change it to another port, such as 8001.

How to pass Unix Commands across network using python

So basically I have this remote computer with a bunch of files.
I want to run unix commands (such as ls or cat) and receive their output locally.
Currently I have connected via python's sockets (I know the IP address of remote computer). But doing:
data = None
message = "ls\n"
sock.send(message)
while not data:
    data = sock.recv(1024)  # <- stalls here forever
...
is not getting me anything.
There is an excellent Python library for this. It's called Paramiko: http://www.paramiko.org/
Paramiko is, among other things, an SSH client which lets you invoke programs on remote machines running sshd (which includes lots of standard servers).
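For example, a minimal sketch of running ls on the remote machine with Paramiko (the hostname, username and password are placeholders):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # convenient for testing; verify host keys in production
client.connect("remote.example.com", username="user", password="secret")

stdin, stdout, stderr = client.exec_command("ls")
print(stdout.read().decode())

client.close()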
You can use Python's subprocess module to accomplish your task. It is a built-in module and does not have many dependencies.
For your problem, I would suggest the Popen method, which runs command on remote computer and returns the result to your machine.
import subprocess

out = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
t = out.stdout.read() + out.stderr.read()
socket.send(t)
where cmd is your command which you want to execute.
This will return the result of the command to your screen.
Hope that helps !!!
This is what I did for your situation.
In terminal 1, I set up a remote shell over a socket using ncat, a nc variant:
$ ncat -l -v 50007 -e /bin/bash
In terminal 2, I connect to the socket with this Python code:
$ cat python-pass-unix-commands-socket.py
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('', 50007))
sock.send('ls\n')
data = sock.recv(1024)
print data
sock.close()
$ python pass-unix-commands-socket.py
This is the output I get in terminal 1 after running the command:
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Listening on :::50007
Ncat: Listening on 0.0.0.0:50007
Ncat: Connection from 127.0.0.1.
Ncat: Connection from 127.0.0.1:39507.
$
And in terminal 2:
$ python pass-unix-commands-socket.py
alternating-characters.in
alternating-characters.rkt
angry-children.in
angry-children.rkt
angry-professor.in
angry-professor.rkt
$

How to run python websocket on gunicorn

I found this 0 dependency python websocket server from SO: https://gist.github.com/jkp/3136208
I am using gunicorn for my flask app and I wanted to run this websocket server using gunicorn also. In the last few lines of the code it runs the server with:
if __name__ == "__main__":
    server = SocketServer.TCPServer(
        ("localhost", 9999), WebSocketsHandler)
    server.serve_forever()
I cannot figure out how to get this websocketserver.py running under gunicorn, because presumably you would want gunicorn to run serve_forever() as well as set up the SocketServer.TCPServer(...
Is this possible?
Gunicorn expects a WSGI application (PEP 333), not just a function. Your app has to accept an environ dict and a start_response callback and return an iterable of data (roughly speaking). All the machinery encapsulated by SocketServer.StreamRequestHandler is on gunicorn's side. I imagine it is a lot of work to modify this gist to become a WSGI application (but that'll be fun!).
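For comparison, this is roughly the shape of the callable gunicorn expects: a bare-bones WSGI app (not a websocket server), runnable with gunicorn mymodule:app, where mymodule is whatever file you put it in:
def app(environ, start_response):
    # environ is a dict of request data; start_response sets the status and headers
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from WSGI\n']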
OR, maybe this library will get the job done for you: https://github.com/CMGS/gunicorn-websocket
If you use the Flask-Sockets extension, you get a websocket implementation for gunicorn directly in the extension, which makes it possible to start with the following command line:
gunicorn -k flask_sockets.worker app:app
Though I don't know if that's what you want to do.
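For reference, a minimal Flask-Sockets handler, adapted from the extension's documented usage (the /echo route is just an example), looks roughly like this:
from flask import Flask
from flask_sockets import Sockets

app = Flask(__name__)
sockets = Sockets(app)

@sockets.route('/echo')
def echo_socket(ws):
    # echo every message back until the client disconnects
    while not ws.closed:
        message = ws.receive()
        if message is not None:
            ws.send(message)
This is the app you would then point the gunicorn command above at.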
