My SystemD service file looks like this:
[Unit]
Description=XXX
After=sound.target network.target
Wants=sound.target
[Service]
ExecStart=/usr/bin/python3 -u raspberry.py
WorkingDirectory=/home/pi/Desktop
Restart=always
User=pi
PrivateTmp=true
[Install]
Alias=XXX
WantedBy=multi-user.target
The Python script is a classic python-socketio client that should listen for events like "listen" and "play". The main part of the code looks like this:
import subprocess
import socketio
HOST = "https://XXX.ngrok.io"
sio = socketio.Client(engineio_logger=True)
...
@sio.on('play')
def play(data):
    print("play")
    subprocess.call(["espeak", "'Not working'"])

if __name__ == '__main__':
    subprocess.call(["espeak", "'Initialized'"])
    sio.connect(HOST)
    sio.wait()
When I set up the service to run at boot, the first espeak call is executed and the socket connection with my server is established. But when I then send an event through my server, the second espeak call produces no sound. If I look into the logs with journalctl -u XXX, I can see that the function is called, because the print statement is executed.
What comes to mind is that it is because the subprocess call runs from a different thread, but I am not sure. Any ideas?
The solution is related to my other question on the Raspberry Pi forum. The main problem was the inability of root to play sounds. While debugging, I found that because of User=pi the service is started as user pi, but when subprocess.call runs inside the @sio.on('play') handler, it is executed as user root. This happened only in the @sio.on('play') handler; if I did the same thing in the if __name__ == '__main__': block, the call ran as user pi. I still don't know why this happened, but the solution was to stop using the AIY HAT version of Raspbian and use the classic Raspbian Stretch Lite instead.
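To see which user a given code path actually runs as, a small helper like this can be dropped into both the event handler and the __main__ block (a debugging sketch using only the standard library; report_identity is a hypothetical name, not part of python-socketio):

```python
import getpass
import os

def report_identity(label):
    # Print the effective uid and user name for the current code path;
    # calling this from the 'play' handler and from the __main__ block
    # shows whether the two run as the same user.
    print(label, "euid:", os.geteuid(), "user:", getpass.getuser())

report_identity("main")
```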
While developing a Telegram bot in Python, I ran into a problem with notifications firing on their own on my Ubuntu system.
Let's start from the beginning. For the daily notification I use a library called "schedule". I won't describe it fully, but the code looks something like this:
import time
from multiprocessing import Process

import schedule

def start_process():
    Process(target=P_schedule.start_schedule, args=()).start()

class P_schedule():
    def start_schedule():
        schedule.every().day.at("19:00").do(P_schedule.send_message)
        while True:
            schedule.run_pending()
            time.sleep(1)

    def send_message():
        bot.send_message(user_ID, 'Message Text')
There don't seem to be any errors here; it operates correctly. I then loaded all of this onto the Ubuntu system and hooked it into systemd for autostart with these commands:
vim /etc/systemd/system/bot.service
[Unit]
Description=Awesome Bot
After=syslog.target
After=network.target
[Service]
Type=simple
User=bot
WorkingDirectory=/home/bot/tgbot
ExecStart=/usr/bin/python3 /home/bot/tgbot/bot.py
Restart=always
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable bot
systemctl start bot
After making edits to the code, I restart the service with:
systemctl restart bot
The problem is this: when I change the notification time, the notification starts arriving both at the new time I specified and at the old time. As I understand it, systemd stores the old time value in a cache somewhere. How can I get systemd to clear this cache?
Rebooting the system with this command helped:
sudo systemctl reboot
I have a Django project and I am using pykafka. I created two files inside the project named producer.py and consumer.py. I have to change directory into the folder where they are located and then run python producer.py and python consumer.py separately from the terminal. Everything works great.
I deployed my project online and the web app is running, so now I want to run the producer and consumer automatically in the background. How do I do that?
EDIT 1: On my production server I ran nohup python name_of_python_script.py & to execute it in the background. This works for the time being, but is it a good solution?
You can create a systemd service MyKafkaConsumer.service under /etc/systemd/system with the following content:
[Unit]
Description=A Kafka Consumer written in Python
# include any other prerequisites in After=
After=network.target
[Service]
Type=simple
User=your_user
Group=your_user_group
WorkingDirectory=/path/to/your/consumer
ExecStart=/usr/bin/python consumer.py
TimeoutStopSec=180
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
In order to start the service (and configure it in order to run on boot) you should run
systemctl enable MyKafkaConsumer.service
systemctl start MyKafkaConsumer.service
To check its status:
systemctl status MyKafkaConsumer
And to see the logs:
journalctl -u MyKafkaConsumer -f
(or if you want to see the last 100 lines)
journalctl -u MyKafkaConsumer -n 100
You'd need to create a similar service for your producer too.
There are a lot of options for systemd services. You can refer to this article if you need any further clarifications. It shouldn't be hard to find guides and additional material online though.
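Since the unit sets TimeoutStopSec=180 and Restart=on-failure, it helps if the consumer exits cleanly on SIGTERM, which systemd sends on systemctl stop; otherwise it is eventually killed with SIGKILL. A standard-library sketch (the pykafka consume loop is only indicated by a comment, and the self-kill merely simulates systemctl stop):

```python
import os
import signal
import time

running = True

def handle_sigterm(signum, frame):
    # systemd sends SIGTERM on `systemctl stop`; flag the loop to exit
    # cleanly before TimeoutStopSec expires and SIGKILL is escalated
    global running
    running = False

signal.signal(signal.SIGTERM, handle_sigterm)

os.kill(os.getpid(), signal.SIGTERM)  # simulate `systemctl stop` for this demo
while running:
    # real script: message = consumer.consume(); process the message here
    time.sleep(0.1)
print("shut down cleanly")
```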
I'm running a web app at 127.0.0.1:5000 and am using the Python client library for Prometheus. I use start_http_server(8000) from the example in their docs to expose the metrics on that port. The application runs, but I get [Errno 48] Address already in use, and localhost:8000 doesn't connect to anything when I try hitting it.
If I can't start two servers from one web app, then what port should I pass into start_http_server() in order to expose the metrics?
There is nothing already running on either port before I start the app.
Some other process is using port 8000. To kill the process that is running on port 8000, first find its process id (PID):
lsof -i :8000
This will show the processes listening on port 8000, like this:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python3 21271 hashed 3u IPv4 1430288 0t0 TCP *:8000 (LISTEN)
You can kill the process using the kill command like this:
sudo kill -9 21271
Recheck that the process is killed using the same command:
lsof -i :8000
There should be no output.
When Flask's debug mode is set to True, the reloader re-executes the code after the Flask server is up, so the bind to the Prometheus server is attempted a second time.
Set the Flask app's debug argument to False to solve it.
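If you want to keep debug mode, another option (based on my understanding of Werkzeug's reloader, so treat it as an assumption) is to start the metrics server only in the reloader's serving child, which has WERKZEUG_RUN_MAIN set; should_start_metrics is a hypothetical helper:

```python
import os

def should_start_metrics():
    # With the reloader active, the parent process only watches files;
    # the child that actually serves requests has WERKZEUG_RUN_MAIN set,
    # so calling start_http_server(8000) only there binds the port once.
    return os.environ.get("WERKZEUG_RUN_MAIN") == "true"

os.environ["WERKZEUG_RUN_MAIN"] = "true"  # simulate the reloader's child
```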
This is mainly because you are starting the server again on port 8000.
To resolve this, I created a function which creates the server only after making sure the previous server is gone and the port can be used.
The same case is addressed at https://github.com/prometheus/client_python/issues/155.
Port 8000 does not need to have a web server running on it for it to be already in use. Use your OS command line to find the process that is occupying the port, then kill it. If a service is running that causes it to be spawned again, disable that service.
A simpler solution would be to use another port instead of 8000.
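If you switch ports, you can also let the OS pick a free one instead of guessing; a standard-library sketch (pick_free_port is a hypothetical helper, and the prometheus_client call is only indicated by a comment):

```python
import socket

def pick_free_port():
    # Binding to port 0 asks the OS for any unused port; note there is a
    # small race window between closing the socket and reusing the number
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = pick_free_port()
# start_http_server(port)  # from prometheus_client
```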
EDIT: Looks like it is a bug in Prometheus. Github Issue
You cannot run two HTTP servers on the same thread. I don't know why, but the implementation of prometheus_client doesn't run the server on a separate thread.
My solution is:
import logging.config
import os

import connexion
from multiprocessing.pool import ThreadPool
from prometheus_client import start_http_server

app = connexion.App(__name__, specification_dir='./')
app.add_api('swagger.yml')

# If we're running in stand-alone mode, run the application
if __name__ == '__main__':
    pool = ThreadPool(1)
    pool.apply_async(start_http_server, (8000,))  # start prometheus in a different thread
    app.run(host='0.0.0.0', port=5000, debug=True)  # start my server
Maybe your port 8000 is occupied. You can change it to another port, such as 8001.
I found this zero-dependency Python websocket server on SO: https://gist.github.com/jkp/3136208
I am using gunicorn for my Flask app and I wanted to run this websocket server under gunicorn as well. In its last few lines, the code starts the server with:
if __name__ == "__main__":
    server = SocketServer.TCPServer(("localhost", 9999), WebSocketsHandler)
    server.serve_forever()
I cannot figure out how to get this websocketserver.py running under gunicorn, since you would need gunicorn to run serve_forever() as well as the SocketServer.TCPServer setup.
Is this possible?
Gunicorn expects a WSGI application (PEP 333), not just a function. Your app has to accept an environ dict and a start_response callback and return an iterable of data (roughly speaking). All the machinery encapsulated by SocketServer.StreamRequestHandler is on gunicorn's side. I imagine it is a lot of work to modify this gist into a WSGI application (but that'll be fun!).
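The contract described above can be sketched in a few lines; this is a minimal PEP 333 application plus a fake server call, not a websocket handler:

```python
def app(environ, start_response):
    # gunicorn calls this once per request: environ is a dict of CGI-style
    # variables, start_response takes the status line and header list, and
    # the return value is an iterable of byte strings
    body = b"hello"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise it the way a WSGI server would:
seen = []
result = app({}, lambda status, headers: seen.append(status))
```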
OR, maybe this library will get the job done for you: https://github.com/CMGS/gunicorn-websocket
If you use the Flask-Sockets extension, you get a websocket implementation for gunicorn directly in the extension, which makes it possible to start with the following command line:
gunicorn -k flask_sockets.worker app:app
Though I don't know if that's what you want to do.
I wrote a Python XML-RPC server for my web application. The problem is that whenever I start the server from the shell and then exit, the XML-RPC server stops as well. I tried executing the server script from another file, thinking that it would continue to run in the background, but that didn't work. Here's the code used to start the server.
import SimpleXMLRPCServer

host = 'localhost'
port = 8000
server = SimpleXMLRPCServer.SimpleXMLRPCServer((host, port))
server.register_function(getList)
server.serve_forever()
In the shell I just run python MyXmlrpcServer.py to start the server.
What do I do to be able to start the server and keep it running?
@warwaruk makes a useful suggestion; Twisted XML-RPC is simple and robust. However, if you simply want to run and manage a Python process in the background, take a look at Supervisord. It is a simple process management system.
$ pip install supervisor
$ echo_supervisord_conf > /etc/supervisord.conf
Edit that config file to add a definition of your process thus...
[program:mycoolproc]
directory=/path/to/my/script/dir
command=python MyXmlrpcServer.py
Start supervisord and start your process
$ supervisord
$ supervisorctl start mycoolproc
Better to use Twisted to create an XML-RPC server. That way you will not need to write your own server, it is very flexible, and you will be able to run it in the background using twistd:
#!/usr/bin/env python
from twisted.web import xmlrpc, server
from twisted.internet import reactor

class Worker(xmlrpc.XMLRPC):
    def xmlrpc_test(self):
        print 'test called!'

port = 1235
r = Worker(allowNone=True)

if __name__ == '__main__':
    print 'Listening on port', port
    reactor.listenTCP(port, server.Site(r))
    reactor.run()
else:
    # run the worker as a twistd service application: twistd -y xmlrpc_server.py --no_save
    from twisted.application import service, internet
    application = service.Application('xmlrpc_server')
    internet.TCPServer(port, server.Site(r)).setServiceParent(application)