Notification fires twice: Python, Ubuntu, Telegram - python

While developing a Telegram bot in Python, I ran into a problem with notifications triggering twice under Ubuntu.
Let's start from the beginning. For the daily notification I use a library called "schedule". I won't reproduce it fully in code, but it looks something like this:
import time
from multiprocessing import Process

import schedule

def start_process():
    Process(target=P_schedule.start_schedule, args=()).start()

class P_schedule():
    def start_schedule():
        schedule.every().day.at("19:00").do(P_schedule.send_message)
        while True:
            schedule.run_pending()
            time.sleep(1)

    def send_message():
        # bot and user_ID are defined elsewhere in the bot's code
        bot.send_message(user_ID, 'Message Text')
There don't seem to be any errors here; this part works correctly. I then uploaded all of this to the Ubuntu system and hooked it up to "systemd" for autostart with these commands:
vim /etc/systemd/system/bot.service
[Unit]
Description=Awesome Bot
After=syslog.target
After=network.target
[Service]
Type=simple
User=bot
WorkingDirectory=/home/bot/tgbot
ExecStart=/usr/bin/python3 /home/bot/tgbot/bot.py
Restart=always
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable bot
systemctl start bot
After making edits to the code, I restart the service with the command:
systemctl restart bot
The problem is the following: when I change the notification time, the message starts arriving both at the new time and at the old one. As I understand it, "systemd" stores the old time value in some cache. How can I get "systemd" to clear this cache?

It helped to reboot the system with the command:
sudo systemctl reboot
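In hindsight, a plausible cause is that an old copy of the bot was still running alongside the new one (for example, one started manually before systemd took over), so two schedulers fired. A quick diagnostic sketch, assuming the script is named bot.py and that pgrep is available:

```python
import subprocess

# List any running processes whose command line mentions bot.py.
# Two matching entries would mean two schedulers are firing.
result = subprocess.run(
    ["pgrep", "-af", "bot.py"],
    capture_output=True, text=True, check=False,
)
print(result.stdout or "no bot.py processes found")
```

If a stray process shows up, killing it (or rebooting, as above) removes the duplicate notification.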

Related

Crontab not starting python program

I created a python program, "test.py" and have saved it under /home/pi/. When I go to run it in the terminal using "python3 /home/pi/test.py" it runs properly and speaks "hello world". The code is shown below.
import os
import alsaaudio

m = alsaaudio.Mixer()            # default ALSA mixer
current_volume = m.getvolume()   # remember the current volume
m.setvolume(35)                  # set the volume to 35%
os.system("espeak 'Hello World!'")
I want this program to start whenever my raspberry pi starts up. I tried to add this line in crontab but my raspberry pi doesn't execute the command. Does anyone know why my program won't execute?
@reboot python3 /home/pi/test.py
Here is an image of the syslog
Can you try adding the full path to python3:
@reboot /usr/bin/python3 /home/pi/test.py
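The full path matters because cron runs jobs with a minimal environment (often little more than /usr/bin:/bin in PATH), so commands that work in an interactive shell may not be found at boot. A small sketch for resolving absolute paths up front:

```python
import shutil

# Resolve the interpreter's absolute path once, instead of relying on PATH.
# Cron jobs and systemd units see a much smaller PATH than a login shell.
python3_path = shutil.which("python3")
print(python3_path)
```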
Also, regarding running the code when the device boots - you can run your code as a service.
To do so create a .service file under /etc/systemd/system (for example my-code.service)
Enter the following inside the file
[Unit]
Description=My python service
After=network.target
[Service]
ExecStart=/usr/bin/python3 -u test.py
WorkingDirectory=/home/pi
[Install]
WantedBy=multi-user.target
Finally enable the service (in order for it to run on boot)
sudo systemctl enable my-code
If you want to start it right away, you can also run
sudo systemctl start my-code

Using scapy sniff on reboot in Raspberrypi (Systemd)

TL;DR: Why does Scapy's sniff not run at reboots from systemd?
I have the following code running on my RPI3 that specifically looks for network requests. This uses the inbuilt ETH0 wifi:
monitorConnections.py
import logging
from scapy.all import ARP, sniff

logging.basicConfig(level=logging.DEBUG)

# known_devices: {mac address: owner name} mapping, defined elsewhere

def arp_detect(pkt):
    print("Starting ARP detect")
    logging.debug('Starting ARP detect')
    if pkt.haslayer(ARP):
        if pkt[ARP].op == 1:  # network request (who-has)
            PHONE_name = "Unknown"
            PHONE_mac_address = ""
            if pkt[ARP].hwsrc in known_devices.keys():
                print("Known Phone Detected")
                logging.debug('Known Phone Detected')
                # Grab name and mac address
                PHONE_mac_address = pkt[ARP].hwsrc
                PHONE_name = known_devices[PHONE_mac_address]
                print('Hello ' + PHONE_name)
                logging.debug('Hello ' + PHONE_name)
            else:
                # Grab mac address, log these locally
                print("Unknown Phone Detected")
                logging.debug('Unknown Phone Detected')
                PHONE_mac_address = pkt[ARP].hwsrc
                print(pkt[ARP].hwsrc)

print("Start!")
print(sniff(prn=arp_detect, filter="arp", store=0))
When I run this via the command
python2 monitorConnections.py
This runs as designed. However, I have been trying to put it in a daemon, conscious that it needs to run after the internet connection has been established. I have the following settings in my service:
MonitorConnections.service
[Unit]
Description=Monitor Connections
Wants=network-online.target
After=network.target network-online.target sys-subsystem-net-devices-wlan0.device sys-subsystem-net-devices-eth0.device
[Service]
Type=simple
ExecStart=/usr/bin/python2 -u monitorConnections.py
ExecStop=pkill -9 /usr/bin/autossh
WorkingDirectory=/home/pi/Shared/MonitorPhones
Restart=always
User=root
StandardOutput=console
StandardError=console
[Install]
WantedBy=multi-user.target
In order to find the services that I need my script to run after, I ran this command:
systemctl list-units --no-pager
This showed the following services to add to my unit under 'After' - these correspond with the ethernet services (I imagine!):
sys-subsystem-net-devices-wlan0.device
sys-subsystem-net-devices-eth0.device
As far as I can tell, this is running successfully. When I save everything and run the following:
sudo systemctl daemon-reload
sudo systemctl restart monitorConnections
This kickstarts the script beautifully. I have then set my script to run at reboot like so:
sudo systemctl enable monitorConnections
After rebooting, I can see that it runs the print statement "Start!", but then nothing inside the 'sniff' callback seems to execute. However, when running
sudo systemctl -l status monitorConnections
I can see that the script is active - so it has not errored!
My question: why does scapy's sniff not seem to run at reboot? Have I missed something out?
I'm honestly at my wits' end as to what is wrong - any help would be greatly appreciated!
The RPI3's built-in wifi driver does not support monitor mode. After weeks of debugging, this was narrowed down as the issue. I hope this helps someone else.

How to autorun a python script at startup in Linux

I have a client script which needs to be invoked at startup. The script runs fine if I start it manually using systemctl start dummy.service. The server script is running on another machine.
But if I reboot my machine, the service does not start, and its status shows failed with result exit-code. While Linux loads its services at boot, before the login screen, it shows "Failed to start dummy.service: Network is unreachable". What exactly could the problem be?
Here is my dummy.service code
path: /lib/systemd/system/dummy.service
[Unit]
Description=Dummy Service
Wants=network-online.target
After=network.target network-online.target
Conflicts=getty@tty1.service
[Service]
Type=forking
ExecStart=/usr/bin/python3 /usr/bin/client.py
StandardInput=tty-force
[Install]
WantedBy=multi-user.target
And my python script in /usr/bin is
#!/usr/bin/python3
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("192.168.1.105", 1234))
msg = s.recv(1024)
print(msg.decode("utf-8"))
systemctl enable dummy.service
If you are not the root user:
chmod 744 the_pythonscript.py
chmod 644 dummy.service
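Unit ordering aside, the boot-time "Network is unreachable" failure can also be absorbed in the script itself by retrying the connection until the server is reachable. A minimal sketch (the connect_with_retry helper and its defaults are illustrative, not part of the original script):

```python
import socket
import time

def connect_with_retry(host, port, attempts=10, delay=3.0):
    # Services started at boot often run before connectivity is up,
    # even with After=network-online.target, so retry for a while.
    for attempt in range(attempts):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect((host, port))
            return s
        except OSError:
            s.close()
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```

The client would then call connect_with_retry("192.168.1.105", 1234) instead of connecting directly.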

Running python-socketio with SystemD to play sounds at background

My SystemD service file looks like this:
[Unit]
Description=XXX
After=sound.target network.target
Wants=sound.target
[Service]
ExecStart=/usr/bin/python3 -u raspberry.py
WorkingDirectory=/home/pi/Desktop
Restart=always
User=pi
PrivateTmp=true
[Install]
Alias=XXX
WantedBy=multi-user.target
The python script is a classic python-socketio client which should listen for events like "listen" and "play". The main part of code looks like this:
import subprocess
import socketio

HOST = "https://XXX.ngrok.io"
sio = socketio.Client(engineio_logger=True)
...

@sio.on('play')
def play(data):
    print("play")
    subprocess.call(["espeak", "'Not working'"])

if __name__ == '__main__':
    subprocess.call(["espeak", "'Initialized'"])
    sio.connect(HOST)
    sio.wait()
When I set the service to run at boot, the first espeak call executes and the socket connection with my server is established. But if I then send an event (through my server), the second espeak call produces no sound. Looking at the logs through journalctl -u XXX, I can see that the function is called, because the print statement is executed.
What comes to mind is that it is because the subprocess call runs from a different thread, but I am not sure... any ideas?
The solution is related to my other question on the Raspberry Pi forum. The main problem was the inability of root to play sounds. While debugging, I found that because of User=pi the service is started as user pi, but when subprocess.call runs inside the @sio.on('play') handler it was executed as root. This happened only in the @sio.on('play') handler; if I did the same in the if __name__ == '__main__': part, the call ran as user pi. I still don't know why this was happening, but the fix was to use classic Raspbian Stretch Lite instead of the AIY HAT version of Raspbian.
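When chasing this kind of issue, it helps to log which account a handler actually runs under, straight from inside the callback. A small sketch (the whoami helper is illustrative):

```python
import os
import pwd

def whoami():
    # Resolve the effective UID to a user name; calling this inside an
    # event handler shows which account the handler really runs as.
    return pwd.getpwuid(os.geteuid()).pw_name

print(whoami())
```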

Kafka Producer and Consumer Scripts to Run automatically

I have a Django project and I am using pykafka. I have created two files named producer.py and consumer.py inside the project. I have to change directory into the folder where these are present and then separately run python producer.py and consumer.py from the terminal. Everything works great.
I deployed my project online and the web-app is running, so I want to run the producer and consumer automatically in the background. How do I do that?
EDIT 1: On my production server I did nohup python name_of_python_script.py & to execute it in the background. This works for the time being but is it a good solution?
You can create a systemd service MyKafkaConsumer.service under /etc/systemd/system with the following content:
[Unit]
Description=A Kafka Consumer written in Python
# include any other prerequisites here; systemd does not allow inline comments
After=network.target
[Service]
Type=simple
User=your_user
Group=your_user_group
WorkingDirectory=/path/to/your/consumer
ExecStart=/usr/bin/python consumer.py
TimeoutStopSec=180
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
In order to start the service (and configure it in order to run on boot) you should run
systemctl enable MyKafkaConsumer.service
systemctl start MyKafkaConsumer.service
To check its status:
systemctl status MyKafkaConsumer
And to see the logs:
journalctl -u MyKafkaConsumer -f
(or if you want to see the last 100 lines)
journalctl -u MyKafkaConsumer -n 100
You'd need to create a similar service for your producer too.
There are a lot of options for systemd services. You can refer to this article if you need any further clarifications. It shouldn't be hard to find guides and additional material online though.
