Anacron will not run notifications from python/bash scripts

The basics
I am trying to add desktop notifications to some fairly simple scripts running with anacron just to let me know when they are running and when they have finished. For some reason, the scripts DO run, but the notifications never get sent. If I run the scripts manually (that is to say, using ./bash_test.sh instead of sudo anacron -fdn testing), the notifications send just fine.
The scripts
The bash script I am trying to run looks like this:
#!/bin/bash
python -c 'import notifications; notifications.clamstart()'
#some clamav scanning stuff happens in here, this bit runs fine
python -c 'import notifications; notifications.clamfinish()'
and the corresponding notifications.py file looks like:
from plyer import notification

def clamstart():
    notification.notify(
        message="Security script has started running.",
        app_name="Clam Scan Daily",
        hints={"desktop-entry": "clamtk"}
    )

def clamfinish():
    notification.notify(
        message="Security script has finished running.",
        app_name="Clam Scan Daily",
        hints={"desktop-entry": "clamtk"}
    )
Supplemental info
These two files are in the same directory, so as far as I'm aware the import statements should work fine (and they do when I run it with ./bash_test.sh).
I have already tried using notify-send; that was what I had set up initially, and it ran into the same problem, which is why I decided to switch to Python's plyer notify() and see if that worked.
ALL of these components work fine individually; they only stop working when I run them through anacron with sudo anacron -fdn testing.
I believe the anacrontab is set up properly since it runs except for the notifications bit, but just in case I'll add it here too:
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22
#period in days delay in minutes job-identifier command
1 5 cron.daily nice run-parts /etc/cron.daily
7 25 cron.weekly nice run-parts /etc/cron.weekly
#monthly 45 cron.monthly nice run-parts /etc/cron.monthly
1 10 backup-script /home/glottophilos/backup_daily.sh
7 15 virus-scan /home/glottophilos/clamscan_daily.sh
1 10 testing /home/glottophilos/testscript.sh
I should also note that I am pretty opposed to the idea of using cron instead of anacron because this is a setup for a personal rig that is not on all the time. If there is another way of handling the scheduling though that doesn't require anacron at all, I'm happy to explore that option!
This is NOT a duplicate of Using notify-send with cron. They are similar, but the answer I posted has some structural differences and works where the other does not, for reasons I'm not entirely sure of.
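In case it helps anyone with diagnosis: a quick way to see what anacron's environment actually looks like is to dump it from inside a job and diff it against a normal terminal session. A rough sketch (the output path is an arbitrary choice):
#!/bin/bash
# diagnostic only: run this via anacron, then compare with `env | sort` from a terminal
env | sort > /tmp/anacron_env.txt
Under anacron, session variables such as DISPLAY and DBUS_SESSION_BUS_ADDRESS are typically missing, which is exactly what the solution below restores.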

Solution
OK, so the solution, as per @Nick ODell's direction in the comments on the original, appears to have been making these changes in the /etc/anacrontab file:
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22
#period in days delay in minutes job-identifier command
1 5 cron.daily nice run-parts /etc/cron.daily
7 25 cron.weekly nice run-parts /etc/cron.weekly
#monthly 45 cron.monthly nice run-parts /etc/cron.monthly
1 10 backup-script sudo -u glottophilos DISPLAY=:0 DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus /home/glottophilos/backup.sh
7 15 virus-scan sudo -u glottophilos DISPLAY=:0 DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus /home/glottophilos/clamscan-daily.sh
1 10 testing sudo -u glottophilos DISPLAY=:0 DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus /home/glottophilos/testscript.sh
and then using this format in the bash script (avoiding python altogether):
notify-send --app-name="Clam Scan Daily" --hint=string:desktop-entry:clamtk "Security script is running."
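If the hard-coded UID in the bus path bothers you, it can be derived at runtime instead. A minimal sketch of a wrapper, assuming the desktop user is glottophilos (the wrapper itself is hypothetical, not part of the original setup):
#!/bin/bash
# resolve the user's runtime bus path instead of hardcoding /run/user/1000
uid=$(id -u glottophilos)
sudo -u glottophilos DISPLAY=:0 DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/${uid}/bus" /home/glottophilos/testscript.sh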

Related

Run python script when computer is not being used?

I recently got into machine learning. I'm running a Python script that is heavy on my processor. My first idea was to set up a cron job running in the background and then, in Python, cancel the job if the time is between 06:00 and 07:00 in the morning. (The job should ideally only be cancelled at certain stages.)
0 1 * * * cd ~/web/im2txt/im2txt && ./train.sh >/Users/kristoffer/Desktop/train.py 2>/Users/kristoffer/Desktop/train.log
But then I got thinking: is there some way, either in Python or via the shell, to run a script only when the computer is not being used? When it is idle, or something like that?
xscreensaver can run any program specified in its configuration file, e.g.:
programs: \
qix -root \n\
ico -r -faces -sleep 1 -obj ico \n\
xdaliclock -builtin2 -root \n\
xv -root -rmode 5 image.gif -quit \n
Then you can add your own entry and let xscreensaver do the rest, determining when your computer is idle.
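For instance, to reuse the training script from the question, you might append an entry of your own; a sketch (the nice prefix is optional, and the path is taken from the question):
programs: \
qix -root \n\
nice -n 19 ~/web/im2txt/im2txt/train.sh \n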
The standard way to make your program run with lower priority compared to other processes is using the nice command:
nice -n 19 ./train.sh
The command will run all the time, but the scheduler will give it the lowest possible priority, effectively giving it CPU time only when there is nothing else to do.
Note, however, that nice will only make the process nice (hence the name) to other processes. If no other processes are competing for CPU time, a CPU-hungry process will still utilize 100% of the available cores (and heat up the machine), even when niced to the lowest priority.
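To confirm the priority actually took effect, you can start the job and inspect its nice value; a quick sketch:
# start the job at the lowest priority, then show its PID, niceness and command
nice -n 19 ./train.sh &
ps -o pid,ni,comm -p $!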

How to timeout code being profiled with cProfile without modifying user code?

Often during my work I write code to read lines from a file and I process those lines one at a time.
Sometimes the line processing is complicated and the file is long; for example, today it takes roughly a minute to process 200 lines, and the file has 175k lines in total.
I want to figure out which part of my code is taking so long, and for that I decided to use cProfile in Python.
The problem is that I can't actually run the whole program, because that would take too long, and if I interrupt the process midway with an exit signal, cProfile also dies without producing a report. Modifying the code with logic to stop after reading only the top K lines is annoying (because I tend to do this kind of thing a lot for different types of data in my job), and I want to avoid adding options only for the sake of profiling if possible.
What would be the cleanest way to tell cProfiler to run for 3 minutes, profile what happens, stop and then report its findings?
Step 1: run your script myscript.py under the profiler for 3 minutes, outputting the profiling information to the file prof. On Linux and similar, you can do this with
timeout -s INT 3m python -m cProfile -o prof myscript.py
(Note: if you omit -s INT, SIGTERM is used instead of SIGINT, which seems to work on Python 2 but not on Python 3.) Alternatively, on any system, you should be able to run
python -m cProfile -o prof myscript.py
then press Ctrl-C at the end of 3 minutes.
Step 2: get some statistics from the prof file with something like
python -c "import pstats; pstats.Stats('prof').sort_stats('time').print_stats(20)"

Schedule/automate a shell script to run, not as root

I have a shell script that I want to run automatically every day at 08:00, but I am not authorised to use crontab because I don't have root permission.
My home directory is /home/user1/.
Any suggestions?
Ideally you should have your system administrator add your user account to /etc/cron.allow - without that you do not have permission to use the crontab command, as you probably discovered.
If that is not possible, then you could use a wrapper shell script along these lines (note: untested):
#!/bin/bash
TIME="tomorrow 8am"
while :; do
    # Get the current time in seconds since the Epoch
    CURRENT=$(date +%s)
    # Get the next start time in seconds
    NEXT=$(date +%s -d "$TIME")
    # Sleep for the intervening time
    sleep $((NEXT-CURRENT))
    # Execute the command to repeat
    /home/user1/mycommand.sh
done
You start the wrapper script in the background, or e.g. in a screen session, and as long as it's active it will regularly execute your script. Keep in mind that:
Much like cron, there is no real accuracy w.r.t. the start time. If you care about timing to the second, this is not the way to go.
If the script is interrupted for whatever reason, such as a server reboot, you will have to restart it somehow. Normally I'd suggest an @reboot entry in crontab, but that seems to not be an option for you.
If there is some sort of process-cleaning mechanism that kills long-running user processes, you are probably out of luck.
Your system administrator may have simply neglected to allow users access to cron - or it may have been an explicit decision. In the second case, they might not take too well to you leaving a couple of processes running overnight in order to bypass that restriction.
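If you do go the wrapper route, one way to keep it alive after you log out is to detach it with nohup; a sketch, assuming the script above is saved as wrapper.sh:
nohup /home/user1/wrapper.sh >/dev/null 2>&1 &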
Even if you don't have root permission, you may still be able to set up a cron job. Check these two commands as user1 to see whether you can use them or whether they throw an error:
crontab -l
If that displays your crontab, then try this as well:
crontab -e
If you can open and edit it, then you can run your script with cron by adding this line:
0 8 * * * /path/to/your/script
I don't think root permission is required to create a cron job. Editing a cron job that's not owned by you - that's where you'd need root.
In a pinch, you can use at(1). Make sure the program you run reschedules the at job. Warning: this goes to heck if the machine is down for any length of time.
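The self-rescheduling pattern could look something like this; a sketch, with daily_job.sh being a hypothetical name:
#!/bin/bash
# daily_job.sh: do the work, then queue the next run via at(1)
/home/user1/mycommand.sh
echo "/home/user1/daily_job.sh" | at 08:00 tomorrow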

Run my python3 program on remote Ubuntu server every 30 min

I have a working Python 3 program (a *.py file).
I have a DigitalOcean (DO) droplet running Ubuntu 14.04.
My program posts a message to my Twitter account.
I just copy my *.py into some directory on the DO droplet, run it over ssh, and it all works fine.
But I need to post the message (run my program) automatically, every 15-30 minutes for example.
I am a newbie at all of this.
What should I do? Step by step, please!
cron is probably the way to go - it's built for this task. See this DigitalOcean tutorial for details.
This StackOverflow answer explicitly states what to put in for a 30 minute repeat interval.
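For reference, a 30-minute entry typically looks like this (adjust the interpreter and path to your own setup):
*/30 * * * * /usr/bin/python3 /path/to/your_script.py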
If you don't want to use cron for some reason, you could do something like:
import time
# Repeat forever
while True:
    post_to_twitter()  # Call your function
    # Sleep for 60 seconds/minute * 30 minutes
    time.sleep(60 * 30)
First install and enable fcron. Then sudo -s into root and run fcrontab -e. In the editor, enter */30 * * * * /path/to/script.py and save the file. Change 30 to 15 if every 15 minutes is what you're after.

Saltstack salt-master service start is taking too long

I'm in trouble with SaltStack since I started two different development projects using its Python API. Sometimes the services crash, and when I try to start them again or reboot the servers, it takes more than 24 hours for them to start. The logs are empty, and if I start salt-master in debug mode nothing happens.
# dpkg -l| grep salt
ii salt-common 2014.1.5+ds-1~bpo70+1
ii salt-master 2014.1.5+ds-1~bpo70+1
Note: it's happening to me on two different machines, both running Debian sid.
Whoa, 24 hours is a ridiculous amount of time to start up.
Have you added any custom grains, modules or external pillars?
Have you tried upgrading? 2014.1.10 is now out.
