Currently I am experiencing issues with a script that should run automatically after the wifi adapter connects to a network.
After extensive research, I've made several attempts to add my script to /etc/network/if-up.d/. Run manually, my script works; however, it does not run automatically.
File permissions:
ls -al /etc/network/if-up.d/*
-rwxr-xr-x 1 root root 703 Jul 25 2011 /etc/network/if-up.d/000resolvconf
-rwxr-xr-x 1 root root 484 Apr 13 2015 /etc/network/if-up.d/avahi-daemon
-rwxr-xr-x 1 root root 4958 Apr 6 2015 /etc/network/if-up.d/mountnfs
-rwxr-xr-x 1 root root 945 Apr 14 2016 /etc/network/if-up.d/openssh-server
-rwxr-xr-x 1 root root 48 Apr 26 03:21 /etc/network/if-up.d/sendemail
-rwxr-xr-x 1 root root 1483 Jan 6 2013 /etc/network/if-up.d/upstart
lrwxrwxrwx 1 root root 32 Sep 17 2016 /etc/network/if-up.d/wpasupplicant -> ../../wpa_supplicant/ifupdown.sh
Also, I've tried to invoke the command directly from /etc/network/interfaces
by adding the line
post-up /home/pi/r/sendemail.sh
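For reference, the full stanza in /etc/network/interfaces would look roughly like this (a sketch only; the wlan0 interface name and the wpa-conf path are assumptions about the setup, not my exact config):

auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
    post-up /home/pi/r/sendemail.sh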
Contents of sendemail.sh:
#!/bin/sh
python /home/pi/r/pip.py
After the reboot, nothing actually happens. I've even tried putting sudo in front of the command.
I assume that wpasupplicant is what causes this, but I cannot figure out how to run my script from the ifupdown.sh script under /etc/wpa_supplicant.
Appreciate your help!
If you have no connectivity prior to initializing the wifi interface, I would suggest adding a cron job that runs a bash or Python script to check for connectivity every X minutes:
Ping a host; if the host is up, then run your Python commands or external command.
This is rather general, but hopefully it is of some help.
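For the scheduling part, a hypothetical crontab entry (added via crontab -e) could run such a check every 5 minutes; the script path here is just an assumption:

*/5 * * * * /usr/bin/python /home/pi/r/check_alive.py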
Here is an example of a script that will check if a host is alive:
import re, commands  # Python 2 only; the commands module was removed in Python 3

class CheckAlive:
    def __init__(self):
        # -c 1 sends a single ping so the call returns instead of running forever
        myCommand = commands.getstatusoutput('ping -c 1 google.com')
        searchString = r'ping: unknown host'
        match = re.search(searchString, str(myCommand))
        if match:
            # host is not alive
            print 'not alive, do not do stuff'
        else:
            # host is alive
            print 'alive, time to do stuff'
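The commands module above only exists on Python 2; a rough Python 3 equivalent of the same idea with subprocess checks the exit code of a single ping instead of grepping the output:

import subprocess

def is_alive(host='google.com'):
    # a return code of 0 means the host answered one ping
    result = subprocess.run(['ping', '-c', '1', host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

if is_alive():
    print('alive, time to do stuff')
else:
    print('not alive, do not do stuff')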
I'm currently working on a Linux script on my Raspberry Pi which should call another script with a time delay. I chose the at command for this and have Python execute it directly:
import shlex
import subprocess
subprocess.call(shlex.split("at now + 5 minutes -f /home/raspberry/Desktop/Filename.py"))
When I run the "at command" directly in the console, I get the following feedback:
warning: commands will be executed using /bin/sh
job 13 at Sun Feb 19 23:17:00 2023
Unfortunately, nothing happens at the specified time and the script is not executed. The code of Filename.py is:
#!/usr/bin/env python3
print("Hallo Welt")
Does anyone have any ideas what I forgot or need to do differently?
Contents of /var/spool/mail/raspberry:
From raspberry@raspberrypi Sun Feb 19 23:17:00 2023
Return-path: <raspberry@raspberrypi>
Envelope-to: raspberry@raspberrypi
Delivery-date: Sun, 19 Feb 2023 23:17:00 +0100
Received: from raspberry by raspberrypi.fritz.box with local (Exim 4.94.2)
(envelope-from <raspberry@raspberrypi>)
id 1pTrzQ-0000we-Fr
for raspberry@raspberrypi; Sun, 19 Feb 2023 23:17:00 +0100
Subject: Output from your job 13
To: raspberry@raspberrypi
Message-Id: <E1pTrzQ-0000we-Fr@raspberrypi.fritz.box>
From: raspberry@raspberrypi
Date: Sun, 19 Feb 2023 23:17:00 +0100
sh: 47: Syntax error: word unexpected (expecting ")")
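For what it's worth, that error is consistent with how at -f works: the file is handed to /bin/sh as shell commands, so Python syntax like print("Hallo Welt") makes sh choke. A sketch of one workaround (paths as in the question) is to schedule a shell line that invokes python3, instead of feeding at the Python source:

import subprocess

# let at run a shell command that calls python3, rather than
# passing the Python file to /bin/sh via -f
cmd = 'echo "python3 /home/raspberry/Desktop/Filename.py" | at now + 5 minutes'
subprocess.call(cmd, shell=True)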
I want to print the output of a command line by line until it reaches the main.py file, and then stop.
I tried the code below, but it prints the whole output several times rather than line by line, and it never stops at the line containing main.py.
import subprocess
#store ls -l to variable
get_ls = subprocess.getoutput("ls -l")
#transform output to string
ls = str(get_ls)
#search for main.py file in ls
for line in ls:
    main_py = line.find('main.py')
    print(ls)
    #if find main.py print stop and exit
    if main_py == 'main.py':
        print('stop...')
        exit()
The output keeps looping like this:
-rw-r--r-- 1 runner runner 9009 Feb 19 19:00 poetry.lock
-rw-r--r-- 1 runner runner 354 Feb 19 19:00 pyproject.toml
-rw-r--r-- 1 runner runner 329 Feb 25 00:10 main.py
-rw-r--r-- 1 runner runner 383 Feb 14 17:57 replit.nix
-rw-r--r-- 1 runner runner 61 Feb 19 18:46 urls.tmp
drwxr-xr-x 1 runner runner 56 Oct 26 20:53 venv
I want this output:
-rw-r--r-- 1 runner runner 9009 Feb 19 19:00 poetry.lock
-rw-r--r-- 1 runner runner 354 Feb 19 19:00 pyproject.toml
-rw-r--r-- 1 runner runner 329 Feb 25 00:10 main.py
###### stops here #######
How to fix this?
The line for line in ls isn't doing what you think it is. Instead of going line by line, it's going through ls character by character. What you want is for line in ls.splitlines(). You can then check whether main.py is on that line with "main.py" in line.
import subprocess
#store ls -l to variable
get_ls = subprocess.getoutput("ls -l")
#transform output to string
ls = str(get_ls)
#search for main.py file in ls
for line in ls.splitlines():
    print(line)
    #if find main.py print stop and exit
    if "main.py" in line:
        print('stop...')
        exit()
That should be more like what you want, I think.
You're also printing the whole of ls on every iteration of the loop; change that to print only the current line.
In my opinion, if you only want to achieve the result and don't mind changing your logic, this is the most elegant and most "pythonic" approach. I like the simplicity of the os.walk() method:
import os
import sys

for root, dirs, files in os.walk("."):
    for filename in files:
        print(filename)
        if filename == "main.py":
            print("stop")
            sys.exit()  # a plain break would only leave the inner loop
I'm trying to do a simple process with Ansible; however, the playbook below fails when I run it. I would just like to take an existing file in the user's temporary directory on the remote host and copy it back to the Ansible server, into /etc/ansible/files.
path and permissions
root@ansible:/etc/ansible/files# pwd
/etc/ansible/files
root@ansible:/etc/ansible/files# ls -ltr ../
total 24
-rw-r--r-- 1 root root 535 mar 27 11:23 ansible.cfg
-rw-r--r-- 1 root root 188 mar 27 15:41 hosts
drwxr-xr-x 5 root root 4096 mar 27 15:42 roles
drwxr-xr-x 2 root root 4096 mar 27 15:42 group_vars
drwxrwxrwx 2 root root 4096 mar 27 16:59 files
drwxr-xr-x 3 root root 4096 mar 27 17:01 playbook
playbook
- name: auto_collect_pingprobe
  hosts: "{{ affected_host }}"
  gather_facts: no
  tasks:
    - block:
        - name: 'Copy net connect'
          fetch:
            src: '%temp%\net_connect.cfg'
            dest: '/etc/ansible/files/net_connect.cfg'
            flat: yes
      rescue:
        - fail:
            msg: "Failure detected in playbook"
output
fatal: [192.168.238.12]: FAILED! => {
"msg": "failed to transfer file to \"/etc/ansible/files/net_connect.cfg\""
}
TASK [fail] *************************************************************************************************************************************************
task path: /etc/ansible/playbook/GEN_AUTO_COLLECT_HOST_AVAILABLE.yml:22
fatal: [192.168.238.12]: FAILED! => {
"changed": false,
"msg": "Failure detected in playbook"
}
Two things may be going on.
You may not have root permissions on the controlled node; ensure that you are using the --become flag when invoking the playbook (or use equivalent privilege escalation).
The %temp% variable may not be expanded from the environment. Try replacing the src string with '{{ lookup("env", "temp") }}\net_connect.cfg'.
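For illustration, the adjusted task might look like this (a sketch only; note that lookup("env", ...) reads the environment of the Ansible control node, not of the managed Windows host, so the resulting value must still be a valid path on the remote side):

- name: 'Copy net connect'
  fetch:
    src: '{{ lookup("env", "temp") }}\net_connect.cfg'
    dest: '/etc/ansible/files/net_connect.cfg'
    flat: yes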
How can I send and receive files remotely, and also push updates, via Python? We have a bunch of devices out in the market, all of them Windows 10 based. How could we go about sending files to those machines and receiving files from them? We would like to use Python for this task. Any tutorials or articles would be awesome.
I wrote this script a while ago to send files to my remote SFTP server from my local laptop. The machines have each other's public keys:
import os
import pysftp

server = 'sftp.example.com'  # placeholder for the real hostname

fpaths = ['list/of', 'file/paths']

with pysftp.Connection(server, username='loginID') as sftp:
    with sftp.cd('target/directory'):
        for fpath in fpaths:
            print("Sending:", fpath)
            if not os.path.isdir(fpath):
                sftp.put(fpath)
                print("Permissioning", fpath)
                sftp.chmod(os.path.basename(fpath), 755)
            else:
                dirname = os.path.basename(fpath)
                if not sftp.isdir(dirname):
                    sftp.mkdir(dirname)
                    print("Permissioning", dirname)
                    sftp.chmod(os.path.basename(dirname), 755)
                sftp.put_r(fpath, dirname)
                sftp.walktree(dirname,
                              dcallback=lambda dname: print("Permissioning", dname) or sftp.chmod(dname, 755),
                              fcallback=lambda fname: print("Permissioning", fname) or sftp.chmod(fname, 755),
                              ucallback=lambda x: x)
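Since the question also asks about receiving files, the same kind of connection can pull files back with sftp.get, which mirrors sftp.put. A minimal sketch (host and paths are placeholders):

import pysftp

with pysftp.Connection('sftp.example.com', username='loginID') as sftp:
    # copy a remote file down to a local path
    sftp.get('remote/dir/report.txt', '/tmp/report.txt')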
Try using the ftplib package for an FTP connection in Python. Here is a small tutorial for that.
import ftplib

ftp = ftplib.FTP("www.python.org")
ftp.login("anonymous", "ftplib-example-1")
data = []
ftp.dir(data.append)
ftp.quit()

for line in data:
    print "-", line
Executing the above code example:
$ python ftplib-example-1.py
- total 34
- drwxrwxr-x 11 root 4127 512 Sep 14 14:18 .
- drwxrwxr-x 11 root 4127 512 Sep 14 14:18 ..
- drwxrwxr-x 2 root 4127 512 Sep 13 15:18 RCS
- lrwxrwxrwx 1 root bin 11 Jun 29 14:34 README -> welcome.msg
- drwxr-xr-x 3 root wheel 512 May 19 1998 bin
- drwxr-sr-x 3 root 1400 512 Jun 9 1997 dev
- drwxrwxr-- 2 root 4127 512 Feb 8 1998 dup
- drwxr-xr-x 3 root wheel 512 May 19 1998 etc
...
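Building on the listing above, a file can also be downloaded over the same kind of connection with retrbinary; a small sketch (the README name is taken from the listing above):

import ftplib

ftp = ftplib.FTP("www.python.org")
ftp.login("anonymous", "ftplib-example-1")
# stream the remote file into a local one, chunk by chunk
with open("README", "wb") as fh:
    ftp.retrbinary("RETR README", fh.write)
ftp.quit()

For sending, ftp.storbinary("STOR remote_name", fh) is the symmetric call.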
Alternatively, you may go with SSH using Paramiko. Use whichever suits you better.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('127.0.0.1', username='none', password='lol')
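Once connected, the same SSH session can move files via SFTP, which is closer to what the question asks for. A rough sketch (paths are placeholders):

# open an SFTP channel over the existing SSH connection
sftp = ssh.open_sftp()
sftp.put('local/file.txt', '/remote/file.txt')  # send
sftp.get('/remote/file.txt', 'local/copy.txt')  # receive
sftp.close()
ssh.close()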
Ftplib code reference: The ftplib module
Paramiko code reference: SSH PROGRAMMING WITH PARAMIKO | COMPLETELY DIFFERENT
I wrote a Python script that will run indefinitely. It monitors a directory using PyInotify and uses the Multiprocessing module to run any new files created in those directories through an external script. That all works great.
The problem I am having is writing the output to a file. The filename I chose uses the current date (using datetime.now) and should, theoretically, roll on the hour, every hour.
now = datetime.now()
filename = "/data/db/meta/%s-%s-%s-%s.gz" % (now.year, now.month, now.day, now.hour)

with gzip.open(filename, 'ab') as f:
    f.write(json.dumps(data) + "\n")
    f.close() #Unsure if I need this, here for debug
Unfortunately, when the hour rolls over, the output stops and never resumes. No exceptions are thrown; it just stops working.
total 2.4M
drwxrwxr-x 2 root root 4.0K Sep 8 08:01 .
drwxrwxr-x 4 root root 12K Aug 29 16:04 ..
-rw-r--r-- 1 root root 446K Aug 29 16:59 2016-8-29-16.gz
-rw-r--r-- 1 root root 533K Aug 30 08:59 2016-8-30-8.gz
-rw-r--r-- 1 root root 38K Sep 7 10:59 2016-9-7-10.gz
-rw-r--r-- 1 root root 95K Sep 7 14:59 2016-9-7-14.gz
-rw-r--r-- 1 root root 292K Sep 7 15:59 2016-9-7-15.gz #Manually run
-rw-r--r-- 1 root root 834K Sep 8 08:59 2016-9-8-8.gz
Those files aren't really owned by root; I just changed the ownership for public consumption.
As you can see, all of the files' timestamps end at :59 and the next hour never happens.
Is there something I should take into consideration when doing this? Is there something I am missing about running a Python script indefinitely?
After taking a peek, it seems as if PyInotify was my problem.
See here: https://unix.stackexchange.com/questions/164794/why-doesnt-inotifywatch-detect-changes-on-added-files
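For anyone hitting the same issue, here is a rough sketch of a watch set up with pyinotify's auto_add (the directory matches the question; the event mask and handler are illustrative assumptions, not the original code):

import pyinotify

wm = pyinotify.WatchManager()
# react to files being created and to writes being completed
mask = pyinotify.IN_CREATE | pyinotify.IN_CLOSE_WRITE

class Handler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        print("new file ready:", event.pathname)

notifier = pyinotify.Notifier(wm, Handler())
# auto_add=True also watches directories created after startup
wm.add_watch('/data/db/meta', mask, rec=True, auto_add=True)
notifier.loop()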
I adjusted your code to change the file name each minute, which speeds up debugging quite a bit and yet still tests the hypothesis.
import datetime
import gzip, time
from os.path import expanduser

while True:
    now = datetime.datetime.now()
    filename = expanduser("~") + "/%s-%s-%s-%s-%s.gz" % (now.year, now.month, now.day, now.hour, now.minute)
    # 'at' opens in text append mode; a plain 'a' is binary on Python 3,
    # which would make f.write(str) raise a TypeError
    with gzip.open(filename, 'at') as f:
        f.write(str(now) + "\n")
        f.write("Data Dump here" + "\n")
    time.sleep(10)
This seems to run without an issue. Changing the time zone of my PC was also picked up and dealt with. I would suspect, given the above, that your error lies elsewhere, and some judicious debug printing of values at key points is needed. Try using a more granular file name, as above, to speed up the debugging.