Secure Copy (scp) the latest file which arrives at a given folder? - python

I need to write a script in bash/python to scp the latest file which arrives at a given folder. That is, I am continuously getting files into a folder, say /home/ram/Khopo/, and I need to scp each one to xxx@192.168.21.xxx in /home/xxx/khopo/.
I googled and got this result
file_to_copy=`ssh username@hostname 'ls -1r | head -1'`
echo copying $file_to_copy ...
scp username@hostname:$file_to_copy /local/path
But I want to know whether it is possible to do this such that it runs only when a new file arrives at the source (/home/ram/Khopo/), i.e., the script waits for a file to reach the folder and copies it immediately when it has arrived.

I would try to sync the remote directory. These should give you a good overview of how to do that:
rsync:
https://askubuntu.com/a/105860
https://www.atlantic.net/hipaa-compliant-cloud-storage/how-to-use-rsync-copy-sync-files-servers/
or other tools for syncing:
https://en.wikipedia.org/wiki/Comparison_of_file_synchronization_software
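For example, a minimal push from the local folder to the remote host could look like this (a sketch, assuming the paths from the question and working SSH access):
rsync -avz /home/ram/Khopo/ xxx@192.168.21.xxx:/home/xxx/khopo/
Run it periodically (e.g., from cron) and only new or changed files get transferred.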

As others have suggested, you can use inotifywait. Below is an example of what you could do in bash:
#!/bin/bash

echo "Enter ssh password"
IFS= read -rs password # read the password without echoing it

inotifywait -m -e create "/folder_where_files_arrive" | while read -r line
do
    file_to_copy=$(echo "$line" | cut -d" " -f1,3 --output-delimiter="")
    echo "copying $file_to_copy ..."
    if [[ -d $file_to_copy ]]; then # it is a directory
        sshpass -p "$password" scp -r username@hostname:"$file_to_copy" /local/path
    elif [[ -f $file_to_copy ]]; then # it is a file
        sshpass -p "$password" scp username@hostname:"$file_to_copy" /local/path
    fi
done
Then you would ideally run this script in the background, e.g.:
nohup script.sh &
For sshpass, you can install it on Ubuntu/Debian with:
apt install sshpass
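If you would rather not keep the password around for sshpass, a one-time key setup lets scp run non-interactively instead (assuming the same username@hostname as above):
ssh-keygen -t ed25519
ssh-copy-id username@hostname
After that, the sshpass -p "$password" prefix in the script can simply be dropped.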

Related

Is it possible to compile microbit python code locally?

I am running Ubuntu 22.04 with xorg.
I need to find a way to compile microbit python code locally to a firmware hex file. Firstly, I followed the guide here https://microbit-micropython.readthedocs.io/en/latest/devguide/flashfirmware.html.
After a lot of debugging, I got to this point: https://pastebin.com/MGShD31N
However, the file platform.h does exist.
sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$ ls /home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h
/home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h
sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$
At this point, I gave up on this and tried using Mu editor with the AppImage. However, Mu requires wayland, and I am on xorg.
Does anyone have any idea if this is possible? Thanks.
Mu and the uflash command are able to retrieve your Python code from .hex files. Using uflash you can do the following for example:
uflash my_script.py
I think what you want is possible, but it's harder than just using their web Python editor: https://python.microbit.org/v/2
Peter Till's answer addresses the original question. The material below adds to it by showing how to automate the build and load process. I use Debian; the original question states that Ubuntu is used, which is built on Debian.
A script to find and mount the micro:bit
When code is loaded to the micro:bit, the board is unmounted from the system. So each time you have new code to load, you have to remount the board.
I modified a script to find and mount the micro:bit.
#!/bin/bash

BASEPATH="/media/$(whoami)/"
MICRO="MICROBIT"

if [ $# -eq 0 ]
then
    echo "no argument supplied, use 'mount' or 'unmount'"
    exit 1
fi
if [ "$1" == "--help" ]
then
    echo "mounts or unmounts a BBC micro:bit"
    echo "args: mount - mount the microbit, unmount - unmount the microbit"
fi

# how many MICRO found in udisksctl dump
RESULTS=$(udisksctl dump | grep IdLabel | grep -c -i $MICRO)

case "$RESULTS" in
    0 ) echo "no $MICRO found in 'udisksctl dump'"
        exit 0
        ;;
    1 ) DEVICELABEL=$(udisksctl dump | grep IdLabel | grep -i $MICRO | cut -d ":" -f 2 | sed 's/^[ \t]*//')
        DEVICE=$(udisksctl dump | grep -i "IdLabel: \+$DEVICELABEL" -B 12 | grep " Device:" | cut -d ":" -f 2 | sed 's/^[ \t]*//')
        DEVICEPATH="$BASEPATH""$DEVICELABEL"
        echo "found one $MICRO, device: $DEVICE"
        if [[ -z $(mount | grep "$DEVICE") ]]
        then
            echo "$DEVICELABEL was unmounted"
            if [ "$1" == "mount" ]
            then
                udisksctl mount -b "$DEVICE"
                exit 0
            fi
        else
            echo "$DEVICELABEL was mounted"
            if [ "$1" == "unmount" ]
            then
                udisksctl unmount -b "$DEVICE"
                exit 0
            fi
        fi
        ;;
    * ) echo "more than one $MICRO found"
        ;;
esac

echo "exiting without doing anything"
I alias this script to mm in my .bashrc file.
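For reference, the alias line in .bashrc could look something like this (the script path here is just an example):
alias mm='~/bin/microbit-mount.sh mount'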
Automate mounting the micro:bit and flashing the python file
I use the inotifywait command to run mm and to then run uflash to load the .py file I am working on. Each time that the python file is saved, the aliased command mm is run followed by the uflash command.
while inotifywait -e modify <your_file>.py ; do mm && uflash <your_file>.py ; done
Okay, so elaborating on Peter Till's answer.
Firstly, you can use uflash:
uflash path/to/your/code.py
Or, you can use microfs:
ufs put path/to/main.py
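microfs can also list and pull files back off the board, which is handy for checking what actually got copied, e.g.:
ufs ls
ufs get main.py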
Working Ubuntu 22.04 host CLI setup with Carlos Atencio's Docker to build your own firmware
After trying to set up the toolchain for a while, I finally decided to Google for a Docker image with the toolchain, and found https://github.com/carlosperate/docker-microbit-toolchain at this commit from Carlos Atencio, a Micro:bit Foundation employee, and that just absolutely worked:
# Get examples.
git clone https://github.com/bbcmicrobit/micropython
cd micropython
git checkout 7fc33d13b31a915cbe90dc5d515c6337b5fa1660
# Get Docker image.
docker pull ghcr.io/carlosperate/microbit-toolchain:latest
# Build setup to be run once.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest yt target bbc-microbit-classic-gcc-nosd@https://github.com/lancaster-university/yotta-target-bbc-microbit-classic-gcc-nosd
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest make all
# Build one example.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \
tools/makecombinedhex.py build/firmware.hex examples/counter.py -o build/counter.hex
# Build all examples.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \
bash -c 'for f in examples/*; do b="$(basename "$f")"; echo $b; tools/makecombinedhex.py build/firmware.hex "$f" -o "build/${b%.py}.hex"; done'
And you can then flash the example you want to run with:
cp build/counter.hex "/media/$USER/MICROBIT/"
Some further comments at: Generating micropython + python code `.hex` file from the command line for the BBC micro:bit

sudo: service: command not found

I had ApplicationStop in my appspec.yml file in the previous deployment but removed it. Now, CodeDeploy is trying to find the file which was there in my previous deployment.
I found other answers but none of them are working:
sudo service codedeploy-agent stop
On typing this in CloudShell, I am getting the error - sudo: service: command not found
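Note: service is typically not available in AWS CloudShell, and CloudShell is not the deployment instance anyway; on a systemd-based EC2 instance itself, the equivalent would presumably be:
sudo systemctl stop codedeploy-agent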
aws deploy create-deployment --application-name APPLICATION --deployment-group-name GROUP --ignore-application-stop-failures --s3-location bundleType=tar,bucket=BUCKET,key=KEY --description "Ignore ApplicationStop failures due to broken script"
After typing this and replacing the APPLICATION, GROUP, BUCKET, and KEY, the code deployment starts. But the deployment fails anyway with an error.
My files:
appspec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/discordbot
hooks:
  AfterInstall:
    - location: scripts/RunMyBot.sh
      timeout: 300
      runas: root
RunMyBot.sh
#!/bin/bash
easy_install pip
pip install -r /home/discordbot/requirements.txt

file="/lib/systemd/system/mypythonservice.service"
echo "[Unit]" > $file
echo "Description=My Python Service" >> $file
echo "After=multi-user.target" >> $file
echo "[Service]" >> $file
echo "Type=idle" >> $file
echo "ExecStart=/usr/bin/python /home/discordbot/botMain.py" >> $file
echo "Restart=on-failure" >> $file
echo "[Install]" >> $file
echo "WantedBy=multi-user.target" >> $file
cat $file

sudo chmod 644 /lib/systemd/system/mypythonservice.service
sudo systemctl daemon-reload
sudo systemctl enable mypythonservice.service
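Note that the script enables the unit but never starts it; assuming a systemd host, an explicit start at the end would bring the bot up immediately:
sudo systemctl start mypythonservice.service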
I just can't get my code deployed successfully; 27 deployments have failed.
My Python code is simple and just needs to run all the time. It accepts commands from users and returns output. The code resides in GitHub.

pass bash script args as named parameters to a command inside the script

I have a bash script that takes two parameters. Inside that script, I need to call ssh using a heredoc and call a method that expects the two arguments. For example:
ssh -o "IdentitiesOnly=yes" -t -i $key -l user localhost << 'ENDSSH'
/my_python_app.py -u -t tar -p $1 -f $2
ENDSSH
key is set by my script, I know that part is good.
However, my_python_app prints out args and it doesn't show any arguments for -p and -f
I would call my script like
my_script /tmp filename
I use argparse in my python app, but I am also printing out sys.argv and it gives me:
['my_python_app.py', '-u', '-t', 'tar', '-p', '-f']
Note there are no values received for -p and -f. (-u is a flag, and that is set correctly).
How do I pass $1 and $2 to my_python_app as the -p and -f values?
Remove the quotes around the here-document delimiter (i.e. use << ENDSSH instead of << 'ENDSSH'). The quotes tell the shell not to expand variable references (and some other things) in the here-document, so $1 and $2 are passed through to the remote shell... which doesn't have any parameters so it replaces them with nothing.
BTW, removing the single-quotes may not fully work, since if either argument contains whitespace or shell metacharacters, the remote end will parse those in a way you probably don't intend. As long as neither argument can contain a single-quote, you can use this:
ssh -o "IdentitiesOnly=yes" -t -i $key -l user localhost << ENDSSH
/my_python_app.py -u -t tar -p '$1' -f '$2'
ENDSSH
If either might contain single-quotes, it gets a little more complicated.
The more paranoid way to do this would be:
# store these in an array to reduce the incidental complexity below
ssh_args=( -o "IdentitiesOnly=yes" -t -i "$key" -l user )
posixQuote() {
    python -c 'import sys, pipes; sys.stdout.write(pipes.quote(sys.argv[1])+"\n")' "$@"
}
ssh "${ssh_args[#]}" localhost "bash -s $(posixQuote "$1") $(posixQuote "$2")" << 'ENDSSH'
/path/to/my_python_app.py -u -t tar -p "$1" -f "$2"
ENDSSH
If you know with certainty that the destination account's shell matches the local one (bash if the local shell is bash, ksh if the local shell is ksh), consider the following instead:
printf -v remoteCmd '%q ' /path/to/my_python_app.py -u -t tar -p "$1" -f "$2"
ssh "${ssh_args[#]}" localhost "$remoteCmd"

Implement Git hook - prePush and preCommit

Could you please show me how to implement a git hook?
Before committing, the hook should run a python script. Something like this:
cd c:\my_framework & run_tests.py --project Proxy-Tests\Aeries \
--client Aeries --suite <Commit_file_Name> --dryrun
If the dry run fails then commit should be stopped.
You need to tell us in what way the dry run will fail. Will there be an output .txt file with errors? Will there be an error displayed on the terminal?
In any case, you must name the pre-commit script pre-commit and save it in the .git/hooks/ directory.
Since your dry run script seems to be in a different path than the pre-commit script, here's an example that finds and runs your script.
I assume from the backslash in your path that you are on a Windows machine, and I also assume that your dry-run script is contained in the same project where you have git installed, in a folder called tools (of course you can change this to your actual folder).
#!/bin/sh

# Directory containing your python script
FILE_PATH=tools/

# Get the relative path of the root directory of the project
rdir=`git rev-parse --git-dir`
rel_path="$(dirname "$rdir")"

# cd to that path and run the file
cd "$rel_path/$FILE_PATH"
echo "Running dryrun script..."
python run_tests.py

# From that point on you need to handle the dry run error/s.
# For demonstration purposes I'll assume that an output.txt file
# holding the result is produced.

# Extract the result from the output file
final_res="$(tac output.txt | grep -m 1 . | grep 'error')"
echo -e "--------Dry run result---------\n${final_res}"

# If a warning and/or error exists, abort the commit.
# (The check runs at top level rather than in a piped subshell,
# so that exit 1 actually aborts the hook.)
if [ -n "$final_res" ]; then
    echo -e "Dry run failed.\nAborting commit..."
    exit 1
fi
Now every time you run git commit, the pre-commit script will run the dry run file and abort the commit if any errors have occurred, keeping your files in the staging area.
I have implemented this in my hook. Here is the code snippet.
#!/bin/sh

# Name of your python script and the framework directory
RUN_TESTS="run_tests.py"
FRAMEWORK_DIR="/my-framework/"
CUR_DIR=`echo ${PWD##*/}`

# Get the full path of the root directory of the project
rDIR=`git rev-parse --git-dir --show-toplevel | head -2 | tail -1`
OneStepBack=/../
CD_FRAMEWORK_DIR="$rDIR$OneStepBack$FRAMEWORK_DIR"

# Find the list of modified files - to be committed
LIST_OF_FILES=`git status --porcelain | awk -F" " '{print $2}' | grep ".txt"`

for FILE in $LIST_OF_FILES; do
    cd "$CD_FRAMEWORK_DIR"
    python $RUN_TESTS --dryrun --project "$CUR_DIR/$FILE"
    OUT=$?
    if [ $OUT -eq 0 ]; then
        continue
    else
        exit 1
    fi
done
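In both versions, remember that git only runs the hook if the file is executable:
chmod +x .git/hooks/pre-commit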

wpa_supplicant - how to switch to different network?

What I need:
Connect to a different wifi network on Arch Linux by calling a Python script.
What I am doing:
Executing the following statements from python:
wpa_passphrase "MySSID" "MyPass"> /etc/wpa_supplicant/profile.conf
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/profile.conf
dhcpd wlan0
It works only for the first attempt. When it is executed the second time, it says dhcpcd is already running.
I don't know how to switch to another network.
I have also tried wpa_cli and, again, don't know how to switch to another network.
Please suggest a fix or some (uncomplicated) alternatives.
Your concrete problem is that you start wpa_supplicant and the DHCP client instead of restarting them. I have a script that reads:
# shut down the DHCP client
dhclient -r
# shut down wpa_supplicant
killall wpa_supplicant
# take the interface down
ifdown --force wlan0
sleep 1
# your wpa startup here:
wpa_supplicant -t -fYOUR_LOG_FILE -cYOUR_wpa_supplicant.conf -B -iwlan0
sleep 1
# restart the DHCP client
dhclient -nw
I guess you can do this a little more nicely by configuring your /etc/network/interfaces appropriately; a sketch follows below.
Btw., in principle it should not be necessary to restart the DHCP client at all. After a while it should realize that it needs to fetch a new IP, but for me this takes too long. ;)
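For example, a Debian-style stanza in /etc/network/interfaces might look like this (a sketch; the wpa-conf line hands the interface over to wpa_supplicant via the ifupdown hooks):
auto wlan0
iface wlan0 inet dhcp
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf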
Edit /etc/wpa_supplicant.conf
nano /etc/wpa_supplicant.conf
Complete the file to make it look like this (replacing the wifi names and keys with their real values, of course):
network={
    ssid="wifi_name1"
    psk="wifi_key1"
}
network={
    ssid="wifi_name2"
    psk="wifi_key2"
}
Then save and exit.
The wifi networks are now configured; we must now tell wpa_supplicant to connect using this configuration file:
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
If your interface isn't named wlan0, replace wlan0 with the real name of your interface.
We must now request an IP address:
dhclient wlan0
If everything went well, you will now see several lines containing IP addresses, and the ping command should work.
When you generate a new config with wpa_passphrase, put it in a different spot than the last to make things easier. So for example, your home wifi could be in /etc/wpa_supplicant/home.conf and your work wifi could be in /etc/wpa_supplicant/work.conf.
Then when you need to connect to your home wifi you just run
killall wpa_supplicant # With root privileges
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/home.conf
And when you need to connect to your work wifi you run
killall wpa_supplicant # With root privileges
wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant/work.conf
Rinse and repeat for any new networks you want to use. If you don't want to keep a network, like a Starbucks wifi on a road trip, just save it to a conf that you plan on overwriting or deleting, like /etc/wpa_supplicant/temp.conf.
AFAIK, you never have to rerun dhcpcd. I have dhcpcd as a startup process and whenever I switch wifis I never need to touch it.
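Since the question also mentions wpa_cli: if you keep several network blocks in a single config file, you can also switch between them without restarting the daemon, e.g.:
wpa_cli -i wlan0 list_networks
wpa_cli -i wlan0 select_network 1
where the id passed to select_network comes from the list_networks output.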
Edit: You also don't need to run this as a Python script; you can do it in the shell. If you need a script that quickly changes the wifi, I would recommend a shell script like the following:
#!/bin/sh
[ -z "$1" ] && exit 1
[ ! -f "/etc/wpa_supplicant/${1}.conf" ] && echo "$1 is not a wpa_supplicant config file" && exit 1
sudo killall wpa_supplicant
sudo wpa_supplicant -B -i wlan0 -c "/etc/wpa_supplicant/${1}.conf"
Run like
wifichange home
or
wifichange work
The [ -z "$1" ] test makes the script quit if you didn't pass an argument, i.e., if you just ran
wifichange
and the [ ! -f ... ] test makes it quit if the argument doesn't name a real config file.
I have tested the script.
