I had an ApplicationStop hook in my appspec.yml in a previous deployment but have since removed it. CodeDeploy, however, is still trying to run the script from the previous deployment's revision.
I found other answers, but none of them work:
sudo service codedeploy-agent stop
Typing this in CloudShell gives the error: sudo: service: command not found
aws deploy create-deployment --application-name APPLICATION --deployment-group-name GROUP --ignore-application-stop-failures --s3-location bundleType=tar,bucket=BUCKET,key=KEY --description "Ignore ApplicationStop failures due to broken script"
After typing this and replacing APPLICATION, GROUP, BUCKET, and KEY, the deployment starts - but it fails anyway with the same error.
My files:
appspec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/discordbot
hooks:
  AfterInstall:
    - location: scripts/RunMyBot.sh
      timeout: 300
      runas: root
RunMyBot.sh
#!/bin/bash
easy_install pip
pip install -r /home/discordbot/requirements.txt
file="/lib/systemd/system/mypythonservice.service"
echo "[Unit]" > $file
echo "Description=My Python Service" >> $file
echo "After=multi-user.target" >> $file
echo "[Service]" >> $file
echo "Type=idle" >> $file
echo "ExecStart=/usr/bin/python /home/discordbot/botMain.py" >> $file
echo "Restart=on-failure" >> $file
echo "[Install]" >> $file
echo "WantedBy=multi-user.target" >> $file
cat $file
sudo chmod 644 /lib/systemd/system/mypythonservice.service
sudo systemctl daemon-reload
sudo systemctl enable mypythonservice.service
I just can't get my code deployed successfully - 27 deployments have failed so far.
My Python code is simple and just needs to run all the time: it accepts commands from users and returns output. The code resides on GitHub.
I am running Ubuntu 22.04 with xorg.
I need to find a way to compile micro:bit Python code locally to a firmware .hex file. First, I followed the guide at https://microbit-micropython.readthedocs.io/en/latest/devguide/flashfirmware.html.
After a lot of debugging, I got to this point: https://pastebin.com/MGShD31N
However, the file platform.h does exist.
sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$ ls /home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h
/home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h
sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$
At this point, I gave up on this and tried using Mu editor with the AppImage. However, Mu requires wayland, and I am on xorg.
Does anyone have any idea if this is possible? Thanks.
Mu and the uflash command are able to retrieve your Python code from .hex files. Using uflash you can do the following for example:
uflash my_script.py
I think what you want is possible, but it's harder than just using their web Python editor: https://python.microbit.org/v/2
Peter Till's answer addresses the original question. The additional information below builds on that answer by showing how to automate the build and load process. I use Debian; the original question states that Ubuntu, which is built on Debian, is used.
A script to find and mount the micro:bit
When code is loaded to the micro:bit, the board is dismounted from the system. So each time you have new code to load, you have to remount the board.
I modified a script to find and mount the micro:bit.
#!/bin/bash
BASEPATH="/media/$(whoami)/"
MICRO="MICROBIT"

if [ $# -eq 0 ]
then
    echo "no argument supplied, use 'mount' or 'unmount'"
    exit 1
fi

if [ "$1" == "--help" ]
then
    echo "mounts or unmounts a BBC micro:bit"
    echo "args: mount - mount the micro:bit, unmount - unmount the micro:bit"
    exit 0
fi

# how many MICRO found in udisksctl dump
RESULTS=$(udisksctl dump | grep IdLabel | grep -c -i $MICRO)

case "$RESULTS" in
    0 ) echo "no $MICRO found in 'udisksctl dump'"
        exit 0
        ;;
    1 ) DEVICELABEL=$(udisksctl dump | grep IdLabel | grep -i $MICRO | cut -d ":" -f 2 | sed 's/^[ \t]*//')
        DEVICE=$(udisksctl dump | grep -i "IdLabel: \+$DEVICELABEL" -B 12 | grep " Device:" | cut -d ":" -f 2 | sed 's/^[ \t]*//')
        DEVICEPATH="$BASEPATH""$DEVICELABEL"
        echo "found one $MICRO, device: $DEVICE"
        if [[ -z $(mount | grep "$DEVICE") ]]
        then
            echo "$DEVICELABEL was unmounted"
            if [ "$1" == "mount" ]
            then
                udisksctl mount -b "$DEVICE"
                exit 0
            fi
        else
            echo "$DEVICELABEL was mounted"
            if [ "$1" == "unmount" ]
            then
                udisksctl unmount -b "$DEVICE"
                exit 0
            fi
        fi
        ;;
    * ) echo "more than one $MICRO found"
        ;;
esac

echo "exiting without doing anything"
I alias this script to mm in my .bashrc file.
Automate mounting the micro:bit and flashing the python file
I use the inotifywait command to run mm and to then run uflash to load the .py file I am working on. Each time that the python file is saved, the aliased command mm is run followed by the uflash command.
while inotifywait -e modify <your_file>.py ; do mm && uflash <your_file>.py ; done
Okay, so elaborating on Peter Till's answer.
Firstly, you can use uflash:
uflash path/to/your/code.py
Or, you can use microfs:
ufs put path/to/main.py
Working Ubuntu 22.04 host CLI setup with Carlos Atencio's Docker to build your own firmware
After trying to set up the toolchain for a while, I finally decided to Google for a Docker image with the toolchain, and found https://github.com/carlosperate/docker-microbit-toolchain at this commit from Carlos Atencio, a Micro:Bit Foundation employee, and that just absolutely worked:
# Get examples.
git clone https://github.com/bbcmicrobit/micropython
cd micropython
git checkout 7fc33d13b31a915cbe90dc5d515c6337b5fa1660
# Get Docker image.
docker pull ghcr.io/carlosperate/microbit-toolchain:latest
# Build setup to be run once.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest yt target bbc-microbit-classic-gcc-nosd@https://github.com/lancaster-university/yotta-target-bbc-microbit-classic-gcc-nosd
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest make all
# Build one example.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \
tools/makecombinedhex.py build/firmware.hex examples/counter.py -o build/counter.hex
# Build all examples.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \
bash -c 'for f in examples/*; do b="$(basename "$f")"; echo $b; tools/makecombinedhex.py build/firmware.hex "$f" -o "build/${b%.py}.hex"; done'
And you can then flash the example you want to run with:
cp build/counter.hex "/media/$USER/MICROBIT/"
Some further comments at: Generating micropython + python code `.hex` file from the command line for the BBC micro:bit
I made a few changes to my botMain.py file. CodeDeploy reports success, but the changes are not reflected in the app. So I edited my RunMyBot.sh file, but still nothing changes.
appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /home/mybot
hooks:
  AfterInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/RunMyBot.sh
      timeout: 300
      runas: root
RunMyBot.sh (new)
#!/bin/bash
sudo /usr/bin/pm2 restart myBot
nohup python3 botMain.py > /dev/null 2> /dev/null < /dev/null &
RunMyBot.sh (old)
serverfile="/lib/systemd/system/mypythonservice.service"
echo "[Unit]" > $serverfile
echo "Description=My Python Service" >> $serverfile
echo "After=multi-user.target" >> $serverfile
echo "[Service]" >> $serverfile
echo "Type=idle" >> $serverfile
echo "ExecStart=/usr/bin/python /home/mybot/botMain.py" >> $serverfile
echo "Restart=on-failure" >> $serverfile
echo "[Install]" >> $serverfile
echo "WantedBy=multi-user.target" >> $serverfile
cat $serverfile
sudo chmod 644 /lib/systemd/system/mypythonservice.service
sudo systemctl daemon-reload
sudo systemctl enable mypythonservice.service
The same service-file script is also in my instance user data, so I removed it from RunMyBot.sh.
Before you deploy a new version of your app, you have to stop the existing nohup process. You can do this by adding an ApplicationStop section to your appspec.yml.
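A sketch of what that could look like (the script name scripts/StopMyBot.sh, its hook entry, and the botMain.py process match are assumptions based on the question, not a confirmed setup):

```shell
#!/bin/bash
# Hypothetical scripts/StopMyBot.sh, wired into appspec.yml as:
#   hooks:
#     ApplicationStop:
#       - location: scripts/StopMyBot.sh
#         timeout: 60
#         runas: root
# Kill the nohup'd bot from the previous deployment; "|| true" keeps the
# hook from failing the deployment when no such process is running.
pkill -f botMain.py || true
```

Note that ApplicationStop runs from the *previous* deployment's archive, which is exactly why a removed or broken stop script keeps haunting later deployments.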
I think the old setup was better, albeit seemingly more difficult to set up at first. With the old setup you would just restart your daemon.
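With that approach, the ApplicationStart script could shrink to something like this sketch (assuming the mypythonservice unit from the old script is already installed on the instance, e.g. via user data):

```shell
#!/bin/bash
# Restart the systemd unit created by the old setup; systemd then keeps
# the bot running and restarts it on failure (Restart=on-failure).
sudo systemctl restart mypythonservice.service
```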
You can use the BeforeInstall hook to stop the running services, then deploy your new code to the VM and start the services again.
AppSpec example : https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html
chd.sh
#! /bin/bash
cd django/hellodjango
exec bash
python manage.py runserver
chd.py
# a=`python chd.py`;cd $a
import os
new_dir = "django/hellodjango"
os.chdir(new_dir)
are the two ways I have tried.
Also, on terminal I have tried,
. chd.sh
./chd.sh
. ./chd.sh
I have also tried assigning the command to a variable and then running it in the terminal, but with no success.
I've spent over 4 hours trying multiple methods from stackoverflow.com, but nothing has worked yet.
The only thing that has worked yet is,
alias mycd='cd django/hellodjango'
But I would have to copy-paste it every time.
alias myrun = `cd django/hellodjango && python manage.py runserver`
And,
alias myrun = `cd django/hellodjango; python manage.py runserver`
doesn't work.
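For what it's worth, aliases break with spaces around `=` and with backticks (which *execute* the command at definition time). A sketch of shell functions you could put in ~/.bashrc instead, so they persist across sessions (the project path is an assumption from the question; adjust it to your setup):

```shell
# Hypothetical ~/.bashrc additions. Functions, unlike "alias x = `...`",
# need no quoting tricks and can chain commands.
mycd() {
    cd "$HOME/django/hellodjango" || return
}
myrun() {
    mycd && python manage.py runserver
}
```

After adding these, run `source ~/.bashrc` once; from then on `mycd` and `myrun` are available in every new shell.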
This is just a sample; there are many Django commands that I have to use repeatedly. I appreciate it if you have read all this way.
If you know a link where this is discussed, please attach it, as I was not able to find one after hours of searching.
Edit:
/storage/emulated/0 $
This is what the prompt appears like.
/storage/emulated/0/django/hellodjango
This is the path.
/storage/emulated/0 $ cd django/hellodjango
/storage/emulated/0/django/hellodjango $ python manage.py runserver
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
July 25, 2020 - 19:08:42
Django version 3.0.7, using settings 'hellodjango.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
Individually works fine.
Edit:
/storage/emulated/0 $ chmod u+x chd.sh
/storage/emulated/0 $ chmod u+x rn.sh
/storage/emulated/0 $ ./chd.sh
./chd.sh: cd: line 2: can't cd t: No such file or directory
/storage/emulated/0 $ chmod u+x chd.py
/storage/emulated/0 $ a=`python chd.py`;cd $a
~/data/ru.iiec.pydroid3/app_HOME $
Edit:
/data/user/0/tech.ula/files/support/dbclient: Caution, skipping hostkey check for localhost
subham@localhost's password:
subham@localhost:~$ ls
subham@localhost:~$ cd
subham@localhost:~$ pwd
/home/subham
subham@localhost:~$ pkg install miniconda
-bash: pkg: command not found
subham@localhost:~$ apt install miniconda
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package miniconda
subham@localhost:~$
subham@localhost:~$ cd ..
subham@localhost:/home$ cd ..
subham@localhost:/$ ls
bin  dev  host-rootfs  mnt  root  srv  sys  var  boot  etc  lib  opt  run  storage  tmp  data  home  media  proc  sbin  support  usr
subham@localhost:/$ cd ..
subham@localhost:/$ cd sys
subham@localhost:/sys$ ls
ls: cannot open directory '.': Permission denied
subham@localhost:/sys$ cd..
-bash: cd..: command not found
subham@localhost:/sys$ cd ..
subham@localhost:/$ cd storage
subham@localhost:/storage$ ls
internal
subham@localhost:/storage$ cd internal
subham@localhost:/storage/internal$ ls
subham@localhost:/storage/internal$ ls -l
total 0
subham@localhost:/storage/internal$ cd 0
-bash: cd: 0: No such file or directory
subham@localhost:/storage/internal$
subham@localhost:/$ chmod -R 777 /host-rootfs
chmod: changing permissions of '/host-rootfs': Read-only file system
chmod: cannot read directory '/host-rootfs': Permission denied
subham@localhost:/$
https://github.com/CypherpunkArmory/UserLAnd/issues/46
I need to write a script in bash/python to scp the latest file that arrives in a given folder. That is, files continuously arrive in a folder (say /home/ram/Khopo/), and I need to scp each one to xxx@192.168.21.xxx into /home/xxx/khopo/.
I googled and got this result
file_to_copy=`ssh username@hostname 'ls -1r | head -1'`
echo copying $file_to_copy ...
scp username@hostname:$file_to_copy /local/path
But I want to know whether it is possible to do this so that it only runs when a new file arrives in the source folder (/home/ram/Khopo/), i.e. it waits for a file to appear and copies it as soon as it has arrived.
I would try to sync the remote directory. These links should give you a good overview of how to do that:
rsync:
https://askubuntu.com/a/105860
https://www.atlantic.net/hipaa-compliant-cloud-storage/how-to-use-rsync-copy-sync-files-servers/
or other tools for syncing:
https://en.wikipedia.org/wiki/Comparison_of_file_synchronization_software
As others have suggested, you can use inotifywait; below is an example of what you could do in bash:
#!/bin/bash
echo "Enter ssh password"
IFS= read -rs password # Read the password in a hidden way

inotifywait -m -e create "/folder_where_files_arrive" | while read line
do
    file_to_copy=$(echo $line | cut -d" " -f1,3 --output-delimiter="")
    echo copying $file_to_copy ...
    if [[ -d $file_to_copy ]]; then # is a directory
        sshpass -p $password scp -r username@hostname:$file_to_copy /local/path
    elif [[ -f $file_to_copy ]]; then # is a file
        sshpass -p $password scp username@hostname:$file_to_copy /local/path
    fi
done
Then you would ideally run this script in the background, e.g.:
nohup script.sh &
You can install sshpass on Ubuntu/Debian with:
apt install sshpass
I'm trying to run a command that I've installed in my home directory on a remote server. It's already been added to my $PATH in .bash_profile. I'm able to use it when logged in remotely via a normal ssh session, but Fabric doesn't seem to be pulling in my $PATH. Thus, I've tried adding it to my $PATH using Fabric's path context manager like so:
def test_path():
print('My env.path setting: %(path)s' % env)
with path('/path/to/sources/drush'):
run('echo $PATH')
run('drush')
Fabric responds with:
Executing task 'test_path'
My env.path setting:
run: echo $PATH
out: /usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
out:
run: echo $PATH
out: /usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/path/to/sources/drush
out:
run: drush
out: /bin/bash: drush: command not found
out:
Fatal error: run() received nonzero return code 127 while executing!
Requested: drush
Executed: /bin/bash -l -c "export PATH=\"\$PATH:\"/path/to/sources/drush\" \" && drush"
Aborting.
Thanks for looking...
The problem is in the way the PATH variable gets set - there is an additional space character at the end of it:
/bin/bash -l -c "export PATH=\"\$PATH:\"/path/to/sources/drush\" \" && drush"
^HERE
The last directory in the search path is interpreted by bash as "/path/to/sources/drush " (with a trailing space) - an invalid directory.
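A quick sketch illustrating this failure mode (the directory and tool name here are made up for the demonstration):

```shell
# Create a throwaway directory containing a tiny executable.
mkdir -p /tmp/demo_bin
printf '#!/bin/sh\necho ok\n' > /tmp/demo_bin/mytool
chmod +x /tmp/demo_bin/mytool

# PATH entry with a trailing space: the lookup fails, because
# "/tmp/demo_bin " (with the space) is a nonexistent directory.
PATH="/tmp/demo_bin " mytool 2>/dev/null || echo "not found"

# Same entry without the space: the lookup succeeds and prints "ok".
PATH="/tmp/demo_bin" mytool
```

So the fix is to make sure the exported PATH contains no stray spaces (and no unbalanced quotes) around the appended directory.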