Download ipynb from colab notebook Url - python

Given a list of Colab notebooks, how can I download the ipynb of each one of them using wget or curl?
https://colab.research.google.com/notebooks/gpu.ipynb
https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_arbitrary_image_stylization.ipynb
https://colab.research.google.com/drive/1sVsoBd9AjckIXThgtZhGrHRfFI6UUYOo
This question explains how to download notebooks stored on gdrive, but what about notebooks stored on github or on colab directories (colab.research.google.com/notebooks/) or other sources?

There are two options I recommend, assuming all the target URLs are in a text file. Save the code to a .sh file (e.g. dlnb.sh) and put all the URLs in a text file (e.g. list.txt) like this:
https://colab.research.google.com/notebooks/gpu.ipynb
https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_arbitrary_image_stylization.ipynb
https://colab.research.google.com/drive/1sVsoBd9AjckIXThgtZhGrHRfFI6UUYOo
tl;dr: I recommend solution 2, which uses gdown (just run pip install gdown), because wget can't save a notebook under its real name when the URL doesn't contain one. Then run bash dlnb.sh list.txt in a terminal.
1. wget and cat only. This one has one drawback: since we only use wget, a link that doesn't contain a filename will be saved as random_id_here.ipynb
dlnb.sh
grabid() { fileid=$( echo "$1" | egrep -o '(\w|-){26,}' ); echo $fileid; }
cat $1 | while read line || [[ -n $line ]];
do
    if [[ $line != *.ipynb ]]; then
        id=$(grabid "$line")
        wget -O $id.ipynb 'https://docs.google.com/uc?export=download&id='$id;
    else
        wget $line;
    fi;
done
I take this regex, egrep -o '(\w|-){26,}', and plug it into my function, which extracts and returns the file id from the link
grabid() { fileid=$( echo "$1" | egrep -o '(\w|-){26,}' ); echo $fileid; }
assign the id by calling grabid(); here line is the current url
id=$(grabid "$line")
then, using while read line || [[ -n $line ]];, loop through each line and download it with wget; you can see an explanation of the while loop in the code here
wget -O $id.ipynb 'https://docs.google.com/uc?export=download&id='$id;
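For illustration, here is what grabid would extract from the Drive URL in the question (a quick sanity check you can run in the same shell after defining the function; it is not part of the script itself):
grabid "https://colab.research.google.com/drive/1sVsoBd9AjckIXThgtZhGrHRfFI6UUYOo"
# prints: 1sVsoBd9AjckIXThgtZhGrHRfFI6UUYOo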
OR
2. A better solution: install gdown. This works like solution 1, but uses gdown instead of wget
dlnb.sh
grabid() { fileid=$( echo "$1" | egrep -o '(\w|-){26,}' ); echo $fileid; }
cat $1 | while read line || [[ -n $line ]];
do
    if [[ $line != *.ipynb ]]; then
        gdown $(grabid "$line");
    else
        gdown $line;
    fi;
done
If the url does not end with .ipynb (if [[ $line != *.ipynb ]]; then), gdown grabs the id ($(grabid "$line")) and downloads it by that id. Solution 1 would save such a notebook as id_of_notebook.ipynb, while gdown saves it under its original name.
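For reference, the intended invocation is simply (file names as defined above):
bash dlnb.sh list.txt
# each notebook listed in list.txt is downloaded into the current directory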

Related

Is it possible to compile microbit python code locally?

I am running Ubuntu 22.04 with xorg.
I need to find a way to compile microbit python code locally to a firmware hex file. Firstly, I followed the guide here https://microbit-micropython.readthedocs.io/en/latest/devguide/flashfirmware.html.
After a lot of debugging, I got to this point: https://pastebin.com/MGShD31N
However, the file platform.h does exist.
sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$ ls /home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h
/home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h
sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$
At this point, I gave up on this and tried using Mu editor with the AppImage. However, Mu requires wayland, and I am on xorg.
Does anyone have any idea if this is possible? Thanks.
Mu and the uflash command are able to retrieve your Python code from .hex files. Using uflash you can do the following for example:
uflash my_script.py
I think what you want is possible, but it's harder than just using their web Python editor: https://python.microbit.org/v/2
Peter Till answers the original question. The addition below builds on that answer by showing how to automate the build and load process. I use Debian; the original question states that Ubuntu, which is built on Debian, is used.
A script to find and mount the micro:bit
When code is loaded to the micro:bit, the board is unmounted from the system. So each time you have new code to load, you have to remount the board.
I modified a script to find and mount the micro:bit.
#!/bin/bash

BASEPATH="/media/$(whoami)/"
MICRO="MICROBIT"

if [ $# -eq 0 ]
then
    echo "no argument supplied, use 'mount' or 'unmount'"
    exit 1
fi
if [ $1 == "--help" ]
then
    echo "mounts or unmounts a BBC micro:bit"
    echo "args: mount - mount the microbit, unmount - unmount the microbit"
fi

# how many MICRO found in udisksctl dump
RESULTS=$(udisksctl dump | grep IdLabel | grep -c -i $MICRO)

case "$RESULTS" in
    0 ) echo "no $MICRO found in 'udisksctl dump'"
        exit 0
        ;;
    1 ) DEVICELABEL=$(udisksctl dump | grep IdLabel | grep -i $MICRO | cut -d ":" -f 2 | sed 's/^[ \t]*//')
        DEVICE=$(udisksctl dump | grep -i "IdLabel: \+$DEVICELABEL" -B 12 | grep " Device:" | cut -d ":" -f 2 | sed 's/^[ \t]*//')
        DEVICEPATH="$BASEPATH""$DEVICELABEL"
        echo "found one $MICRO, device: $DEVICE"
        if [[ -z $(mount | grep "$DEVICE") ]]
        then
            echo "$DEVICELABEL was unmounted"
            if [ $1 == "mount" ]
            then
                udisksctl mount -b "$DEVICE"
                exit 0
            fi
        else
            echo "$DEVICELABEL was mounted"
            if [ $1 == "unmount" ]
            then
                udisksctl unmount -b "$DEVICE"
                exit 0
            fi
        fi
        ;;
    * ) echo "more than one $MICRO found"
        ;;
esac

echo "exiting without doing anything"
I alias this script to mm in my .bashrc file.
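For example, assuming the script above is saved as ~/bin/microbit-mount.sh (the path and file name here are just my choice), the alias would look like:
alias mm='~/bin/microbit-mount.sh mount'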
Automate mounting the micro:bit and flashing the python file
I use the inotifywait command to run mm and then run uflash to load the .py file I am working on. Each time the Python file is saved, the aliased command mm is run, followed by the uflash command.
while inotifywait -e modify <your_file>.py ; do mm && uflash <your_file>.py ; done
Okay, so elaborating on Peter Till's answer.
Firstly, you can use uflash:
uflash path/to/your/code.py
Or, you can use microfs:
ufs put path/to/main.py
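If the ufs command is not available yet, microfs can be installed from PyPI (assuming you are installing with pip):
pip install microfs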
Working Ubuntu 22.04 host CLI setup with Carlos Atencio's Docker to build your own firmware
After trying to set up the toolchain for a while, I finally decided to Google for a Docker image with the toolchain, and found https://github.com/carlosperate/docker-microbit-toolchain at this commit from Carlos Atencio, a Micro:Bit Foundation employee, and it just worked:
# Get examples.
git clone https://github.com/bbcmicrobit/micropython
cd micropython
git checkout 7fc33d13b31a915cbe90dc5d515c6337b5fa1660
# Get Docker image.
docker pull ghcr.io/carlosperate/microbit-toolchain:latest
# Build setup to be run once.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest yt target bbc-microbit-classic-gcc-nosd@https://github.com/lancaster-university/yotta-target-bbc-microbit-classic-gcc-nosd
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest make all
# Build one example.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \
tools/makecombinedhex.py build/firmware.hex examples/counter.py -o build/counter.hex
# Build all examples.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \
bash -c 'for f in examples/*; do b="$(basename "$f")"; echo $b; tools/makecombinedhex.py build/firmware.hex "$f" -o "build/${b%.py}.hex"; done'
And you can then flash the example you want to run with:
cp build/counter.hex "/media/$USER/MICROBIT/"
Some further comments at: Generating micropython + python code `.hex` file from the command line for the BBC micro:bit

Secure Copy (scp) the latest file which arrives at a given folder?

I need to write a script in bash/python to scp the latest file which arrives at a given folder. That is, I am continuously getting files into a folder, say /home/ram/Khopo/, and I need to scp each one to xxx@192.168.21.xxx into /home/xxx/khopo/.
I googled and got this result
file_to_copy=`ssh username@hostname 'ls -1r | head -1'`
echo copying $file_to_copy ...
scp username@hostname:$file_to_copy /local/path
But I want to know whether it is possible to do this so that the script runs only when a new file arrives in the source folder (/home/ram/Khopo/), i.e. it waits for a file to reach the folder and copies it immediately when it has arrived.
I would try to sync the remote directory. These links should give you a good overview of how to do that (a minimal rsync sketch follows the list):
rsync:
https://askubuntu.com/a/105860
https://www.atlantic.net/hipaa-compliant-cloud-storage/how-to-use-rsync-copy-sync-files-servers/
or other tools for syncing:
https://en.wikipedia.org/wiki/Comparison_of_file_synchronization_software
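A minimal sketch of the rsync approach, using the host and paths from the question (flags are the usual archive/verbose/compress set; adjust to taste):
rsync -avz /home/ram/Khopo/ xxx@192.168.21.xxx:/home/xxx/khopo/
# run it from cron or a loop; only new or changed files are transferred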
As others have suggested, you can use inotifywait; below is an example of what you could do in bash:
#!/bin/bash

echo "Enter ssh password"
IFS= read -rs password # Read the password in a hidden way

inotifywait -m -e create "/folder_where_files_arrive" | while read line
do
    file_to_copy=$(echo $line | cut -d" " -f1,3 --output-delimiter="")
    echo copying $file_to_copy ...
    if [[ -d $file_to_copy ]]; then # is a directory
        sshpass -p $password scp -r username@hostname:$file_to_copy /local/path
    elif [[ -f $file_to_copy ]]; then # is a file
        sshpass -p $password scp username@hostname:$file_to_copy /local/path
    fi
done
Then you would ideally run this script in the background, e.g.:
nohup script.sh &
You can install sshpass on Ubuntu/Debian with:
apt install sshpass

Error Installing opencv-python on linux /sbin/ldconfig.real: /usr/lib32/nvidia-384/libEGL.so.1 is not a symbolic link

I have been following this tutorial to install opencv and python:
https://www.pyimagesearch.com/2015/06/22/install-opencv-3-0-and-python-2-7-on-ubuntu/#comment-441393
The only difference is that I am trying to install opencv 3.3.1 instead of 3.0.0
I'm running on a laptop with Ubuntu 14.04, an i7 and an NVIDIA GTX950M.
The problem is that when I execute the command ldconfig
$ sudo make install
$ sudo ldconfig
I get the following message:
/sbin/ldconfig.real: /usr/lib/nvidia-384/libEGL.so.1 is not a symbolic link
/sbin/ldconfig.real: /usr/lib32/nvidia-384/libEGL.so.1 is not a symbolic link
So I found a solution to the problem:
Source: https://askubuntu.com/questions/900285/libegl-so-1-is-not-a-symbolic-link (@muru, @Gerard Tromp)
The following is an easy-to-use version of Noisy_Botnet's solution. It facilitates repeating the process after any driver update.
Create a shell script, i.e., paste the code below into a text file and save it with the .sh extension.
Change the execute permissions of the file, i.e., go to the location of the file in the terminal and run: sudo chmod 744 nameofthefileyoucreated.sh
Run the script: sudo ./nameofthefileyoucreated.sh
#! /bin/sh
#
# find the file in /usr/lib
LIBEGL=`find /usr/lib/nvidia* -name libEGL.so.\* | egrep "[0-9][0-9]*\.[0-9][0-9]*$"`
LIBEGL_LINK=`echo $LIBEGL | sed 's/[0-9][0-9]*\.[0-9][0-9]*$/1/'`
printf "\n\nThe following commands will be executed:\n+++++++++++++++++++++++++++++++++++++++\n"
printf "mv $LIBEGL_LINK ${LIBEGL_LINK}.orig\nln -s $LIBEGL $LIBEGL_LINK\n\n"
while true; do
    read -p "Do you wish to perform these commands? " yn
    case $yn in
        [Yy]* ) mv $LIBEGL_LINK ${LIBEGL_LINK}.orig; ln -s $LIBEGL $LIBEGL_LINK ; break;;
        [Nn]* ) break;;
        * ) echo "Please answer yes or no.";;
    esac
done

# find the file in /usr/lib32
LIBEGL=`find /usr/lib32/nvidia* -name libEGL.so.\* | egrep "[0-9][0-9]*\.[0-9][0-9]*$"`
LIBEGL_LINK=`echo $LIBEGL | sed 's/[0-9][0-9]*\.[0-9][0-9]*$/1/'`
printf "\n\nThe following commands will be executed:\n+++++++++++++++++++++++++++++++++++++++\n"
printf "mv $LIBEGL_LINK ${LIBEGL_LINK}.orig\nln -s $LIBEGL $LIBEGL_LINK\n\n"
while true; do
    read -p "Do you wish to perform these commands? " yn
    case $yn in
        [Yy]* ) mv $LIBEGL_LINK ${LIBEGL_LINK}.orig; ln -s $LIBEGL $LIBEGL_LINK ; break;;
        [Nn]* ) break;;
        * ) echo "Please answer yes or no.";;
    esac
done
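A quick way to check that the fix took effect (the paths assume the nvidia-384 driver from the question; adjust for your driver version):
ls -l /usr/lib/nvidia-384/libEGL.so.1 /usr/lib32/nvidia-384/libEGL.so.1   # should now show symlinks
sudo ldconfig   # should no longer warn about libEGL.so.1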

No such file or directory in find running .sh

Running this on osx...
cd ${BUILD_DIR}/mydir && for DIR in $(find ./ '.*[^_].py' | sed 's/\/\//\//g' | awk -F "/" '{print $2}' | sort | uniq | grep -v .py); do
    if [ -f $i/requirements.txt ]; then
        pip install -r $i/requirements.txt -t $i/
    fi
    cd ${DIR} && zip -r ${DIR}.zip * > /dev/null && mv ${DIR}.zip ../../ && cd ../
done
cd ../
error:
(env) ➜ sh package_lambdas.sh
find: .*[^_].py: No such file or directory
why?
find takes as arguments a list of directories to search. You provided what appears to be a regular expression. Because there is no directory named (literally) .*[^_].py, find returns an error.
Below I have revised your script to correct that mistake (if I understand your intention). Because I see so many ill-written shell scripts these days, I've taken the liberty of "traditionalizing" it. Please see if you don't also find it more readable.
Changes:
use #!/bin/sh, guaranteed to be on any Unix-like system. Faster than bash, unless (as on OS X) it is bash.
use lower case for variable names to distinguish from system variables (and not hide them).
eschew braces for variables (${var}); they're not needed in the simple case
do not pipe output to /usr/bin/true; route it to /dev/null if that's what you mean
rm -f by definition cannot fail; if you meant || true, it's superfluous
put then and do on separate lines, easier to read, and that's how the Bourne shell language was meant to be used
Let && and || serve as line-continuation, so you can see what's happening step by step
Other changes I would suggest:
Use a subshell when changing the working directory temporarily. When it terminates, the working directory is restored automatically (retained by the parent), saving you the cd .. step and its errors; see the sketch after this list.
Use set -e to cause the script to terminate on error. For expected errors, use || true explicitly.
Change grep .py to grep '\.py$', just for good measure.
To avoid Tilting Matchstick Syndrome, use something other than / as a sed substitute delimiter, e.g., sed 's://:/:g'. But sed could be avoided altogether with awk -F '/+' '{print $2}'.
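Here is a minimal sketch of the subshell suggestion, using the loop variable L from the revised script below: the cd only affects the parenthesised subshell, so no trailing cd ../ is needed and a failed zip cannot leave you in the wrong directory.
(
    cd "$L" &&
    zip -r "$L.zip" * > /dev/null &&
    mv "$L.zip" ../../
)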
Revised version:
#! /bin/sh

src_dir=lambdas
build_dir=bin

mkdir -p $build_dir/lambdas
rm -rf $build_dir/*.zip
cp -r $src_dir/* $build_dir/lambdas

#
# The sed is a bit complicated to be osx / linux cross compatible :
# ( .//run.sh vs ./run.sh
#
cd $build_dir/lambdas &&
for L in $(find . -exec grep -l '.*[^_].py' {} + |
           sed 's/\/\//\//g' |
           awk -F "/" '{print $2}' |
           sort |
           uniq |
           grep -v .py)
do
    if [ -f $L/requirements.txt ]
    then
        echo "Installing requirements"
        pip install -r $L/requirements.txt -t $L/
    fi

    cd $L &&
    zip -r $L.zip * > /dev/null &&
    mv $L.zip ../../ &&
    cd ../
done
cd ../
The find(1) manpage says its args are [path ...] [expression], where "expression" consists of "primaries" and "operands" (-flags). '.*[^_].py' doesn't look like any expression, so it's being interpreted as a path, and find reports that there is no file named '.*[^_].py' in the working directory.
Perhaps you meant:
find ./ -regex '.*[^_].py'

Bash - change image urls to base64 in html

I tried to make a script that converts image sources from normal links to base64 encoding in HTML files.
But there is a problem: sometimes, sed tells me
script.sh: line 25: /bin/sed: Argument list too long
This is the code:
#!/bin/bash
# usage: ./script.sh file.html

mkdir images_temp
for i in `sed -n '/<img/s/.*src="\([^"]*\)".*/\1/p' $1`;
do
    echo "######### download the image";
    wget -P images_temp/ $i;
    #echo "######### convert the image for size saving";
    #convert -quality 70 `echo ${i##*/}` `echo ${i##*/}`.temp;
    #echo "######### rename temp image";
    #rm `echo ${i##*/}` && mv `echo ${i##*/}`.temp `echo ${i##*/}`;
    echo "######### encode in base64";
    k="`echo "data:image/png;base64,"`$(base64 -w 0 images_temp/`echo ${i##*/}`)";
    echo "######### deletion of images_temp pictures";
    rm images_temp/*;
    echo "######### replace string in html";
    sed -e "s|$i|$k|" $1 > temp.html;
    echo "######### replace final file";
    rm -rf $1 && mv temp.html $1;
    sleep 5;
done;
I think the $k argument is too long for sed when the image is bigger than ~128 KB; sed can't process it.
How do I make it work?
Thank you in advance!
PS1: sorry for the very, very ugly code
PS2: or how do I do this in Python? PHP? I'm open!
Your base64 encoded image can be multiple megabytes, while the system may place a limit on the maximum length of parameters (traditionally around 128k). Sed is also not guaranteed to handle lines over 8kb, though versions like GNU sed can deal with much more.
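If you want to see the actual limit on your system, getconf reports the maximum combined size of command-line arguments and environment (works on Linux and macOS):
getconf ARG_MAX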
If you want to try with your sed, provide the instructions in a file rather than on the command line. Instead of
sed -e "s|$i|$k|" $1 > temp.html;
use
echo "s|$i|$k|" > foo.sed
sed -f foo.sed "$1" > temp.html
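A sketch of how the file-based approach drops into the loop from the question (the file name foo.sed is arbitrary); printf is used instead of echo because it handles backslashes in URLs more predictably, and sed never receives the long substitution as an argument:
printf 's|%s|%s|\n' "$i" "$k" > foo.sed   # the substitution goes into a file, not onto sed's command line
sed -f foo.sed "$1" > temp.html           # sed reads the program from the file
rm foo.sed
mv temp.html "$1"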
