Rope is a refactoring library for Python and RopeVim is a Vim plugin which calls into Rope.
The idea of using RopeVim seems great to me; is there any documentation on "getting started" with RopeVim?
I've followed what documentation there is: https://bitbucket.org/agr/ropevim/src/tip/README.txt
I suppose I'm looking for one of two things:
- a "look at this blog post / article / link, it makes it all make sense" pointer, or
- an alternate recommendation, like "forget about RopeVim, it doesn't work very well" or "use this instead of RopeVim".
For basic renaming, hover your vim cursor over the variable/method/etc that you wish to rename and then type:
:RopeRename <enter>
From there it should be self-explanatory. It asks for the root path to the project you wish to do the renaming in. Then it asks you for the new name. Then you can preview/confirm changes.
If you have tab-complete set up in your vim command area, you can look through the other rope features by typing:
:Rope<Tab>
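If command-line completion isn't set up yet, a couple of stock Vim options (nothing rope-specific) enable it:

" in your .vimrc: show a completion menu for : commands
set wildmenu
set wildmode=longest:full,full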
The documentation you found only covers the Vim particulars. If you want to see what the underlying rope functions can do, see the rope documentation. Note that it's incomplete and points to the unit tests for a full overview of what rope can do.
I use this script; it's the best way to automate the whole process:
https://gist.github.com/15067
#!/bin/bash
# Plant rope vim's plugin
# This is a script to install or update 'ropevim'
# Copyright Alexander Artemenko, 2008
# Contact me at svetlyak.40wt at gmail com

function create_dirs
{
    mkdir -p src
    mkdir -p pylibs
}

function check_vim
{
    # 'vim --version' prefixes missing features with a minus,
    # so '-python' means this vim was built without python support
    if vim --version | grep '\-python' > /dev/null
    then
        echo "Your vim does not support python plugins."
        echo "Please install vim with python support."
        echo "On debian or ubuntu you can do this:"
        echo "    sudo apt-get install vim-python"
        exit 1
    fi
}

function get_or_update
{
    if [ -e "$1" ]
    then
        cd "$1"
        echo "Pulling updates from $2"
        hg pull > /dev/null
        cd ..
    else
        echo "Cloning $2"
        hg clone "$2" "$1" > /dev/null
    fi
}

function pull_sources
{
    cd src
    get_or_update rope http://bitbucket.org/agr/rope
    get_or_update ropevim http://bitbucket.org/agr/ropevim
    get_or_update ropemode http://bitbucket.org/agr/ropemode
    cd ../pylibs
    ln -f -s ../src/rope/rope
    ln -f -s ../src/ropemode/ropemode
    ln -f -s ../src/ropevim/ropevim.py
    cd ..
}

function gen_vim_config
{
    echo "let \$PYTHONPATH .= \":`pwd`/pylibs\"" > rope.vim
    echo "source `pwd`/src/ropevim/ropevim.vim" >> rope.vim
    echo "Now, just add \"source `pwd`/rope.vim\" to your .vimrc"
}

check_vim
create_dirs
pull_sources
gen_vim_config
If you can live without vim, use Eric, which has rope support.
I am running Ubuntu 22.04 with xorg.
I need to find a way to compile microbit python code locally to a firmware hex file. Firstly, I followed the guide here https://microbit-micropython.readthedocs.io/en/latest/devguide/flashfirmware.html.
After a lot of debugging, I got to this point: https://pastebin.com/MGShD31N
However, the file platform.h does exist.
sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$ ls /home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h
/home/sawntoe/Documents/Assignments/2022/TVP/micropython/yotta_modules/mbed-classic/api/platform.h
sawntoe@uwubuntu:~/Documents/Assignments/2022/TVP/micropython$
At this point, I gave up on this and tried using Mu editor with the AppImage. However, Mu requires wayland, and I am on xorg.
Does anyone have any idea if this is possible? Thanks.
Mu and the uflash command are able to retrieve your Python code from .hex files. Using uflash you can do the following for example:
uflash my_script.py
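Going the other way, uflash also has an extract mode that recovers the Python source from a previously flashed hex file (flag name from memory; check uflash --help):

uflash -e micropython.hex recovered.py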
I think what you want is possible, but it's harder than just using their web Python editor: https://python.microbit.org/v/2
Peter Till answers the original question. The addition below builds on his answer by showing how to automate the build and load process. I use Debian; the original question states that Ubuntu is used, which is built on Debian.
A script to find and mount the micro:bit
When code is loaded to the micro:bit, the board is unmounted from the system. So each time you have new code to load, you have to remount the board.
I modified a script to find and mount the micro:bit.
#!/bin/bash
BASEPATH="/media/$(whoami)/"
MICRO="MICROBIT"

if [ $# -eq 0 ]
then
    echo "no argument supplied, use 'mount' or 'unmount'"
    exit 1
fi
if [ "$1" == "--help" ]
then
    echo "mounts or unmounts a BBC micro:bit"
    echo "args: mount - mount the microbit, unmount - unmount the microbit"
fi

# how many MICRO found in udisksctl dump
RESULTS=$(udisksctl dump | grep IdLabel | grep -c -i "$MICRO")

case "$RESULTS" in
    0 ) echo "no $MICRO found in 'udisksctl dump'"
        exit 0
        ;;
    1 ) DEVICELABEL=$(udisksctl dump | grep IdLabel | grep -i "$MICRO" | cut -d ":" -f 2 | sed 's/^[ \t]*//')
        DEVICE=$(udisksctl dump | grep -i "IdLabel: \+$DEVICELABEL" -B 12 | grep " Device:" | cut -d ":" -f 2 | sed 's/^[ \t]*//')
        DEVICEPATH="$BASEPATH""$DEVICELABEL"
        echo "found one $MICRO, device: $DEVICE"
        if [[ -z $(mount | grep "$DEVICE") ]]
        then
            echo "$DEVICELABEL was unmounted"
            if [ "$1" == "mount" ]
            then
                udisksctl mount -b "$DEVICE"
                exit 0
            fi
        else
            echo "$DEVICELABEL was mounted"
            if [ "$1" == "unmount" ]
            then
                udisksctl unmount -b "$DEVICE"
                exit 0
            fi
        fi
        ;;
    * ) echo "more than one $MICRO found"
        ;;
esac

echo "exiting without doing anything"
I alias this script to mm in my .bashrc file.
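(The alias is a one-liner in .bashrc; the script path and name here are just an assumption about where you saved it:)

alias mm='$HOME/bin/microbit-mount.sh mount'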
Automate mounting the micro:bit and flashing the python file
I use the inotifywait command to run mm and then uflash to load the .py file I am working on. Each time the python file is saved, the aliased command mm runs, followed by the uflash command.
while inotifywait -e modify <your_file>.py ; do mm && uflash <your_file>.py ; done
Okay, so elaborating on Peter Till's answer.
Firstly, you can use uflash:
uflash path/to/your/code.py
Or, you can use microfs:
ufs put path/to/main.py
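microfs can also list and pull files back off the board's small filesystem, which is handy for checking what actually got copied:

ufs ls
ufs get main.py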
Working Ubuntu 22.04 host CLI setup with Carlos Atencio's Docker to build your own firmware
After trying to set up the toolchain for a while, I finally decided to Google for a Docker image with the toolchain, and found https://github.com/carlosperate/docker-microbit-toolchain at this commit from Carlos Atencio, a Micro:Bit Foundation employee, and that just absolutely worked:
# Get examples.
git clone https://github.com/bbcmicrobit/micropython
cd micropython
git checkout 7fc33d13b31a915cbe90dc5d515c6337b5fa1660
# Get Docker image.
docker pull ghcr.io/carlosperate/microbit-toolchain:latest
# Build setup to be run once.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest yt target bbc-microbit-classic-gcc-nosd@https://github.com/lancaster-university/yotta-target-bbc-microbit-classic-gcc-nosd
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest make all
# Build one example.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \
tools/makecombinedhex.py build/firmware.hex examples/counter.py -o build/counter.hex
# Build all examples.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest \
bash -c 'for f in examples/*; do b="$(basename "$f")"; echo $b; tools/makecombinedhex.py build/firmware.hex "$f" -o "build/${b%.py}.hex"; done'
And you can then flash the example you want to run with:
cp build/counter.hex "/media/$USER/MICROBIT/"
Some further comments at: Generating micropython + python code `.hex` file from the command line for the BBC micro:bit
I recently successfully embedded a python distribution with an application in Mac OS X using a homebrew installed python3.7 as per the methodology outlined in Joao Ventura's very useful two part series, provided here for reference (http://joaoventura.net/blog/2016/embeddable-python-osx/) and (http://joaoventura.net/blog/2016/embeddable-python-osx-from-src/).
The only remaining issue for me was to reduce the size of the python distribution in the application by zip-compressing the whole standard library minus lib-dynload, config-3.7m-darwin and site-packages.
My directory structure is as follows:
- python3.7/
- include/
- lib/
- python3.7/
- libpython3.7.dylib
- python3.7 <executable>
The basic initial step is to move lib-dynload and config-3.7m-darwin out of lib/python3.7, so that I can compress the stdlib source files into lib/python37.zip, and then move lib-dynload and config-3.7m-darwin back into the now-empty lib/python3.7 to end up with the desired structure:
- python3.7/
- include/
- lib/
- python3.7/
- lib-dynload/
- config-3.7m-darwin
- python37.zip
- libpython3.7.dylib
- python3.7 <executable>
To test whether it worked or not, I would check sys.path from the executable and try to import a module and check its __file__ attribute to see if it came from the zip archive.
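Concretely, the check from the embedded interpreter looked something like this (the module picked is arbitrary; any pure-Python stdlib module will do):

import sys
print(sys.path)        # should include .../lib/python37.zip
import json            # any pure-Python stdlib module
print(json.__file__)   # should point inside python37.zip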
On this basis, I would cd into lib/python3.7 and try the following:
1. Selecting all files and folders and zipping them with OS X Finder's Compress to generate python37.zip.
2. Using the python zipfile module:
python -m zipfile -c python37.zip lib/python3.7/*
3. Using the zip method from How can you bundle all your python code into a single zip file?:
cd lib/python3.7
zip -r9 ../python37.zip *
In all cases, I got it to work by setting PYTHONPATH to the zipped library, as in:
PYTHONPATH=lib/python37.zip ./python3.7
Doing so, I was able to successfully import from the zip archive and verify that the modules came from it. But without setting PYTHONPATH, it did not work.
Hence, I would very much appreciate some help to establish the correct and most straightforward way to zip the standard library such that it would be recognized automatically from sys.path (without any extra steps such as specifying the PYTHONPATH environment value which may not be possible on a user's machine).
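For what it's worth, the interpreter reports where it expects a zip by default: sys.path contains a prefix-relative pythonXY.zip entry whether or not the file actually exists, which can be inspected with PYTHONPATH unset:

./python3.7 -c "import sys; print([p for p in sys.path if p.endswith('.zip')])"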
Thanks in advance for any help provided.
S
Finally figured it out through a long process of elimination.
The only module you have to keep as a plain file on disk, next to the zip in lib/pythonX.Y, is os.py (CPython uses it as a landmark to locate its prefix).
Here's a bash script for the whole process, which may or may not work for you. It assumes you have downloaded a Python source distribution from python.org.
Then cd into the resultant source folder and run this script from its root:
#!/usr/bin/env bash
# build_python.sh
# NOTE: os.py must remain on disk in lib/pythonX.Y next to the zip,
# or startup will fail (it is CPython's landmark for finding its prefix)

NAME=xpython
PWD=$(pwd)
PREFIX=${PWD}/${NAME}
VERSION=3.8
VER="${VERSION//./}"
LIB=${PREFIX}/lib/python${VERSION}
MAC_DEP_TARGET=10.13

remove() {
    echo "removing $1"
    rm -rf $1
}

rm_lib() {
    echo "removing $1"
    rm -rf ${LIB}/$1
}

clean() {
    echo "removing __pycache__ .pyc/o from $1"
    find $1 | grep -E "(__pycache__|\.pyc|\.pyo$)" | xargs rm -rf
}

clean_tests() {
    echo "removing 'test' dirs from $1"
    find $1 | grep -E "(tests|test)" | xargs rm -rf
}

clean_site_packages() {
    echo "removing everything in $LIB/site-packages"
    rm -rf $LIB/site-packages/*
}

rm_ext() {
    echo "removing $LIB/lib-dynload/$1.cpython-${VER}-darwin.so"
    rm -rf $LIB/lib-dynload/$1.cpython-${VER}-darwin.so
}

rm_bin() {
    echo "removing $PREFIX/bin/$1"
    rm -rf $PREFIX/bin/$1
}

./configure MACOSX_DEPLOYMENT_TARGET=${MAC_DEP_TARGET} \
    --prefix=$PREFIX \
    --enable-shared \
    --with-universal-archs=64-bit \
    --with-lto \
    --enable-optimizations
make altinstall

clean $PREFIX
clean_tests $LIB
clean_site_packages
remove ${LIB}/site-packages

# remove what you want here...
rm_lib config-${VERSION}-darwin
rm_lib idlelib
rm_lib lib2to3
rm_lib tkinter
rm_lib turtledemo
rm_lib turtle.py
rm_lib ensurepip
rm_lib venv

remove $LIB/distutils/command/*.exe
remove $PREFIX/lib/pkgconfig
remove $PREFIX/share

# remove what you want here...
rm_ext _tkinter

rm_bin 2to3-${VERSION}
rm_bin idle${VERSION}
rm_bin easy_install-${VERSION}
rm_bin pip${VERSION}

mv $LIB/lib-dynload $PREFIX
cp $LIB/os.py $PREFIX
clean $PREFIX
python -m zipfile -c $PREFIX/lib/python${VER}.zip $LIB/*
remove $LIB
mkdir -p $LIB
mv $PREFIX/lib-dynload $LIB
mv $PREFIX/os.py $LIB
mkdir $LIB/site-packages
This is for a mac user and can easily be adapted for other platforms. It's not very well tested, so post feedback if you encounter any issues.
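A quick smoke test after the build, using the names from the script above (xpython prefix, Python 3.8); treat this as a sketch rather than a guaranteed invocation. If the zip is being picked up, json.__file__ should point inside python38.zip:

./xpython/bin/python3.8 -c "import sys, json; print(sys.path); print(json.__file__)"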
Hi Stack Overflow community, I've been encountering the titular error ("No module named site") when trying to run the command 'jupyter notebook'.
I'm running a fresh install of Scientific Linux 7. When opening a new terminal I can run the Jupyter notebook in my browser with no problem. I installed a package that includes its own Python distribution and requires me to run a setup script before use. After running the setup script (which does various things to my environment variables), Jupyter no longer works (gives me the "No module named site" error).
Googling told me to try unsetting both PYTHONPATH and PYTHONHOME, but that didn't work. Could someone explain to me how the environment variables change how Python looks for packages? Please let me know if I can clarify something to make answering my query easier.
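(For reference, PYTHONHOME relocates the whole standard-library search: point it at a directory without a stdlib and a Python 2 interpreter, which is what Scientific Linux 7 ships, fails with exactly this error. PYTHONPATH, by contrast, only prepends extra entries to sys.path:

env PYTHONHOME=/tmp python -c 'print 1'
# ImportError: No module named site
)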
Thanks!
Edit: the setup script is not very illuminating as far as I can tell. For reference the package I'm looking to use is the Fermi Science Tools (http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/). Here's the code for the setup script (some of the indentation may be off a bit because I'm new to this, but rest assured the script runs without a hitch)
# Filename: fermi-init.sh
# Description: Bourne-shell flavor initialization for all FERMI software.
# Runs fermi-setup to generate a sh script tailored
# specifically to this user and FERMI software
# installation, then source that.
# Author/Date: James Peachey, HEASARC/GSFC/NASA, May 3, 1999
# Modified for HEADAS December 2001
# Adapted for FERMI September 2008
#
#if [ "x$HEADAS" = x ]; then
# echo "fermi-init.sh: WARNING -- set HEADAS and source headas-init.sh before sourcing fermi-init.sh!"
# echo "Do you wish to proceed with sourcing?"
# select yn in "Yes" "No"; do
# case $yn in
# Yes )
# if [ "x$FERMI_DIR" = x ]; then
# echo "fermi-init.sh: ERROR -- set FERMI_DIR before sourcing fermi-init.sh"
# elif [ -x "$FERMI_DIR/BUILD_DIR/fermi-setup" ]; then
# export FERMI_INST_DIR=${FERMI_DIR}
# fermi_init=`$FERMI_INST_DIR/BUILD_DIR/fermi-setup sh`
# if [ $? -eq 0 -a "x$fermi_init" != x ]; then
# if [ -f "$fermi_init" ]; then
# . $fermi_init
# fi
# rm -f $fermi_init
# fi
# unset fermi_init
# else
# echo "fermi-init.sh: ERROR -- cannot execute $FERMI_DIR/BUILD_DIR/fermi-setup"
# fi
# break;;
# No )
# break;;
# esac
#done
#else
if [ "x$FERMI_DIR" = x ]; then
echo "fermi-init.sh: ERROR -- set FERMI_DIR before sourcing fermi-init.sh"
elif [ -x "$FERMI_DIR/BUILD_DIR/fermi-setup" ]; then
export FERMI_INST_DIR=${FERMI_DIR}
fermi_init=`$FERMI_INST_DIR/BUILD_DIR/fermi-setup sh`
if [ $? -eq 0 -a "x$fermi_init" != x ]; then
if [ -f "$fermi_init" ]; then
. $fermi_init
fi
rm -f $fermi_init
fi
unset fermi_init
else
echo "fermi-init.sh: ERROR -- cannot execute $FERMI_DIR/BUILD_DIR/fermi-setup"
fi
#fi
Disclaimer: I work at Datmo, where we’re building a new tool to improve quantitative workflows.
I would highly suggest using Docker in this case; you can set up your environment inside a container instead. In your case, rather than installing the Fermi Science Tools directly on Scientific Linux 7, you could use a standard Linux installation (CentOS, Debian, or Ubuntu) as the host and install Docker from this link.
Once you have that installed you can pull the image as is written on this page using the command:
docker pull sfegan/fermitools_ubuntu
You can then run any code you want by running docker run (docs here)
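For example, to get an interactive shell in the container (the image's exact contents aren't documented here, so treat this as the generic pattern):

docker run -it --rm sfegan/fermitools_ubuntu bash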
If you're working on a particularly difficult quantitative workflow (e.g. working with large scientific data sets etc), there are free tools like Datmo which use Docker to help you set this up.
I have an existing Mercurial alias (for closing and merging feature branches) written in bash. The problem is that my colleagues with Windows machines cannot use it. Mercurial already ships with Python, so the question is whether it is possible to call Python code in the alias; then it would be OS independent.
[alias]
close-feature = ![ -z "$1" ] && echo "You have to specify the issue number!" && exit 1; \
if hg branches | grep -q "fb-$1"; \
then $HG up fb-$1; $HG commit -m 'Close branch fb-$1.' --close-branch; $HG pull; $HG up default; $HG merge fb-$1; $HG commit -m 'Merge branch fb-$1 -> default.'; \
else echo "The branch fb-$1 does NOT exist!"; \
fi
question is whether it is possible to call python code in the alias
Without a standalone Python - no, AFAIK. But you can write a Mercurial extension in Python, which will add the needed command; the extension will be executed in Mercurial's own environment.
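A minimal sketch of such an extension, driving the same steps as the bash alias (untested; written against the registrar-style extension API of modern Mercurial, so treat names and keyword arguments as approximate):

# closefeature.py - hypothetical sketch, not a drop-in extension
from mercurial import registrar, commands

cmdtable = {}
command = registrar.command(cmdtable)

@command(b'close-feature', [], b'hg close-feature ISSUE')
def close_feature(ui, repo, issue, **opts):
    """Close feature branch fb-ISSUE and merge it into default."""
    branch = b'fb-' + issue
    commands.update(ui, repo, branch)
    commands.commit(ui, repo, message=b'Close branch %s.' % branch,
                    close_branch=True)
    commands.pull(ui, repo)
    commands.update(ui, repo, b'default')
    commands.merge(ui, repo, branch)
    commands.commit(ui, repo, message=b'Merge branch %s -> default.' % branch)

You would then enable it via the [extensions] section of hgrc, e.g. closefeature = /path/to/closefeature.py.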
I'm using Vagrant to set up a box with python, pip, virtualenv, virtualenvwrapper and some requirements. A provisioning shell script adds the required lines for virtualenvwrapper to .bashrc. It does a very basic check that they're not already there, so that it doesn't duplicate them with every provision:
if ! grep -Fq "WORKON_HOME" /home/vagrant/.bashrc; then
echo 'export WORKON_HOME=/home/vagrant/.virtualenvs' >> /home/vagrant/.bashrc
echo 'export PROJECT_HOME=/home/vagrant/Devel' >> /home/vagrant/.bashrc
echo 'source /usr/local/bin/virtualenvwrapper.sh' >> /home/vagrant/.bashrc
source /home/vagrant/.bashrc
fi
That seems to work fine; after provisioning is finished, the lines are in .bashrc, and I can ssh to the box and use virtualenvwrapper.
However, virtualenvwrapper doesn't work during provisioning. After the section above, this next checks for a pip requirements file and tries to install with virtualenvwrapper:
if [[ -f /vagrant/requirements.txt ]]; then
    mkvirtualenv 'myvirtualenv' -r /vagrant/requirements.txt
fi
But that generates:
==> default: /tmp/vagrant-shell: line 50: mkvirtualenv: command not found
If I try and echo $WORKON_HOME from that shell script, nothing appears.
What am I missing to have those environment variables available, so virtualenvwrapper will run?
UPDATE: Further attempts... it seems that doing source /home/vagrant/.bashrc has no effect in my shell script - I can put echo "hello" in the .bashrc file, and it isn't output during provisioning (but it is if I run source /home/vagrant/.bashrc when logged in).
I've also tried su -c "source /home/vagrant/.bashrc" vagrant in the shell script but that is no different.
UPDATE 2: Removed the $BASHRC_PATH variable, which was confusing the issue.
UPDATE 3: In another question I got the answer as to why source /home/vagrant/.bashrc wasn't working: the first part of the .bashrc file prevented it from doing anything when run "not interactively" in that way.
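For reference, the guard in question sits at the top of a stock Ubuntu .bashrc (older images use [ -z "$PS1" ] && return instead); everything below it is skipped in non-interactive shells:

# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return;;
esac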
The Vagrant shell provisioner will run as root, so its home dir (~) will be /root. In your script, if you define BASHRC_PATH=/home/vagrant, then I believe your steps will work: appending to, then sourcing from, /home/vagrant/.bashrc.
Update:
Scratching my earlier idea ^^ because BASHRC_PATH is already set correctly.
As an alternative we could use .profile or .bash_profile. Here's a simplified example which sets environment variable FOO, making it available during provisioning and after ssh login:
Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise32"

  $prov_script = <<SCRIPT
if ! grep -q "export FOO" /home/vagrant/.profile; then
    sudo echo "export FOO=bar" >> /home/vagrant/.profile
    echo "before source, FOO=$FOO"
    source /home/vagrant/.profile
    echo "after source, FOO=$FOO"
fi
SCRIPT

  config.vm.provision "shell", inline: $prov_script
end
Results
$ vagrant up
...
==> default: Running provisioner: shell...
default: Running: inline script
==> default: before source, FOO=
==> default: after source, FOO=bar
$ vagrant ssh -c 'echo $FOO'
bar
$ vagrant ssh -c 'tail -n 1 ~/.profile'
export FOO=bar
I found a solution, but I don't know if it's the best. It feels slightly wrong as it's repeating things, but...
I still append those lines to .bashrc, so that virtualenvwrapper will work if I ssh into the machine. But, because source /home/vagrant/.bashrc appears to have no effect while the provisioning script runs, I have to explicitly repeat those three commands:
if ! grep -Fq "WORKON_HOME" $BASHRC_PATH; then
echo 'export WORKON_HOME=$HOME/.virtualenvs' >> $BASHRC_PATH
echo 'export PROJECT_HOME=$HOME/Devel' >> $BASHRC_PATH
echo 'source /usr/local/bin/virtualenvwrapper.sh' >> $BASHRC_PATH
fi
WORKON_HOME=/home/vagrant/.virtualenvs
PROJECT_HOME=/home/vagrant/Devel
source /usr/local/bin/virtualenvwrapper.sh
(As an aside, I also realised that during vagrant provisioning $HOME is /root, not the /home/vagrant I was assuming.)
The .bashrc in the Ubuntu box is not read in this situation. You have to create a .bash_profile and add:
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
As mentioned in your other question, Vagrant prohibits interactive shells during provisioning - apparently only for some boxes (I still need a reference for this, though). For me, this affects the official Ubuntu Trusty and Xenial boxes.
However, you can simulate an interactive bash shell using sudo -H -u USER_HERE bash -i -c 'YOUR COMMAND HERE'
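Applied to the provisioning problem above, that could look like the following (untested sketch):

sudo -H -u vagrant bash -i -c 'mkvirtualenv myvirtualenv -r /vagrant/requirements.txt'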
Answer taken from: https://stackoverflow.com/a/30106828/4186199
This has worked for me installing Ruby via rbenv and Node via nvm when provisioning the Ubuntu/trusty64 and xenial64 boxes.