There is an application where I downloaded the source code and would like to modify it.
https://github.com/ownaginatious/fbchat-archive-parser
Is there a way to run this program by entering a command such as "python3 main.py" rather than installing it? Once the program is installed, I would simply run the command fbcap.
This project has a setup.py file. The "standard" way to run such a thing is to install it (preferably inside a virtualenv) and then run it. Virtualenvs are cheap and lightweight, so it's easy to do.
First, create a virtualenv. This might be slightly different depending on your platform. I presume you're using Python 3, so these instructions should work for you. Let's assume you created it in /tmp/venv1.
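On Linux or macOS, for example, either of these should work (both create the environment in /tmp/venv1):

python3 -m venv /tmp/venv1
virtualenv -p python3 /tmp/venv1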
Activate it using . /tmp/venv1/bin/activate (don't forget the initial .). This also might be different if you're on Windows.
Now install your program using python setup.py install.
Run it using fbcap.
This will allow you to run the program in a clean, fresh Python environment, and when you're done experimenting you can simply delete the virtualenv directory.
I have an open-source project called Djengu. To install it, the user must clone the repo and run make to initiate the setup script. The setup script creates a Python virtual environment using virtualenv. The command looks like this:
virtualenv -p python3.8 .python3.8_env
I'd like to pin the Python version to avoid anything breaking. But I cannot assume that any given user will have a python3.8 binary installed on their machine, and I cannot assume that they have pyenv installed either.
I imagine I will have to make a trade-off somewhere. How can I pin Python without making assumptions about what the user has installed? Is there a standard way to do something like this?
Since your Djengu project is a development environment, I think it's completely fine to require that your users install pyenv before calling make. Just tell them to do so in the README. You can then use their pyenv in your Makefile to install the Python version you need.
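A minimal sketch of what that Makefile rule could look like, assuming pyenv is on the user's PATH (the exact version number and target name are illustrative, and recipe lines must be indented with tabs):

PYTHON_VERSION := 3.8.10

.python3.8_env:
	pyenv install --skip-existing $(PYTHON_VERSION)
	virtualenv -p $$(pyenv prefix $(PYTHON_VERSION))/bin/python .python3.8_env

pyenv install --skip-existing is a no-op if that version is already built, so the rule stays idempotent.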
I have a python library that I am wanting to help out with and fix some issues. I just don't know how to test my changes given the complexity of how python/pip installs libraries.
I have the library installed with pip, and I can run Python code that uses it via a "from <library> import *". Now that I want to make changes, I have pulled the code with git and plan to branch off to work on my changes. That's fine. I will then open a pull request to merge my changes, provided the tests pass.
But after I make a change, how do I integrate my modified version into Python so I can test it? Can pip install my custom/modified version of the library?
I have looked around and haven't successfully found an answer to this but perhaps I'm not looking in the right spot.
Can pip install my custom/modified version of the library?
Yes.
There are various ways of approaching this question. A common solution is the use of Python virtual environments. This allows you to create an isolated Python environment that does not share the same packages as your system Python install. You can then install things into it (such as your modified Python library) to test it out.
To get started, you need the virtualenv tool. This is probably available as a package for your distribution, but you can also install it using pip. Once you have it, you can run the following in the same directory as your code:
virtualenv .venv
This creates a virtualenv named .venv. You can call it anything you want, but naming it .venv (or anything starting with a .) means it won't clutter up the output of ls in your workspace.
Next, you need to activate the virtualenv:
. .venv/bin/activate
This modifies your $PATH to place the virtualenv at the front of the list of directories. Now when you type python or pip, you'll be using the virtualenv version.
If your code has a setup.py file, you can install it like this:
pip install -e .
The -e means you want to perform an "editable" install, which means Python will use the code "in place"; any changes you make will be immediately visible to the code you use for testing.
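A quick way to confirm the editable install is active (mylibrary is a placeholder for your package's import name):

python -c "import mylibrary; print(mylibrary.__file__)"

The printed path should point into your git checkout rather than into .venv/lib/.../site-packages.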
When you're done, you can run:
deactivate
This will remove the changes that activate made to your environment.
For more information:
Pipenv & Virtual Environments discusses a higher level tool for managing virtual environments.
Virtualenvwrapper is another take on a higher level management tool.
I am working on several projects in the same PyCharm window, as if I had "attached" them all together. But I recently noticed some weird behaviors. For example, when I import a library I haven't installed yet into my script, it shows me a little error as expected. But when I try to install it using python -m pip install my_library, it tells me that it is already installed. I noticed this is because it's using another pip from another project; it doesn't use the one in the venv folder of the project. Also, to run the scripts it sometimes uses python.exe from Python's original directory. It's a whole mess and I have no idea how to solve it. Sometimes my projects require different versions of the same library, and you can imagine what happens when I change the version.
I make sure each project is using its own interpreter. I don't know what else to do other than this. I am using Python 3.6.4 and PyCharm 2018.3.2 on Windows 10.
It sounds like all your projects are configured to use the system interpreter instead of the virtual environment you set up for each of them.
Follow these instructions to fix it: https://www.jetbrains.com/help/pycharm-edu/creating-virtual-environment.html
In terms of using different versions of the same library, you can address that by specifying them in a requirements.txt file for each project. Then you can just run pip install -r requirements.txt after you set up your venv. (You need to ensure that the venv is activated; you don't need to worry about this if you have configured the project in PyCharm to use the venv's Python interpreter.) You can check this by going to the Terminal in PyCharm, where you should see a prompt like (venv_name) username@host:~/project_folder$.
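For example, a requirements.txt that pins exact versions could look like this (the package names and versions are only illustrations):

requests==2.25.1
numpy==1.19.5

Running pip install -r requirements.txt inside each activated venv then gives every project exactly the versions it needs, independently of the others.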
I've been trying to switch my coding to Linux.
I have ironed out most of my issues, but one last thing I have not been able to find any explanation of is the virtualization of make and bash commands.
I have installed PyCharm, which, from what I have seen, virtualizes everything.
However, when I am cloning repositories from GitHub, the instructions require building some code using make, then installing it, and later using bash to build dependencies.
I am running the commands in the PyCharm terminal, but instead of installing into the venv, it's installing the files into /usr/xxx instead.
How do I tell PyCharm to use bash and make in a similar way to pip, so that the setup is virtualized?
Edit:
One of the projects in question is gym-gazebo, which requires:
git clone https://github.com/erlerobot/gym-gazebo (the build steps are described in the repo's INSTALL.md)
Then make and make install, which install it under the root filesystem.
Later on there is also
bash setup_kinetic.bash
Which also installs into root folders and not the venv.
I was able to install it, but it is not virtualized the way it should be when compared with coding on Windows.
Basically, following the answers from Evert and CharlesDuffy: what I'm looking for is containerization, since the libraries I'm looking for are C-based with a Python wrapper (something like that).
Docker, Singularity, and Conda are some of the solutions.
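As a rough sketch of the Conda route (the environment name and Python version are illustrative):

conda create -n gym_gazebo_env python=3.6
conda activate gym_gazebo_env

Conda environments can hold prebuilt C libraries as well as Python packages, which is why they fit this kind of mixed C/Python project better than a plain venv.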
I'd like to start developing an existing Python module. It has a source folder and a setup.py script to build and install it. The build script just copies the source files since they're all Python scripts.
Currently, I have put the source folder under version control, and whenever I make a change I re-build and re-install. This seems a little slow, and it doesn't sit well with me to "commit" my changes to my Python install each time I make a modification. How can I make my import statements resolve to my development directory instead?
Use a virtualenv and use python setup.py develop to link your module to the virtual Python environment. This will make your project's Python packages/modules show up on the sys.path without having to run install.
Example:
% virtualenv ~/virtenv
% . ~/virtenv/bin/activate
(virtenv)% cd ~/myproject
(virtenv)% python setup.py develop
Virtualenv was already mentioned.
And as your files are already under version control, you could go one step further and use Pip to install your repo (or a specific branch or tag) into your working environment.
See the docs for Pip's editable option:
-e VCS+REPOS_URL[@REV]#egg=PACKAGE, --editable=VCS+REPOS_URL[@REV]#egg=PACKAGE
Install a package directly from a checkout. Source
will be checked out into src/PACKAGE (lower-case) and
installed in-place (using setup.py develop).
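For example, using the fbchat-archive-parser repository from the first question above (the egg name is an assumption based on the project name):

pip install -e git+https://github.com/ownaginatious/fbchat-archive-parser.git#egg=fbchat-archive-parser

pip clones the repository into src/fbchat-archive-parser and installs it in develop mode.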
Now you can work on the files that pip automatically checked out for you, and when you feel like it, you can commit your changes and push them back to the originating repository.
To get a good, general overview concerning Pip and Virtualenv see this post: http://www.saltycrane.com/blog/2009/05/notes-using-pip-and-virtualenv-django
Install the distribute package, then use develop mode: just run python setup.py develop --user and it will place path pointers in your user site directory that point to your workspace.
Set PYTHONPATH to your source directory. A good approach is to work with an IDE like Eclipse that can override the default PYTHONPATH.
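For example, in a shell on Linux or macOS (the path is a placeholder):

export PYTHONPATH=/path/to/your/source:$PYTHONPATH

Python will then find your development copy before any installed version of the module.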