I primarily work these days with Python 2.7 and Django 1.3.3 (hosted on Heroku), and I have multiple projects that I maintain. I've been working on a desktop with Ubuntu running inside VirtualBox, but recently had to take a trip and wanted to get everything loaded up on my notebook. What I quickly discovered was that virtualenv + GitHub is really easy for creating projects, but I struggled to get them moved over to my notebook. The approach I sort of came up with was to create a new virtualenv and then clone the code from GitHub. But I couldn't clone into the folder I really wanted because git would say the folder is not empty. So I would clone into a tmp folder and then cut/paste everything into where I really wanted it. Not TERRIBLE, but I just feel like I'm missing something here and that it should be easier. Maybe clone first, then mkvirtualenv?
It's not a crushing problem, but I'm thinking about making some more changes (like getting rid of the VirtualBox and just going with a dual-boot system), and it would be great if I could make this a bit smoother. :)
Finally, I found and read a few posts about moving git repos between computers, but I didn't see any dealing with Virtualenv (maybe I just missed it).
EDIT: Just to be clear and avoid confusion, I'm not trying to "move" the virtualenv. I'm just asking about the best way to create a new one, install the packages, and then clone the repo from GitHub.
The only workflow you should need is:
git clone repo_url somedir
cd somedir
virtualenv <name of environment directory>
source <name of environment directory>/bin/activate
pip install -r requirements.txt
This assumes that you have run pip freeze > requirements.txt (while the venv is activated) to list all the virtualenv-pip-installed libraries and checked it into the repo.
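For reference, requirements.txt is just a plain-text file with one dependency per line, usually pinned to the exact versions that pip freeze reports. A minimal sketch for a project like this one (the Django pin matches the question; the comment stands in for whatever else pip freeze would list):

Django==1.3.3
# ...plus every other pip-installed dependency, pinned to the version pip freeze reports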
That's because you're not even supposed to move virtualenvs to different locations on one system (there's relocation support, but it's experimental), let alone from one system to another. Instead, create a new virtualenv:
Install virtualenv on the other system
Get a requirements.txt, either by writing one or by storing the output of pip freeze (and editing the output)
Move the requirements.txt to the other system, create a new virtualenv, and install the libraries via pip install -r requirements.txt.
Clone the git repository on the other system
For more advanced needs, you can create a bootstrapping script which includes virtualenv + custom code to set up anything else.
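A rough sketch of what such a bootstrapping script could look like, kept at the top of the repo (the environment name and the trailing steps are only placeholders for whatever your project actually needs):

#!/bin/sh
# bootstrap.sh -- create and populate a fresh virtualenv for this checkout
virtualenv env                      # create the environment
. env/bin/activate                  # activate it in this shell
pip install -r requirements.txt    # install the pinned dependencies
# ...any other project-specific setup (syncdb, collectstatic, etc.) goes here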
EDIT: Having the root of the virtualenv and the root of your repository in the same directory seems like a pretty bad idea to me. Put the repository in a directory inside the virtualenv root, or put them into completely separate trees. Not only do you avoid git complaining about existing files (rightfully -- usually, everything not tracked by git is fair game to delete), you can also use the virtualenv for multiple repositories and avoid name collisions.
In addition to scripting the creation of a new virtualenv, you should make a requirements.txt file that lists all of your dependencies (e.g. Django 1.3). You can then run pip install -r requirements.txt, and this will install all of your dependencies for you.
You can even have pip create this for you by running pip freeze > stable-req.txt, which will print out your dependencies exactly as they are in your current virtualenv. You can then keep that requirements file under version control.
The nice thing about a virtualenv is that you can describe how to make one, and you can make it repeatedly on multiple platforms.
So, instead of cloning the whole thing, clone a method to create the virtualenv consistently, and have that in your git repository. This way you avoid platform-specific nasties.
Related
I am looking for a way to set up a GitHub repo so that someone can clone it and then immediately run the code without needing to set up a virtual environment and then pip install all of the needed packages.
Perhaps I am off base and they do have to install these packages, which can then somehow be imported automatically, but I'm not sure how that is done.
Or perhaps you can add the virtual environment folder as another folder in the repo, but I feel like you would still have to pip install in that case. (I'm not too familiar with virtual environments.)
I don't want to mess up the project, so I have not messed around with this too much.
Any help would be much appreciated!
Actually, you can create a requirements.txt file for the environment; then anyone (or any build server) that receives a copy of the project only needs to run the
pip install -r requirements.txt
command to reinstall the packages the app depends on within the active environment.
Hope this gives you some inspiration. For detailed steps, see this tutorial: Create a requirements.txt file for the environment.
My question is: do I have to install Django every single time in my virtual environment in order to run my Python files? And is this taking up a bunch of space on my machine? My project also uses matplotlib, and every virtual environment I create also requires me to install the matplotlib module before my code can import it. It's getting annoying. Do I have to do this every time?
I'm new to Django. I wanted to run some Python files in Django, but they weren't working, so after some research I found out I needed to run my PyCharm project in a virtual environment in order to run these Python files.
My folders look like this: pycharmProjects -> my project
I enter pycharmProjects and set up a virtual environment using "pipenv shell". Then I run "python3 manage.py runserver". It turns out I must install Django in the virtual environment before the files will run.
The short answer is no, you don't have to use a virtual environment at all; you can install your dependencies globally instead. However, you will soon find that this causes a lot of issues. The main reason to create a virtual environment is to keep control of your dependencies and prevent bugs caused by projects getting their wires crossed with each other's packages.
Short answer: yes.
If you create a virtualenv, you have to install all the packages that your program needs.
Long answer:
You could install Django system-wide and then create a virtualenv with the option --system-site-packages; then Django would be used from your globally installed Python.
(Or you could install everything just in your global Python, but I personally don't think this is good practice.)
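For example (assuming you call the environment env), that would look like:

virtualenv --system-site-packages env   # new env that can also see globally installed packages
. env/bin/activate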
If you work with many different projects I think you will avoid a lot of trouble if you use one virtualenv per project.
By trouble I mean that one project breaks because a pip install for another project changed the version of a package, and the first project can't handle the newer version.
I would recommend creating a requirements.txt file for each project that lists its dependencies; then you can populate a fresh virtualenv with the following command:
pip install -r requirements.txt
If you have requirements.txt files, then you can recreate virtualenvs rather quickly when going back to an old project, and you can delete the virtualenvs whenever you run out of disk space. If you want to be on the safe side, type pip freeze > pipfreeze.txt prior to deleting the virtualenv and use pip install -r pipfreeze.txt if you want to create one with the same modules and the same versions.
You also might want to look at direnv or autoenv if you are working on a Linux-like system.
This will automatically switch to the required virtualenv when changing to a project's working directory.
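With direnv, for instance, this is typically a one-line .envrc in the project directory (a sketch assuming direnv's standard Python layout helper; you still have to run direnv allow once to approve it):

# .envrc -- direnv builds and activates a project-local virtualenv on cd
layout python3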
I have a Python library that I want to help out with and fix some issues in. I just don't know how to test my changes, given the complexity of how Python/pip installs libraries.
I have the library installed with pip, and I can run Python code that uses the library by doing a "from <library> import *". But now that I want to make changes to it, I pulled the code with git and plan to branch to work on my changes. That's fine. I will then do a pull request to merge my changes, provided the tests pass.
But after I make a change, how do I integrate my changes into Python to test out the changes I made to the library? Can pip install my custom/modified version of the library?
I have looked around and haven't successfully found an answer to this but perhaps I'm not looking in the right spot.
Can pip install my custom/modified version of the library?
Yes.
There are various ways of approaching this question. A common solution is the use of Python virtual environments. This allows you to create an isolated Python environment that does not share the same packages as your system Python install. You can then install things into it (such as your modified Python library) to test it out.
To get started, you need the virtualenv tool. This is probably available as a package for your distribution, but you can also install it using pip. Once you have it, you can run this in the same directory as your code:
virtualenv .venv
This creates a virtualenv named .venv. You can call it anything you want, but naming it .venv (or anything else starting with a .) means it won't clutter up the output of ls in your workspace.
Next, you need to activate the virtualenv:
. .venv/bin/activate
This modifies your $PATH to place the virtualenv at the front of the list of directories. Now when you type python or pip, you'll be using the virtualenv version.
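You can verify the activation worked by checking which interpreter is now found first:

which python    # should point at .venv/bin/python
pip --version   # the reported path should be inside .venv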
If your code has a setup.py file, you can install it like this:
pip install -e .
The -e means you want to perform an "editable" install, which means python will use the code "in place", and any changes you make will be immediately visible to the code you use for testing.
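If the library you cloned already ships a setup.py (most pip-installable libraries do), that command is all you need. For completeness, a minimal setup.py looks roughly like this (the name and version here are placeholders):

from setuptools import setup, find_packages

setup(
    name="the-library",           # placeholder: the project's real name
    version="0.0.1.dev0",         # placeholder version
    packages=find_packages(),     # include every importable package in the tree
)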
When you're done, you can run:
deactivate
This will remove the changes that activate made to your environment.
For more information:
Pipenv & Virtual Environments discusses a higher level tool for managing virtual environments.
Virtualenvwrapper is another take on a higher level management tool.
I have just started learning Django, and I'm facing some problems with 'copying' projects between computers.
I have two computers and I would like to use both for my development. When I was learning PHP (at that time I didn't even know how to use GitHub), all I had to do was set up a webserver on both computers, upload the whole project through Google Drive from one computer, and then download it on the other.
However, it seems to me that Django is somewhat different, since it is a framework and involves a lot of setting up before starting a project (including a virtual environment; I am following a YouTube tutorial and it says that I would be better off using virtualenv). I thought it wouldn't work just by downloading the whole project folder to the other computer.
Currently, I have uploaded the whole virtual environment folder on Github.
So, to list my questions,
When downloading it on the other computer, should I set up the virtual environment on that computer and then download the folder?...
Is there any way I can only sync or commit the files that have been changed in the project automatically?
(That is, I have to change many files in Django projects (views, urls, settings, etc.), but it would be hard to remember all the files I have changed and then separately commit them.)
Any help would be appreciated.
Edit: Consider using pipenv
I suggest that you also install virtualenvwrapper (here). virtualenvwrapper keeps all the environment's files at another location, so your project directory contains only your own files and you can safely use git add --all.
After it's installed, do:
$ mkdir my-project; cd my-project
$ mkvirtualenv my-env-name
$ pip install django <more-good-stuff>
$ pip freeze > requirements.txt
$ git init; git add --all; git commit -m "Initial Commit"
... push to github ...
Now go to the other machine and install virtualenv and virtualenvwrapper:
$ git clone <url> my-project; cd my-project
$ mkvirtualenv my-env-name
$ pip install -r requirements.txt
... continue your work, commit, push, and win at life :D
You usually don't want to commit everything blindly. It's better to use git status to see all the files that you changed and then git add those that you want to commit. Once you've verified that you really want to commit all files, you can simply git add --all (more here). And then you git commit.
And, yes, virtualenv is the way to go. You want to create a virtualenv for your project and keep all your dependencies in a requirements.txt file. This allows you to have only your source code, and no libraries, in your git repo, making it much cleaner. It also lets you have a set of verified libraries in production, and if you want to try out a new library you can just install it in your local virtualenv. You can even have two virtualenvs and switch between them, or whatever, without messing up your code repo or other machines' installations.
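A common way to keep the repo clean is to add the virtualenv directory to .gitignore so it can never be committed by accident (a sketch assuming the environment lives in a folder named venv/; adjust to whatever name you actually use):

# .gitignore
venv/
*.pyc
__pycache__/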
I'm using Git for version control on a Django project.
As much as possible, all the source code that is not part of the project per se, but that the project depends on, is brought in as git submodules. These live in a lib directory and have to be included on the Python path. The directory/files layout looks like:
.git
.gitmodules
docs
lib
my_project
    apps
    static
    templates
    __init__.py
    urls.py
    manage.py
    settings.py
README
What, would you say, is the best practice for including the libs on python path?
I am using virtualenv, so I could easily symlink the libraries into the virtualenv's site-packages directory. However, this would tie the virtualenv to this specific project. My understanding is that the virtualenv should not depend on my files; instead, my files should depend on the virtualenv.
I was thinking of using the same virtualenv for different local copies of this project, but if I do things this way I will lose that capability. Any better idea how to approach this?
Update:
The best solution turned out being to let pip manage all the dependencies.
However, this means not being able to use git submodules, as pip can't yet handle relative paths properly. So, external dependencies will have to live in the virtualenv (typically: my_env/src/a_python_module).
I'd still prefer to use submodules, to have some of the dependencies living in my project tree. This makes more sense to me, as I've already needed to fork those repos to change some bits of them, and will likely have to change them more in the future.
Dump all your installed packages into a requirements file (requirements.txt seems to be the standard naming) using:
pip freeze > requirements.txt
Every time you need a fresh virtualenv you just have to do:
virtualenv <name> --no-site-packages
pip install -r requirements.txt
The pip install -r requirements.txt step also works great if you want to update to newer packages.
Just keep requirements.txt in sync with your packages (by running pip freeze every time something changes) and you're done, no matter how many virtualenvs you have.
NOTE: if you need to do some development on a package, you can install it using the -e (editable) flag; this way you can edit the package and you don't have to uninstall/reinstall every time you want to test new stuff :)
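For example, -e works with a local checkout or directly with a VCS URL, in which case pip clones the package into the virtualenv's src/ directory (the path and URL below are placeholders):

pip install -e /path/to/your/checkout
pip install -e git+https://github.com/someuser/somepackage.git#egg=somepackage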