If I search for a class in non-project items, I get too many matches:
All except one match are from virtualenvs in a .tox directory.
I know that I can change the workdir via the tox.ini file.
Goal: I don't want my IDE to see these venvs.
But, which directory to use?
/tmp/ is not good, because it gets deleted on reboot.
/var/tmp is not good, because some team members use Windows PCs.
How do I set the workdir of tox so that IDEs don't see these files? This needs to work on Linux, Windows, and macOS.
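For illustration, something along these lines in tox.ini would keep the environments outside the project checkout (the relative path here is only an example, not a recommendation):
# tox.ini -- a minimal sketch; the directory name is arbitrary
[tox]
# put the virtualenvs next to, rather than inside, the project,
# so the IDE's project scope never indexes them
toxworkdir = {toxinidir}/../.tox-envs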
I am using macOS Monterey 12.3.
Once I initialize git for my Python (Python 3.9) project, if I set up virtualenv, all of a sudden git can no longer track any changes made in the given directory.
To see if initializing git and virtualenv in the same directory causes any issue, I first created a directory "directory_above" and ran git init there. Then I created a subdirectory "directory_below" inside "directory_above" and set up virtualenv there. Even without activating the virtualenv in the subdirectory, git cannot track any changes made in the directory. git status simply gives me
nothing to commit
As far as I remember, this kind of setup worked fine before, and recently, git started to fail to work with virtualenv.
Has anyone encountered the same issue in the past? If so, how did you solve it? I spent some time looking for the same issue and a solution, but I couldn't find it on here.
it sounds like you ran virtualenv . -- but you probably want virtualenv venv or some other subdirectory
virtualenv writes a .gitignore file which contains the following contents:
$ cat venv/.gitignore
# created by virtualenv automatically
*
that * there will cause all of the contents to be ignored
either delete that file (not recommended) or make your virtualenv in a subdirectory of your project
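For example, a minimal sequence along these lines (run from the root of the repository) keeps both git and virtualenv happy:
virtualenv venv    # creates venv/, including a venv/.gitignore containing "*"
git status         # the rest of the project is still tracked normally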
For reference I've looked at the following links.
Python Imports, Paths, Directories & Modules
Importing modules from parent folder
I understand that what I'm doing is wrong, and I'm trying to avoid relative paths and changing things via sys.path as much as possible, though if those are my only options, please help me come up with a solution.
Note, here is an example of my current working directory structure. I think I should add a little more context. I started off adding __init__.py to every directory so they would be considered packages and subpackages, but I'm not sure that is what I actually want.
myapp/
    pack/
        __init__.py
        helper.py
    runservice/
        service1/
            Dockerfile
        service2/
            install.py
            Dockerfile
The only packages I will be calling exist in the pack/ directory, so I believe that should be the only directory considered a package by Python.
Next, the reason this might get a little tricky: ultimately, this is just a service that builds various containers. The entry points live in service*/install.py, which I run with python after I cd into the working directory of the script. The reason for this is that I don't want container1 (service1) to know about the codebase in service2, as it's irrelevant to it, and I would like the code to be separated.
But by running install.py, I need to be able to do from pack.helper import function, and clearly I am doing something wrong.
Can someone help me come up with a solution, so I can leave the entrypoint to my container as cd service2, python install.py?
Another important thing to note is that within the script I have logic like:
if not os.path.isdir(os.path.expanduser(tmpDir)):
I am hoping any solution we come up with will not affect the logic here.
I apologize for the noob question.
EDIT:
Note, I think I can do something like
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
But as far as I understand, that is bad practice....
Fundamentally what you've described is a supporting library that goes with a set of applications that run on top of it. They happen to be in the same repository (a "monorepo") but that's okay.
The first step is to take your library and package it up like a normal Python library would be. The Python Packaging User Guide has a section on Packaging and distributing projects, which is mostly relevant; though you're not especially interested in uploading the result to PyPI. You at the very least need the setup.py file described there.
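A minimal sketch of that setup.py for the pack library could look like this (the version and metadata below are placeholders):
# pack/setup.py -- a minimal sketch, not a complete packaging configuration
from setuptools import setup, find_packages

setup(
    name="pack",
    version="0.1",             # placeholder version
    packages=find_packages(),  # picks up the inner pack/ package
)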
With this reorganization you should be able to do something like
$ ls pack
pack/ setup.py
$ ls pack/pack
__init__.py helper.py
$ virtualenv vpy
$ . vpy/bin/activate
(vpy) $ pip install -e ./pack
The last two lines are important: in your development environment they create a Python virtual environment, an isolated set of packages, and then install your local library package into it. Still within that virtual environment, you can now run your scripts
(vpy) $ cd runservice/service2
(vpy) $ ./install.py
Your scripts do not need to modify sys.path; your library is installed in an "expected" place.
You can and should do live development in this environment. pip install -e makes the virtual environment use your actual local source tree for whatever's in pack, so edits to the library are picked up immediately. If service2 happens to depend on other Python libraries, listing them out in a requirements.txt file is good practice.
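For instance, a hypothetical runservice/service2/requirements.txt could look like this (the package names below are placeholders, not taken from the question):
# runservice/service2/requirements.txt -- hypothetical example
requests>=2.20
PyYAML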
Once you've migrated everything into the usual Python packaging scheme, it's straightforward to transplant this into Docker. The Docker image here plays much the same role as a Python virtual environment, in that it has an isolated Python installation and an isolated library tree. So a Dockerfile for this could more or less look like
FROM python:2.7
# Copy and install the library
WORKDIR /pack
COPY pack/ ./
RUN pip install .
# Now copy and install the application
WORKDIR /app
COPY runservice/service2/ ./
# RUN pip install -r requirements.txt
# Set standard metadata to run the application
CMD ["./install.py"]
That Dockerfile depends on being run from the root of your combined repository tree:
sudo docker build -f runservice/service2/Dockerfile -t me/service2 .
A relevant advanced technique is to break this up into separate Docker images. One contains the base Python plus your installed library, and the per-application images build on top of that. This avoids reinstalling the library multiple times if you need to build all of the applications, but it also leads to a more complicated sequence with multiple docker build steps.
# pack/Dockerfile
FROM python:2.7
WORKDIR /pack
COPY ./ ./
RUN pip install .
# runservice/service2/Dockerfile
FROM me/pack
WORKDIR /app
COPY ./ ./
CMD ["./install.py"]
#!/bin/sh
set -e
(cd pack && docker build -t me/pack .)
(cd runservice/service2 && docker build -t me/service2 .)
I cloned a repo and I'm trying to run my tests, and I'm getting an interpreter error:
Interpreter path does not exist: C:\Users\username\Source\Repos\citcodownloader\env\Scripts\python.exe
The project came with a .sln solution file that I opened, and I thought it had set up my environment, but it doesn't seem to have done so. Not sure what to do from here.
The best thing you can do is create a (or use an existing) Virtual Environment. It looks like your program is looking for one in the folder "env". Try this:
1. Open a terminal (Windows key + R, then type cmd and press Enter).
2. Navigate to your repo folder using chdir C:\path\to\your\repo.
3. Run the command env\Scripts\activate.bat (if there is no folder called "env" in your repo, use my instructions below).
4. Try running your program again.
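Put together, a minimal sketch of those steps (assuming the project really expects a folder named env, as the error message suggests) looks like:
chdir C:\path\to\your\repo
rem create the expected "env" folder if it is missing
python -m venv env
rem activate it, then install dependencies if the repo ships a requirements.txt
env\Scripts\activate.bat
pip install -r requirements.txt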
I hope this helps, post a comment if it doesn't and I'll add as much detail or explanation as you need. Good luck!
For Googlers, or anyone the above doesn't help, look for these files in your repo:
requirements.txt (a list of the packages you need to set up the virtual environment)
venv/ (a folder containing a virtual environment)
Solution
1. If a folder named 'venv' or 'virtualenv' does NOT exist, run this command to create it: python -m venv venv (or for Python 3: python3 -m venv venv). If it does exist, move forward.
2. You have a virtual environment! Now enter into it using: source venv/bin/activate (on Unix or OS X; see the link above for the Windows command).
3. If requirements.txt is there, run this command next: pip install -r requirements.txt. If not, move forward.
4. Run the program again (via whatever method the repo says you should use). If you get 'error: module is not installed', use the command pip install moduleNameHere and run the program again.
5. Keep doing step 4 for each missing module; once the program is working, use pip freeze > requirements.txt to create a requirements file and save yourself the headache next time. :)
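On Unix or OS X, the same sequence condensed into a shell session (paths are illustrative) might look like:
cd /path/to/your/repo
python3 -m venv venv               # step 1: create the environment if it does not exist
source venv/bin/activate           # step 2: activate it
pip install -r requirements.txt    # step 3: only if the repo provides one
# steps 4-5: run the program, pip install any module it reports as missing, then
pip freeze > requirements.txt      # record what you ended up installing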
I've just installed virtualenv and virtualenvwrapper on my computer. Now I want to use it to work with Django. When I run mkvirtualenv django, from ~, the interpreter stays there. Does that mean I can create my django files there? Or is the environment not that virtual? Should I create my own folder instead where I work on the project? I thought mkvirtualenv would create one for me automatically and take me there upon running workon, otherwise, what's the point of even using virtualenvwrapper?
EDIT: These few lines from my .bash_profile might help you:
export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/Devel
source /usr/local/bin/virtualenvwrapper.sh
virtualenvwrapper will create the virtualenv in your $WORKON_HOME directory. This is the venv only, and it is distinct from any associated "project" directory you might (or might not...) want to use, which virtualenvwrapper will indeed not create.
IOW, at this point you are in exactly the same directory as when you ran the mkvirtualenv command.
If you want to associate this venv to a project directory, you have to create this directory (if it does not exist yet), and then - with your venv activated - run setvirtualenvproject /path/to/your/projectdir (or cd /path/to/your/projectdir and here run setvirtualenvproject without argument).
Once done with this, next time you activate your venv with workon myenv, you will be automagically cd'ed to your project directory too, and the cdproject command will bring you back there if you cd elsewhere.
As for other reasons to use (or not use) virtualenvwrapper, you can read the doc and find out for yourself what other features it adds on top of raw virtualenv and whether you want those features or not.
FWIW, the behaviour you expected (creating both the venv AND the project dir) is given by the mkproject command.
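A quick sketch of both routes described above, using made-up names (myenv, myproject):
mkvirtualenv myenv                   # venv is created under $WORKON_HOME
mkdir -p $PROJECT_HOME/myproject     # create the project dir if it does not exist yet
cd $PROJECT_HOME/myproject
setvirtualenvproject                 # run with the venv active, from the project dir
deactivate
workon myenv                         # now also cd's into $PROJECT_HOME/myproject
# or create the venv and the project dir in one step:
mkproject myotherproject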
The main advantage of virtualenvwrapper is the separation of your environment from a specific working directory. Just activate your environment with:
workon django
The prompt should change to:
(django)
Now you are free to work from any directory you want.
I'm embracing VirtualEnvWrapper - and like what I see a lot. However as I try to get going I'm not seeing the behaviour I expect when trying to set up project directory association with virtual envs.
I've installed virtualenv and -wrapper. I can create envs, and "workon" lists them fine. I can deactivate and rm them happily. So all appears functional. I read the docs regarding project management (also a good video tutorial, with the desired project association behaviour explained at 10:39).
When I try to associate a work directory with an env, it accepts my cmds fine, but when I "workon" the project, it does not put me into my designated working directory.
e.g. I have a working area ~/Ross_code (and I've set this in my .bashrc as $PROJECT_HOME). In there is an existing project folder ~/Ross_code/superproj
So now I create an env with
mkvirtualenv superp
Then I go to my existing project dir and associate it with the env:
cd ~/Ross_code/superproj
setvirtualenvproject
Setting project for superp to /Users/ross/Ross_code/superproj
Then I exited the virtual env with "deactivate" and reactivated with
workon superp
But the present working dir remains my ~/ folder.
I checked the .project file which seems to have been set properly by the call to setvirtualenvproject:
cdvirtualenv
more .project
/Users/ross/Ross_Code/superproj
but calling "workon" never sticks me into the expected spot. I thought maybe the env and the project directory needed to be of the same name, but that didn't make any difference either.
Any idea why that very attractive project association capability doesn't work for me?
-Ross.
LATER - More info:
I tried to also use the mkproject command, which should create a directory for my code in the $PROJECT_HOME area, and create the virtualenv at the same time and associate them with each other.
Calling
mkproject junkproj
does in fact create the project directory nicely, and sticks me into the virtualenv, and cd's into the junkproj directory. But when I deactivate, and then "workon junkproj" again, I'm still left in my ~/ directory, rather than going into the project directory in $PROJECT_HOME
:(
The problem here is that newer versions of virtualenvwrapper (this hit me upgrading from Ubuntu 14.04 to 16.04) use a slightly different protocol for the setvirtualenvproject parameters:
setvirtualenvproject [virtualenv_path project_path]
To make the association you want in any virtual env, be in the project folder with that virtualenv active and use:
setvirtualenvproject $VIRTUAL_ENV .
The dot is for the present directory - or you can use the path of the directory you want workon to take you to. Once you do this workon will switch you to the folder you want and cdproject will work as expected.
If you used the old protocol, you'll have a .project file in your project folder - you can move this to the $VIRTUAL_ENV folder rather than invoking the command with the new protocol to make the association. The file just contains the project directory you want to associate with virtualenvwrapper shortcut commands like workon and cdproject.
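A compact sketch of the two equivalent fixes (run from the project folder, with the venv active):
setvirtualenvproject "$VIRTUAL_ENV" .    # new-style association
# or, if an old-style .project file was left in the project folder:
mv .project "$VIRTUAL_ENV/.project"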
workon doesn't automatically change directory to the project or environment directory.
You can do this with the postactivate script - there's a really quick how-to in the second half of the virtualenvwrapper tips and tricks section.
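For example, a per-environment hook (using the env and path names from the question; adjust to taste) could be set up like this:
# $WORKON_HOME/superp/bin/postactivate runs every time "workon superp" succeeds
echo 'cd ~/Ross_code/superproj' >> "$WORKON_HOME/superp/bin/postactivate"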