VS Code does not recognise imports because of Docker (Python)

Good day. I was using a virtual environment during development (for Python projects, of course). But one day I decided to do everything using best practices and followed a course. In that course, the tutor uses docker-compose for all operations and installs all dependencies inside the Docker container. I'm using VS Code, and since no virtual environment is created, VS Code does not recognize the imports: it looks at the main interpreter, which doesn't have those dependencies installed. One option is to install all the dependencies into the main interpreter, but I don't think that is recommended. Another option, I think, is to create a virtual environment just to hold the dependencies so VS Code has something to refer to, but I'm not sure whether that is best practice. What is the best way to develop a Python project using Docker and VS Code?
Fun fact: in the tutor's VS Code, there is no problem :)

The first thing to understand is that a virtual environment's interpreter is isolated from the real (system) environment, so the main interpreter will never see dependencies that were installed only into a virtual environment.
I think the best way is to unify all operations inside the virtual environment, so that your environments do not get mixed up.
You can also refer to the VS Code documentation on working with Docker and containers for more information.
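For example, one low-friction approach (a sketch, assuming the project's dependencies are listed in a requirements.txt) is to mirror the container's dependencies in a local virtual environment that only the editor uses:
python3 -m venv .venv
.venv/bin/pip install -r requirements.txt
Then select .venv/bin/python (.venv\Scripts\python.exe on Windows) via VS Code's "Python: Select Interpreter" command. The code still runs inside the container through docker-compose; the local environment exists purely so the editor can resolve imports.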

Morning. Here is an example: https://github.com/miguelgrinberg/flasky
You can learn from this project's structure.
In a word: use virtualenv in your development environment; when you commit your code, ignore the venv directory; and when you deploy your project, use docker-compose.
Feel free to borrow from it.
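A sketch of that workflow, assuming a requirements.txt and a docker-compose.yml already exist in the repository, and that the image installs from the same requirements.txt:
python3 -m venv venv
venv/bin/pip install -r requirements.txt     # develop locally against the venv
echo "venv/" >> .gitignore                   # never commit the venv directory
docker-compose up --build                    # deploy via the container instead
Since the image is built from the same requirements.txt, development and deployment stay in sync.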

Related

How to distribute a Python virtual environment?

I have created a Python virtual environment using virtualenv for Python 2.7.18 (64-bit) and another virtual environment using venv for Python 3.5.4 (64-bit).
I was hoping to be able to commit these environments into version control so that other users of the project could access them without having to set up a Python environment themselves. Another issue is that some of the workstations will not have internet access, so easily creating a virtual environment from scratch using a requirements.txt file is not a valid solution.
It seems there are a fair number of issues preventing a virtual environment (whether created with virtualenv or venv) from being easily 'copied' and executed on another system.
Is what I am describing even possible? I have tried tinkering with the 'activate' scripts to remove some of the hard-coded paths, but that doesn't seem to do the trick.
Thanks
Have you considered using Docker? If you just have an image (or use docker-compose for multiple images), the user will not need to start a virtual environment.
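For the offline workstations, a possible flow (a sketch, assuming the project has a Dockerfile; the myproject image name is hypothetical) is to build the image once on a machine with internet access and carry it over as a file:
docker build -t myproject .
docker save -o myproject.tar myproject
docker load -i myproject.tar    # run this on the offline workstation
docker run myproject
The .tar file contains the full environment, so the offline machine needs only Docker itself.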

Python script in systemd: virtual environment or real environment

I have been trying to run a Python script on start-up (on a Raspberry Pi). I initially did this via an .sh script triggered by cron.
After posting about a problem on the Raspberry Pi Stack Exchange (https://raspberrypi.stackexchange.com/questions/110868/parts-of-code-not-running-when-autostarting-script-in-crontab), the suggestion was to use systemd.
The person helping me there suggested not using a virtual environment when executing the Python script (they note their limited familiarity with Python) and using the real environment instead. But other resources strongly suggest using a virtual environment (e.g. https://docs.python.org/3/tutorial/venv.html).
In the hope of setting this up correctly, could anyone weigh in on the correct approach?
Use the virtual environment. I don't see any reason not to. At some point you might want to run multiple Python applications on that system at the same time, and those applications might require different versions of the same dependency; then you would be back to square one, so... use the virtual environment.
When configuring systemd, crontab, or whatever, make sure to use the python binary that is placed inside the virtual environment's bin directory, so that there is no need to activate the virtual environment:
/path/to/venv/bin/python -m my_executable_module
/path/to/venv/bin/python /path/to/my_script.py
/path/to/venv/bin/my_executable_script
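In a systemd unit, that might look like the following sketch (the service description and all paths are hypothetical):
[Unit]
Description=My Python script

[Service]
ExecStart=/path/to/venv/bin/python /path/to/my_script.py

[Install]
WantedBy=multi-user.target
Because ExecStart points directly at the virtual environment's interpreter, no activation step is needed.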
systemd is going to try to run your script on startup, so your virtual environment will not have been activated yet. You can (maybe) avoid that issue by telling systemd to use the python binary in the virtualenv's bin directory, with the appropriate environment variables. Or you can activate the environment as a pre-run step in that script's systemd launch. Maybe.
But on balance, I'd make it easy on systemd and your OS and ignore the virtualenv absolutists. Get the script to work on your dev machine using virtualenv all you want, but then set up systemd to use the global python, with suitable packages installed. You can always use virtualenvs on that Pi for scripts that don't have to work with systemd. systemd doesn't always have the clearest error messages.
(If you need to import custom modules, you could inject directories into sys.path in your script. That could even avoid installing packages into the global Python entirely.)
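As a sketch of that idea: instead of editing the script itself, you can prepend directories to sys.path from the outside with PYTHONPATH (the /home/pi/mylibs path is hypothetical):
PYTHONPATH=/home/pi/mylibs /usr/bin/python3 /path/to/my_script.py
In a systemd unit, the equivalent is an Environment=PYTHONPATH=/home/pi/mylibs line in the [Service] section.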
This answer is certainly opinion-based.

How should I move my completed Django Project in a Virtual Environment?

I started learning Django a few days ago and started a project; by luck the project turned out well, and I'm thinking of deploying it. However, I didn't initiate it in a virtual environment. I have made a virtual environment now and want to move the project into it. How can I do that? I have created a requirements.txt, but it includes all sorts of irrelevant library names. How can I get rid of them and keep only those required for the project?
Django is completely unrelated to the environment you run it in.
The environment determines which Python version you are using (2, 3, ...) and which libraries are installed.
To answer your question: the only thing you need to do is run your manage.py commands with the python executable from the new virtual environment. Of course, install all the necessary libraries into the new environment first if you haven't already done so.
It might be a problem if you created a Python 3 environment while the project was written for Python 2, but at that point it's a code-portability issue.
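One way to rebuild a minimal requirements.txt (a sketch, assuming you can discover the real dependencies by running the project and fixing import errors as they appear):
python3 -m venv venv
venv/bin/pip install django              # plus each package the project actually imports
venv/bin/python manage.py runserver      # run, hit an ImportError, install the missing package, repeat
venv/bin/pip freeze > requirements.txt   # now lists only what this clean venv contains
Note that pip freeze will also list transitive dependencies, which is expected and harmless.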

Python Tools for Visual Studio & virtualenv - can use packages only present in deactivated env

I'm working on a Django project in VS 2015 Community with PTVS, which has been very useful for a free tool. I recently realized that I should be using virtual environments during development, and found out that Python 3 includes this feature by default, and PTVS 2.0+ supports it- cool!
I created a couple of environments as an experiment, and in one I installed the celery[redis] bundle since I'm trying to figure out how to implement a background task. I was having trouble getting the basic Celery tutorial task to work, so I decided to remove the environment from my Django project, deactivate it, and start over.
However, once I removed it via PTVS and ran deactivate from the command line inside the environment directory, I could still run celery commands from my top-level project directory. I've never installed Celery globally, only into my test environment via the Python Environments menu in PTVS.
Why is this? Am I thinking of virtual environments as too similar to truly discrete environments such as containers? My impression from reading the PTVS venv documentation is that if a package is present in a virtual environment that is then deactivated, I shouldn't be able to use it in other environments (or globally). I thought it might be an issue with my Windows PATH, but I didn't see anything related to Python or Celery.
Apologies if this is a duplicate; it's been difficult to find questions about the PTVS implementation of venv rather than Python + virtualenv in general.

Do I need to use virtualenv with Vagrant?

I used to set up VirtualBox machines manually, with virtualenvs inside them, to run Django projects on my local machine. Recently I discovered Vagrant and decided to switch to it, because it seems very easy and useful.
But I cannot figure out: do I still need to use virtualenv in a Vagrant VM? Is it an encouraged practice, or forbidden?
As Devin stated, it is not necessary to use virtualenv when you deploy to a Vagrant machine, as long as you are the sole user of the machine. However, I would still enable the use of a virtualenv, setup.py, etc., even if you do not use them for development or deployment.
In my (not so) humble opinion, any Python project should:
Include a .cvsignore, .gitignore, .hgignore, ... file that ignores the common Python intermediate files as well as virtualenv directories.
Include a requirements.txt file that lists the required packages in a pip-compliant format
Include a Makefile with the following targets:
environment: create the virtual environment using virtualenv or pyvenv
requirements: install required packages using pip and the requirements.txt file
develop: run setup.py develop using the virtual environment
test: run setup.py test
clean: remove intermediate files, coverage reports, etc.
maintainer-clean: remove the virtual environment
The idea is to keep the Makefile as simple as possible. The dependencies should be set up so that you can clone the repository (or extract the source tarball) and run make test: it should create a virtual environment, install the requirements, and run the unit tests, roughly as sketched below.
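As a rough command-level sketch, the make test chain would boil down to something like this (the env directory name and the use of setup.py are assumptions matching the targets above):
python3 -m venv env
env/bin/pip install -r requirements.txt
env/bin/python setup.py develop
env/bin/python setup.py test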
You can also include a Vagrantfile and a vagrant target in the Makefile that runs vagrant up. Add a vagrant destroy to the maintainer-clean target while you are at it.
This makes your project usable by anyone who is using vagrant or developing without it. If (when) you need to deploy alongside another project in a vagrant or physical environment, including a clean setup.py and a Vagrantfile that describes your minimal environment makes it simple to install into a virtual environment or a shared vagrant machine.
If you run one vagrant VM per project, then there is no direct reason to use virtualenv.
If other contributors do not use vagrant, but do use virtualenv, then you might want to use it and support it to make their lives easier.
Virtualenv and other forms of isolation (Docker, a dedicated VM, ...) are not necessarily mutually exclusive. Using virtualenv is still a good idea, even in an isolated environment, to shield the VM's system Python from your project's packages. *nix systems use a plethora of Python-based utilities that depend on specific versions of packages being available in the system Python, and you don't want to mess with those.
Mind that virtualenv can still only go as far as pure Python packages; it doesn't solve the situation with native extensions, which will still mix with the system.
