I have created a Python virtual environment using virtualenv for 64-bit Python 2.7.18 and another using venv for 64-bit Python 3.5.4.
I was hoping to commit these environments into version control so that other users of the project could access them without having to set up a Python environment themselves. Another issue is that some of the workstations will not have internet access, so easily creating a virtual environment from scratch with a requirements.txt file is not a viable solution.
It seems there are a fair number of issues preventing a virtual environment (whether created with virtualenv or venv) from being easily 'copied' and executed on another system.
Is what I am describing even possible? I have tried tinkering with the 'activate' scripts to remove some of the hard-coded paths, but that doesn't seem to do the trick.
Thanks
Have you considered using Docker? If you ship a single image (or use docker-compose for multiple containers), other users will not need to set up a virtual environment at all.
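A minimal sketch of what such an image could look like, assuming a Python 3.5 project with its dependencies listed in requirements.txt (the script name and base image tag are placeholders):

FROM python:3.5
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "my_script.py"]

The image would be built once on a machine with internet access (docker build), then exported with docker save and imported on the offline workstations with docker load, so only the build host ever needs to reach the internet.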
I want to create a Python virtual environment in a Windows network folder, and then activate and use this venv on another computer. My question is: do I need to have Python installed on all the machines where I want to use this virtual environment?
If so, why do I need to install Python everywhere if it's already inside the venv I will create, along with all the necessary packages?
Virtual environments are not designed to be portable. For instance, if you have entry points installed, moving them to another machine would break their shebang lines, and even moving the environment on your local machine gives no guarantee that nothing else is directory-specific. Keep in mind that when you create a Python venv, it is pinned to your Python path, and your username is often part of that path, so copying it around is not a good idea. Instead, why not share the .py files themselves: upload them to the cloud, share them via Bluetooth to a phone, or copy them onto a USB stick?
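You can see the problem by looking at the first line of any installed entry point; on Linux it hard-codes the absolute interpreter path, username and all (the path below is hypothetical):

head -1 /home/alice/project/venv/bin/pip
#!/home/alice/project/venv/bin/python

Copied to a machine where that path does not exist, the script can no longer find its interpreter.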
I have created a virtual environment named knowhere and I activate it in cmd using .\knowhere\Scripts\activate. I have installed some libraries into this environment.
I have some Python scripts stored on my PC. When I try to run them, they fail because they are not running in this virtual environment. How do I make these scripts run inside it?
Also, is there any way to make "knowhere" my default environment?
Virtual environments are only necessary when you want to work on two projects that use different versions of the same external dependency, e.g. Django 1.9 and Django 1.10. In such situations a virtual environment can be really useful for maintaining the dependencies of both projects.
If you simply want your scripts to use some Python libraries, just install them system-wide and you won't have that problem.
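That said, if you would rather keep the libraries inside "knowhere", you do not need to activate it at all; call the environment's own interpreter directly (the script name is a placeholder):

.\knowhere\Scripts\python.exe my_script.py

Activation only adjusts the current shell's PATH; invoking the venv's python.exe has the same effect for that one process.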
I have been trying to run a Python script on start-up (on a Pi). I initially did this via a .sh script triggered by cron.
After posting about a problem on the Raspberry Pi Stack Exchange (https://raspberrypi.stackexchange.com/questions/110868/parts-of-code-not-running-when-autostarting-script-in-crontab), the suggestion was to use systemd.
The person helping me there has suggested not using a virtual environment when executing the Python script (they note their limited familiarity with Python), and using the real environment instead. But other resources strongly suggest the use of a virtual environment (e.g. https://docs.python.org/3/tutorial/venv.html).
In the hope of setting this up correctly could anyone weigh in on the correct approach?
Use the virtual environment. I don't see any reason not to. At some point you might want to have multiple Python applications run at the same time on that system, and these applications might require different versions of the same dependency and then you would be back to square one, so... Use the virtual environment.
When configuring systemd, crontab, or whatever, make sure to use the python binary that is placed inside the virtual environment's bin directory, so that there is no need to activate the virtual environment:
/path/to/venv/bin/python -m my_executable_module
/path/to/venv/bin/python /path/to/my_script.py
/path/to/venv/bin/my_executable_script
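For example, a minimal systemd unit along those lines, saved as something like /etc/systemd/system/my_script.service (all paths and names here are placeholders):

[Unit]
Description=Run my_script.py from its virtual environment
After=network.target

[Service]
ExecStart=/path/to/venv/bin/python /path/to/my_script.py
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable my_script.service and it will start on boot using the venv's interpreter, no activation step required.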
systemd is going to try to run your script on startup so your virtual environment will not have been activated yet. You can (maybe) avoid that issue by telling systemd to use the python in the virtualenv's bin, with the appropriate environment variables. Or you can activate as a pre-run step for that script's launch in systemd. Maybe.
But on balance I'd make it easy on systemd and your OS and ignore the virtualenv absolutists. Get the script to work on your dev machine using virtualenv all you want, but then prep systemd to use the global python, with suitable packages installed. You can always use virtualenvs on that pi, for scripts that don't have to work with systemd. Systemd doesn't always have the clearest error messages.
(If you need to import custom modules, you could inject directories into sys.path in your script. This could even avoid installing packages for the global Python entirely.)
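A minimal sketch of that sys.path idea (the directory and module names are hypothetical):

import sys
# Make modules in a custom directory importable without installing anything
sys.path.insert(0, "/home/pi/my_modules")
import my_module  # found via the directory injected above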
This answer is certainly opinion-based.
Bottom Line:
I can get everything to work by configuring two separate virtual environments, one for PyCharm and one for the CLI. Is this really necessary, or should I be able to use one virtual environment for both, as I expected?
More detailed explanation:
I'm very new, so this is probably a facepalm type of question; I'll try to be terse.
I'm using Linux Mint, Python 3.6, Django 3.0.3, and PyCharm 2019.3.1.
I can create a virtual env using venv in the CLI and it works.
I can also create a NEW virtual env in PyCharm through the Settings > Project > Interpreter interface, and it works; however, it doesn't offer venv as an option, only virtualenv.
But if I try to activate the virtual env I created in PyCharm from the CLI (using virtualenv of course, not venv), it fails hard and thinks I'm using Python 2.7, which isn't even installed on my system. If I try to point PyCharm at the virtual env I set up on the CLI, I get an error 134.
Is this just a known/expected issue? Must I have two virtual environments for every project I want to access via both PyCharm AND the CLI? And, though I assume this is unrelated, I also find it odd that PyCharm lists my interpreter as Python 3.7, which also is not installed on my system. I'm using 3.6 alone.
Thanks for your time.
At this time, I'm going to just answer this as: you need a separate virtual env for each (PyCharm and the CLI), as this approach is not difficult or time-consuming and I have not had any issues working this way.
I was required to install Anaconda for a CS course and used Spyder and RStudio.
Then, for a different class, I used PyCharm.
When I type python -V on the command line, I get:
Python 3.6.1 :: Anaconda 4.4.0 (x86_64)
and I have no idea why it ties the Python version I have installed to Anaconda (and why not PyCharm?). I understand that the OS runs Python 2.7 (shouldn't I get that instead, and get my Python 3 version when I type python3 -V?), and that when I use something like PyCharm or Spyder I can choose which of my installed versions to use within the program, not in the terminal.
I just want to have everything in order and under control. I don't think I understand what Anaconda really is (to me it is like a program that has more programs in it...). How do I keep Anaconda to itself?
Also, should the packages I installed through the Terminal work on both PyCharm and Spyder/Anaconda, even though in PyCharm I used Python 3.5 and in Anaconda 3.6?
I think I need definitions and help to get everything in order, in my head and on the computer.
PyCharm is just an application that helps you write code; PyCharm itself does not run Python code. This is why, in PyCharm, you need to set the interpreter for a project, which can be any Python binary. Go to Preferences > Project > Project Interpreter to see where the Python environment used for a given project is set. It can point to any Python installation on your machine, whether that is the Python 2.7 located at /usr/bin/python or a virtual environment in your project directory.
The industry-standard way to "keep things in order" is to use what are called virtual environments. See here: https://docs.python.org/3/library/venv.html. A virtual environment is essentially a self-contained copy of a Python environment (binaries and everything) in whatever directory you specify. This lets you configure the environment however your project needs without interfering with other projects you might have. For example, say project A requires Django 1.9.2 but project B requires 1.5.3. By having a virtual environment for each project, the dependencies won't conflict.
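To make that Django example concrete, a sketch using two hypothetical project directories:

python -m venv ~/projectA/.venv
~/projectA/.venv/bin/pip install django==1.9.2

python -m venv ~/projectB/.venv
~/projectB/.venv/bin/pip install django==1.5.3

Each project now has its own Django pinned inside its own directory, and neither installation can clobber the other.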
Since you have Python 3.6, I would recommend going to your project directory in a terminal window and running python -m venv .venv to create a hidden directory containing a local environment based on your 3.6 installation. You can then set your project interpreter to use that environment. To connect to it on the command line, run source .venv/bin/activate from where you created the virtual environment, then run which python again and see that python now references your virtual environment :)
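Put together, the steps look like this (the project path is a placeholder):

cd ~/my_project            # hypothetical project directory
python -m venv .venv       # create the hidden environment
source .venv/bin/activate  # activate it in the current shell
which python               # now points at .venv/bin/python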
If you are using a Mac (which I believe you are, from what you said about Python 2.7), what likely happened is that the Anaconda installer put its Python bin directory on your PATH environment variable. Type which python to see what the python alias is referencing. You can undo this by editing your ~/.bash_profile file if you really want.
You are more or less correct about Anaconda. It is itself another distribution of Python, and it bundles a load of common libraries/dependencies that tend to make life easier. For a lot of data analysis, you likely won't even need to install another dependency with pip after downloading Anaconda.
I suspect this won't be all too helpful at first as it is a lot to learn, but hopefully this points you in the right direction.