How to make my own copy of a Python package

I have a python script that I use to analyze data. I rely on number crunching packages like numpy and others to work with my data. However, the packages constantly evolve and some functions are deprecated, etc. This forces me to go through the script several times a year to fix errors and make it work again.
One of the solutions is to keep an older version of numpy. However, there are other packages that require a new version of numpy.
So the question I have is: Is there a way to 1) keep multiple versions of a package installed, or 2) have a local copy of the package located in the directory of my script, so I am in control of what I am importing? For example, I can have my own package where I keep all the different packages and versions I need.
Later, I can simply import the libraries I want:
import my_package.numpy_1_15 as np115
import my_package.numpy_1_16_4 as np1164
and later in my code, I can decide which function to use from which numpy version. For example:
index = np115.argwhere(x == 0)
This is my vision of a solution to my problem: I want to keep using old functions from previous versions of numpy (or other libraries). In addition, this way I always have all the needed libraries with me in my script directory, so if I need to run the script on a different machine I don't have to spend hours figuring out whether everything is compatible.
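A minimal sketch of what that could look like in practice, assuming one version of the package is vendored into the script directory with pip install "numpy==1.15.4" --target ./vendor/numpy_1_15 (paths and versions are illustrative). Note that a compiled package like numpy can realistically only be loaded once per process, so this selects a version at startup rather than mixing two versions side by side:

# Hypothetical layout: ./vendor/numpy_1_15 contains a vendored numpy 1.15.x,
# installed with:  pip install "numpy==1.15.4" --target ./vendor/numpy_1_15
import sys
from pathlib import Path

VENDOR_DIR = Path(__file__).parent / "vendor" / "numpy_1_15"  # illustrative path
sys.path.insert(0, str(VENDOR_DIR))

import numpy as np  # now resolves to the vendored copy, not the global install

print(np.__version__)                          # expect 1.15.x
index = np.argwhere(np.array([1, 0, 2]) == 0)  # the old function you want to keep using
print(index)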
Here are possible proposed solutions and why they do not solve my problem.
Virtual Environments in Python or Anaconda.
There are a bunch of introductions available (for example) that explain how to use them. However, virtual environments require initial setup and maintenance. Imagine if I could just have Python code that performs a specific computational task well, independent of what year it is and what packages are installed on any machine. The code could be shared among different research groups and would always work.
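If you don't want to maintain an environment at all, one lightweight alternative is a small version guard at the top of the script. This is only a sketch; the package names and pinned versions below are assumptions, not the script's actual requirements:

from importlib.metadata import version, PackageNotFoundError  # Python 3.8+

# Assumed versions the script was last tested with (illustrative pins).
PINNED = {"numpy": "1.16.4", "scipy": "1.3.0"}

for pkg, expected in PINNED.items():
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        raise SystemExit(f"{pkg} is not installed; this script was tested with {pkg}=={expected}")
    if installed != expected:
        raise SystemExit(f"{pkg}=={installed} found, but this script was tested with {pkg}=={expected}")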
python create standalone executable linux
I can create a standalone executable (example). However, it will be compiled and cannot be changed dynamically, which is one of the really nice features of Python.

Related

Problems when distributing a python drag&drop file

I am writing a script for a project I am working on, where you can easily drag&drop an orthomosaic of a drone survey onto the .py file and it automatically calculates a bunch of vegetation indices. The idea is that this script will be distributed to the partners, so that all project members use the same way of calculating the indices and statistics.
The problem is that I can only get it to run on my computer at home; if I try to run it on my or my colleague's work computer, it does not work. On my colleague's computer it seems to struggle with the sys.argv[] arguments of the script, whereas on my work computer I get an error that a variable name is too long (or something like that; I am currently in home office and did not write down the exact error).
Since it works on my computer at home, I suspect it has to do with the packages that have to be installed (gdal, rasterio, geopandas, ...), the environment in which the packages are installed, or possibly the various Python versions installed on the work computers, even though I already reinstalled Python on my work computer, which did not solve the problem.
I am sorry if my descriptions are scarce and vague.
So my questions would be: Do you have experience with distributing a drag&drop file? Is there a way to build an executable which can be run without the packages being installed?
If you are interested I can send you my scripts and some example files.
Thanks and best wishes,
Matthias
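For the sys.argv[] part specifically, a hypothetical defensive entry point (file names and messages are made up) would at least make the failure visible instead of crashing obscurely on other machines:

# Sketch of a drag&drop entry point: when a file is dropped onto the .py file,
# its path arrives as the first command-line argument.
import sys
from pathlib import Path

def main() -> None:
    if len(sys.argv) < 2:
        raise SystemExit("Drop an orthomosaic file onto this script, or pass its path as an argument.")
    ortho_path = Path(sys.argv[1]).resolve()
    if not ortho_path.exists():
        raise SystemExit(f"File not found: {ortho_path}")
    print(f"Processing {ortho_path} ...")  # vegetation index calculation would go here

if __name__ == "__main__":
    main()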

Explain why Python virtual environments are “better”? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
I have yet to come across an answer that makes me WANT to start using virtual environments. I understand how they work, but what I don’t understand is how can someone (like me) have hundreds of Python projects on their drive, almost all of them use the same packages (like Pandas and Numpy), but if they were all in separate venv’s, you’d have to pip install those same packages over and over and over again, wasting so much space for no reason. Not to mention if any of those also require a package like tensorflow.
The only real benefit I can see to using venv’s in my case is to mitigate version issues, but for me, that’s really not as big of an issue as it’s portrayed. Any project of mine that becomes out of date, I update the packages for it.
Why install the same dependency for every project when you can just do it once for all of them on a global scale? I know you can also specify --global-dependencies or whatever the flag is when creating a new venv, but since ALL of my python packages are installed globally (hundreds of dependencies are pip installed already), I don't want the new venv to make use of ALL of them. Can I specify only specific global packages to use in a venv? That would make more sense.
What else am I missing?
UPDATE
I’m going to elaborate and clarify my question a bit as there seems to be some confusion.
I’m not so much interested in understanding HOW venv’s work, and I understand the benefits that can come with using them. What I’m asking is:
Why would someone with (for example) 100 different projects that all require tensorflow install it into each project's own venv? That would mean you have to install tensorflow 100 separate times. That's not just a "little" extra space being wasted, that's a lot.
I understand they mitigate dependency versioning issues, you can “freeze” packages with their current working versions and can forget about them, great. And maybe I’m just unique in this respect, but the versioning issue (besides the obvious difference between python 2 and 3) really hasn’t been THAT big of an issue. Yes I’ve run into it, but isn’t it better practise to keep your projects up to date with the current working/stable versions than to freeze them with old, possibly no longer supported versions? Sure it works, but that doesn’t seem to be the “best” option to me either.
To reiterate the second part of my question: what I would think is, if I have (for example) tensorflow installed globally and I create a venv for each of my 100 tensorflow projects, is there not a way to make use of the already globally installed tensorflow inside of the venv, without having to install it again? I know in pycharm, and possibly the command line, you can use a --system-site-packages argument (or whatever it is) to make that happen, but I don't want to include ALL of the globally installed dependencies, because I have hundreds of those too. Is something like --system-site-packages tensorflow, for example, a thing?
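For reference, the all-or-nothing behaviour I'm describing can be driven from Python's own venv module; a small sketch (the directory name is made up), though as far as I can tell there is no per-package variant of the flag:

import venv

# Create a venv that can also see globally installed packages
# (the programmatic equivalent of `python -m venv --system-site-packages`).
builder = venv.EnvBuilder(system_site_packages=True, with_pip=True)
builder.create("tf-project-env")  # hypothetical environment directory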
Hope that helps to clarify what I’m looking for out of this discussion because so far, I have no use for venv’s, other than from everyone else claiming how great they are but I guess I see it a bit differently :P
(FINAL?) UPDATE
From the great discussions I've had with the contributors below, here is a summation of where I think venv's are of benefit and where they're not:
USE a venv:
You're working on one BIG project with multiple people to mitigate versioning issues among the people
You don't plan on updating your dependencies very often for all projects
To have a clearer separation of your projects
To containerize your project (again, for distribution)
Your portfolio is fairly small (especially in the data science world where packages like Tensorflow are large and used quite frequently across all of them as you'd have to pip install the same package to each venv)
DO NOT use a venv:
Your portfolio of projects is large AND requires a lot of heavy dependencies (like tensorflow) to mitigate installing the same package in every venv you create
You're not distributing your projects across a team of people
You're actively maintaining your projects and keeping global dependency versions up to date across all of them (maybe I'm the only one who's actually doing this, but whatever)
As was recently mentioned, I guess it depends on your use case. Working on a website that requires contributions from many people at once, it makes sense for everyone to work out of one environment; but for someone like me with a massive portfolio of Tensorflow projects that have no versioning issues and no other team members, it doesn't. Maybe if you plan on containerizing or distributing a project it makes sense to do so on an individual basis, but (going back to this example) with 100 Tensorflow projects in your portfolio it makes no sense to have 100 different venvs, as you'd have to install tensorflow 100 times, once into each of them. That's no different from having to pip install tensorflow==2.2.0 for specific old projects you want to run, in which case, just keep your projects up to date.
Maybe I'm missing something else major here, but that's the best I've come up with so far. Hope it helps someone else who's had a similar thought.
I'm a data scientist and sometimes I run into these things called "virtual environments" and I don't get what the use case is? I already have all of these packages and modules and widgets downloaded! Why should I set up a separate place where I manage all of the stuff I'm already managing globally?
Python is a very powerful tool. In this answer, consider two ways to swing the metaphorical hammer:
Data Science
Software Engineering
For a data scientist (working alone) using Python to write a poc for a research paper, make an LSTM NN, or predict the price of TSLA based on the frequency of Elon Musk's tweets, all that really matters is being able to use the best library (tensorflow, pytorch, sklearn, ...) for whatever task they're trying to get done, in whatever directory they're working in when they need it. It is very tempting to use one global Python installation and just use the same stuff everywhere. Frankly, this is probably fine, as it's just one person managing their own space. So the configuration of their machine would be one single Python environment that everything, everywhere uses. Or, if the data scientist wanted to, they could have a single directory that contains a virtual environment and some subdirectories containing all the scripts (projects) they work on.
Now consider a software engineer who has multiple git repos with complete CI/CD pipelines that each build into separate entities that then get deployed to some cloud environment. They and the nine other people on their team need to be sure that they are all making changes that won't break any piece of the code. For example, in Python 3.7 the function dict.popitem subtly changed from returning an arbitrary element of a dict to guaranteed LIFO order. It's pretty easy to see that this could cause issues if Jerry had implemented a function that relies on the original arbitrary behavior and Bob implemented a function that assumes the guaranteed LIFO behavior. This team of engineers would have git repos that each contain a single virtual environment (a single isolated Python environment) that allows them to manage dependencies for that "project".
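A tiny illustration of that change (the output shown is for Python 3.7 and later; earlier interpreters could return any pair):

d = {"a": 1, "b": 2, "c": 3}
print(d.popitem())  # ('c', 3) on Python 3.7+; an arbitrary key/value pair on older versions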
The data scientist has one Python installation/environment that allows them to do whatever.
The engineer has a Python installation and a bunch of environments so that they can work across multiple repos with multiple people and (hopefully) nothing breaks.
I can see where you're coming from with your question. It can seem like a lot of work to set up and maintain multiple virtual environments (venvs), especially when many of your projects might use similar or even the same packages.
However, there are some good reasons for using venvs even in cases where you might be tempted to just use a single global environment. One reason is that it can be helpful to have a clear separation between your different projects. This can be helpful in terms of organization, but it can also be helpful if you need to use different versions of packages in different projects.
If you try to share a single venv among all of your projects, it can be difficult to use different versions of packages in those projects when necessary. This is because the packages in your venv will be shared among all of the projects that use that venv. So, if you need to use a different version of a package in one project, you would need to change the version in the venv, which would then affect all of the other projects that use that venv. This can be confusing and make it difficult to keep track of what versions of packages are being used in which projects.
Another issue with sharing a single venv among all of your projects is that it can be difficult to share your code with others. This is because they would need to have access to the same environment (which contains lots of stuff unrelated to the single project you are trying to share). This can be confusing and inconvenient for them.
So, while it might seem like a lot of work to set up and maintain multiple virtual environments, there are some good reasons for doing so. In most cases, it is worth the effort in order to have a clear separation between your different projects and to avoid confusion when sharing your code with others.
It's the same principle as monouser vs. multiuser, virtualization vs. no virtualization, containers vs. no containers, monolithic apps vs. microservices, etc.: to avoid conflicts, maintain order, and easily identify a failure state, among other reasons such as scalability or portability. Apply it if necessary, always keeping the KISS philosophy in mind as well: manage complexity, don't create more of it.
And, as you have already mentioned, consider that resources are finite.
Besides, a set of projects that all share the same base of dependencies is of course not the best example of the need for separation.
In addition, the tooling has evolved with an eye to not duplicating a commonly used base of resources.
Well, there are a few advantages:
with virtual environments, you have knowledge of your project's dependencies: without them, your actual environment is going to be a yarnball of old and new libraries and dependencies, whereas with them you can reproduce the environment a thing was working in when you want to deploy it somewhere else (which may just mean running it on the new computer you just bought); a sketch of recording those versions follows this list
you're eventually going to run into something like the following issue: project alpha needs version 7 of library A, but project beta needs library B, which runs on version 3 of library A. If you install version 3, alpha will probably break, but you really need to get B working.
it's really not that complicated, and it will save you a lot of grief in the long term.
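Here is a rough sketch of what recording those dependency versions can look like with just the standard library (roughly what pip freeze > requirements.txt does); the output file name is only an example:

from importlib.metadata import distributions  # Python 3.8+

# Write every installed distribution and its version to a pin file,
# so the same environment can be rebuilt elsewhere with `pip install -r`.
with open("requirements.txt", "w") as fh:
    for dist in sorted(distributions(), key=lambda d: d.metadata["Name"].lower()):
        fh.write(f"{dist.metadata['Name']}=={dist.version}\n")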
There are several motivations for venvs, or for their moral equivalent: conda environments.
1. author a package
You create a cool "scrape my favorite site" package which graphs a timeseries of some widget product. Naturally it depends on BeautifulSoup. You happened to have html5lib 1.1 lying around due to some previous project, so you tested with that. A user downloads your scrape-widget package from pypi, happens to have lxml 4.7.1 available, and finds that scraping crashes when using that library. Wouldn't it have been better for your package to specify that the user shall run against the same deps that you tested with?
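One hypothetical way the author could communicate that is via packaging metadata; a sketch as a setup.py fragment (the package name and pins are illustrative, not a recommendation of exact versions):

from setuptools import setup

setup(
    name="scrape-widget",        # name borrowed from the example above
    version="0.1.0",
    install_requires=[
        "beautifulsoup4>=4.9,<5",
        "html5lib==1.1",         # the parser version the author actually tested with
    ],
)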
2. use a package
Same scenario, but now you're using someone's scrape-widget package. The author tested with lxml 4.7.1 but you have lxml 4.9.1, which behaves differently, and this makes the app behave differently, crashing in ways the author never saw.
3. use two packages
You want to run both scrape-frobozz-magic-widgets and scrape-acme-widget. Their authors tested using different versions of requests, and of lxml. Changing a dep changes the app behavior. You can only use one or the other, unless you're willing to re-run pip quite frequently.
4. collaborate on a team
You write code that has deps. So does your colleague. You have to coordinate things, so testing on one laptop instills confidence the test would succeed on other laptops.
5. use CI
You have a teammate named Jenkins, and want to communicate to him that you used a specific version of a dep when you saw the test succeed.
6. get a new laptop
Things were working. Then your laptop exploded, you got a new one, and you (quickly) want to see things work again. Some of your deps were downrev, due to recently released bugs and breaking changes. Reading a file full of dep versions from your github repo lets you immediately reproduce the state of the world back when things were working.

As a python package maintainer, how can I determine lowest working requirements

While it is possible to simply use pip freeze to get the current environment, it is not suitable to require an environment as bleeding edge as what I am used to.
Moreover, some developer tooling is only available on recent versions of packages (think type annotations), but is not needed by users.
My target users may want to use my package on slowly upgrading machines, and I want to get my requirements as low as possible.
For example, I cannot require better than Python 3.6 (and even then I think some users may be unable to use the package).
Similarly, I want to avoid requiring the last Numpy or Matplotlib versions.
Is there a (semi-)automatic way of determining the oldest compatible version of each dependency?
Alternatively, I can manually try to build a conda environment with old packages, but I would have to try pretty randomly.
Unfortunately, I inherited a medium-sized codebase (~10KLoC) with no automated tests yet (I plan on making some, but it takes time and sadly cannot be my priority).
The requirements were not properly defined either, so I don't know what it was run with two years ago.
Because semantic versioning is not always honored (and because it may be difficult from a developer standpoint to determine exactly what is a minor or major change for each possible user), and because only a human can parse release notes to understand what has changed, there is no simple solution.
My technical approach would be to create a virtual environment with a known working combination of Python and library versions. From there, downgrade one version at a time, one lib at a time, verifying that it still works fine (this may be difficult if the check is manual and/or slow).
My social solution would be to timebox the technical approach to take no more than a few hours. Then settle for what you have reached. Indicate in the README that the lib requirements may be overblown and that help is welcome.
Without fast automated tests in which you are confident, there is no way to automate the exploration of the N-space (each library is a dimension) to find the minimums.
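If you do get even a small smoke test in place, the "downgrade one lib at a time" loop can be semi-automated. A rough sketch, where the candidate version lists and the pytest-based check are assumptions made for illustration:

import subprocess
import sys

# Candidate versions to try, newest to oldest (made up for illustration).
CANDIDATES = {
    "numpy": ["1.21.6", "1.19.5", "1.17.5", "1.15.4"],
    "matplotlib": ["3.5.3", "3.3.4", "3.1.3"],
}

def smoke_test_passes() -> bool:
    """Run whatever fast automated check you trust; pytest is used here as a placeholder."""
    return subprocess.run([sys.executable, "-m", "pytest", "-q"]).returncode == 0

lowest_working = {}
for lib, versions in CANDIDATES.items():
    for ver in versions:
        subprocess.run([sys.executable, "-m", "pip", "install", f"{lib}=={ver}"], check=True)
        if smoke_test_passes():
            lowest_working[lib] = ver   # still fine at this (older) version
        else:
            break                        # first failure: stop downgrading this library

print("Oldest versions that still pass:", lowest_working)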

(base): conda, (env1): CPython, (env2): Some python-distro. Possible?

I have a question which arises pretty much instantly after reading all the basics about environments in python. However, I was not able to find anything on the internet.
Let's say I have an Anaconda installation and everything works fine. Everybody knows there are many, many Python distributions out there. Let's assume I want to play with some specific distribution (let's say I am interested in CPython, which can be downloaded from python.org). And: I want to have it in an environment isolated from other distributions. Like this:
Base environment: Conda environment
Environment1: CPython
Environment2: Miniconda-distribution
Environment3: Some other distribution and so on…
I want the Environment1 to be created on the basis of files published by the Python Software Foundation (python.org). Similar applies for other environments.
Can I achieve that using conda? Can I achieve that for some specific distributions only? How can I achieve that using conda? Can someone write a comprehensive answer?

Is this a good idea -- using symlinks to rationalize behavior of Python distribution tools

Short version --
Is it safe to symlink together multiple Python path / install locations so all reference the same files -- to paper over a messy history of installations using different distribution tools & options?
Longer version --
One area of Python that's mystified me longer than others :) is the diverse behavior of the various distribution tools (although this reference is pretty froody).
One of the issues is default install locations. My belief (maybe incorrect) is that it's hard to avoid using more than one distribution tool, and each has a different opinion about where packages should be installed. It's easy to end up with packages in multiple locations, and if you're using more than one Python brand / system, you then need to explicitly manage Python path if you want to use the same libraries from more than one environment / brand / system.
I now understand how to avoid this problem using the distro tools options that explicitly name the install directory to use, and / or via virtualenv. However I also need to solve the problem I currently have without rejiggering my system & re-installing a bunch of things.
My solution to this was to move the contents of all but one folder to the one remaining folder, and recreate the others as symbolic links to the one remaining folder. So far it seems to work well. Can you think of any downsides to this approach? It seemed a little too easy :)
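Before symlinking anything, it may be worth printing exactly which directories each interpreter actually searches, so you know what you would be merging; a small diagnostic you can run under each Python installation:

import site
import sys

print("interpreter:  ", sys.executable)   # which Python this is
print("prefix:       ", sys.prefix)       # its install prefix
for path in site.getsitepackages():       # where this interpreter installs packages
    print("site-packages:", path)
print("user site:    ", site.getusersitepackages())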
