Patch Django Site Package From a Pull Request Using Pip

I need to apply pull request 51 to a locally installed site package in my Django project, but I am not sure how to do this without patching the local library directly.
Is there a way to reference the pull request in requirements.txt or in the git config?

You shouldn't modify installed packages.
Fork the project and apply the PR to your fork, then point requirements.txt at the fork.
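For example, a requirements.txt line pointing at such a fork might look like this (the fork URL, branch name and egg name here are placeholders for illustration):
git+https://github.com/your-user/the-package.git@pr-51-changes#egg=the-package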

Related

How do I ensure pip gets a package from an internal pypi?

I have an application with a requirements.txt which includes a number of third party libraries along with one internal package which must be downloaded from a private pypi instance. Something like:
boto3
flask
flask-restplus
gunicorn
an_internal_package
The problem is that an_internal_package is named something quite common and occludes a package already available on the global pypi. For example, let's call it twisted. The problem I've run into is that setting --extra-index-url within requirements.txt seems to still grab twisted from the global pypi.
--extra-index-url=https://some.internal.pypi.corp.lan
boto3
flask
flask-restplus
gunicorn
twisted # actually an internal package
How can I indicate that twisted should be loaded exclusively from the private pypi and not from the global one?
You could link directly to the package on your internal index instead:
boto3
flask
flask-restplus
gunicorn
https://some.internal.pypi.corp.lan/simple/twisted/Twisted-19.2.0.tar.bz2
This has the effect of pinning the dependency, but pinning is generally considered best practice anyway.
You can also solve this at the index level, though it is a little tricky, since you need to handle both the private pypi and the main pypi.
Instead of --extra-index-url, you should use --index-url, so that pip consults only your private index. However, I recommend reading through the linked page first.
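A minimal sketch of the --index-url variant (the index URL is the one from the question; note that pip will then consult only this index, so it must also serve or proxy the public packages you need):
--index-url=https://some.internal.pypi.corp.lan
boto3
flask
flask-restplus
gunicorn
twisted  # resolved only against the private index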

Python Development in multiple repositories

We are trying to find the best way to approach that problem.
Say I work in a Python environment, with pip & setuptools.
I work in a normal git flow, or so I hope.
So:
Move to feature branch in some app, make changes.
Move to feature branch in a dependent lib - Develop thing.
Point the app at the feature branch of the dependent lib, using an "-e git+ssh" requirement (see the example after this list).
Create a Pull Request.
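For step 3, such a requirements.txt entry might look like this (the repository URL, branch and egg names are placeholders):
-e git+ssh://git@github.com/your-org/dependent-lib.git@feature-branch#egg=dependent-lib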
When this is all done, I want to merge everything to master, but I can't without making yet another final change so that the app's requirements.txt (step 3 above) points back at the master branch of the dependent lib.
Is there any good workflow for "micro services" or multiple dependent codebases in Python that we are missing?
Python application workflow from development to deployment
It looks like you are searching for a workflow for developing Python applications using git.
The following description applies to any kind of Python based application,
not only to Pyramid based web ones.
Requirements
Situation:
developing a Python based solution using the Pyramid web framework
there are multiple Python packages participating in the final solution; packages may depend on each other
some packages come from the public pypi, others might be private ones
source code is controlled by git
Expectation:
the proposed working style shall allow:
pull requests
working in situations where packages depend on each other
making sure deployments are repeatable
Proposed solution
Concepts:
even the Pyramid application is released as a versioned package
for the private pypi, use devpi-server with volatile and release indexes
for package creation, use pbr
use tox for package unit testing
test, before you release new package version
test, before you deploy
keep deployment configuration separate from the application package
Pyramid web app as a package
Pyramid allows creating applications in the form of a Python package. In
fact, the whole initial tutorial (containing 21 stages) uses exactly this
approach.
Although you can run the application in develop mode, you do not have to do
so in production. Running from a released package is easy.
Pyramid uses nice .ini configuration files. Keep development.ini in the
package repository, as it is an integral part of development.
On the other hand, make sure production .ini files are not present there, as
they should not mix with the application and belong with the deployment stuff.
To make deployment easier, add to your package a command which prints a
typical deployment configuration to stdout. Name the script e.g. myapp_gen_ini.
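A minimal sketch of such a command, assuming it is wired up as a console_scripts entry point; the ini content and the waitress server are placeholders to adapt to your application:
# myapp/scripts.py - sketch of the configuration generator
import sys

TEMPLATE = """\
[app:main]
use = egg:myapp

[server:main]
use = egg:waitress#main
listen = *:8080
"""

def gen_ini():
    # print a typical production configuration to stdout,
    # so it can be captured with: $ myapp_gen_ini > production.ini
    sys.stdout.write(TEMPLATE)
The script would then be registered in setup.cfg (pbr style):
[entry_points]
console_scripts =
    myapp_gen_ini = myapp.scripts:gen_ini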
Write unittests and configure tox.ini to run them.
Keep deployment stuff separate from application
Mixing application code with deployment configuration will cause problems the
moment you have to install a second instance (as you are likely to change at
least one line of your configuration).
In deployment repository:
keep requirements.txt here, listing the application package and the other
packages needed for production. Be sure to specify exact package versions, at
least for your application package.
keep here production.ini file. If you have more deployments, use one branch per deployment.
put here tox.ini
tox.ini shall have the following content:
[tox]
envlist = py27
# use py34 or others, if you prefer

[testenv]
commands =
deps =
    -rrequirements.txt
Expected use of the deployment repository is:
clone it to the server
run tox; this will create the virtualenv .tox/py27
activate the virtualenv with $ source .tox/py27/bin/activate
if production.ini does not exist in the repo yet, run the command
$ myapp_gen_ini > production.ini to generate a template for the production
configuration
edit the production.ini as needed.
test that it works.
commit the production.ini changes to the repository
do other stuff needed to deploy the app (configure web server, supervisord etc.)
For setup.py use pbr package
To make package creation simpler, and to keep package versioning tied to git
repository tags, use pbr. You will end up with setup.py being only three lines
long, and all the relevant metadata will be specified in setup.cfg in the form
of an ini file.
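The three-line setup.py is the standard pbr boilerplate:
import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True,
)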
Before you build for the first time, you have to have some files committed in
the git repository, otherwise pbr will complain. As you use git, this shall be
no problem.
To assign a new package version, tag it with $ git tag -a 0.2.0 and build. This
will create the package with version 0.2.0.
As a bonus, it will create AUTHORS and ChangeLog based on your commit
messages. Keep these files in .gitignore and use them to create AUTHORS.rst
and ChangeLog.rst manually (based on autogenerated content).
When you push your commits to another git repository, do not forget to push the tags too.
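For example (the tag name is the one from above; the -m message and the origin remote are assumptions):
$ git tag -a 0.2.0 -m "release 0.2.0"
$ git push origin master --tags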
Use devpi-server as private pypi
devpi-server is an excellent private pypi, which brings you the following advantages:
having a private pypi at all
cached public pypi packages
faster builds of virtual environments (as it installs from cached packages)
being able to use pip even without internet connectivity
pushing between various types of package indexes: one for development
(published versions can change here), one for deployment (released versions will not change here).
a simple unit test run for anyone having access to it; it will even collect
the results and make them visible via a web page.
For the described workflow it serves as the repository of deployable Python packages.
The commands to use are:
$ devpi upload to upload the developed package to the server
$ devpi test <package_name> to download, install, run the unit tests,
publish the test results to devpi-server and clean up the temporary installation.
$ devpi push ... to push a released package to the proper index on devpi-server or even to the public pypi.
Note that at any time it is easy to have the pip command configured to consume
packages from a selected index on the devpi server for $ pip install <package>.
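For example, pointing pip at a devpi index might look like this (the host, user and index names are placeholders):
# in ~/.config/pip/pip.conf (or ~/.pip/pip.conf on older setups)
[global]
index-url = https://devpi.example.corp/myuser/dev/+simple/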
devpi-server is also ready for use in continuous integration testing.
How git fits into this workflow
The described workflow is not bound to a particular style of using git.
On the other hand, git plays its role in the following situations:
commit: the commit message becomes part of the autogenerated ChangeLog
tag: defines versions (recognized by setup.py thanks to pbr).
As git is distributed, with multiple repositories, branches etc.,
devpi-server allows a similar distribution, as each user can have their own
working index to publish to. Anyway, in the end there will be one git repository
with a master branch to use, and on devpi-server there will be one agreed
production index.
Summary
The described process is not simple, but its complexity matches the complexity of the task.
It is based on tools:
tox
devpi-server
pbr (Python package)
git
The proposed solution allows:
managing Python packages, incl. release management
unit testing and continuous integration testing
any style of using git
deployment and development with clearly defined scopes and interactions
Your question assumes multiple repositories. The proposed solution decouples multiple repositories by means of well managed package versions, published to devpi-server.
We ended up using git dependencies and not devpi.
I think that when git is used, there is no need to add another package repository, as long as pip can install directly from git.
The core issue, where the branch code differs from what is merged to master (because of a second-level dependency), is not solved yet; instead we work around it by trying to remove that second-level dependency.

Aldryn - DjangoCMS install addons not present in "Manage Addons"

I am quite a Django n00b, and figured using Aldryn for my first real Django site would be a good idea!
I have successfully installed and implemented Aldryn News & Blog.
Now I would like to install Aldryn Search, which is not accessible from the "Manage Addons" section of the Aldryn control panel.
I am very confused about how to install an addon like Aldryn Search that is not accessible from within "Manage Addons". Should I somehow use "Add custom Addon" and register the package as a new custom addon?
Or should I create a local development environment, somehow install the addon there, and upload it? (Does a tutorial for this exist?)
Thank you!
There are various ways in which to install arbitrary Django packages into an Aldryn project.
The quick, easy way
The easiest, quickest way is simply to place the module(s) you need into the project directory, thus placing them on the Python path. You then need to make sure that your settings.py, urls.py and so on are appropriately configured. Then you can push these changes to Aldryn itself. This is described in Adding a new application to your Aldryn project - the quick and easy way.
The create-an-Addon way
A more involved way to do it, that has benefits for long-term use and re-use, is to turn the package into a private or public Aldryn Addon. This is described in Developing an Addon application for Aldryn.
A middle way
Another way is somewhere between the two. Add the package to the project's requirements.in - you can do this in various ways, for example:
# standard install from PyPI
some-package==1.2.3
# install from an archive
https://example.com/some-package-1.2.3.tar.gz#egg=some-package==1.2.3
# install from a GitHub repository
git+https://github.com/some-org/some-package.git#egg=some-package==1.2.3
You will need to make sure that your settings.py, urls.py and so on are appropriately configured.
Run aldryn project update. This in effect redeploys your project locally, with one exception:
unlike on Aldryn, you need to run any migrations manually, with docker-compose run --rm web python manage.py migrate.
Finally, git add, commit and push your changes to your project, and redeploy it on Aldryn.
This method isn't yet documented in the Aldryn support system, but will be soon.
That's a very valid question in my opinion, since add-ons are wrapped in an additional directory that makes the Django app inside invisible to Django's INSTALLED_APPS.
If you add them to addons-dev, they are ignored by git.
A possible solution (even if maybe not the cleanest) would be to unignore addons-dev by adding !/addons-dev to the .gitignore in the project's root directory, and then to add -e /app/addons-dev/aldryn-package-name to requirements.in (outside the section generated/overwritten by Aldryn). That is what aldryn project develop aldryn-package-name does (for the local environment).
Similarly, if you have a git repository that contains the code (like aldryn-search) you would use -e git+https://github.com/aldryn/aldryn-search.git in requirements.in
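Putting the pieces together (the package names are the ones from the examples above; the #egg fragment is how pip names an editable requirement):
# in .gitignore (project root): stop ignoring addons-dev
!/addons-dev

# in requirements.in, outside the section generated/overwritten by Aldryn:
-e /app/addons-dev/aldryn-package-name
# or, from a git repository:
-e git+https://github.com/aldryn/aldryn-search.git#egg=aldryn-search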
In case you need to apply changes to the addon code, best practice would be to fork the original repository and then check out your fork as per the above instructions.

Deploying websites in django virtual machine

Sorry, I'm new to this specific topic.
I have a website implemented in Django and AskBot; it also has a database (PostgreSQL). I want to create a deployment package which can be distributed to any customer, such that each customer can have their own server. Take into consideration that the deployment package should be platform independent, so it should work on all operating systems.
Can you tell me what are the available tools to achieve this?
virtualenv is a really good tool, but I think Vagrant is what you're looking for.
https://www.vagrantup.com/
It should enable you to easily set up your system regardless of the platform; it's free as well and quite well documented. I'd suggest you give it a look!
From my point of view, the database should always be created before deployment, and the database connection information must be added to settings.py.
For the application itself, I think virtualenv can be very helpful in these cases, together with a requirements.txt.
You run the application in your virtual environment and then export your dependencies using
pip freeze > requirements.txt
Then on the new server you create the database, insert the configuration into your settings, and install the dependencies
pip install -r /path/to/requirements.txt
Run the migrations, and you are done.
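A sketch of the whole sequence on the new server (the virtualenv name and paths are placeholders):
$ virtualenv venv
$ source venv/bin/activate
$ pip install -r /path/to/requirements.txt
$ python manage.py migrate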

Using git post-receive hook to deploy python application in virtualenv

My goal is to be able to deploy a Django application to one of two environments (DEV or PROD) based on the Git branch that was committed and pushed to a repository. This repository is hosted on the same server as the Django applications are being run on.
Right now, I have two virtualenvs set up. One for each environment. They are identical. I envision them only changing if the requirements.txt is modified in my repository.
I've seen tutorials around the internet that offer deployments via git by hosting the repository directly in the location where the application will be deployed. This doesn't work for my architecture. I'm using RhodeCode to host/manage the repository. I'd like to be able to use a post-receive (or other if it's more appropriate) hook to trigger the update to the appropriate environment.
Something similar to this answer will allow me to narrow down which environment I want to focus on.
When I put the source activate command in an external script (i.e. my hook), the script stops at that command. The virtualenv is activated appropriately, but any further actions in the script (e.g. pip install -r requirements.txt or ./manage.py migrate) aren't executed.
My question is: how can I have that hook run inside the associated virtualenv? Or, if it is already running, update it appropriately with the new requirements.txt, South migrations, and application code?
Is this workflow overly complicated? Theoretically, it should be as simple as a git push to the appropriate branch.
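A common way around the "source activate stops the script" problem described above is to call the virtualenv's own executables by absolute path, so no activation step is needed at all. A sketch, with all paths and branch names as placeholders:
#!/bin/sh
# post-receive hook sketch: pick the target environment from the pushed branch
read oldrev newrev refname
case "$refname" in
    refs/heads/master) VENV=/srv/venvs/prod; APP=/srv/apps/prod ;;
    refs/heads/develop) VENV=/srv/venvs/dev; APP=/srv/apps/dev ;;
    *) exit 0 ;;
esac
# the virtualenv's own pip/python work without any "source activate"
"$VENV/bin/pip" install -r "$APP/requirements.txt"
"$VENV/bin/python" "$APP/manage.py" migrate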
