How to enable DjangoIntegration in sentry-python

I am using the "new" sentry-sdk 0.9.0
The SDK is initialized as follows:
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
sentry_sdk.init(integrations=[DjangoIntegration()], dsn="...")
The events and exceptions do arrive at sentry.io. However, I'm getting the following warnings:
We recommend you update your SDK from version 0.9.0 to version 0.9.2
We recommend you enable the 'django' integration
We recommend you enable the 'tornado' integration
The first one is because I haven't upgraded to 0.9.2 yet. I'm not using tornado, so this warning surprises me. And when it comes to the django integration recommendation, I'm puzzled.
Any ideas or suggestions what I am missing?
Thanks!!

I'm the guy who implemented those alerts. OP and I had a private conversation on this and the verdict is that those alerts are just not 100% reliable and can be ignored if they make no sense.
The alerts just look at the installed packages and check whether any of them correspond to an integration that is not enabled yet. This approach has problems when you e.g. use Django and Celery, but only enable the Django integration in the web worker and the Celery integration in the background worker (as far as I understood, this is not what OP ran into, though).
I think the way forward is to make those alerts permanently dismissable, because I don't see a way right now to make them accurate. The motivation is to inform people about integrations they might want to use, not to tell them what they have to do.
That said, I am interested in cases where those alerts show nonsense. Feel free to post here or write me at markus#sentry.io.

In your case you need to install sentry-sdk[django]:
pip3 install sentry-sdk[django]
If you get the same error with Flask, then:
pip3 install sentry-sdk[flask]
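For completeness, a minimal sketch of the full setup (the DSN stays a placeholder; put the init call wherever your app starts up, e.g. Django's settings.py):

pip3 install "sentry-sdk[django]"

# settings.py (or wherever your app initializes)
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="...",  # your project's DSN
    integrations=[DjangoIntegration()],
)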

Related

How should I deploy a web application to Debian?

Ideally I'd like to build a package to deploy to Debian. The installation process would check that the system has the required dependencies installed, as well as configure cron jobs, set up users, etc.
I've tried googling around and I understand a .deb is the format I can distribute in, but that is as far as I got, since I'm now getting confused by the tooling I need to get up to speed with. The other option is to just git clone on the server and configure the environment manually… but that's not preferable for obvious reasons.
How can I get started with building a Debian package, and is that the right direction for deploying web applications? If anyone could point me in the right direction tools-wise, and perhaps to a tutorial, that would be massively appreciated :) Also, if you advise just taking the simple route with git, I'm happy to take that advice as well if you explain why. If it makes any difference, I'm deploying one Node.js and one Python web application.
You can for sure package everything as a Linux application, for example using PyInstaller for your Python webapp.
Besides that, it depends on your use case.
I will focus on the second part of your question,
How can I get started with building a Debian package and is that the right direction for deploying web applications?
as that seems to be what you are after, given that your question already considers other alternatives to a .deb.
I want to deploy 1-2 websites on my Linux server
In this case, I'd say manually git clone and configure everything. It's totally fine when you know there won't be much more running on the server, and it's pretty hassle-free.
Why spend time packaging when no one will ever need the package again after you've installed it on your server?
I want to distribute my webapps to others on Debian
Here a .deb would make total sense. Plex Media Server and other applications, for example, are shipped like this.
If the official Debian wiki is too abstract, there are also more hands-on guides to get you started quickly. You could also grab other .deb packages and extract them to see what they are made up of. You mentioned one of your websites uses Python, so I suspect it might be Flask or Django. If it's Django, there is an example repository you might want to check out.
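To give a feel for the tooling: the simplest binary .deb is just a directory tree (say mywebapp_1.0.0/) containing a DEBIAN/control file, with the app's files under e.g. opt/mywebapp/, built with dpkg-deb. A minimal, purely illustrative control file (package name, fields, and paths are all made up):

Package: mywebapp
Version: 1.0.0
Section: web
Priority: optional
Architecture: all
Depends: python3
Maintainer: Your Name <you@example.com>
Description: Example web application package
 Installs the application under /opt/mywebapp.

Then build it with:

dpkg-deb --build mywebapp_1.0.0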
I want to run a lot of stuff on my server / distribute to other devs and platforms / or scale soon
In this case I would make the webapps into Docker containers. They are easy to build, share, and deploy. On top of that, you can easily bundle all dependencies and scripts to make sure everything is set up right. They are also easy to run and stop, so you have a simple "on/off" switch if your server is running low on resources while you want to run something else. I highly favour this solution, as it also allows you to easily control what is running on which IP as you deploy more and more applications to your server. But, as you pointed out, it runs with a bit of overhead and is not the best solution on weak hardware.
Also, if you know for sure what will be running on the server long-term and don't need the flexibility, I would probably skip Docker as well.
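For illustration, a minimal Dockerfile for the Python app could look like this (the base image, port, start command, and the assumption of a requirements.txt are placeholders for whatever your app actually uses):

FROM python:3
WORKDIR /app
# install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
# placeholder start command; replace with your app's entry point
CMD ["python", "app.py"]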

confused about Solr Django and Haystack installation

I'm following the Haystack tutorial for using Solr in Django. I downloaded Haystack, added it to my installed apps, and I'd like to check my development server to make sure my apps are still working. But when I go to my localhost it says
A server error occurred. Please contact the administrator.
and in my terminal it says
raise MissingDependency("The 'solr' backend requires the installation of 'pysolr'. Please refer to the documentation.")
and when I go to the pysolr documentation, it seems as if it's meant to be used without Haystack. There is no mention of pysolr in the Haystack docs and no mention of Haystack in the pysolr docs. Not only that, but pysolr gives an example that says
# If on Python 2.X
I'm using Python 3. I understand there's a learning curve, but is there anything that has all the resources in one post? Or must I just trial-and-error it out? And can it also be kind of up to date? 2.x to 3.5 is a big gap. There are surprisingly no Google or Vimeo videos on this. Any and all help is welcome. I know anything worth having or knowing isn't easy to come by, but sheesh. The few sites I've seen also have the URL like this in urls.py:
(r'^search/', include('haystack.urls')),
but if I do it like that I get an error
regex_pattern = pattern.regex.pattern
AttributeError: 'tuple' object has no attribute 'regex'
This may seem like nothing to someone experienced, but to the untrained it can lead to a lot of confusion about the proper syntax.
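For what it's worth, that AttributeError is what newer Django versions raise for old-style tuple URL patterns; a sketch of the equivalent on Django 1.10+ would be:

# urls.py
from django.conf.urls import include, url

urlpatterns = [
    # wrap the pattern in url() instead of using a bare tuple
    url(r'^search/', include('haystack.urls')),
]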
Haystack uses different adapters to talk to different backend services, such as ElasticSearch or Solr.
The Solr adapter uses pysolr:
You’ll also need a Solr binding, pysolr. The official pysolr package, distributed via PyPI, is the best version to use (2.1.0+). Place pysolr.py somewhere on your PYTHONPATH.
You shouldn't have to do anything with pysolr yourself, as that is only used inside the haystack Solr adapter. Be sure to follow the tutorial for setting up the schema and getting the indexing running.
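For reference, the Solr backend setup from the Haystack tutorial boils down to a settings.py entry along these lines (the URL here is a typical local default, not necessarily yours):

# settings.py
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.solr_backend.SolrEngine',
        'URL': 'http://127.0.0.1:8983/solr',  # point this at your Solr core
    },
}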
Just pip install pysolr and move on with your day.
Personally, I used (in my virtualenv):
python3 -m pip install pysolr

Celery versus djcelery

I am confused between the differences between these two applications while trying to setup celery on my django project.
What are the differences between the two, if any? When reading tutorials online I see them both used, and I'm not sure which would be best for me. It appears that djcelery is kind of like celery but tailored for Django? But celery doesn't need to be included in installed apps, while djcelery does.
Thank you
django-celery was a project that provided Celery integration for Django, but it is no longer required.
You don't have to install django-celery anymore. Since Celery 3.1, Django is supported out of the box.
So to install celery you can use pip:
pip install -U celery
This is a note from Celery's First Steps with Django tutorial:
Note:
Previous versions of Celery required a separate library to work with Django, but since 3.1 this is no longer the case. Django is supported out of the box now, so this document only contains a basic way to integrate Celery and Django. You will use the same API as non-Django users, so it's recommended that you read the First Steps with Celery tutorial first and come back to this tutorial. When you have a working example you can continue to the Next Steps guide.
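Following that tutorial, a minimal proj/celery.py looks roughly like this (myproject is a placeholder for your actual project package):

import os

from celery import Celery

# placeholder: replace "myproject" with your Django project package
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')
# read CELERY_* settings from Django's settings.py
app.config_from_object('django.conf:settings', namespace='CELERY')
# auto-discover tasks.py modules in installed apps
app.autodiscover_tasks()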
When using Django, you should install django-celery from PyPI. Celery will be installed as a dependency.
djcelery hooks your Django project in with Celery, which is a more general tool used with a variety of application stacks.
Here is Celery's getting started with Django guide, which describes installing django-celery and setting up your first tasks.
The same note appears in the Celery docs on configuring your Django project to use Celery: https://docs.celeryproject.org/en/latest/django/first-steps-with-django.html#configuring-your-django-project-to-use-celery

How would I go about plugging mongoengine into pyramid?

I've created a basic app using the pyramid_mongodb scaffold... however, I'd like to include mongoengine. I'm wondering what I should actually keep from the scaffold's code.
This is not an answer regarding the scaffold, but I wouldn't recommend using it, since it isn't really usable with root_factory and so on, and the subscribers aren't really needed either.
I wrote an addon for pyramid. It's called pyramid_mongo.
Documentation:
http://packages.python.org/pyramid_mongo/
Github:
https://github.com/llacroix/pyramid_mongo
I saw your question today and felt it could be a good addition to the plugin.
I just pushed it to GitHub, so you need to clone it from there for now; installing with pip will give you the old version without support for mongoengine.
In other words, in your config, do everything like in the docs and add something like:
mongo.mongoengine=true
It will attach mongo from the config to mongoengine. All the other APIs will work with or without mongoengine, and mongoengine should work. I just added it today; it doesn't support multiple connections or multiple dbs yet, though I can add support for multiple dbs too. But I feel mongoengine may do some things on its own that could conflict with my plugin, like authorization.
Once I write tests, I'll push it to PyPI and it will be possible to install with pip or easy_install. For now, pull it from GitHub.
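Putting that together, the relevant part of a Pyramid .ini config might look something like this; mongo.mongoengine is the option described above, while the connection keys (mongo.uri, mongo.db) and app name are illustrative assumptions — check the pyramid_mongo docs for the exact names:

[app:main]
use = egg:myapp
# illustrative connection settings; see the pyramid_mongo docs for exact keys
mongo.uri = mongodb://localhost:27017
mongo.db = myapp
# enable the mongoengine bridge described above
mongo.mongoengine = true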

Caching Python requirements for production deployments

I'm building various Python-based projects that use pip/buildout to install dependencies. But I don't like the idea of someone deleting a GitHub project and crippling my apps, or a network outage meaning I can't perform a deployment.
How do other people solve this?
I've got various ideas, but I think the one that sounds most promising is some kind of caching proxy server. I'd point pip at this internal proxy server, which would cache a copy of each downloaded project and periodically check for updates (if there's a net connection) before serving the cached versions.
Does anything like this already exist?
Use case:
I have a project which I deploy to web server 1. I add new features with a remote dependency, and when I come to update the production web server, PyPI is down, so I can't deploy. Or perhaps when I come to set up a new web server, a dependency has disappeared from GitHub or wherever.
How can I make it so my deployments/dev environments can always be brought up regardless of what happens in the wider world?
Also, when I deploy, I won't deploy over the top of existing code. Rather, I'll build a new virtualenv and switch over to it, so I can roll back if anything goes wrong (see the sketch after this question). So each time I deploy I'll need to rebuild my environment, and the dependencies will need to exist.
So I'm looking for a solution that will insulate me against short-term network outages to servers hosting dependencies, as well as guarding against projects being deleted.
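As an aside, the build-a-new-virtualenv-and-switch flow described in the question can be as simple as this sketch (paths and names are placeholders, and a modern Python is assumed; older setups would use the virtualenv tool instead of venv):

# build the new environment alongside the old one
python3 -m venv /srv/app/env-new
/srv/app/env-new/bin/pip install -r requirements.txt

# atomically repoint the "current" symlink; repoint it back to roll back
ln -sfn /srv/app/env-new /srv/app/current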
You should keep a "reference copy" of the projects on which you depend.
If someone removes the project from GitHub (and PyPI and all the mirrors, and every other site on the net), then you still have the source and can distribute it yourself.
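One concrete way to keep that reference copy, with a reasonably modern pip (older releases spelled this differently):

# snapshot every dependency into a local vendor/ directory
pip download -d vendor/ -r requirements.txt

# later, install entirely from the local copies, with no network access
pip install --no-index --find-links=vendor/ -r requirements.txt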
I have exactly the same requirements, and also use buildout to manage my deployments. I try not to install ANY of my package dependencies system-wide; I let buildout install eggs for all of them into my buildout. That way, if I depend on a newer version of some package in rev N+1 of my project, and at go-live time N+1 falls on its face, I can roll back to N and automatically get the package dependencies that N worked with.
We run a private eggbasket server, and configure buildout to fetch packages only from that. Server contents were initialized by allowing buildout to grab eggs from the network one time, then copying the downloaded eggs.
This way, upgrades to each package are totally under control, and I can ensure that two successive buildouts of the same snapshot of my code will build out the same thing. When I want to upgrade everything, I let buildout fetch the most recent versions again, test test test, then copy my eggs to the eggbasket server to go into production mode.
This is what I'm looking for:
http://pypi.python.org/pypi/collective.eggproxy
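Once a proxy like that is running, pointing pip at it is a one-line config change (host and port are placeholders):

# ~/.pip/pip.conf (or pass --index-url / -i on the command line)
[global]
index-url = http://localhost:8888/simple/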
