Maybe it's a silly question, but I didn't find much while googling around.
So I'm in the process of turning my development environment into a deployment environment. I connected Django and nginx using uWSGI and put both in Docker containers... so far, no problem.
But I'm using django-rq, so I need a worker process. In all these nice examples about deploying Django, I didn't find much about deploying django-rq. All I found was "create a Docker container and use manage.py", like this:
CMD python manage.py rqworker [queue1] [queue2]
Really? Should I just start the worker like this? I thought manage.py was just for testing!?
You can create a systemd service in Ubuntu then enable and start the service.
FYR: https://github.com/rq/django-rq#deploying-on-ubuntu
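A minimal sketch of such a unit, loosely following the linked README (the user, paths, and queue name below are placeholders for your own setup):

[Unit]
Description=django-rq worker
After=network.target

[Service]
# Placeholder user and paths; point them at your project directory and its virtualenv
User=www-data
WorkingDirectory=/srv/myproject
ExecStart=/srv/myproject/venv/bin/python manage.py rqworker default
Restart=always

[Install]
WantedBy=multi-user.target

Then enable and start it with sudo systemctl enable --now rqworker.service (assuming you saved it as /etc/systemd/system/rqworker.service); systemd keeps the worker running and restarts it if it dies.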
I have a very simple API (2 routes) which just handles GET requests and doesn't need any authentication or anything for now.
I want to know the best and most appropriate way to deploy my API for production. I am unable to use Docker and would like to do it the server way.
So I have a few questions:
The FastAPI documentation says you can do uvicorn main:app --host 0.0.0.0 --port 80, but is that the correct way for production? Do I just enter that command, and will the API automatically start listening on the server's IP address? Also, is this method efficient, and will it be able to handle all the requests? Or what would I change to make it faster?
When should I use a process manager?
When should I use multiple workers? And what benefits do they provide?
When should I use Gunicorn as mentioned here? https://www.uvicorn.org/deployment/#gunicorn
I am just a little confused about how to deploy this, because one article says one thing and another says something else.
If for whatever reason you don't want to use Docker CE, the best way is to create a systemd service unit for your application, so that every time it goes down systemd will try to restart it, and run it with a server such as uWSGI or Gunicorn.
This link can help with systemd services too:
https://blog.miguelgrinberg.com/post/running-a-flask-application-as-a-service-with-systemd
P.S. Note that the way you serve with Gunicorn isn't really tied to Docker or a systemd service; for both approaches you need to configure Gunicorn.
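For illustration, a minimal systemd unit along those lines for a FastAPI app served by Gunicorn might look like this (the user, paths, module name main:app, worker count, and port are assumptions; adapt them to your project):

[Unit]
Description=Gunicorn serving a FastAPI application
After=network.target

[Service]
# Placeholder user and paths; point them at your project and its virtualenv
User=ubuntu
WorkingDirectory=/home/ubuntu/app
ExecStart=/home/ubuntu/app/venv/bin/gunicorn main:app --workers 4 --worker-class uvicorn.workers.UvicornWorker --bind 127.0.0.1:8000
Restart=always

[Install]
WantedBy=multi-user.target

Save it as something like /etc/systemd/system/myapi.service (the name is arbitrary), then sudo systemctl enable --now myapi.service; systemd will restart the process if it crashes.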
To answer your question:
How can I deploy FastAPI manually on an Ubuntu server?
You can check out this video tutorial: Deploy FastAPI on Ubuntu.
The deployment has the following architecture within a single Ubuntu VM (the architectural diagram in the original answer shows a single-VM deployment).
Within the Ubuntu VM, there are two systemd services up and running: caddy.service and gunicorn.service. The gunicorn.service runs the FastAPI application using the uvicorn.workers.UvicornWorker worker class, and the caddy.service acts as a reverse proxy in front of the application running on Gunicorn. In addition, the FastAPI application communicates with a PostgreSQL database server asynchronously with the help of the databases package, which provides simple asyncio support for PostgreSQL.
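As a rough sketch of the reverse-proxy side, the Caddyfile behind caddy.service could be as small as this (the domain and the Gunicorn port are assumptions, not the tutorial's exact config):

example.com {
    # forward all public requests to the Gunicorn/Uvicorn process on this VM
    reverse_proxy 127.0.0.1:8000
}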
I have a Python/Flask project (an API) that contains a few workers that must run continuously. They connect to Redis through an outside provider (https://redislabs.com/). I couldn't find out how to configure OpenShift to run my workers. With Heroku, it was as simple as:
web: gunicorn wsgi --log-file -
postsearch: python manage.py worker --queue post-search
statuses: python manage.py worker --queue statuses
message: python manage.py worker --queue message
invoice: python manage.py worker --queue invoice
But for OpenShift, despite a lot of googling, I was not able to find anything to help me. Ideally, I would avoid deploying my application to each gear. How can I run multiple workers with OpenShift?
Taken from Getting Started with OpenShift by Katie J. Miller and Steven Pousty:
Cartridge
To get a gear to do anything, you need to add a cartridge. Cartridges are the plugins that house the framework or components that can be used to create and run an application. One or more cartridges run on each gear, and the same cartridge can run on many gears for clustering or scaling. There are two kinds of cartridges:
Standalone
These are the languages or application servers that are set up to serve your web content, such as JBoss, Tomcat, Python, or Node.js. Having one of these cartridges is sufficient to run an application.
Embedded
An embedded cartridge provides functionality to enhance your application, such as a database or Cron, but it cannot be used on its own to create an application.
TL;DR: you must use cartridges to run a worker process. The documentation can be found here and here, the community-maintained examples here, and a series of blog posts begins here.
A cartridge is a bunch of files plus a manifest that lets OpenShift know how to run the cartridge and how to resolve its dependencies.
But let's build something. Create a Django/Python app (the result is shown in a screenshot in the original answer).
Now install your (custom) cartridge from the link at the bottom or from the command-line tool; you can use the link to the cartridge repository.
OpenShift's integration with external services is done by configuring the relevant environment variables as explained at: https://developers.openshift.com/external-services/index.html#setting-environment-variables
Heroku's apps rely on a REDISCLOUD_URL env var that is automatically provisioned - you'll need to set up something similar in your OpenShift deployment with the applicable information about your database from the service's dashboard.
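For example, with the rhc client you could set an equivalent variable on your app (the variable name mirrors Heroku's REDISCLOUD_URL convention; the URL and app name are placeholders):

rhc env set REDISCLOUD_URL=redis://user:password@your-redislabs-host:port -a yourapp

Your worker processes then read the connection string from the environment, just as they would on Heroku.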
My web host does not have Python and I am trying to build a machine learning application. I know that Heroku lets you use Python. I was wondering if I could use Heroku as a Python server? As in, I would let Heroku do all of the Python processing for me and use my regular domain for everything else.
Yes, and it may be a pain at first, but once it is set up I would say Heroku is the easiest platform to continually deploy to. However, it is not intuitive - don't just try to 'take a stab' at it; follow a tutorial and try to understand why Heroku works the way it does.
Following the docs is a good bet; Heroku has great documentation for the most part.
Here's the generalized workflow for deploying to Heroku:
Locally, create your project and use virtualenv to install/manage libraries.
Initialize a git repository in the base dir for your Python project; create a Heroku remote (heroku create).
Create a Procfile for Heroku to use when starting gunicorn (or see the options for using waitress/etc.); this is used by Heroku to start your process. A sketch of such a Procfile follows this list.
cd to your base dir; freeze your virtualenv (pip freeze > requirements.txt) and add/commit requirements.txt. This tells Heroku what packages need to be installed, a requirement for your deployment to work. If you are trying to run a Python project and there are required packages missing, the app will be unable to start and Heroku will display an Internal Server Error.
Whenever changes are made, git commit your changes and git push heroku master to push all commits to Heroku. This will cause Heroku to restart the server application with your updated deployment. If there's a failure, you can use heroku rollback to just return to your last deployment.
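For illustration, a minimal Procfile for a WSGI app served by gunicorn might contain a single line like this (the module path myproject.wsgi is an assumption; use your own):

web: gunicorn myproject.wsgi --log-file -

and the matching commands from the project's base dir would roughly be:

pip freeze > requirements.txt
git add Procfile requirements.txt
git commit -m "prepare Heroku deployment"
git push heroku master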
In reality, it's not a pain in the ass, just particular. Knowing the rules of Heroku, you are able to manage your deployment with command-line git commands with ease.
One caveat - if deploying Django or Flask applications etc., there are peculiarities to account for; specifically, non-project files (including assets) should NOT be stored on Heroku, as Heroku periodically restarts your 'dyno' (server instance(s)), loading the whole project from the latest push to Heroku. With Django and Flask, this typically means serving assets/static/media files from an Amazon S3 bucket.
That being said, if you use virtualenv properly, provision your databases, and follow Heroku's practices for serving files and committing updates, it is (imho) the absolute best platform out there for ease of use, reliable uptime, and well-oiled rolling deployments.
One last tip - if you are creating a Django app, I'd suggest starting your project from this boilerplate. I have a custom one I use for new projects and can start and publish a project in minutes.
Yes, you can use Heroku as a Python server. I put a Python Flask server on Heroku, but it was a pain: Heroku seemed to have some difficulties, and there was a lot of conflicting advice on getting around them. I eventually got it working; I can't remember which web page had the ultimate answer, but you might look at this one: http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xviii-deployment-on-the-heroku-cloud
Have you built your Python server on Heroku using Twisted?
I don't know if this can help you.
I see the doc 'Getting Started on Heroku with Python' is about Django.
The docs make it clear that Heroku can run Twisted:
Pure Python applications, such as headless processes and evented web frameworks like Twisted, are fully supported.
django-twisted-server uses Twisted within Django, but it isn't deployed on Heroku.
We have an application written in Django. We are trying a deployment scenario that will have one Docker container running Apache, a second running Django, and a third running the DB server. In most of the documentation it is mentioned that Apache and Django will sit on the same machine (Django in a virtualenv, to be precise). Is there any way we can have Apache talk to mod_wsgi on a remote machine that hosts the Django application?
mod_wsgi would be the wrong technology if you want to do this. It runs as part of Apache itself, so there literally is nothing to run in the Django container.
A better way would be to use gunicorn to run Django in one container, and have another container running the web server as a proxy - you could use Apache for this, although it's more common to use nginx.
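As a rough sketch, the proxy container's nginx config only needs to forward requests to the Django container (the upstream hostname django and port 8000 are assumptions based on a typical Gunicorn setup):

server {
    listen 80;

    location / {
        # "django" is the hostname of the container running Gunicorn; 8000 is the port Gunicorn binds to
        proxy_pass http://django:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}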
I have a small infrastructure plan that does not include Django. But, because of my experience with Django, I really like Celery. All I really need is Redis + Celery to make my project. Instead of using the local filesystem, I'd like to keep everything in Redis. My current architecture uses Redis for everything until it is ready to dump the results to AWS S3. Admittedly I don't have a great reason for using Redis instead of the filesystem. I've just invested so much into architecting this with Docker and scalability in mind, it feels wrong not to.
I was searching for a non-Django database scheduler too a while back, but it looked like there's nothing else. So I took the Django scheduler code and modified it to use SQLAlchemy. Should be even easier to make it use Redis instead.
It turns out that you can!
First I created this little project from the tutorial on celeryproject.org.
That went great so I built a Dockerized demo as a proof of concept.
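A minimal sketch of that kind of Django-free setup, assuming a local Redis instance as broker and result backend (the module and task names here are illustrative, not the demo's exact code):

# tasks.py - a standalone Celery app with Redis as broker and result backend
from celery import Celery

app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@app.task
def add(x, y):
    # trivial example task
    return x + y

Run a worker with celery -A tasks worker --loglevel=info and enqueue work from any Python shell with add.delay(2, 3).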
Things I learned from this project
Docker
using --link to create network connections between containers (see the example commands after this list)
running commands inside containers
Dockerfile
using FROM to build images iteratively
using official images
using CMD for images that "just work"
Celery
using Celery without Django
using Celerybeat without Django
using Redis as a queue broker
project layout
task naming requirements
Python
proper project layout for setuptools/setup.py
installation of project via pip
using entry_points to make console_scripts accessible
using setuid and setgid to de-escalate privileges for the celery daemon
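As an illustrative sketch of the --link and privilege-dropping points above (the image name, project name, and user are placeholders, not the actual demo's):

# start Redis as the broker; --link makes it reachable from the worker container (legacy Docker networking)
docker run -d --name redis redis
# run the worker image linked to Redis; --uid/--gid make the Celery daemon give up root privileges
docker run -d --name worker --link redis:redis my-celery-image \
    celery -A proj worker --loglevel=info --uid=nobody --gid=nogroup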