odoo mod_wsgi schedulers not working - python

When I deploy OpenERP/Odoo using mod_wsgi, my schedulers stop working. Can anyone help me get my cron/schedulers working? Deploying with mod_proxy solves the issue, but I want to deploy using mod_wsgi.

The schedulers don't work when running through WSGI because your Odoo instances are just request workers; nothing in that setup runs the cron threads. AFAIK the usual approach is to run one standalone Odoo instance on a 127.0.0.1 port alongside the WSGI workers and let it handle the scheduled tasks.
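A rough sketch of starting that standalone cron-handling instance (flag names are from recent Odoo releases; older OpenERP versions use --xmlrpc-interface/--xmlrpc-port instead, and the paths here are placeholders):

./odoo-bin -c /etc/odoo/odoo.conf --http-interface=127.0.0.1 --http-port=8070 --max-cron-threads=2

The WSGI workers keep serving HTTP behind Apache, while this process, bound to localhost only, picks up the cron jobs.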

Related

right way to deploy a django-rq worker

Maybe it's a silly question, but I didn't find much while googling around.
So I'm in the process of turning my development environment into a deployment environment. I connected Django and nginx using uWSGI and put both in Docker containers... so far, no problem.
But I'm using django-rq, so I need a worker process. In all these nice examples about deploying Django, I didn't find much about deploying django-rq. All I found was "create a Docker container and use manage.py", like this:
CMD python manage.py rqworker [queue1] [queue2]
Really? Should I just start the worker like this? I thought manage.py was just for testing!?
You can create a systemd service on Ubuntu, then enable and start it.
FYR: https://github.com/rq/django-rq#deploying-on-ubuntu
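As a minimal sketch of such a unit (paths and queue names are placeholders; the django-rq README linked above has the canonical version):

[Unit]
Description=django-rq worker
After=network.target

[Service]
WorkingDirectory=/srv/myproject
ExecStart=/srv/myproject/venv/bin/python manage.py rqworker queue1 queue2
Restart=always

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/rqworker.service, then enable and start it with: sudo systemctl enable --now rqworker.service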

How can I deploy FastAPI manually on an Ubuntu Server?

I have a very simple API (2 routes) which just handles GET requests and doesn't need any authentication or anything for now.
I want to know the best and most appropriate way to deploy my API for production. I am unable to use Docker and would like to do it the server way.
So I have a few questions:
The FastAPI documentation says you can run uvicorn main:app --host 0.0.0.0 --port 80, but is that the correct way for production? Do I just enter that command, and will the API automatically start listening on the server's IP address? Also, is this method efficient, and will it be able to handle all the requests? What would I change to make it faster?
When should I use a process manager?
When should I use multiple workers? And what benefits do they provide?
When should I use Gunicorn as mentioned here? https://www.uvicorn.org/deployment/#gunicorn
I am just a little confused about how to deploy this, because one article says one thing and another says something else.
If for whatever reason you don't want to use Docker CE, the best way is to create a systemd service unit for your application, so that every time it goes down systemd will try to restart it, and to run the app under a server such as Gunicorn (a WSGI/ASGI server).
This link can help about systemd-services too:
https://blog.miguelgrinberg.com/post/running-a-flask-application-as-a-service-with-systemd
P.S. Note that the way you serve with Gunicorn isn't really tied to Docker or a systemd service; for both approaches you need to configure Gunicorn itself.
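For illustration, a sketch of such a unit for a FastAPI app under Gunicorn (paths, the main:app module name, and the worker count are placeholders):

[Unit]
Description=FastAPI app under Gunicorn
After=network.target

[Service]
WorkingDirectory=/srv/myapi
ExecStart=/srv/myapi/venv/bin/gunicorn main:app --workers 4 --worker-class uvicorn.workers.UvicornWorker --bind 127.0.0.1:8000
Restart=always

[Install]
WantedBy=multi-user.target

A reverse proxy (nginx, Caddy, or Apache) in front of 127.0.0.1:8000 then handles TLS and public traffic.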
To answer your question:
How can I deploy FastAPI manually on a Ubuntu Server?
You can check out this video tutorial on how to deploy FastAPI on Ubuntu.
The deployment has the following architecture within a single Ubuntu VM. [Architectural diagram: single-VM FastAPI deployment; see the linked tutorial.]
Within the Ubuntu VM, two systemd services are up and running: caddy.service and gunicorn.service. The gunicorn.service runs the FastAPI application using the uvicorn.workers.UvicornWorker worker class, and the caddy.service sits in front of it as a reverse proxy. In addition, the FastAPI app communicates with a PostgreSQL database server asynchronously with the help of the databases package, which provides simple asyncio support for PostgreSQL.
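For reference, a minimal sketch of querying PostgreSQL with the databases package (the connection URL and table name are placeholders):

# Async PostgreSQL access with the `databases` package.
from databases import Database

database = Database("postgresql://user:pass@localhost/mydb")  # placeholder URL

async def fetch_items():
    await database.connect()
    rows = await database.fetch_all(query="SELECT * FROM items")
    await database.disconnect()
    return rows

In a real FastAPI app you would typically connect once in a startup event handler rather than per call.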

Deploying Django (Windows) with Apache (on Unix)

I know it might be bad design, but since we are developing the Django website on our laptops, which run Windows 7, I thought it would be better to run Django on a Windows platform in production as well.
(The laptops are not powerful enough to run a Unix VM, and our Unix team doesn't provide any Unix server with UI access (only PuTTY), so using an IDE on Unix is impossible.)
I have deployed Django with Gunicorn and nginx on a Linux server very easily, but this time I have to deploy Django on a Windows server with Apache on a separate Unix server (I know it sucks).
Our middleware team is asking (forcing) us to run the Django components on a separate server so that they can manage their Apache (on Unix) instance comfortably. As far as I understand, Apache and Django must reside on the same server for mod_wsgi to work.
Is this possible to keep Apache on a Unix machine and make a django website run from a Windows machine?
If not, what are the best possible solutions in my case? (Switch Django to Unix? Use Waitress for Django on Windows? Keep Apache and Django on the same server? etc.)
Regards,
Aditya
Try deploying on IIS instead, as it is the native web server on Windows Server.
Check out the django-windowsauth package; you can use it to deploy your project to IIS with a few simple commands: https://github.com/danyi1212/django-windowsauth
In my modest opinion, the best thing is to create a Unix Docker image of your project.
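As an aside on the Waitress option raised in the question: since mod_wsgi cannot talk to a remote machine, a workable split is to run Django under Waitress on the Windows box and have the Unix Apache reverse-proxy to it. A minimal sketch (the project module name is hypothetical):

# serve.py -- run the Django WSGI app under Waitress on Windows.
from waitress import serve
from myproject.wsgi import application  # placeholder project module

serve(application, host="0.0.0.0", port=8000)

Apache on the Unix side would then proxy to the Windows host's port 8000 (mod_proxy) rather than using mod_wsgi.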

How to run various workers on OpenShift?

I have a Python/Flask project (an API) containing a few workers that must run continuously. They connect to Redis via an outside provider (https://redislabs.com/). I couldn't find out how to configure OpenShift to run my workers. With Heroku, it was as simple as:
web: gunicorn wsgi --log-file -
postsearch: python manage.py worker --queue post-search
statuses: python manage.py worker --queue statuses
message: python manage.py worker --queue message
invoice: python manage.py worker --queue invoice
But for OpenShift, despite much googling, I was not able to find anything helpful. Ideally, I would avoid deploying my application to each gear separately. How can I run multiple workers on OpenShift?
Taken from Getting Started with OpenShift by Katie J. Miller and Steven Pousty:
Cartridge
To get a gear to do anything, you need to add a cartridge. Cartridges are the plugins that house the framework or components used to create and run an application. One or more cartridges run on each gear, and the same cartridge can run on many gears for clustering or scaling. There are two kinds of cartridges:
Standalone
These are the languages or application servers that are set up to serve your web content, such as JBoss, Tomcat, Python, or Node.js. Having one of these cartridges is sufficient to run an application.
Embedded
An embedded cartridge provides functionality to enhance your application, such as a database or cron, but cannot be used on its own to create an application.
TL;DR: you must use cartridges to run a worker process. The documentation can be found here and here, the community-maintained examples here, and a series of blog posts begins here.
A cartridge is a bunch of files plus a manifest that tells OpenShift how to run the cartridge and how to resolve its dependencies.
But let's build something: create a Django/Python app.
Now install your (custom) cartridge from the link at the bottom or from the command-line tool; you can use the link to the cartridge repository.
OpenShift's integration with external services is done by configuring the relevant environment variables as explained at: https://developers.openshift.com/external-services/index.html#setting-environment-variables
Heroku's apps rely on a REDISCLOUD_URL env var that is automatically provisioned - you'll need to set up something similar in your OpenShift deployment with the applicable information about your database from the service's dashboard.
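For completeness, a sketch of how the app can consume such a variable once it is set (REDISCLOUD_URL is the Heroku-style name; redis-py is assumed):

# Connect to the external Redis using a URL from the environment.
import os
import redis

redis_client = redis.from_url(os.environ["REDISCLOUD_URL"])
redis_client.ping()  # raises if the provider is unreachable

Set the same variable in OpenShift (as per the link above) and the code works unchanged on both platforms.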

Django and Apache in different Docker containers

We have an application written in Django. We are trying a deployment scenario with one Docker container running Apache, a second running Django, and a third running the DB server. Most documentation says that Apache and Django sit on the same machine (Django in a virtualenv, to be precise). Is there any way to make Apache talk to mod_wsgi on a remote machine that hosts the Django application?
mod_wsgi would be the wrong technology if you want to do this. It runs as part of Apache itself, so there literally is nothing to run in the Django container.
A better way would be to use Gunicorn to run Django in one container and have the other container run the web server as a reverse proxy; you could use Apache for this, although it's more common to use nginx.
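A minimal docker-compose sketch of that layout (image names, ports, and the project module are placeholders):

# docker-compose.yml -- Gunicorn in one container, nginx proxying in another.
services:
  web:
    build: .
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
  proxy:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf

Here nginx.conf would contain a location block with proxy_pass http://web:8000; so the proxy container forwards requests to the Gunicorn container over the compose network.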
