I have found a lot of examples explaining how to use uWSGI in emperor mode to deploy multiple apps. For me that means several app folders, with one vassal (ini, socket, application.py) per app.
I have not yet been able to find examples of configurations with only one app folder and multiple vassals. This should let me serve a multi-tenant app (each customer has its own database). I tested this with two instances, and it "seems" to work well.
Is this good practice, or do you have a similar setup?
Does this provide complete isolation between vassal instances?
Here is my setup. I am using an nginx/uwsgi/Python stack (and the emperor_pg plugin). This setup lets uWSGI spawn one vassal per customer, A and B. Customers use URLs like customerN.mydomain.fr/fe1/web.
Nginx config:
# This virtual host catches all incoming traffic on port 80 (security should be
# considered if not talking on a local network)
server {
    listen 80;

    # Capture the subdomain here; it designates a customer entity.
    server_name ~^(?<subdomain>.+)\.mydomain\.fr$;

    # We use a pattern for socket names and paths. This lets vassals be
    # spawned automatically when changes are detected in the vassals pg
    # table (emperor_pg).
    # Pattern used: /tmp/$subdomain$app_name.sock
    location ~favicon\.ico$ {
        root /opt/app-current/web/;
    }

    location ~^\/(?<app_name>.+)\/web\/ {
        root /opt/web-content/$subdomain/;
    }

    location ~^\/(?<app_name>.+)\/ {
        # Route to the socket designated by the standard pattern
        include uwsgi_params;
        uwsgi_pass unix:/tmp/$subdomain$app_name.sock;
    }

    # When the root of an entity's subdomain is requested, launch the default
    # app by routing traffic to its socket
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/$subdomain.sock;
    }
}
The Emperor upstart script using emperor_pg:
# Emperor configuration upstart script in `/etc/init/uwsgi.conf`:
# uWSGI - Manage uWSGI Application Server
description "uWSGI Emperor Mode"
start on (filesystem and net-device-up IFACE=lo)
stop on runlevel [!2345]
respawn
# We use pg mode, which scans a PostgreSQL database for vassal configs.
# Requires: sudo apt-get install uwsgi-plugin-emperor-pg
exec /usr/bin/uwsgi --uid www-data --gid www-data --plugin emperor_pg --emperor "pg://host=dbserver.com.com user=saasautomator dbname=saasautomator;SELECT name,config,ts,uid,gid,socket FROM vassals" --logto /var/log/uwsgi.log
And an example of the vassal config files stored in the database. They use the same application folder:
saasautomator=> SELECT * FROM vassals;
idvassals | name | config | ts | uid | gid | socket
-----------+---------------------------------+----------------------------------------------------------------+---------------------+-------+-------+---------------------------------------
3 | customerAfe1.ini | [uwsgi] +| 2004-10-19 10:23:54 | uwsgi | uwsgi | /tmp/customerAfe1.sock
| | master = true +| | | |
| | vacuum = true +| | | |
| | chdir = /opt/app/ +| | | |
| | plugins = python +| | | |
| | wsgi-file = /opt/app/fe1/application.py +| | | |
| | processes = 4 +| | | |
| | threads = 2 +| | | |
| | stats = 127.0.0.1:9191 | | | |
4 | customerBfe1.ini | [uwsgi] +| 2004-10-19 10:23:55 | uwsgi | uwsgi | /tmp/customerBfe1.sock
| | master = true +| | | |
| | vacuum = true +| | | |
| | chdir = /opt/app/ +| | | |
| | plugins = python +| | | |
| | wsgi-file = /opt/app/fe1/application.py +| | | |
| | processes = 4 +| | | |
| | threads = 2 +| | | |
| | stats = 127.0.0.1:9192 | | | |
(2 rows)
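Since both vassals point at the same wsgi-file, the shared application.py has to decide which tenant it is serving. A minimal sketch of what that file could look like; the `TENANT` environment variable and the database-naming scheme are assumptions, not part of the setup above (each vassal ini could set it with `env = TENANT=customerA`):

```python
# Hypothetical shared application.py: one code base, one vassal per tenant.
# Each vassal passes its identity in via the environment; the app then
# selects that customer's database.
import os

TENANT = os.environ.get("TENANT", "default")
DB_NAME = f"app_{TENANT}"   # assumption: one database per customer

def application(environ, start_response):
    # A real app would open a connection to DB_NAME here.
    body = f"served tenant {TENANT} from {DB_NAME}".encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```

This keeps a single code base while every vassal stays bound to its own database, which matches the one-folder-many-vassals layout above.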
Thanks!
"Is this a good practice or do you have similar setup ?"
I do have a similar setup. I won't say whether it's good practice, because that's opinion-based, but there are only small differences between this and copying the code base for every customer. The main difference: if someone breaks into one app and is able to modify its code, it will affect all apps. But if there are no configuration differences that would allow breaking into one app and not the others, it won't matter much; whoever breaks into one can reproduce the attack on the rest.
Also, cleaning up the changes made after your app is hacked will be easier with a shared code base.
I can't see any other difference that matters.
"Does this provide complete isolation between vassals instances ?"
Not complete, as mentioned above, but everything else can be isolated. This is like sharing one library between applications on a system, which happens all the time. With proper configuration you can also separate databases and other resources.
If filesystem permissions are applied correctly, you can prevent the application from modifying its own code, which also limits an attacker's ability to change it.
I am a beginner in Python and want to test some code in a WebJob and a Function App. I usually write code in C#, and in Visual Studio we have templates for creating a WebJob/Function App so that all the required files and init code are generated. Using Python, I need the equivalent file structure and init code.
The folder/file structure for a Python Functions project looks like:
<project_root>/
| - .venv/
| - .vscode/
| - my_first_function/
| | - __init__.py
| | - function.json
| | - example.py
| - my_second_function/
| | - __init__.py
| | - function.json
| - shared_code/
| | - __init__.py
| | - my_first_helper_function.py
| | - my_second_helper_function.py
| - tests/
| | - test_my_second_function.py
| - .funcignore
| - host.json
| - local.settings.json
| - requirements.txt
| - Dockerfile
Init code for a WebJob/Function App in Python, in `__init__.py`:
import azure.functions as func
import logging

def main(req: func.HttpRequest,
         obj: func.InputStream):
    logging.info(f'Python HTTP triggered function processed: {obj.read()}')
Please follow the developer guide for Azure Functions using Python, and Python Azure Functions using VS Code.
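For completeness: each function folder pairs its `__init__.py` with a `function.json` describing the bindings. A hypothetical example matching the two-parameter `main` above (an HTTP trigger plus a blob input; the `path` and `connection` values are placeholders you would replace):

```json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"]
    },
    {
      "type": "blob",
      "direction": "in",
      "name": "obj",
      "path": "samples-workitems/sample.txt",
      "connection": "AzureWebJobsStorage"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
```

The binding `name` values must match the parameter names of `main` (`req` and `obj` here).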
New to Python packaging (but not Python), so please excuse questions that are all over the place.
I am trying to package a Python module (project_A) which uses a bunch of common scripts from another directory (lib).
I'm hitting a wall on how to include this lib directory in the final package artifact.
The setup.py, the working directory structure, and how I plan the final package installation structure are shown below.
A few more questions:
The relative location of lib changes between my work directory and the installed package. Is there a way to import it in both cases without changing the code?
Is this structure better, or should I move everything related to project_A under its own directory, so that each project has its own setup.py, MANIFEST, conf directory, etc.? (The location of lib would remain the same; I'm not sure whether each package should have its own tests directory.)
release=subprocess.check_output fails if someone runs this outside a git repo. Is there a way to add a default value in such cases?
An example I came across had lib moved to /usr/lib/python3.4/site-packages/project_A/_lib. I liked this approach; any idea how to achieve it in setup.py?
setup.py (currently creates two packages):
setup(
    name='project_A',
    version='0.15',
    # release is not supported in bdist_rpm
    #release=subprocess.check_output(["git", "rev-list", "--count", "--first-parent", "HEAD"]).rstrip(),
    # if creating outside git
    release="0.0.1",
    author='foo',
    author_email='hello@world.com',
    url='http://www.hello.com',
    long_description="README.txt",
    #install_requires=['bottle', 'requests', 'supervisor'],  # currently not working
    #dependency_links=['https://pypi.python.org/packages/source/b/bottle/bottle-0.12.8.tar.gz'],
    packages=['project_A'],
    include_package_data=True,
    package_data={'images': ['hello.gif']},
    data_files=[
        #('/etc/init.d/', ['project_Actl']),  # some startup script
        ('/var/log/project_A', []),
        ('/etc/project_A/conf/', ['conf/project_A.conf']),
    ],
    description="Hello World testing setuptools",
    tests_require=['pytest'],
    cmdclass={
        'test': PyTest,
        'clean': CleanCommand,
    },
)
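One way to get the layout you liked (lib installed as project_A._lib) is setuptools' `package_dir` mapping, which lets a package name point at a different source directory. A sketch, not your actual file; it assumes `lib/` contains an `__init__.py`, as your tree shows:

```python
# Hypothetical setup.py fragment: ship the shared on-disk `lib/` directory
# as the installed subpackage `project_A._lib`, so it lands at
# site-packages/project_A/_lib. Code would then import it as
# `from project_A import _lib` both in development and when installed.
def make_setup_kwargs():
    return dict(
        name='project_A',
        version='0.15',
        packages=['project_A', 'project_A._lib'],
        # package_dir maps package names to source directories:
        package_dir={
            'project_A': 'project_A',
            'project_A._lib': 'lib',   # lib/ ships inside project_A
        },
    )

# In setup.py you would then call:
#   from setuptools import setup
#   setup(**make_setup_kwargs())
```

Importing via `project_A._lib` in both places also answers the "same import in work dir and installed package" question, provided the work directory mirrors that package name or is mapped as above.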
Project_A installation directory:
/usr/lib/python3.4/site-packages/project_A
|
|____ project_A/
|     |____ project_A.py
|
|____ lib/
|     |____ __init__.py
|     |____ parseArguments.py
|     |____ setupLogger.py
|     |____ cleanup.py
Python workspace directory:
.
|____ setup.py
|____ MANIFEST.in
|____ README.md
|
|____ project_A/
|     |____ project_A.py
|
|____ project_B/
|     |____ project_B.py
|
|____ conf/
|     |____ project_A.conf
|     |____ project_B.conf
|
|____ lib/
|     |____ __init__.py
|     |____ parseArguments.py
|     |____ setupLogger.py
|     |____ cleanup.py
|
|____ images/
|     |____ hello.gif
|
|____ tests/
|     |____ project_A/
|     |     |____ test_B_example.py
|     |____ project_B/
|           |____ test_A_example.py
I have a project with the following hierarchy:
create_fresh_databases
|
|-- src
|   |
|   |-- configurations
|   |   |
|   |   |-- config.py
|   |   |-- config.xml
|   |-- __init__.py
|
|-- create_fresh_databases.py
The config.py file contains a Configurations class.
The file create_fresh_databases.py looks like this:
from create_fresh_databases.src.configurations.config import Configurations

def main():
    con = Configurations("conf.xml")

if __name__ == "__main__":
    main()
but I'm getting this error:
ModuleNotFoundError: No module named 'create_fresh_databases.src'; create_fresh_databases is not a package
My IDE is PyCharm, and I am using Python 3.6.
How can I fix it?
I think you should organise the package structure like this:
Package:
|
|--Configurations:
| |
| |--Config.py
| |
| |--Config1.xml
|
|--__init__.py
Then you can import it as Package.Configurations.Config.
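The "is not a package" part of the error is the key: Python only treats create_fresh_databases as a package when its parent directory is on sys.path and, in this classic layout, every level contains an `__init__.py`. A self-contained demonstration (built in a temporary directory; the names mirror the question, the file contents are invented):

```python
# Reproduce the fix: create the package tree with __init__.py at each level,
# put its PARENT directory on sys.path, then the dotted import works.
import os
import sys
import tempfile
import importlib

root = tempfile.mkdtemp()
pkg = os.path.join(root, "create_fresh_databases", "src", "configurations")
os.makedirs(pkg)

# __init__.py files turn each directory into a package:
for d in ("create_fresh_databases",
          "create_fresh_databases/src",
          "create_fresh_databases/src/configurations"):
    open(os.path.join(root, d, "__init__.py"), "w").close()

# A stand-in config.py with a Configurations class:
with open(os.path.join(pkg, "config.py"), "w") as f:
    f.write("class Configurations:\n"
            "    def __init__(self, path):\n"
            "        self.path = path\n")

sys.path.insert(0, root)   # the parent of the top-level package
mod = importlib.import_module("create_fresh_databases.src.configurations.config")
print(mod.Configurations("conf.xml").path)   # -> conf.xml
```

In PyCharm the equivalent fix is usually marking the project root (the parent of create_fresh_databases) as a sources root, or running the script as a module from that directory.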
I just deployed my very first Django project, and I am trying to use pgloader v3.5 to migrate some important data from sqlite3 to Postgres. It is only semi-successful: the data written in English migrates, whereas the data written in Russian (Cyrillic) does not.
Tried:
I tried a generic way: dump a datadump.json file of db.sqlite3 and run python manage.py loaddata datadump.json, which raised this error:
django.db.utils.DataError: Problem installing fixture '/home/user/project/datadump.json': Could not load boutique.Category(pk=1): character with byte sequence 0xd0 0x90 in encoding "UTF8" has no equivalent in encoding "LATIN1"
datadump.json
Then, I checked the datadump.json file:
[{"model": "boutique.category", "pk": 1, "fields": {"gender": 1, "name": "\u0410\u043a\u0441\u0435\u0441\u0441\u0443\u0430\u0440\u044b", "description": "", "uploaded_date": "2020-03-02T08:20:49.786Z"}}, ...
checked
I checked that the data in db.sqlite3 is intact; the Russian strings are simply not transferred to the new Postgres database.
system
Ubuntu 18, bionic
django 3.0.3
Question:
I think the question is: how do I load/convert non-English JSON data? I'm not sure where to start to solve the problem. Thanks in advance!
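One thing worth ruling out first: the escaped strings in datadump.json are not the problem. Python's json module decodes the \uXXXX escapes back to Cyrillic without loss, as a quick check shows; the DataError comes from the Postgres side, where a LATIN1-encoded database has no representation for these characters:

```python
import json

# One "name" value copied from the dump above (escaped form):
record = '{"name": "\\u0410\\u043a\\u0441\\u0435\\u0441\\u0441\\u0443\\u0430\\u0440\\u044b"}'
name = json.loads(record)["name"]
print(name)   # -> Аксессуары
```

So the fixture is valid UTF-8 data; the fix has to happen on the database encoding, not in the JSON.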
Additional Info:
Thank you so much, guys! Now I know the encoding is wrong:
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+---------+-------+--------------------------
postgres | postgres | LATIN1 | en_US | en_US |
template0 | postgres | LATIN1 | en_US | en_US | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | LATIN1 | en_US | en_US | =c/postgres +
| | | | | postgres=CTc/postgres
va | postgres | LATIN1 | en_US | en_US | =Tc/postgres +
| | | | | postgres=CTc/postgres +
| | | | | va_db_admin=CTc/postgres
Does this mean I have to recreate the database?
I want to execute the seeds.py script located in restaurants_project/scripts/. This seed file populates my database with restaurant records, based on the CSV data in the same folder, using my Django model Restaurant. The only way I've found to make the project settings (i.e. restaurants_project.settings) and my models importable is to sys.path.append('MY_PROJECTS_RELATIVE_PATH') in the seeds file. I obviously don't want to do this in production (or do I?), so what's the best way to include those settings without getting the following ImportError?
ImportError: Could not import settings 'restaurants_project.settings' (Is it on sys.path? Is there an import error in the settings file?): No module named 'restaurants_project'
Directory Tree:
|___restaurants_project
| |______init__.py
| |______pycache__
| | |______init__.cpython-34.pyc
| | |____settings.cpython-34.pyc
| |____settings.py
| |____urls.py
| |____wsgi.py
|____restaurants
| |______init__.py
| |______pycache__
| | |______init__.cpython-34.pyc
| | |____admin.cpython-34.pyc
| | |____models.cpython-34.pyc
| |____admin.py
| |____migrations
| | |______init__.py
| |____models.py
| |____tests.py
| |____views.py
|____scripts
| |______init__.py
| |______pycache__
| | |______init__.cpython-34.pyc
| |____restaurant_inspections_2014.csv
| |____seeds.py
| |____unique_restaurant_ids.txt
seeds.py:
import os, sys

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'restaurants_project.settings')

import django
django.setup()

import csv

csvfile = open('restaurant_inspections_2014.csv')
reader = csv.reader(csvfile)
headers = next(reader, None)

for row in reader:
    print(row)
    import pdb; pdb.set_trace()
The best way to write a script that uses a project's settings is to write a custom management command:
https://docs.djangoproject.com/en/1.8/howto/custom-management-commands/
Basically, you wrap your script in a class that extends BaseCommand and put it in the management/commands directory of an app.