I just hosted my website on DigitalOcean by following the link below.
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04
It works like a charm.
But I also want to host multiple sites on a single droplet, and I have no idea how to do that. Does the name matter when creating the Gunicorn service file and socket file? That is, do I need a separate service file, socket file, and .sock file for each project?
You can run as many sites as your resources (RAM, disk space) allow. Here are some tips, which I list below:
Have a separate virtualenv for each site, inside its project folder.
Manage database names to prevent conflicts.
Don't use port 8000; reserve it for testing.
Create a separate systemd service for each project (remember to give each service a distinct name).
Accordingly, create a separate socket for each site.
Start with one Gunicorn worker per site, to keep resource costs low.
Create a separate nginx server block for each site.
With these tips you can easily run multiple sites on a single droplet.
Yes, you just have to create separate *.service and *.socket files for each project.
Just don't forget to change every reference in the tutorial from
gunicorn.service
gunicorn.socket
to
your_new_project.service
your_new_project.socket
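As a concrete illustration, a second project's socket and service pair might look like the sketch below. It follows the tutorial's layout; the user (sammy), the paths, the worker count, and the your_new_project names are placeholders you would replace with your own.

/etc/systemd/system/your_new_project.socket

[Unit]
Description=gunicorn socket for your_new_project

[Socket]
# Each project gets its own .sock file
ListenStream=/run/your_new_project.sock

[Install]
WantedBy=sockets.target

/etc/systemd/system/your_new_project.service

[Unit]
Description=gunicorn daemon for your_new_project
Requires=your_new_project.socket
After=network.target

[Service]
# Placeholder user and paths -- match them to your own project
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/your_new_project
ExecStart=/home/sammy/your_new_project/venv/bin/gunicorn \
          --access-logfile - \
          --workers 1 \
          --bind unix:/run/your_new_project.sock \
          your_new_project.wsgi:application

[Install]
WantedBy=multi-user.target

After creating both files, start and enable the socket with systemctl start your_new_project.socket and systemctl enable your_new_project.socket.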
When I had a similar question, this answer from the DigitalOcean website helped me.
You just have to change the project name and server_name in the "Configure Nginx to Proxy Pass to Gunicorn" part. If done correctly, both websites will work after you restart nginx.
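For reference, the second site's server block might look like this minimal sketch (the domain, user, and project paths are placeholders, again following the tutorial's layout):

/etc/nginx/sites-available/your_new_project

server {
    listen 80;
    server_name second-site.example.com;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        # Placeholder path to the second project's static files
        root /home/sammy/your_new_project;
    }

    location / {
        include proxy_params;
        # Point at the second project's socket, not gunicorn.sock
        proxy_pass http://unix:/run/your_new_project.sock;
    }
}

Link it into sites-enabled and run nginx -t before restarting nginx.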
Hey, I'm currently working on a website (a photo-selling service) and now I want to deploy it to a public host.
I didn't change the database, so I'm using Django's default SQLite. Is that going to be a problem, or is it fine?
Also, I'm handling downloads in my views and templates, and the files (photos) are downloaded from my database. Do I need one host for my application and another for the photos, or can I run the whole website on one host without a problem (the same as when running it on localhost)?
I prefer not to use SQLite in production because:
It is just a single file, and it can be deleted at any moment by anyone who has access to it.
No user management.
Limited data types.
For serving files on a heavy-traffic website, it's good to have a CDN serve them.
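If you do move off SQLite, the change in Django is small. Here is a minimal sketch of a PostgreSQL configuration; the database name, user, and password are placeholders:

# settings.py -- replaces the default SQLite DATABASES entry
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'myproject_db',       # placeholder database name
        'USER': 'myproject_user',     # placeholder role
        'PASSWORD': 'change-me',      # placeholder password
        'HOST': 'localhost',
        'PORT': '5432',
    }
}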
I wrote a Flask web application for a system our company uses. However, we have another web application, which runs on Node.js. The "problem" is that my colleague writes everything in Node, while I write everything in Python.
We want to implement both applications on one webpage - for example:
My application will run on example.com/assistant
His application will run on example.com/app1 and example.com/app2
How can we do this? Can we somehow share the templates I use with his, and vice versa?
Thank you in advance!
Serving different apps from the same domain
You can use haproxy for directing requests to specific service based on ACL rules.
You could use a path_beg rule to direct any request that begins with a specific path to the corresponding server. See the example below.
/etc/haproxy/haproxy.cfg

# only the relevant part of the config file
# assumes all apps are on one machine

frontend http-in
    bind *:80
    acl py_app1 path_beg /assistant
    acl node_app1 path_beg /app1
    acl node_app2 path_beg /app2
    # send each matching path to its backend; everything else falls through
    use_backend py_app1 if py_app1
    use_backend node_app1 if node_app1
    use_backend node_app2 if node_app2
    default_backend main_servers

backend py_app1
    server flask_app 127.0.0.1:5000

backend node_app1
    server nodejs1 127.0.0.1:4001

backend node_app2
    server nodejs2 127.0.0.1:4002

backend main_servers
    server other1 127.0.0.1:3000  # nginx, apache, or whatever
Sharing template code between apps
This would be harder: you would need to agree on some kind of format that is language- and framework-agnostic, and probably logic-less.
Mustache claims to be a "framework-agnostic way to render logic-free views". I used it sparingly a few years ago, so it's the first that came to mind; you should do more research on this, though, as there may be a better fit.
Python implementation
JS implementation
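As a rough illustration of the idea, both apps would load the same .mustache file and feed it data. Here is a Python sketch using pystache; the template string and its fields are invented for the example, and the Node app would render the same template text with mustache.js:

# render_example.py -- minimal pystache sketch
import pystache

# In practice this string would be read from a shared .mustache file
template = "Hello, {{name}}! You have {{count}} new photos."
print(pystache.render(template, {'name': 'Ada', 'count': 3}))
# -> Hello, Ada! You have 3 new photos.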
The hard part would be keeping the templates in sync with the apps without breaking any views. If a template changes, you need to test every app that uses that template file. You will also probably block one another from updating your apps at different times: if one of you changes a template file, you must reach a consensus, update all the relevant apps, and deploy them together.
I'm currently working on a website where I want the user to upload one or more images; my Flask backend will make some changes to these pictures and then return them to the front end.
Where should I save these images temporarily, especially if there is more than one user on my website at the same time (I'm planning to containerize the website)? Is it safe to save the images in the website's folder, or do I need e.g. a database for that?
You should use a database, or external object storage like Amazon S3.
I say this for a couple of reasons:
Accidents do happen. Say the client does an HTTP POST, gets a URL back, and does an HTTP GET to retrieve the result. But in the meantime, the container restarts (because the system crashed; your cloud instance got terminated; you restarted the container to upgrade its image; the application failed); the container-temporary filesystem will get lost.
A worker can run in a separate container. It's very reasonable to structure this application as a front-end Web server, that pushes messages into a job queue, and then a back-end worker picks up messages out of that queue to process the images. The main server and the worker will have separate container-local filesystems.
You might want to scale up parts of this. You can easily run multiple containers from the same image; they'll each have separate container-local filesystems, and you won't directly control which replica a request goes to, so every container needs access to the same underlying storage.
...and it might not be on the same host. In particular, cluster technologies like Kubernetes or Docker Swarm make it reasonably straightforward to run container-based applications spread across multiple systems; sharing files between hosts isn't straightforward, even in these environments. (Most of the Kubernetes Volume types that are easy to get aren't usable across multiple hosts, unless you set up a separate NFS server.)
That set of constraints would imply trying to avoid even named volumes as much as you can. It makes sense to use volumes for the underlying storage for your database, and it can make sense to use Docker bind mounts to inject configuration files or get log files out, but ideally your container doesn't really use its local filesystem at all and doesn't care how many copies of itself are running.
(Do not rely on Docker's behavior of populating a named volume on first use. There are three big problems with it: it is on first use only, so if you update the underlying image, the volume won't get updated; it only works with Docker named volumes and not other options like bind-mounts; and it only works in Docker proper and not in Kubernetes.)
Other decisions are possible given other sets of constraints. If you're absolutely sure you will never ever want to run this application spread across multiple nodes, Docker volumes or bind mounts might make sense. I'd still avoid the container-temporary filesystem.
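To make the object-storage option concrete, here is a minimal Flask sketch using boto3; the bucket name and key scheme are placeholders, and AWS credentials are assumed to come from the environment or an instance role:

# app.py -- sketch: stream each uploaded image to S3 instead of
# writing it to the container filesystem
import uuid

import boto3
from flask import Flask, request

app = Flask(__name__)
s3 = boto3.client('s3')  # picks up credentials from env/instance role

@app.route('/upload', methods=['POST'])
def upload():
    f = request.files['image']
    # Unique key so concurrent users never collide
    key = 'incoming/{}-{}'.format(uuid.uuid4(), f.filename)
    s3.upload_fileobj(f, 'my-image-bucket', key)  # placeholder bucket
    return {'key': key}, 201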
I'm hoping to be pointed in the right direction on which tools to use while developing an application that runs on two servers per client.
[Main Server][Client db Server]
Each client has their own server which has a django application managing their respective data, in addition to serving as a simple front end.
The main application server has a more feature-rich front end, using the same models/db schemas. It should have full read/write access to the client's database server.
The final desired effect would be a typical SaaS type deal:
client1.djangoapp.com => connects to the MySQL database at client1_IP
client2.djangoapp.com => connects to the MySQL database at client2_IP...
Thanks in advance!
You could use different settings files, say settings_client_1.py and settings_client_2.py, and import common settings from a shared settings.py to keep things DRY. Then add the respective database settings to each.
Do the same with the WSGI files: create one per settings file, say wsgi_c1.py and wsgi_c2.py.
Then, in your web server, direct requests for client1.djangoapp.com to wsgi_c1.py and requests for client2.djangoapp.com to wsgi_c2.py.
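A minimal sketch of that layout (the database values and the MySQL backend are placeholders; the module names follow the answer):

# settings_client_1.py
from settings import *  # common settings shared by all clients

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'client1_db',      # placeholder
        'USER': 'client1',         # placeholder
        'PASSWORD': 'change-me',   # placeholder
        'HOST': 'client1_IP',      # the client's own database server
        'PORT': '3306',
    }
}

# wsgi_c1.py
import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings_client_1')
application = get_wsgi_application()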
I decided to try fileconveyor in order to write a simple app that can sync a directory (of very small Word files) across all my computers.
To do that, I also installed pyftpdlib so as to write a simple FTP server for fileconveyor to link to.
pyftpdlib comes with a number of examples, so I used one of them to run a server on 0.0.0.0:2121 and configured fileconveyor to connect to it, which it did, reporting back that it is
- Fully up and running now.
The ftp server also logged the connection as such
USER 'user' logged in.
FTP session closed (disconnect).
But I am not quite sure on what to do now.
1. How can I make the FTP server save uploaded files to a directory of my choosing?
2. Will fileconveyor be able to sync the files both ways?
3. If yes, how is that possible, since it would have to track changes to the files on the remote machine?
4. Is what I am trying to do a good idea, or should I be using fileconveyor differently, possibly not with pyftpdlib but with some other service?
Answer to 1: You can configure a home directory per user and for the anonymous user; see the add_user/add_anonymous example.
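A minimal sketch (the username, password, and directories are placeholders):

# server.py -- pyftpdlib server that saves each user's uploads
# to a home directory of your choosing
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

authorizer = DummyAuthorizer()
# Files uploaded by 'user' land in /srv/ftp/user (placeholder path);
# perm='elradfmw' grants full read/write permissions
authorizer.add_user('user', '12345', '/srv/ftp/user', perm='elradfmw')
authorizer.add_anonymous('/srv/ftp/anon')  # read-only by default

handler = FTPHandler
handler.authorizer = authorizer
FTPServer(('0.0.0.0', 2121), handler).serve_forever()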
Answers to 2 and 3: I don't think a reliable two-way sync can be implemented with only a standard FTP server on one of the sides; such applications need more information than the FTP protocol provides.
Answer to 4: Why do you need pyftpdlib? I believe it's good for building a customized, embedded FTP server, but you could use any popular FTP server such as ProFTPD or FileZilla instead. They are well documented and you can find a lot of HOW-TOs.
BTW, why don't you want to use Dropbox?