Editing a config file inside a Docker image on a client site - Python

I have created and pushed a Docker image to Docker Hub, and I am pulling the image on the client machines. However, there are config files inside the image that are site specific (they change from site to site) - for example, the addresses of the RTSP cameras at each site. How would I edit these files on each client site? Do I need to manually vim the files in each image on each client site, or is there a simpler way?
Or is the solution to extract these config files from the image entirely, copy them separately to each client site, and change the code to read these files from outside the image?
thanks

You'd better keep your image on Docker Hub as a base image without any site-specific config baked into it (or with placeholder config that is simply ignored).
On the client side, either build a local image from that base image, replacing the config via COPY, or mount the site's config file into the container as a volume.
Or do as Klaus D. suggested in the comments.
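For example, the two approaches look roughly like this (a sketch; the image name yourname/app and the config path /app/config.yaml are hypothetical placeholders):

# Option 1: client-side Dockerfile that bakes the site config into a local image
FROM yourname/app:latest
COPY ./site-config.yaml /app/config.yaml

# Option 2: no rebuild needed - bind-mount the site's config over the placeholder at run time
docker run -d -v /etc/myapp/site-config.yaml:/app/config.yaml:ro yourname/app:latest

With the mount approach the image on Docker Hub never changes; each client site just keeps its own config file on the host.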

How do I access files hosted on the web via Python?

I am working on a system (a Python program) that runs on a local machine but needs to fetch data hosted somewhere on the web (images, in my case).
What it does is:
Send a SQL query to the web host (localhost currently).
The response sends back the names of the images (assume they are stored in a list called fetchedImages).
Once I have the names of all the required images, I want to fetch each file from localhost and copy it to the local machine. But here is my problem:
I am trying to access them as:
source = "localhost/my-site/images"
localDir = "../images"
for image in fetchedImages:
    copy(source+image, localDir)
but the problem is that localhost is served by XAMPP, and I cannot access it this way since Python does not accept it as a filesystem path. How can I access localhost if it is served by XAMPP rather than SimpleHTTPServer?
It can be solved using requests:

import requests as req
from io import BytesIO  # Python 3; on Python 2 this was "from StringIO import StringIO"
from PIL import Image

source = "http://localhost/my-site/images/"
localDir = "../images/"  # trailing slash so localDir + image forms a valid path

for image in fetchedImages:
    remoteImage = req.get(source + image)                 # fetch the image over HTTP
    imgToCopy = Image.open(BytesIO(remoteImage.content))  # decode the bytes with PIL
    imgToCopy.save(localDir + image)                      # write it to the local directory
requests fetches the resource over HTTP, so the same code works with any reachable path (http://localhost/my-site or https://www.my-site.com), and the files are copied to the local machine for processing.
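If no image processing is needed, a variant of the same idea (a sketch, assuming the same source, localDir, and fetchedImages as above) can skip PIL entirely and stream the raw bytes to disk:

import os
import requests

for image in fetchedImages:
    resp = requests.get(source + image, stream=True)
    resp.raise_for_status()  # fail loudly on a 404 instead of saving an error page
    with open(os.path.join(localDir, image), "wb") as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)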

What's the optimal way to store image data temporarily in a containerized website?

I'm currently working on a website where I want the user to upload one or more images; my Flask backend will make some changes to these pictures and then return them to the front end.
Where do I optimally save these images temporarily, especially if there is more than one user on my website at the same time (I'm planning on containerizing the website)? Is it safe to save the images in the website's folder, or do I need e.g. a database for that?
You should use a database, or external object storage like Amazon S3.
I say this for a couple of reasons:
Accidents do happen. Say the client does an HTTP POST, gets a URL back, and does an HTTP GET to retrieve the result. But in the meantime the container restarts (because the system crashed, your cloud instance got terminated, you restarted the container to upgrade its image, or the application failed); the container-temporary filesystem is lost.
A worker can run in a separate container. It's very reasonable to structure this application as a front-end Web server, that pushes messages into a job queue, and then a back-end worker picks up messages out of that queue to process the images. The main server and the worker will have separate container-local filesystems.
You might want to scale up parts of this. You can easily run multiple containers from the same image; they'll each have separate container-local filesystems, and you won't directly control which replica a request goes to, so every container needs access to the same underlying storage.
...and it might not be on the same host. In particular, cluster technologies like Kubernetes or Docker Swarm make it reasonably straightforward to run container-based applications spread across multiple systems; sharing files between hosts isn't straightforward, even in these environments. (Most of the Kubernetes Volume types that are easy to get aren't usable across multiple hosts, unless you set up a separate NFS server.)
That set of constraints would imply trying to avoid even named volumes as much as you can. It makes sense to use volumes for the underlying storage for your database, and it can make sense to use Docker bind mounts to inject configuration files or get log files out, but ideally your container doesn't really use its local filesystem at all and doesn't care how many copies of itself are running.
(Do not rely on Docker's behavior of populating a named volume on first use. There are three big problems with it: it is on first use only, so if you update the underlying image, the volume won't get updated; it only works with Docker named volumes and not other options like bind-mounts; and it only works in Docker proper and not in Kubernetes.)
Other decisions are possible given other sets of constraints. If you're absolutely sure you will never ever want to run this application spread across multiple nodes, Docker volumes or bind mounts might make sense. I'd still avoid the container-temporary filesystem.
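A minimal sketch of the object-storage approach (assuming boto3, a hypothetical bucket my-app-uploads, and a hypothetical transform() function that does the actual image processing):

import io
import uuid

import boto3
from flask import Flask, request, jsonify

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "my-app-uploads"  # hypothetical bucket name

@app.route("/upload", methods=["POST"])
def upload():
    data = request.files["image"].read()
    processed = transform(data)  # hypothetical processing step returning PNG bytes
    key = "results/%s.png" % uuid.uuid4()
    s3.upload_fileobj(io.BytesIO(processed), BUCKET, key)
    # hand the client a short-lived URL instead of a container-local file path
    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": BUCKET, "Key": key}, ExpiresIn=3600)
    return jsonify(url=url)

Because the result lives in S3 rather than on the container's filesystem, any number of replicas (or a separate worker container) can produce or serve it.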

How can I add a simple PNG picture to my Bokeh website?

I already tried to add a picture by using a div container, but I always get a 404 error: "404 GET img/image.png (my-ip) 1.27ms".
What am I doing wrong? Based on a similar issue on Stack Overflow, that method should work - I guess.
from bokeh.io import curdoc
from bokeh.models import Div

image_div = Div(text="<img src='img/image.png'>")
curdoc().add_root(image_div)

bokeh serve /dir/image.py --allow-websocket-origin=my-website:5006
A browser 100% cannot load local filesystem paths from a remote server. The images must be hosted and served by a real web server, i.e. they must have actual http (or https) URLs in the img tag. You have three basic options:
Serve the images from some other remote web server
Run separate web server on this machine to serve the image files
Make the Bokeh app a directory-style Bokeh app, which can serve files from a static subdirectory (sketched below).
Which one is best for you depends heavily on the particulars of your situation.
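For the third option, the layout would look roughly like this (a sketch; the app directory name myapp is hypothetical, and recent Bokeh versions serve anything under an app's static subdirectory at /myapp/static):

myapp/
    main.py
    static/
        image.png

# myapp/main.py
from bokeh.io import curdoc
from bokeh.models import Div

# the img URL now points at a path the Bokeh server actually serves
image_div = Div(text="<img src='/myapp/static/image.png'>")
curdoc().add_root(image_div)

Then start it with bokeh serve myapp --allow-websocket-origin=my-website:5006 (the directory, not a single .py file).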

How to host multiple sites on a single droplet

I just hosted my website on DigitalOcean by following the link below.
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04
It works like a charm.
But I also want to host multiple sites on the single droplet, and I have no idea how. Does the name matter when creating the Gunicorn service file and socket file? That is, do I need to create a separate service and socket file for each project, and also a separate .sock file for each project?
You can run as many sites as your resources (RAM, disk space) allow. Some tips for this are listed below:
Have a separate virtualenv for each site, inside its project folder.
Manage database names to prevent conflicts.
Don't use port 8000; reserve it for tests.
Create a separate systemd service for each project (remember to use a separate name for each service).
Likewise, create a separate socket for each site (see the sketch after this list).
Start with one worker per site at first, to keep resource usage down.
Create a separate nginx server block for each site you have.
With these tips you can easily host multiple sites on a single droplet.
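A sketch of the per-site unit files, following the layout of the tutorial linked above (project2, the sammy user, and all paths are hypothetical placeholders):

# /etc/systemd/system/project2.socket
[Unit]
Description=gunicorn socket for project2

[Socket]
ListenStream=/run/project2.sock

[Install]
WantedBy=sockets.target

# /etc/systemd/system/project2.service
[Unit]
Description=gunicorn daemon for project2
Requires=project2.socket
After=network.target

[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/project2
ExecStart=/home/sammy/project2/venv/bin/gunicorn \
          --workers 1 \
          --bind unix:/run/project2.sock \
          project2.wsgi:application

[Install]
WantedBy=multi-user.target

Each additional site repeats this pair with its own unit names and its own .sock path.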
Yes, you just have to create separate *.service and *.socket files for each project.
And just don't forget to change all the strings in that tutorial from
gunicorn.service
gunicorn.socket
to
your_new_project.service
your_new_project.socket
When I had a similar question, this answer from the DigitalOcean website helped me.
You just have to change the project name and the server_name when doing the "Configure Nginx to Proxy Pass to Gunicorn" part. If done correctly, both websites will work after you restart nginx.
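For that step, the second site's server block might look like this (a sketch; project2, the domain, and the socket path are placeholders that must match the corresponding service):

# /etc/nginx/sites-available/project2
server {
    listen 80;
    server_name project2.example.com;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/sammy/project2;
    }

    location / {
        include proxy_params;
        # must point at this project's socket, not the first site's
        proxy_pass http://unix:/run/project2.sock;
    }
}

Enable it with ln -s /etc/nginx/sites-available/project2 /etc/nginx/sites-enabled/ and restart nginx.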

sorl-thumbnail ThumbnailException Error After Cloning EC2 Instance

I cloned a working EC2 instance to create a secondary staging server. Everything is working as it should with the exception of sorl-thumbnail.
Before I describe the errors I'm receiving, I think it might be helpful to describe the stack I'm working with. It involves 3 EC2 instances: an app server running Django in combination with Nginx and Gunicorn; a database server running MySQL and Redis; and a media server running Nginx. The app server uses NFS to mount the media directory from the media server locally. All appropriate ports are open in AWS, and the app server has been added to /etc/exports on the media server.
On to the issue I am seeing... The img src attribute for all images that should be generated by sorl-thumbnail is empty. When I take a look at my Django app's log, I see an entry like this for every missing image:
[04/29/2013 13:11:54] DEBUG : Could not find thumbnail image for rendering </media/images/12345.jpg>
ThumbnailException: Source file: '/images/12345.jpg' does not exist.
[04/29/2013 13:11:54] DEBUG : Could not retrieve image for </media/images/12345.jpg>
However, 12345.jpg does exist at /media/images/.
I spent most of Friday trying to run down the issue to no avail. Has anyone come across anything like this?
Generated data like image thumbnails is often stored in a (comparatively) temporary filesystem location, and How sorl-thumbnail operates suggests the same:
When you use the thumbnail template tag sorl-thumbnail looks up the
thumbnail in a Key Value Store. The key for a thumbnail is generated
from its filename and storage. [...] It is worth noting that sorl-thumbnail does not check if
source or thumbnail exists if the thumbnail key is found in the Key
Value Store.
Note: This means that if you change or delete a source file or delete the
thumbnail, sorl-thumbnail will still fetch from the Key Value Store.
Therefore it is important that if you delete or change a source or
thumbnail file notify the Key Value Store.
[emphasis mine]
Now, Amazon EC2 instances usually feature two distinct storage types, namely the persistent Amazon Elastic Block Store (Amazon EBS) volumes, which are copied when cloning an instance, and also the Amazon EC2 Instance Store volumes (usually referred to as ephemeral storage), which are lost when cloning an instance; see my answer to how to take backup of aws ec2 instance/ephemeral storage? for more on this difference/problem.
So presumably your thumbnails had been stored on the ephemeral volume and would now need to be regenerated accordingly.
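If the Key Value Store still references thumbnails that were lost with the ephemeral volume, recent versions of sorl-thumbnail ship management commands to reset it (a sketch; check which commands your installed version provides):

python manage.py thumbnail cleanup  # drop Key Value Store references whose files no longer exist
python manage.py thumbnail clear    # or: empty the Key Value Store entirely

After that, the next request for each thumbnail should regenerate it from the source image on the NFS mount.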