I am working on a project that's evolved from one Dockerfile supporting several apps to one Dockerfile per app.
This generally works better than having them all together in one, but I'd like to share one Python library file among the apps without duplicating it.
I don't see a good way to do this, at least with the structure as currently set up: all apps have individual Bitbucket repos.
I don't think it's worth it to change the repo structure just for this, but is there some easier way I'm missing?
You could create one Dockerfile containing the basics shared by all your applications.
To reuse that base image you have to push it to your registry. Then you can build each application image on top of it:
FROM yourrepo/baseimage:latest
The main issue with this approach is that application images are not updated automatically: if you change the base image, you have to rebuild every application image that depends on it.
So you should set up a CI/CD pipeline once the number of application containers starts to grow.
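A minimal sketch of that layering (the image names, base tag and library path below are placeholders, not values from your setup):

    # base/Dockerfile -- built once and pushed as yourrepo/baseimage:latest
    FROM python:3.11-slim
    COPY shared_lib/ /usr/local/lib/shared_lib/
    ENV PYTHONPATH=/usr/local/lib/shared_lib

    # app1/Dockerfile -- rebuilt whenever the base image changes
    FROM yourrepo/baseimage:latest
    COPY . /app
    WORKDIR /app
    CMD ["python", "main.py"]

The shared library then lives in exactly one place (the base image's repo), and each app repo only carries its own code.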
This might be a novice question, so please excuse me.
We have a small python development team and our repo is organized as indicated below. We have a custom library that is shared across multiple scripts (wrappers) and then libraries specific to each wrapper. The below structure is maintained under Git. This has worked so far for development. Now we would like to release the wrappers, but individually. As in, we need to release wrapper1 targeting a separate audience (different timelines and needs) and at a later time wrapper2. Both need to contain the shared_library and only their specific library. What is the best way to do this?
repo/
    wrapper1.py
    wrapper2.py
    shared_library/
        module1.py
        module2.py
    wrapper1_specific_lib/
        wrapper1_module1.py
        wrapper1_module2.py
    wrapper2_specific_lib/
        wrapper2_module1.py
        wrapper2_module2.py
We have considered the following solutions:
Re-organize into three separate repos (wrapper1, wrapper2 and shared_library) and release them separately.
Have two separate repos for wrapper1 and wrapper2 and periodically sync the shared library (!!?!!)
Leave it as is, but explore whether Git can somehow release only the files and folders specific to each wrapper.
Seeking your help on Python code organization for better release management using Git. Thanks in advance!
You have 3 products, each with its own release timeline. Wrapper1 users are not interested in seeing wrapper2 code, and vice versa. Breaking out into 3 repos would be the simplest approach.
Do take care to package automated unit tests into the shared_library repo. Both of the dependent apps should be able to run their own tests plus the shared tests successfully. This will become important as the released apps generate new feature requests and try to pull shared_library in different directions.
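One way to make the three-repo split concrete (a sketch only; the package name, version and URL are placeholders) is to give shared_library its own packaging metadata so each wrapper can declare it as an ordinary dependency and pull in its tests:

    # shared_library/setup.py -- hypothetical packaging for the shared code
    from setuptools import setup, find_packages

    setup(
        name="shared-library",        # placeholder distribution name
        version="0.1.0",
        packages=find_packages(),     # includes shared_library and any test subpackages
        python_requires=">=3.7",
    )

Each wrapper repo can then install it with something like pip install git+https://your.git.host/shared_library.git and run the shared test suite alongside its own.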
The stack I selected for my project is Python, R and MongoDB. I'd like to adopt Docker for this project, but when I did my research on the internet I mostly found examples for MySQL with PHP or WordPress. So I'm curious where I can find tutorials or examples of using containers with Python, R and MongoDB, or any ideas on how to put them together. What would the Dockerfile look like? In particular, R (used for data processing and data visualisation) will be called as a sub-module from Python (used as the data collector), which also handles the data cleaning.
Any help will be appreciated.
Option 1:
Split them into multiple Docker images and run them all with docker-compose, using a YAML file that sets them all up more easily.
There is probably already an image for each of those services that you can use, adding your own code to it with Docker volumes. Just look for them on Docker Hub.
Examples of using the existing Python image are already in its description. It even shows how to create your own image with a Dockerfile, which you will need for each image.
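As a sketch of this option (service names, image tags, build paths and the volume name are assumptions, not something prescribed by the official images), a docker-compose.yml could wire the three parts together:

    # docker-compose.yml -- hypothetical layout for the Python/R/MongoDB stack
    services:
      mongo:
        image: mongo:6
        volumes:
          - mongo-data:/data/db
      collector:               # Python data collector
        build: ./python
        depends_on:
          - mongo
      processor:               # R data processing / visualisation
        build: ./r
        depends_on:
          - mongo
    volumes:
      mongo-data:

Running docker-compose up then starts all three containers on a shared network, so the Python and R containers can reach MongoDB under the hostname mongo.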
Option 2:
You can build just one image from a less specific base image (say Debian or Ubuntu), install all the interpreters, libraries and other requirements inside it, and then create an ENTRYPOINT that calls a script which starts each service and stays in the foreground so the container does not exit.
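A rough sketch of that single-image approach (the base image, package list and script name are assumptions):

    # Dockerfile -- one image holding all the interpreters
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y --no-install-recommends \
            python3 python3-pip r-base \
        && rm -rf /var/lib/apt/lists/*
    # MongoDB would be installed here from MongoDB's own apt repository (steps omitted)
    COPY start.sh /usr/local/bin/start.sh   # script that starts mongod and the Python/R jobs
    ENTRYPOINT ["/usr/local/bin/start.sh"]  # must stay in the foreground so the container keeps running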
I have a question about updating my Django applications.
I'm working on my Django development server, and I would like to know how I could create a patch that updates my different Django projects installed on different servers.
For example:
On my dev server, I add a new query to my script. I would then like to add this new query to all my virtual instances by updating the script, not by copy/pasting the modifications. That could work for one query, but if I have 200 lines, copy/pasting will take far too long.
I want to launch a patch which makes all the updates without manually copy/pasting lines.
How is it possible to do that?
Thank you so much
You should probably consider migrating your project to a version control system.
Then, every time you change something in your local copy of the code and push the changes to the repository, you can fetch/rebase/pull those changes wherever you want (be it your server or another computer of yours), and your patches will be applied without any copy/pasting!
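A minimal sketch of that workflow with Git (the file name, branch and remote are placeholders):

    # on the development machine
    git add myapp/queries.py
    git commit -m "Add new queries"
    git push origin main

    # on each deployment server
    git pull origin main        # fetches and applies the change, no copy/pasting
    # then restart the Django process so the new code is loaded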
Some Git hosting services to consider:
GitLab, which allows the creation of free private repositories
GitHub, the "old reliable", but with no free private repositories
Bitbucket (I don't use it myself, but it is widely used!)
Good luck :)
Docker containers can be linked. Most examples involve linking a Redis container with an SQL container. The beauty of linking containers is that you can keep the SQL environment separate from your Redis environment, and instead of building one monolithic image one can maintain two nicely separate ones.
I can see how this works for server applications (where the communication goes through ports), but I have trouble replicating a similar approach for libraries. As a concrete example, I'd like to use a container with IPython Notebook together with the C/C++ library Caffe (which exposes a Python interface through a package in one of its subfolders) and an optimisation library such as Ipopt. Containers for IPython and Caffe readily exist, and I am currently working on a separate image for Ipopt. Yet how do I link the three together without building one giant monolithic Dockerfile? Caffe, IPython and Ipopt each have a range of dependencies, making combined maintenance a real nightmare.
My view on Docker containers is that each container typically represents one process, e.g. Redis or nginx. Containers typically communicate with each other over the network or via shared files in volumes.
Each container ships its own operating-system userland (typically specified in the FROM section of your Dockerfile). In your case, you are not running any specific processes; you simply wish to share libraries. That is not what Docker was designed for, and I am not even sure it is doable; it certainly seems like a strange way of doing things.
My suggestion is therefore that you create a base image containing the least common denominator (the shared libraries that are common to all other images) and that your other images use that image as their FROM image.
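A rough sketch of that layering (image names, package choices and versions are assumptions, not tested builds):

    # common/Dockerfile -- shared scientific base, pushed e.g. as yourrepo/sci-base
    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y --no-install-recommends \
            python3 python3-pip build-essential cmake \
        && rm -rf /var/lib/apt/lists/*

    # notebook/Dockerfile -- adds IPython/Jupyter on top of the shared base
    FROM yourrepo/sci-base:latest
    RUN pip3 install notebook
    CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--no-browser"]

    # solver/Dockerfile -- adds Ipopt (or Caffe) on top of the same base
    FROM yourrepo/sci-base:latest
    RUN apt-get update && apt-get install -y coinor-libipopt-dev && rm -rf /var/lib/apt/lists/*

Each specialised image stays small and maintainable, while the heavy common dependencies live in one place.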
Furthermore, if you need a more complex environment setup with lots of dependencies and heavy provisioning, I suggest you take a look at provisioning tools such as Chef or Puppet.
Docker linking is about linking microservices, that is, separate processes, and has no relation to your question as far as I can see.
There is no out-of-the-box facility to compose separate Docker images into one container, which is what you are calling 'linking' in your question.
If you don't want that giant monolithic image, you might consider using provisioning tools à la Puppet, Chef or Ansible together with Docker. One example here. There you might, in theory, reuse existing recipes/playbooks for the libraries you need. I would be surprised, though, if this approach turned out to be much easier than maintaining your "big monolithic" Dockerfile.
I'm working on a Heroku app built in Python, and I can't find a recommended way to add a step to deployment for concatenating/processing/minifying JavaScript and CSS assets. For example, I might like to use tools like r.js or less.
I've seen something called "collectstatic" that Heroku knows to run for Django apps, but my application is using web.py, not Django.
One less-than-perfect approach would be to use these tools locally, on my development machine, to produce combined/compressed static assets. I could then check those compiled files into the git repository and push them to Heroku.
Is there any support for this kind of step built into Heroku? What is the best way of handling JavaScript/CSS files for Heroku web apps in Python?
Using buildpack-multi, Heroku lets you run multiple buildpacks. You can either create your own buildpack that does only the asset compilation you need, or find one that already does it, and layer it on top of the Python buildpack with buildpack-multi.
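As a sketch of how buildpack-multi is typically wired up (the buildpack choices below are assumptions about what your asset pipeline needs), you add a .buildpacks file at the repo root listing one buildpack per line and point Heroku at the multi buildpack:

    # .buildpacks -- Node.js first so tools like r.js or less are available, then Python
    https://github.com/heroku/heroku-buildpack-nodejs
    https://github.com/heroku/heroku-buildpack-python

    # tell Heroku to use the multi buildpack
    heroku config:set BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git

The asset-compilation step itself then runs from whatever hook the chosen buildpack provides.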
I would generally recommend your less-than-perfect approach, especially if you have a small number of files.
Simplicity is always better than functionality.