Azure container instance with git redeployment scheme similar to Azure Web Service - python

I want to run a Python script continuously on a container instance. I could create a Docker container and update my private registry, but it seems like overkill to have to build a new image every time I change the source code. I like how Azure Web Apps can link to a git repo, automatically sync the source when it is updated, and re-deploy the app. Is it possible to do something like this out of the box without making a Python web app (Flask or similar)?
I could technically run my script in Flask and just have the web server do nothing (or even close the port), but this seems unnecessary.

Is it possible to do something like this out of the box without making a Python web app (Flask or similar)?
I am afraid there is no out-of-the-box way to do this.
Rebuilding the image when the code changes is the canonical approach. Running a Python script continuously on a Container Instance is different from Azure Web Apps: we have to build an updated image and push it to the private registry so the container can pick up the change.
Besides, if we build/deploy the Python app with a private agent, it is not wasteful at all if done right. The app's code should be COPY'd into your image as the final step. This means rebuilding will be very fast, as all other steps will be cached. If you only have a few kB of source code changes, the rebuild results in only a single new layer of a few kB. Stopping and starting containers is also very lightweight. There is nothing to worry about in following this approach.
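As an illustration only (a minimal sketch with assumed file names, not the asker's actual project), a Dockerfile ordered so that a source change invalidates nothing but the last, tiny layer:

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Dependencies first: this layer is rebuilt only when requirements.txt changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Source code last: editing the script only produces a new final layer of a few kB.
COPY main.py .

CMD ["python", "main.py"]
```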
But for the hosted agent it is indeed a problem; there is a user voice on the Developer Community and a topic on GitHub about it.
Hope this helps.

Related

What do I need to consider regarding the longevity of my Bokeh app?

I successfully created a Bokeh server application (that is started with bokeh serve MyAppFolder). I also managed to package the app in a docker image, with the correct port exposed. When I run a container with that image, I can access my webpage and everything works fine. For now.
Will this image still work in, say, 5 years? Are there things I should consider to improve the longevity?
I think the biggest potential problem is changing dependencies in the Python libraries. This should not be an issue, because the Docker image freezes everything in place.
When we create standalone html documents from bokeh, we can see that the html contains references to external sources (stylesheets and javascript, e.g. https://cdn.bokeh.org/bokeh/release/bokeh-2.4.3.min.js).
I assume these references also exist in a Bokeh server application, and that they are downloaded every time the Bokeh server is started.
Should I be worried that these links might break someday?
Is there any way to package those sources into my docker image?
Any other helpful guidance?
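For standalone documents at least, I believe the JS/CSS can be embedded into the output instead of being referenced from the CDN; a minimal sketch (I am not sure whether an equivalent option exists for bokeh serve):

```python
# Hypothetical standalone example: bundle BokehJS into the HTML file itself,
# so nothing is fetched from cdn.bokeh.org when the page is opened.
from bokeh.plotting import figure
from bokeh.resources import INLINE
from bokeh.embed import file_html

p = figure(title="demo")
p.line([1, 2, 3], [1, 2, 3])

html = file_html(p, INLINE, "demo")   # INLINE embeds the scripts/stylesheets inline
with open("demo.html", "w") as f:
    f.write(html)
```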

How to load files dynamically on launch with docker?

I’m working on a project that will need to load python code files dynamically from GitHub on launch. Here’s what it needs to look like:
User asks us to launch an instance for them and provides us with a GitHub url
We have an existing Docker image with our own Python code (a server) that will be using those files from GitHub
We need to launch the container with our own code, but sub in the parts that we got from the user's GitHub, basically creating a server with half our code and half user code
In other words, we need to launch a container that has some pre-planned code from us and some dynamic code from the user.
Any ideas how to do this? I've seen many examples of Dockerfiles that load code from GitHub, but I'm having a hard time figuring out how to make it half our code and half code pulled dynamically from GitHub at run time.
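One pattern that seems to fit (a hedged sketch; the module names, paths and the USER_REPO_URL variable are all assumptions, not an established API) is to keep your own server in the image and have the entrypoint clone the user's repository when the container starts:

```python
# entrypoint.py - hypothetical sketch: our code ships in the image,
# the user's code is fetched at launch, e.g.
#   docker run -e USER_REPO_URL=https://github.com/some-user/handlers.git our-image
import importlib
import os
import subprocess
import sys

USER_CODE_DIR = "/app/user_code"          # assumed location for the cloned repo
repo_url = os.environ["USER_REPO_URL"]    # injected per customer at run time

# Fetch the user's code when the container starts instead of baking it into the image.
subprocess.run(["git", "clone", "--depth", "1", repo_url, USER_CODE_DIR], check=True)
sys.path.insert(0, USER_CODE_DIR)

# Assumed convention: the user's repo exposes a module named `handlers`
# that our server plugs in next to its own code.
user_handlers = importlib.import_module("handlers")

from myserver import run_server           # our pre-built server (hypothetical module)
run_server(extra_handlers=user_handlers)
```

The image then only needs git installed and an ENTRYPOINT pointing at this script; which half is "ours" and which half is "theirs" is simply a question of what the entrypoint clones and imports.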

Is distributing python source code in Docker secure?

I am about to decide on a programming language for the project.
The requirement is that some of the customers want to run the application on isolated servers without external internet access.
To do that I need to distribute the application to them and cannot use a SaaS approach running on, for example, my own cloud (which is what I'd prefer to do...).
The problem is that if I decide to use Python for developing this, I would need to provide the customer with easily readable code, which is not really what I'd like to do (of course, I know about all the "do you really need to protect your source code" kinds of questions, but that's out of scope for now).
One of my colleagues told me about Docker. I can find dozens of answers about Docker container security. The problem is that all of it is about protecting (isolating) the host from code running in the container.
What I need to know is whether the Python source code in the Docker image, running in a Docker container, is secured from access: can a user in some way (it doesn't need to be easy) access that Python code?
I know I can't protect everything, and I know it is possible to decompile/crack anything. I just want to know whether accessing my code inside Docker is hard enough that I can take the risk.
Docker images are an open and documented "application packaging" format. There are countless ways to inspect the image contents, including all of the python source code shipped inside of them.
Running applications inside of a container provides isolation from the application escaping the container to access the host. They do not protect you from users on the host inspecting what is occurring inside of the container.
Python programs are distributed as source code. If it can run on a client machine, then the code is readable on that machine. A docker container only contains the application and its libraries, external binaries and files, not a full OS. As the security can only be managed at OS level (or through encryption) and as the OS is under client control, the client can read any file on the docker container, including your Python source.
If you really want to go that way, you should consider providing a full virtual machine to your client. In that case, the VM contains a full OS with its account-based security (administrative account passwords on the VM can be different from those of the host). It is far from plain sailing, though, because it means the client has to be able to set up or adapt networking on the VM, among other problems...
And you should be aware that the client's security officer could give a firm NO when it comes to running an uncontrolled VM on their network. I would never accept it.
Anyway, as the client has full access to the VM, really securing it will be hard if possible at all (disabling booting from an additional device may not even be possible). It is an accepted principle in security that if the attacker has physical access, you have lost.
TL/DR: It is not the expected answer, but just don't. If you sell your solution you will have a legal contract with your customer, and that kind of problem should be handled at a legal level, not a technical one. You can try, and I have even given you a hint, but IMHO the risks are higher than the gain.
I know it's been more than 3 years, but... looking for the same kind of solution, I think that including compiled Python code - not your source code - inside the container would make it a challenging trial for someone trying to access your valuable source code.
If you run pyinstaller --onefile yourscript.py you will get a compiled single file that can be run as an executable. I have only tested it on a Raspberry Pi, but as far as I know it's the same for, say, Windows.
Of course anything can be reverse engineered, but hopefully it won't be worth the effort to the regular end user.
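A hedged sketch of how that could be combined with Docker via a multi-stage build (image names and file names are illustrative, and the binary must be built on a base compatible with the runtime image):

```dockerfile
# Build stage: turn the script into a single self-contained executable.
FROM python:3.11-slim AS build
RUN apt-get update && apt-get install -y --no-install-recommends binutils \
    && pip install pyinstaller
WORKDIR /src
COPY yourscript.py .
RUN pyinstaller --onefile yourscript.py     # output lands in /src/dist/yourscript

# Runtime stage: ship only the compiled binary, not the .py source.
FROM debian:bookworm-slim
COPY --from=build /src/dist/yourscript /usr/local/bin/yourscript
ENTRYPOINT ["/usr/local/bin/yourscript"]
```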
Using a "container" to protect our code from people we don't want to access it could be a solution, but the problem is that Docker is not a secure container. As root on the host machine has the most powerful control over a Docker container, we don't have any method to prevent root from accessing the inside of the container.
I just have some ideas about a secure container:
Build a container from an init file like a Dockerfile, where a password must be set when the container is created;
once the container is built, we have to use the password to access the inside, including reading/copying/modifying files;
all the files stored on the host machine should be encrypted;
no "retrieve password" or "--skip-grant" mode is offered, which means nobody can access the data inside the container if you lose the password.
If we had a trustworthy container like that, in which we could run a Tomcat or Django server, code obfuscation would not be necessary.

Create a Django patch which update several instances

I'm asking a question about my Django application updates.
I'm working on my Django development server and I would like to know how I could create a patch which will update my different Django projects installed on different servers.
For example:
On my dev server I add a new query to my script. I would then like to add this new query to all my virtual instances by updating my script rather than copy/pasting the modifications. That might work for one query, but if I have 200 rows, copy/pasting will take far too long.
Instead, I want to launch a patch which makes all the updates without manually copy/pasting rows.
How is it possible to do that?
Thank you so much
You should probably consider migrating your project to a version control system.
Then every time you change something in your local copy of the code and push the changes to the repository, you can fetch/rebase/pull them wherever you want (be it your server or another computer of yours), and your patches will be applied without copy/pasting!
Some Git hosting services to consider:
GitLab, which allows the creation of free private repositories
GitHub, the "old reliable", but with no free private repositories
Bitbucket (I don't use it myself, but it is widely used!)
Good luck :)

How do I run a Django 1.6 project with multiple instances running off the same server, using the same db backend?

I have a Django 1.6 project (stored in a Bitbucket Git repo) that I wish to host on a VPS.
The idea is that when someone purchases a copy of the software I have written, I can type in a few simple commands that will take a designated copy of the code from Git, create a new instance of the project with its own subdomain (e.g. <customer_name>.example.com), and create a new Postgres database (on the same server).
I should hopefully be able to create and remove these 'instances' easily.
What's the best way of doing this?
I've looked into writing scripts using some sort of combination of Supervisor/Gunicorn/Nginx/Fabric etc. Other options could be something more serious like using Docker or Vagrant. I've also looked into various PaaS options too.
Thanks in advance.
(EDIT: I have looked at the following services/things: Dokku (can't use Heroku due to data constraints), Vagrant (inc Puppet), Docker, Fabfile, Deis, Cherokee, Flynn (under dev))
If I was doing it (and I did a similar thing with a PHP application I inherited), I'd have a fabric command that allows me to provision a new instance.
This could be broken up into the requisite steps (check-out code, create database, syncdb/migrate, create DNS entry, start web server).
I'd probably do something sane like use the DNS entry as the database name, or at least use a reversible function to derive one from the other.
You could then string these together to easily create a new instance.
You will also need a way to tell the newly created instance which database and domain name it needs to use. You could have the provisioning script write some data to a file in the checked-out repository that is then used by Django in its initialisation phase.
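A minimal sketch of what such a Fabric task could look like (hypothetical host names, paths and commands, written against the Fabric 2.x API; a Django 1.6-era setup would more likely use Fabric 1.x's fabric.api, but the shape is the same):

```python
# fabfile.py - hypothetical provisioning sketch, not a drop-in script.
from fabric import task

@task
def provision(c, customer):
    """Create a new instance for <customer>.example.com on the target host."""
    instance_dir = f"/srv/instances/{customer}"
    db_name = customer.replace("-", "_")   # reversible mapping from subdomain to DB name

    # Check out the code for this customer (repository URL is an assumption).
    c.run(f"git clone git@bitbucket.org:me/myproject.git {instance_dir}")

    # Create a dedicated Postgres database on the same server.
    c.run(f"createdb {db_name}")

    # Record which database/domain this instance should use; Django reads it at start-up.
    c.run(f"echo 'DB_NAME={db_name}' > {instance_dir}/instance.env")
    c.run(f"echo 'DOMAIN={customer}.example.com' >> {instance_dir}/instance.env")

    # Apply the schema and start the app server (Django 1.6 still uses syncdb).
    with c.cd(instance_dir):
        c.run("python manage.py syncdb --noinput")
        c.run("gunicorn myproject.wsgi --daemon")
```

Run as something like fab -H vps.example.com provision --customer=acme; the DNS entry and web server configuration steps are left out because they depend on your provider and stack.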
