Bamboo cannot detect repository commit when Flask application is running - Python

I am setting up a Bamboo plan with two tasks:
Check out the source code from Git
Run the Flask (Python) application
I want Bamboo to execute the above plan whenever a new commit happens in the Git repository.
I have configured my project as per the Bamboo documentation.
But after the second task (the Python application) starts executing, Bamboo no longer detects commit changes and does not execute the tasks.
It only works if all tasks are stopped.

Bamboo itself is running the Flask application, not your system. As a result, the Bamboo build never finishes, and all other Bamboo threads related to this build plan are locked. Bamboo tasks generally run until they receive an exit code, which will never happen while your Flask app is running.
Instead of attempting to run the code from Bamboo, you should run the Flask app outside of Bamboo. You can then trigger a reload of your Flask app from within Bamboo on source code changes. This will require:
Have Bamboo change detection set up to trigger on code changes (you have this, and it sounds like it is working even though it is currently blocked).
Have a task where you check out the source code - but check it out to the directory where you are going to be running the Flask app.
Configure your Flask application to watch this source code folder outside of Bamboo. When the source code is updated, it will reload the app. The Flask documentation explains this, but you can also do it with this one line:
$ FLASK_APP=main.py FLASK_DEBUG=1 python -m flask run
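Equivalently, you can enable the reloader from Python code. A minimal sketch (the module name main.py and the route are assumptions, not from the original setup):
# main.py - a minimal sketch; debug=True turns on Werkzeug's auto-reloader,
# which restarts the dev server whenever a source file changes
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from the auto-reloading dev server'

if __name__ == '__main__':
    app.run(debug=True, use_reloader=True)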
There are several good answers here on SO that go over how to reload your Flask app with the latest code changes:
Auto reloading python Flask app upon code changes
How to reload python module in flask?

Resolved by using Docker inside Bamboo. It is working fine.
I did the following:
Checked out the source code from the repository
Created a Docker container as a task in Bamboo
Ran the Docker container using Bamboo
Installed the Python dependencies via the Dockerfile
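For reference, a minimal Dockerfile sketch for this kind of setup (the entrypoint app.py and the requirements.txt file are assumptions, not taken from the original plan):
# Dockerfile - hedged sketch: install Python dependencies, then run the app
FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]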

Am I using Python Flask's built-in server?

I am building a back end in Python via the Python Flask application from IBM Cloud/Bluemix. I have heard/read a lot of people complaining that Flask's built-in server isn't good for production. But how do I know whether the application uses the Flask built-in server or whether IBM sets up something else? Is there a simple way to see this in the code?
Deploying the Flask boilerplate app from the IBM cloud catalogue will indeed deploy a Flask application running on the Flask dev webserver.
You will need to alter the application if you want to run a production WSGI server.
I work for IBM and am in this stuff all day every day.
If you want to verify this, SSH into your application container on Cloud Foundry with the bash command
cf ssh <yourappnamehere>
You will need to have either the Bluemix or Cloud Foundry CLI installed and be logged in to the relevant endpoint before submitting this command.
It will open a bash shell in your application container, and you can cd around and open and/or download your project files for inspection.
This line:
app = Flask(__name__)
is a sure-fire way to know that you are running a Flask web application.
If you are concerned with which WSGI server your application is running under, checking your Procfile (you should see this when SSHing into your container) will show you which command starts your application. If the command is
python <yourapp>.py
then you are running the dev server. Otherwise, you would be running some other Python file, most likely via the WSGI server's command rather than the python command, which would import your application as a dependency.
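For comparison, a production Procfile line typically invokes the WSGI server directly and imports your app as a module. A hedged sketch using Gunicorn (the module name server and the app object name are assumptions):
web: gunicorn server:app --bind 0.0.0.0:$PORT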
You can also take a look at whether or not any WSGI server libraries were downloaded during the compilation of your droplet, and what command was used to start your application with
cf logs <yourappname> --recent
after deploying it.
Or, you can just believe me that the boilerplate deploys a Flask app under a Flask dev server.
A tutorial on running Flask on a different WSGI server:
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-14-04

Docker vs old approach (supervisor, git, your project)

I've been on Docker for the past few weeks and I can say I love it and I get the idea. But what I can't figure out is how I can "transfer" my current setup to a Docker solution. I guess I'm not the only one, and here is what I mean.
I'm a Python guy, more specifically Django. So I usually have this:
Debian installation
My app on the server (from a Git repo)
Virtualenv with all the app dependencies
Supervisor that handles Gunicorn, which runs my Django app
The thing is, when I want to upgrade and/or restart the app (I use Fabric for these tasks), I connect to the server, navigate to the app folder, run git pull, and restart the Supervisor task that handles Gunicorn, which reloads my app. Boom, done.
But what is the right (better, more Docker-ish) approach to this setup when I use Docker? Should I somehow connect to a bash shell inside the Docker container every time I want to upgrade the app and run the upgrade there, or (from what I saw) should I expose the app into a folder outside of the Docker image and run the standard upgrade process?
Hope you get the confusion of an old-school dude. I bet the Docker guys have thought about this.
Cheers!
For development, Docker users will typically mount a folder from their build directory into the container at the same location the Dockerfile would otherwise COPY it to. This allows for rapid development, where at most you need to bounce the container rather than rebuild the image.
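A hedged sketch of that development-time bind mount (the image name, paths, and port are assumptions):
docker run -d --name myproject -p 8000:8000 -v "$(pwd)":/app myproject-image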
For production, you want to include everything in the image and not change it: only persistent data goes in volumes; your code is in the image. When you make a change to the code, you build a new image and replace the running container in production.
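A hedged sketch of that production flow (the image name and tag are assumptions):
docker build -t myproject:v2 .
docker stop myproject && docker rm myproject
docker run -d --name myproject -p 8000:8000 myproject:v2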
Logging into the container and manually updating things is something I only do to test while developing the Dockerfile, not to manage a running application.

Deploying an Angular project & Python services on AWS

We have an AngularJS project which uses Python web services.
As of now, we have pushed both codebases to an AWS EC2 instance.
And we run the following commands to keep them up forever:
nohup grunt serve & (for the Angular project)
nohup python Sample.py & (for the Python services)
Whenever any code changes are made, we git pull and run the above commands again.
I know this approach cannot be used for production, as Grunt should only be used for development.
Can someone suggest the best way to streamline the process, right from the servers to deployment?
What you can do is build the project before pushing the code and use forever to run your Node project, so that when you take a pull you don't need to run the build command again. The only case when you need to run it again is when your index page has some changes.
Make sure you run your app behind a web server.
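A hedged sketch of that workflow (the file names come from the question; forever's -c flag, which lets it supervise non-Node scripts, is the assumption here):
grunt build                         # build the Angular app once, producing static output
forever start -c python Sample.py   # keep the Python service alive instead of nohup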

Running simple python script continuously on Heroku

I have a simple Python script which I would like to host on Heroku and run every 10 minutes using the Heroku Scheduler. Can someone explain what I should type in the rake command field of the scheduler and how I should change the Heroku Procfile?
Sure, you need to do a few things:
Define a requirements.txt file in the root of your project that lists your dependencies. This is what Heroku will use to detect that you're using a Python app.
In the Heroku Scheduler add-on, just define the command you need to run to launch your Python script. It will likely be something like python myscript.py.
Finally, you need some sort of web server that will listen on the proper Heroku PORT; otherwise, Heroku will think your app isn't working and it will end up in the 'crashed' state, which isn't what you want. To satisfy this Heroku requirement, you can run a really simple Flask web server like this...
Code (server.py):
from os import environ
from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # Bind to the port Heroku assigns via the PORT environment variable
    app.run(host='0.0.0.0', port=int(environ.get('PORT', 5000)))
Then, in your Procfile, just say: web: python server.py.
And that should just about do it =)
If you use a free [unverified*] account on Heroku (so you cannot install add-ons), then instead of using the Heroku Scheduler, use time.sleep(n) inside your script. You don't need Flask or any server in this case; just place the script, say, inside a folder Scripts (in the default app/project from Heroku) and add to your Procfile: worker: python script.py. Of course, replace script.py with the path to your script, including the name, e.g. worker: python Scripts/my_script.py
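A minimal sketch of such a worker script (the task body is a placeholder assumption):
# Scripts/my_script.py - loop forever, doing the work every 10 minutes
import time

def do_task():
    # placeholder: put your actual work here
    print('running scheduled task')

while True:
    do_task()
    time.sleep(600)  # sleep 10 minutes between runs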
Note: If your script uses third-party modules, say bs4 or requests, you need to install them with pipenv install MODULE_NAME or create a requirements.txt and place it where manage.py, Procfile, Pipfile, etc. are. Then add each module to that requirements.txt:
requirements.txt:
MODULE_NAME==MODULE_VERSION
You can check the installed versions with pip freeze | grep MODULE_NAME
Finally, deploy to the Heroku server using Git and run the following command:
heroku ps:scale worker=1
That's it! The bot/script is running; check it in the logs:
heroku logs --tail
Source: https://github.com/michaelkrukov/heroku-python-script
unverified* - "To help with abuse prevention, provisioning an add-on requires account verification. If your account has not been verified, you will be directed to visit the verification site." This redirects to credit card info. However, you can still have a free account; you just will not be able to use certain options reserved for verified users, such as installing add-ons: https://devcenter.heroku.com/articles/getting-started-with-python#provision-add-ons

Run custom Python script before appcfg.py update runs

Is it possible to run some Python script every time I run the deployment process with appcfg.py? I need it to copy some files from an external source into my app folder before uploading it to GAE. Thanks!
I briefly checked the sources of appcfg.py, the script that deploys the application to Google App Engine, but I didn't find a place where a pre-deploy hook can be defined.
I believe that modifying appcfg.py itself would not be maintainable and would be a bit of an overkill.
You should create a simple deployment script and call your command from the script.
For example, you can create a simple Makefile with only one target that does what you want:
deploy:
	your-copy-command
	/path/to/gae-devkit/appcfg.py update .
Running make deploy will execute the command that copies the external files and then call the Google App Engine deployment tool. (Note that Makefile recipe lines must be indented with a tab.)
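If you'd rather keep the whole hook in Python, a hedged sketch of an equivalent wrapper script (all paths are assumptions):
# deploy.py - copy external files into the app folder, then deploy
import shutil
import subprocess

# assumption: one external file to copy; adapt to your real sources
shutil.copy('/external/source/config.json', './config.json')

# run the standard App Engine deployment
subprocess.check_call(['/path/to/gae-devkit/appcfg.py', 'update', '.'])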
