Deploying an Angular project & Python services on AWS

We have an AngularJS project that uses Python web services.
As of now we have pushed both codebases to an AWS EC2 instance,
and we run the following commands to keep them up:
nohup grunt serve & (for the Angular project)
nohup python Sample.py & (for the Python services)
Whenever code changes are made, we git pull and run the commands above again.
I know this approach cannot be used for production, since grunt serve is meant for development only.
Can someone suggest the best way to streamline the process, from the servers to deployment?

What you can do is build the code before pushing and use forever to run your Node project; that way, after a pull you don't need to run the command again. The only case where you need to restart is when your index page changes.
Make sure you run your app behind a web server.
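As a sketch of that "behind a web server" setup (the domain, paths, and backend port here are all hypothetical): build the Angular app once with `grunt build`, let nginx serve the static output, and proxy API calls to the Python service. A minimal nginx server block might look like:

```nginx
server {
    listen 80;
    server_name example.com;          # hypothetical domain

    # serve the built Angular assets (the output of `grunt build`)
    root /var/www/myapp/dist;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    # proxy API calls to the Python service (assumed to listen on 8000)
    location /api/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }
}
```

With this layout, a git pull plus a rebuild of the static assets updates the frontend without restarting anything; only the Python backend process needs a supervisor.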

Related

How can I deploy a Python script that runs 24/7?

I am new to the deployment process for Python. I have been reading about things like Docker (I know a little about it), but I haven't found any concrete guides to deploying Python scripts; I see plenty of examples for Flask and Django applications. My exact use case is a Kafka consumer that needs to run 24/7. Would I just drop my project on an EC2 instance and run the Docker command or the Python script from the command line? If not, how can I deploy my Python script?
Edit: I have tried writing a Dockerfile that just runs my script. Does that mean I would just drop my project on an EC2 instance and build and run the image there? This sounds wrong to me, but I'm not entirely sure.
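For reference, a Dockerfile for a long-running consumer script really can be this small (the script name and dependency file below are assumptions, not from the original post):

```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY consumer.py .

# The container's main process is the consumer itself; a restart
# policy (e.g. `docker run --restart unless-stopped`) keeps it up 24/7.
CMD ["python", "consumer.py"]
```

Dropping the project on an EC2 instance and running the image there is not wrong; it is just the most manual option. The same image also runs unchanged on managed services such as ECS, where restarts and host maintenance are handled for you.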

Deploy a stand-alone python script on PaaS service

I have a Python script that is supposed to run once every few days to annotate some data in a remote database.
Which PaaS services (GAE, Heroku, etc.) allows for a stand-alone Python script to be deployed and executed via some sort of cron scheduler?
GAE has a feature called cron jobs and Heroku has Heroku Scheduler. Both are fairly easy to use and configure; check the documentation for each. As I have no other information about what you want to do, I can't say whether one would suit you better than the other.
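To give a feel for how little configuration this takes: on GAE the schedule lives in a cron.yaml file next to your app. The URL and interval below are made-up placeholders:

```yaml
cron:
- description: "annotate data in the remote database"
  url: /tasks/annotate
  schedule: every 72 hours
```

GAE then issues an HTTP GET to that URL on the given schedule, so the script is exposed as a handler rather than invoked from a shell. Heroku Scheduler is the inverse: it runs a command (e.g. `python annotate.py`) from a web dashboard instead of a config file.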

What do you use for running scheduled tasks in Production for Python?

The thing is, I read a post describing best practices for setting up code to run at a specified interval using the Python library APScheduler. It works perfectly fine in a test environment when I run the application from the command prompt.
However, I come from a background where most of my projects were university-level and never ran in production, but this one I would like to. I have access to AWS and can configure any kind of server there, and I am open to other options as well. It would be great to get a head start on what to look at to run this application as a service on a server or remote machine, without having to constantly monitor it and issue commands at a prompt.
I do not have any experience running Python applications in production, so any input would be appreciated. Also, the only way I know to execute this code is through the AWS CLI, but that session ends once I close the CLI, so it does not seem like the right approach; any help on that end would be appreciated too.
The answer turned out to be very simple; it may not make a lot of sense and might not apply to everyone.
What I had was a Python Flask application, so I configured the app in a virtual environment using eb-virt on the AWS server, created an executable WSGI script, and ran it as a service using the mod_wsgi plugin for the Apache HTTP server. After that I was able to run my app.
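The question mentions APScheduler; just to illustrate the core idea of an interval job that can run detached from a terminal (under systemd, mod_wsgi, or similar), here is a stdlib-only sketch. The job function and the 60-second interval are placeholders:

```python
import threading

def run_periodically(func, interval, stop_event):
    """Call func every `interval` seconds until stop_event is set."""
    while not stop_event.is_set():
        func()
        # wait() returns early once stop_event is set, so shutdown is prompt
        stop_event.wait(interval)

# usage: run the loop in a daemon thread so the main program stays free
stop = threading.Event()
worker = threading.Thread(
    target=run_periodically,
    args=(lambda: print("job ran"), 60.0, stop),  # placeholder job, every 60 s
    daemon=True,
)
```

APScheduler wraps this same pattern with persistence, multiple schedulers, and cron-style triggers; the point here is only that nothing about an interval job requires an attached command prompt.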

Correct way to update live django web application

Before the actual problem, let me explain our architecture. We use git over SSH on our servers and have post-receive hooks enabled to set up the code, which is maintained in a separate folder. What we need is that whenever someone pushes code to the server, it runs tests and migrations and updates the live site. Currently, whenever the application's models change, it crashes.
We need the hook script to detect whether the code is proper (by proper I mean no syntax errors, etc.), then run migrations and update the running application with the new code, without downtime. We use nginx to proxy to the Django application, virtualenv with packages installed from a requirements.txt file, and gunicorn for deployment.
The bottom line is that if there is a failure at any point, the push should be rejected; if all tests are successful, it should run migrations against the database and start the new app.
One thought I had was to use two ports: one running the main application and another running the pushed code. If the pushed code passes its tests, change the upstream port in nginx and reload nginx. Please discuss the drawbacks of this approach, if any, and show a sample server-side hook script that rejects a push in case of failure.
Consider using Fabric. Fabric lets you write Pythonic deployment scripts: you can run the deployment on a remote server, create a fresh database, and check whether the migrations complete safely. Once everything looks good, your Fabric script can deploy to production; if anything fails, it can send you an email instead.
This makes your life simpler.
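On rejecting the push itself: a post-receive hook runs too late to refuse anything; the refusal has to happen in a pre-receive or update hook, which rejects the push simply by exiting non-zero. The gate logic boils down to the sketch below, where the actual test and deploy commands are placeholders you would replace (e.g. with `python manage.py test` and a migrate-plus-reload step):

```shell
# Sketch of the gate inside a pre-receive/update hook.
# $1: command that runs the test suite; $2: command that deploys.
check_and_deploy() {
    run_tests="$1"
    deploy="$2"
    if sh -c "$run_tests"; then
        sh -c "$deploy"
        echo "deployed"
    else
        echo "rejected" >&2
        return 1    # non-zero exit from the hook makes git refuse the push
    fi
}
```

In the two-port scheme from the question, the deploy command would be the step that rewrites the nginx upstream port and runs `nginx -s reload`.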

Is it possible to run mod_python Python Server Pages outside of the Apache server context?

I have experience coding PHP and Python but have until recently not used Python for much web development.
I have started a project using mod_python with Python Server Pages which does a very good job of what I need it to do. Amongst the advantages of this is performance; the web server does not spawn a new interpreter for each request.
The system will finally be deployed on a server where I am able to setup /etc/apache/conf.d correctly, etc.
However, during development and for automated regression testing I would like the ability to run the .psp scripts without having to serve using an Apache instance. For example, it is possible to run PHP scripts using the PHP cli, and I would like to run my .psp scripts from the shell and see the resulting HTTP/HTML on stdout.
I suppose this mode would need to support simulation of POST requests, etc.
Update
OK, after typing out all of that, I discovered the mod_python command line in the manual:
http://modpython.org/live/current/doc-html/commandline.html
I guess that would get me most of the way there, which is to be able to exercise my application as a local user without deploying to an Apache server.
I am leaving this question here in case anyone does a web search like I did.
The mod_python command line tool is one way to do this.
See http://modpython.org/live/current/doc-html/commandline.html
Essentially,
mod_python create /path/to/new/server_root \
--listen 8888 \
--pythonpath=/path/to/my/app \
--pythonhandler=mod_python.wsgi \
--pythonoption="mod_python.wsgi.application myapp.wsgi::application"
sets up a skeleton app.
Then you run it:
mod_python start /path/to/new/server_root/conf/httpd.conf
wget http://localhost:8888/path/to/my/app
I am still testing this...
Caveat: this functionality only seems to be available from mod_python 3.4.0 onward.
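The --pythonoption above points mod_python's WSGI handler at myapp.wsgi::application. A minimal myapp/wsgi.py providing that callable (this file is an assumed stand-in, not from the original post) could be:

```python
# myapp/wsgi.py -- hypothetical module supplying the "application"
# callable that the mod_python command above targets.
def application(environ, start_response):
    """Minimal WSGI app: every request gets the same plain-text body."""
    body = b"Hello from WSGI\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Because WSGI callables are plain functions of (environ, start_response), the same app can also be exercised in a regression test by calling it directly with a fake environ, with no Apache or mod_python involved at all.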
