I want to use the Twitter streaming API.
https://github.com/michaelbrooks/django-twitter-stream
This plugin works exactly the way I want, and it uses Python's tweepy library for the streaming.
This code is meant to be run from the command line, i.e.
./manage.py stream
That is fine for testing locally.
But I don't understand how to deploy this to my AWS server, where the rest of my Django project is already being served via Apache.
I am not sure whether some sort of cron job or Celery can be used in this scenario.
I have developed a Python Flask application (a REST API). Now I want to deploy this application on a client system (Windows 10 Professional). My client does not have any internet service.
Previously, I did this in Java: I built a .war file and deployed it in Tomcat on the client's system, and he was able to access the REST API.
Now I want to know whether there is a similar way to deploy a Python app on the client's system, so that when the system starts he is able to access my REST API.
Use PyInstaller:
pip install pyinstaller
Go to your project directory:
cd C:\Users\sandip\Desktop\MyPython
Then run:
pyinstaller --onefile HelloFlask.py
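(For reference, HelloFlask.py would be a small Flask app along these lines; the route and port below are placeholders, not part of the original answer:)

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/hello')
def hello():
    # a trivial endpoint so the bundled .exe has something to serve
    return jsonify(message='Hello from Flask')

if __name__ == '__main__':
    app.run(port=5000)

PyInstaller writes the single-file executable to the dist folder; that .exe can then be copied to the client machine and set to run at startup (for example via the Windows Task Scheduler).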
If you just want to make your REST APIs accessible to other users on the same network, you can do it without installing anything on the client side by replacing app.run() in your code with app.run(host='0.0.0.0'). By default a Flask app runs on localhost; changing it to the latter makes it run on your machine's IP address, thus making it accessible to all users on the same network. You can read more in Flask's documentation under the heading Externally Visible Server.
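Concretely, the change is only in how the app is started (the port shown here is just an example, not from the original answer):

if __name__ == '__main__':
    # listen on all interfaces so other machines on the LAN can reach the API
    app.run(host='0.0.0.0', port=5000)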
To deploy your app in production, you need a WSGI server; you can read about deployment of a Flask app here.
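As one illustration (not something the answer prescribes), the waitress WSGI server runs on Windows and can replace app.run(); the module and app names below are assumptions:

# pip install waitress
from waitress import serve
from HelloFlask import app

# hand the Flask app to a production WSGI server instead of the dev server
serve(app, host='0.0.0.0', port=8080)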
I am building a back end in Python via the Python Flask application from IBM Cloud/Bluemix. I have heard/read a lot of people complaining that Flask's built-in server isn't good for production. But how do I know whether the application uses the Flask built-in server or whether IBM sets up something else? Is there a simple way to see this in the code?
Deploying the Flask boilerplate app from the IBM cloud catalogue will indeed deploy a Flask application running on the Flask dev webserver.
You will need to alter the application if you want to run a production WSGI server.
I work for IBM and am in this stuff all day every day.
If you want to verify this, SSH into your application container on Cloud Foundry with the bash command
cf ssh <yourappnamehere>
You will need to have either the Bluemix or Cloud Foundry CLI installed and be logged in to the relevant endpoint before submitting this command.
It will open a bash shell in your application container, and you can cd around and open and/or download your project files for inspection.
This line:
app = Flask(__name__)
is a surefire way to know that you are running a Flask web application.
If you are concerned about which WSGI server your application is running under, checking your Procfile (you should see this when SSHing into your container) will show you which command starts your application. If the command is
python <yourapp>.py
then you are running the dev server. Otherwise, you would be running some other Python file, most likely via the WSGI server's own command rather than the python command, which would import your application as a dependency.
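For illustration, a Procfile that starts the dev server would contain a line like

web: python myapp.py

whereas one that hands the app to a production WSGI server such as gunicorn would look like

web: gunicorn myapp:app

(myapp and app are placeholder names here, not the actual files in your droplet).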
You can also take a look at whether any WSGI server libraries were downloaded during the compilation of your droplet, and what command was used to start your application, with
cf logs <yourappname> --recent
after deploying it.
Or, you can just believe me that the boilerplate deploys a Flask app under a Flask dev server.
A tutorial on running Flask on a different WSGI server:
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-14-04
I have a Python script that is supposed to run once every few days to annotate some data in a remote database.
Which PaaS services (GAE, Heroku, etc.) allow a stand-alone Python script to be deployed and executed via some sort of cron scheduler?
GAE has a feature called cron jobs and Heroku has the Heroku Scheduler add-on. Both are fairly easy to use and configure; you can check the documentation of both. As I do not have any other information on what you want to do, I don't know whether one would be more suitable for you than the other.
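As a sketch of the GAE side, a cron job is declared in a cron.yaml file that hits a handler in your app on a schedule; the URL and schedule below are placeholders, not something from your project:

cron:
- description: annotate data in the remote database
  url: /tasks/annotate
  schedule: every monday 09:00

On Heroku, the Scheduler add-on instead runs a command you give it (for example python annotate.py) at the interval you choose in its dashboard, without any extra config file.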
I'm hosting a website on AWS. It's a web interface with a SQL database. The website will be used to:
1. View the results of queries from the database
2. Insert data into the database
3. View the data and update it where needed.
The code and connections work fine when I run the application on localhost (Apache on my C drive), but we want to host it on AWS so that people around me can use it.
So, in AWS I uploaded the code to EC2 and installed Apache on it. All the HTML links are working, but the Python file simply displays its source code.
I'm guessing it has something to do with the shebang. Currently my code has the following shebang:
#!C:\Python27\python.exe
Can someone tell me whether it's the shebang or whether there is something else I need to do?
I have installed boto, but I am not sure what to do next. The AWS website and most of the forums talk about using Elastic Beanstalk. I want to host a fully functioning Python web app on AWS without using Elastic Beanstalk.
When Apache displays code, that is a clear sign that Apache is not configured properly to execute Python. You should check whether mod_python is installed and configured correctly.
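If mod_python is what you end up using, its Apache configuration typically looks something like this (a rough sketch, not a config verified against your server; the directory path is a placeholder):

<Directory /var/www/html>
    AddHandler mod_python .py
    PythonHandler mod_python.publisher
    PythonDebug On
</Directory>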
Also, #! is generally used with Linux, not Windows. If Apache/mod_python is installed and configured correctly, I can't imagine what code you'd have that would need #!, since the .py extension would be enough.
If your EC2 instance is indeed running Linux, and your code does indeed need #!, try:
#!/bin/python
OR
#!/usr/local/bin/python
(It depends on where the Python binary is; those are the most common locations.)
If your EC2 instance is running Windows, then "unless you are using Cygwin, Windows has no shebang support".
Hi, have you logged into your EC2 instance through its endpoint and then run your script from the command line? I have some experience with EC2 running Apache2, though my application was written in Java. Having previously used Python scripts, I was able to run them by logging into my EC2 instance; you can do this from the AWS Management Console. Hope this helps you somewhat.
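For reference, logging in and running the script by hand looks roughly like this (the key file, user name, and host are placeholders for your own values):

ssh -i my-key-pair.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
python /path/to/your_script.py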
The thing is, I read this post stating best practices for setting up code to run at a specified interval over a period of time using the Python library APScheduler. Now, it obviously works perfectly fine if I do it in a test environment and run the application from the command prompt.
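(For context, the kind of setup that post describes is roughly the following; the interval and job body are placeholders, not my actual code:)

from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler()

@sched.scheduled_job('interval', days=3)
def annotate():
    # placeholder for the job that annotates the remote database
    pass

sched.start()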
However, I come from a background where most of my projects were university level and never ran in production, but for this one I would like it to. I have access to AWS and can configure any kind of server on AWS, and I am open to other options as well. It would be great if I could get a head start on what to look at if I have to run this application as a service on a server or a remote machine, without having to constantly monitor it and intervene at the command prompt.
I do not have any experience running Python applications in production, so any input would be appreciated. Also, I do not know how to execute this code in production (except through the AWS CLI), but that session expires once I close my CLI, so that does not seem like the most appropriate way to do it; any help on that end would be appreciated too.
The answer was very simple, may not make a lot of sense, and might not be applicable to everyone.
What I had was a Python Flask application, so I configured the app in a virtual environment using eb-virt on the AWS server, then created an executable WSGI script, which I ran as a service using the mod_wsgi plugin for the Apache HTTP server. After that I was able to run my app.
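For anyone wondering what that WSGI script amounts to: it is just a small Python file that mod_wsgi loads and that exposes the Flask app under the name application. A rough sketch (the path and module name are placeholders, not the exact files from my project):

import sys

# make the project importable for mod_wsgi (path is a placeholder)
sys.path.insert(0, '/var/www/myproject')

# mod_wsgi looks for a module-level callable named `application`
from application import app as application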