Heroku install letsencrypt - su: must be run from a terminal - python

I am trying to create an SSL certificate for my website to get the green lock.
While researching how to do that (I've never done anything with SSL certificates before) I came across Let's Encrypt, but I can't figure out how to install it on my server.
My application is on Heroku, and I have a custom domain at a third-party web host. I point this domain to my Heroku application via a CNAME DNS record.
As far as I understand, the whole SSL setup has to happen on Heroku, because that is where the application lives.
I have tried a few things, none of which worked, but this attempt seems to be close:
I created a folder "letsencrypt" in my app locally
I logged in to Heroku via CMD
I pushed everything to Heroku: git push heroku master
I used heroku run bash to get a shell on the dyno
I entered the folder I had just created: cd letsencrypt
I cloned letsencrypt into this folder: git clone https://github.com/letsencrypt/letsencrypt
I changed into the cloned directory: cd letsencrypt
I ran ./letsencrypt-auto --help
Which gave me:
"sudo" is not available, will use "su" for installation steps...
Bootstrapping dependencies for Debian-based OSes...
su: must be run from a terminal
apt-get update hit problems but continuing anyway...
su: must be run from a terminal

Disclaimer: I have not tried this yet, but: the letsencrypt-auto script bootstraps its dependencies by installing system packages as root, which isn't possible inside a Heroku dyno, so running it from heroku run bash will keep failing with those su errors. SSL certificates for a custom domain on Heroku are configured through Heroku itself rather than from inside a dyno.
This seems to be a pretty comprehensive doc.

Related

How to Deploy Flask app on AWS EC2 Linux/UNIX instance

How to deploy a Flask app on an AWS Linux/UNIX EC2 instance, either
1> using Gunicorn, or
2> using an Apache server?
It's absolutely possible, but it's not the quickest process! You'll probably want to use Docker to containerize your Flask app before you deploy it, so it boils down to these steps:
Install Docker (if you don't have it), build an image for your application, and make sure you can start the container locally and the app works as intended. You'll also need to write a Dockerfile that sets your runtime, copies all your directories, and exposes port 80 (this will be handy for AWS later).
The command to build an image is docker build -t your-app-name .
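A minimal Dockerfile along the lines described above might look like this. This is only a sketch: the base image, the app.py entry point, and requirements.txt are assumptions about your app, not something from the question.

```
# Sketch of a Dockerfile for a Flask app served on port 80
# (python:3.9-slim, app.py, and requirements.txt are assumed names)
FROM python:3.9-slim
WORKDIR /app
# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the rest of the application code
COPY . .
# Expose port 80 for AWS later
EXPOSE 80
CMD ["python", "app.py"]
```

After building with docker build -t your-app-name . you can start it locally with docker run -p 80:80 your-app-name to check the app works as intended.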
Once you're ready to deploy the container, head over to AWS and launch an EC2 instance with the Amazon Linux 2 machine image. You'll be required to create a security key (.pem file) and move it somewhere on your computer; this acts as your credential to log in to your instance. This is where things differ depending on what OS you use. On Mac, you need to cd into the directory where the key is and restrict its permissions by running chmod 400 key-file-name.pem. On Windows, you have to go into the file's security settings and make sure only your account (ideally the owner of the computer) can use it, essentially setting it to private. At this point, you can connect to your instance from your command prompt with the command AWS gives you when you click "Connect to instance" on the EC2 dashboard.
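For what it's worth, the chmod 400 step can also be done from Python's standard library, which makes it clear what the mode means (read permission for the owner only). The .pem file name below is just a placeholder:

```python
import os
import stat

# Placeholder file standing in for the downloaded key (hypothetical name)
open("key-file-name.pem", "w").close()

# Equivalent of `chmod 400 key-file-name.pem`: read-only for the owner,
# no permissions for group or others
os.chmod("key-file-name.pem", 0o400)

mode = stat.S_IMODE(os.stat("key-file-name.pem").st_mode)
print(oct(mode))  # 0o400 on POSIX systems
```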
Once you're logged in, you can configure your instance to install docker and let you use it by running the following:
sudo amazon-linux-extras install docker
sudo yum install docker
sudo service docker start
sudo usermod -a -G docker ec2-user
Great, now you need to copy all your files from your local directory to your instance using scp (secure copy). The long way is to use this command for each file: scp -i /path/my-key-pair.pem file-to-copy ec2-user@public-dns-name:/home/ec2-user. Another route is to install FileZilla or WinSCP to speed up this process.
Now that all your files are in the instance, build the Docker image using the same command from the first step and run the container. If you go to the URL that AWS gives you, your app should be running on AWS!
Here's a reference I used when I did this for the first time; it might be helpful for you to look at too.

Fabric 2 automating deployment error when git pulling on remote server. Repository not found

I'm trying to automate my deployment with Fabric 2.
When I manually do a git pull through the command line on the remote server, everything works fine.
When I try to do the same with my Fabric/Invoke script, it does not allow me to pull.
It does, though, allow me to run git status and other commands.
The code:
# Imports
from fabric import Connection
from fabric.tasks import task
import os

# Here I pass my local passphrase:
kwargs = {'passphrase': os.environ["SSH_PASSPHRASE"]}

@task
def serverdeploy(c, branch="Staging"):
    con = Connection('myuser@myhost', connect_kwargs=kwargs)
    with con.cd("/home/user/repository/"):
        # Activate the virtual environment:
        with con.prefix("source ENV/bin/activate"):
            con.run("git pull origin {}".format(branch))
The results are:
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Notes:
I don't even get asked for a passphrase while doing the pull.
I have tried doing the pull without activating the environment but that didn't work either.
What could possibly be the problem?
Please place con.run("git pull origin {}".format(branch)) outside the with con.prefix("source ENV/bin/activate"): block.
Your code has nothing to do with the interpreter or the virtual env! Try that and it should work!
The most likely issue is that the user that you log in as has the proper ssh key setup for bitbucket.org, but the fabric connection user is different. You can test whether the setup is correct by using these two commands as the user that fabric connects as:
ssh -T git@bitbucket.org
ssh -T -i /path/to/private_key git@bitbucket.org
In order to fix this issue, copy the private key to the /home/myuser/.ssh directory and add an ssh config entry for bitbucket.org to /home/myuser/.ssh/config:
Host bitbucket.org
IdentityFile /path/to/private_key

Download Google App Engine Project

I've gotten appcfg.py to run; however, when I run the command it doesn't actually download the files. Nothing appears in the destination directory.
There was a mix-up and a lot of work got lost, and the only way to recover it is to download it from the host.
python appcfg.py download_app -A beatemup-1097 /home/chaserlewis/Desktop/gcloud
The output is
Host: appengine.google.com
Fetching file list...
Fetching files...
Then it just returns without having downloaded anything. The app is definitely hosted, so I'm not sure what else to do.
I am doing this from a different computer than the one I deployed from, if that matters. Unfortunately, I couldn't get appcfg.py to run on my Windows machine.
It might be due to the omitted version flag. Try the following:
Go to the App Engine versions page in the console and check the version of your app that is serving traffic. If you don't specify the -V flag, the appcfg command will try to download the default version, which isn't necessarily your latest version or the version serving traffic.
Add the -V flag to your command with the target version that you identified from the console.
python appcfg.py download_app -A beatemup-1097 -V [YOUR_VERSION] /home/chaserlewis/Desktop/gcloud

No module named 'flask' while using Vagrant

I am trying to set up Vagrant on my machine (Ubuntu 15.10, 64-bit), and I followed the steps mentioned here: link
I am getting a "no module named flask" error when I run app.py.
Am I missing something here? It's mentioned that all packages from requirements will be installed automatically, but I am not able to make it work.
Steps are as follows:
Getting started
Install Vagrant
Clone this repo as your project name:
git clone git@github.com:paste/fvang.git NEW-PROJECT-NAME
Configure project name and host name in ansible/roles/common/vars/main.yml:
project_name: "fvang"
host_name: "fvang.local"
Modify your local /etc/hosts:
192.168.33.11 fvang.local
Build your Vagrant VM:
vagrant up
Log into the VM via SSH:
vagrant ssh
Start Flask development server:
cd ~/fvang
python app/app.py
I am the author of the FVANG repo, but I don't have the rep to join your chat. I posted a response on the github issue, see here:
https://github.com/paste/fvang/issues/2
I think the Ansible provisioning script failed to complete due to changes in Ansible 2.0 (otherwise Flask would have been installed from requirements.txt). You can check which version of Ansible was installed by running ansible --version. I will be upgrading the scripts to 2.0 shortly.
Edit --
I just updated the repo to work with Ansible 2.0 and simplified a few things. Everything should work as expected now, give it a shot. You'll probably want to just vagrant destroy and vagrant up again.
A Vagrant machine is as new as a freshly installed operating system. You need to install each and every piece of software you need. Try this:
sudo pip install Flask
After installation, if you need to run the app, you need to uncomment Vagrant's IP in the Vagrantfile before accessing Vagrant's localhost; it generally turns out to be 192.168.33.10, on port 5000.

How do I deploy a python application to an external server?

I have written a Python script on my local laptop which uses several third-party packages. I now want to run my script regularly (via a cron job) on an external server.
The external server most likely does not have all the dependencies installed. Is there a way to package and deploy my Python script and its dependencies to ensure that it will run?
I have already tried to package the script as an exe, but failed to do so.
It's not clear what kind of third-party packages you have, but for those that were installed with pip, you can do this in your dev environment:
$ pip freeze > requirements.txt
And then you can install these packages in your production environment:
$ pip install -r requirements.txt
Ideally, you will already have a virtualenv on your production box. If not, it may be well worth reading about these before deploying your script.
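The freeze step can also be driven from Python itself, e.g. as part of a small deploy script. This is just a sketch, assuming pip is available for the interpreter running the script:

```python
import subprocess
import sys

# Equivalent of `pip freeze > requirements.txt`, pinned to the same
# interpreter that runs this script
freeze = subprocess.run(
    [sys.executable, "-m", "pip", "freeze"],
    capture_output=True, text=True, check=True,
)
with open("requirements.txt", "w") as f:
    f.write(freeze.stdout)

# On the production box you would then run:
#   pip install -r requirements.txt
```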
Just turn your computer into a server. Set up your router for port forwarding so that your server's contents are served when the router's public IP is entered. You can of course purchase a domain name to give that IP a human-readable URL.
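To illustrate the idea (for the actual app you'd run your Flask server instead), Python's standard library can serve a directory on a port that the router then forwards to; the port number here is an arbitrary choice:

```python
# Minimal sketch: serve the current directory on port 8000 with the stdlib,
# so port-forwarding the router to this machine exposes it
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
# Serve in a background thread so the script can keep doing other work
threading.Thread(target=server.serve_forever, daemon=True).start()
print("serving on port 8000")
```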
