When I am trying to load the Cloud9 IDE for my AWS Lightsail instance it gives me this error:
Installation Started
Package Cloud9 IDE 1
--------------------
Python version 2.7 is required to install pty.js. Please install Python 2.7 and try again. You can find more information on how to install Python in the docs: http://docs.aws.amazon.com/console/cloud9/python-ssh
exiting with 1
Failed Bash. Exit code 1
My Lightsail instance does have python 2.7.15 installed (when I do python --version). Does anyone know a solution to this issue?
Here's a walkthrough for connecting your AWS Cloud9 IDE to your AWS Lightsail instance (WordPress, Node, Python, etc.).
Go to https://lightsail.aws.amazon.com/ls/webapp/home/instances
Create an instance using UNIX/Linux (WordPress, Node, or whatever floats your boat), then click Create instance
Go to Networking and create a static IP for the instance
Go to manage instance and connect using the web based SSH shell
sudo apt-get update
sudo apt-get install -y python-minimal
sudo apt-get update
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.0/install.sh | bash
. ~/.bashrc
nvm install node
which node (should print something like => /home/bitnami/.nvm/versions/node/v11.13.0/bin/node)
curl -L https://raw.githubusercontent.com/c9/install/master/install.sh | bash
(or, if curl isn't available: wget -O - https://raw.githubusercontent.com/c9/install/master/install.sh | bash)
Go to https://us-west-2.console.aws.amazon.com/cloud9/home
Create a new environment using SSH
enter the username bitnami and the static IP of the instance from lightsail
Environment Path => /home/bitnami
Node Path => enter the output of the which node command from Lightsail (e.g. /home/bitnami/.nvm/versions/node/v11.10.0/bin/node)
At the bottom of the new cloud9 configuration, there's an SSH key, highlight and copy that.
Go back to the cloud terminal in lightsail =>
run vi ~/.ssh/authorized_keys
Add the Cloud9 SSH key two lines below the default key (see the sketch below)
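If you'd rather not edit the file in vi, here's a rough sketch of appending the key from the shell, assuming you saved the copied Cloud9 key as cloud9_key.pub (a hypothetical filename):
echo "" >> ~/.ssh/authorized_keys          # blank separator line
cat cloud9_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys           # SSH refuses keys with loose permissions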
Go back to your Cloud9 environment and click 'Create environment' once the SSH key has been added and saved
You should now be connected to your Lightsail instance through AWS Cloud9
I would like to activate the venv. I'm using a remote interpreter because PyCharm connects via SSH to a GCP VM. I used to activate the env with this command:
On Unix or macOS, using the bash shell: source /path/to/venv/bin/activate
Locally this is no trouble, but with the remote interpreter I do not know how to find the source. Could you please help me with this problem?
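For context, this is roughly what I am trying to do over SSH (user, host, and venv path are placeholders):
ssh user@gcp-vm-ip
# once on the VM, look for candidate venvs, then activate one:
find ~ -name activate -path "*/bin/*" 2>/dev/null
source /path/to/venv/bin/activate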
How do I deploy a Flask app on an AWS Linux/UNIX EC2 instance, using either:
1. Gunicorn, or
2. an Apache server?
It's absolutely possible, but it's not the quickest process! You'll probably want to use Docker to containerize your Flask app before you deploy it, so it boils down to these steps:
Install Docker (if you don't have it), build an image for your application, and make sure you can start the container locally and the app works as intended. You'll also need to write a Dockerfile that sets your runtime, copies all your directories, and exposes port 80 (this will be handy for AWS later).
The command to build an image is docker build -t your-app-name .
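Once the image builds, a quick local sanity check might look like this (your-app-name is the placeholder from above, and this assumes your Dockerfile exposes port 80):
docker run -d -p 80:80 your-app-name
curl http://localhost:80    # should return your app's response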
Once you're ready to deploy the container, head over to AWS and launch an EC2 instance with the Amazon Linux 2 machine image. You'll be required to create a security key (.pem file) and move it somewhere on your computer; this acts as your credential to log in to your instance. This is where things differ depending on what OS you use. On Mac, you need to cd into the directory where the key is and modify its permissions by running chmod 400 key-file-name.pem (see the sketch below). On Windows, you have to go into the security settings and make sure only your account (ideally the owner of the computer) can use this file, basically setting it to private. At this point, you can connect to your instance from your command prompt with the command AWS gives you when you click Connect to instance on the EC2 dashboard.
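On Mac/Linux that step looks roughly like this (the key filename and public DNS are placeholders for your own values):
cd /path/to/your/key
chmod 400 key-file-name.pem
ssh -i key-file-name.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com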
Once you're logged in, you can configure your instance to install docker and let you use it by running the following:
sudo amazon-linux-extras install docker
sudo yum install docker
sudo service docker start
sudo usermod -a -G docker ec2-user
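Note that the usermod group change only takes effect on a new login session; one way to pick it up immediately and verify (a sketch, not required):
newgrp docker
docker ps    # should list containers without sudo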
Great, now you need to copy all your files from your local directory to your instance using SCP (secure copy). The long way is to use this command for each file: scp -i /path/my-key-pair.pem file-to-copy ec2-user@public-dns-name:/home/ec2-user. Another route is to install FileZilla or WinSCP to speed up this process.
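If your project lives in a single directory, scp's recursive flag copies it in one go (paths and hostnames are placeholders):
scp -i /path/my-key-pair.pem -r ./my-app ec2-user@public-dns-name:/home/ec2-user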
Now that all your files are in the instance, build the Docker image using the same command from the first step and run it. If you go to the URL that AWS gives you, your app should be running on AWS!
Here's a reference I used when I did this for the first time; it might be helpful for you to look at too.
I use PyCharm Professional to develop python.
I am able to connect the PyCharm run/debug GUI to a local Docker image's Python interpreter and run local code using the Docker container's Python environment libraries, e.g. via the procedure described here: Configuring Remote Interpreter via Docker.
I am also able to SSH into AWS instances with PyCharm and connect to remote Python interpreters there, which maps files from my local project into a remote directory and again allows me to run a GUI stepping through remote code as though it were local, e.g. via the procedure described here: Configuring Remote Interpreters via SSH.
I have a Docker image on Docker Hub that I would like to deploy to an AWS instance, and then connect my local PyCharm GUI to the environment inside the remote container, but I can't see how to do this. Can anybody help me?
[EDIT] One proposal that has been made is to put an SSH server inside the remote container and connect my local PyCharm directly into the container via SSH, for example as described here. It's one solution, but it has been extensively criticised elsewhere - is there a more canonical solution?
After doing a bit of research, I came to the conclusion that installing an SSH server inside my container and logging in via the PyCharm SSH remote interpreter was the best thing to do, despite concerns raised elsewhere. I managed it as follows.
The Dockerfile below will create an image with an SSH server inside that you can SSH into. It also has anaconda/python, so it's possible to run a notebook server inside and connect to that in the usual way for Jupyter debugging. Note that it's got a plain-text password (screencast); you should definitely enable key login if you're using this for anything sensitive.
It will take local libraries and install them into your package library inside the container, and optionally you can pull repos from GitHub as well (register for an API key in GitHub if you want to do this so you don't need to enter a plain text password). It also requires you to create a plaintext requirements.txt containing all of the other packages you will need to be pip installed.
Then run the build command to create the image, and docker run to create a container from that image. In the Dockerfile we expose SSH through the container's port 22, so let's hook that up to an unused port on the AWS instance - this is the port we will SSH through. Also add another port pairing if you want to use Jupyter from your local machine at any point:
docker build -t your_image_name .
don't miss the . at the end - it's important!
docker run -d -p 5001:22 -p 8889:8889 --name=your_container_name your_image_name
NB: you will need to bash into the container (docker exec -it xxxxxxxxxx bash) and turn Jupyter on with jupyter notebook (see the example below).
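For example, using the container name from the run command above (this assumes Jupyter is in your requirements.txt; the flags are a common pattern for running it inside a container, not something this image configures for you):
docker exec -it your_container_name bash
jupyter notebook --ip=0.0.0.0 --port=8889 --allow-root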
Dockerfile:
FROM python:3.6
RUN apt-get update && apt-get install -y openssh-server
# Load an ssh server. Change root username and password. By default in debian, password login is prohibited,
# go into the file that controls this and make a change to allow password login
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN /etc/init.d/ssh restart
# Install git, so we can pull in some repos
RUN apt-get update && apt-get upgrade -y && apt-get install -y git
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
# Install the requirements and the libraries we need (from a requirements.txt file)
COPY requirements.txt /tmp/
RUN python3 -m pip install -r /tmp/requirements.txt
# These are local libraries, add them (assuming a setup.py)
ADD your_libs_directory /your_libs_directory
RUN python3 -m pip install /your_libs_directory
RUN cd /your_libs_directory && python3 setup.py install
# Adding git repos (optional - assuming a setup.py)
RUN git clone https://git_user_name:git_API_token@github.com/YourGit/git_repo.git
RUN python3 -m pip install /git_repo
RUN cd /git_repo && python3 setup.py install
# Cleanup
RUN apt-get update && apt-get upgrade -y && apt-get autoremove -y
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
I am running PyCharm 2017.2.3. I want to run my Python script on a remote EC2 instance as a sudo user through PyCharm. How do I achieve this?
Follow the steps below:
Go to File -> Settings -> Project Interpreter and add a new interpreter
Click on + to add a new python interpreter and then click on SSH interpreter
Provide your EC2 Public DNS in HOST and ubuntu as username
Click Next and add the private_key.pem file.
See this article for more details:
PyCharm setup for AWS automatic deployment
It looks like you can configure your python interpreter over SSH with the professional version of PyCharm.
Configuring Remote Interpreter + PyCharm
Finally found an answer after researching on the internet. We can use a script on the remote machine as a PyCharm interpreter. Create the following script on the remote machine and make sure it is executable.
#!/bin/bash
sudo /usr/bin/python "$@"
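For instance, assuming you save it as /home/ubuntu/sudo_python.sh (a hypothetical path), make it executable and sanity-check that it really runs Python as root:
chmod +x /home/ubuntu/sudo_python.sh
/home/ubuntu/sudo_python.sh -c 'import os; print(os.geteuid())'   # should print 0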
Now change the project interpreter in PyCharm to point to the script above on the remote machine. Every script you run locally now gets executed on the remote machine as a sudo user.
I am trying to set up Vagrant on my machine (Ubuntu 15.10, 64-bit) and I followed the steps mentioned here: link
I am getting an error saying no Flask found when I run app.py.
Am I missing something here? It's mentioned that all packages from requirements will be installed automatically, but I am not able to make it work.
Steps are as follows:
Getting started
Install Vagrant
Clone this repo as your project name:
git clone git#github.com:paste/fvang.git NEW-PROJECT-NAME
Configure project name and host name in ansible/roles/common/vars/main.yml:
project_name: "fvang"
host_name: "fvang.local"
Modify your local /etc/hosts:
192.168.33.11 fvang.local
Build your Vagrant VM:
vagrant up
Log into the VM via SSH:
vagrant ssh
Start Flask development server:
cd ~/fvang
python app/app.py
I am the author of the FVANG repo, but I don't have the rep to join your chat. I posted a response on the GitHub issue; see here:
https://github.com/paste/fvang/issues/2
I think the Ansible provisioning script failed to complete due to changes in Ansible 2.0. (otherwise Flask would have been installed from requirements.txt). You can check which version of Ansible was installed by running ansible --version. I will be upgrading the scripts to 2.0 shortly.
Edit --
I just updated the repo to work with Ansible 2.0 and simplified a few things. Everything should work as expected now, give it a shot. You'll probably want to just vagrant destroy and vagrant up again.
A Vagrant machine is as new as a fresh operating system. You need to install each and every piece of software you need. Try this:
sudo pip install Flask
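Or, since this repo ships a requirements.txt, install everything it expects in one go (assuming the file sits at the project root):
cd ~/fvang
sudo pip install -r requirements.txt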
After installation, if you want to run the app and reach it from your host machine, you need to uncomment Vagrant's private-network IP line in the Vagrantfile before accessing Vagrant's localhost; the IP is generally 192.168.33.10, and the Flask dev server runs on port 5000.
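The relevant line in the Vagrantfile looks like this (the IP in your template may differ):
config.vm.network "private_network", ip: "192.168.33.10"
After a vagrant reload, the app should then be reachable from your host at http://192.168.33.10:5000.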