I want to run commands remotely from one GCP instance to another, using a Python script on a Debian machine.
I know that the gcloud ssh command can do that with the subprocess module, but it creates a new key each time I run the command, so I don't know how to make this faster. Is there a way to operate with a service account, for example, on which I could set up permissions and keys for each machine in my GCP project?
Create an SSH key on your "source" machine:
ssh-keygen
And then add your public key to the SSH Keys tab of the Metadata page in the Google Cloud Console, under the Settings category of the Compute Engine section.
Then you should be able to log in to the other instance by running:
ssh [user@]other_instance_ip [optional command to execute]
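Since the question asked about doing this from Python, here is a minimal sketch using the subprocess module, assuming the key above is already in place; the user name and IP address are placeholders for your own instances:
import subprocess

# Run a remote command over the SSH key configured above.
# "user" and the IP address are placeholders.
result = subprocess.run(
    ["ssh", "user@10.128.0.3", "uname -a"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)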
I have an API (in Python) which has to alter files inside an EC2 instance that is already running. I've been searching the boto3 documentation, but could only find functions to start new EC2 instances, not to connect to an already existing one.
I am currently thinking of replicating the API's file-altering functions in a script inside the EC2 instance, and having the API simply start that script on the EC2 instance by accessing it using some sort of SSH library.
Would that be the correct approach, or is there some boto3 function (or in some of the other Amazon/AWS libraries) that allows me to start a script inside existing instances?
An Amazon EC2 instance is just like any computer on the Internet. It is running an operating system (eg Linux or Windows), and it has standard security built in. The fact that it is an Amazon EC2 instance has no impact.
So, the question really becomes: How do I run a command on a remote computer?
Typical ways of doing this include:
Connecting to the computer (eg via SSH) and running a command
Running a service on the computer that listens on a particular port (eg responding to an API request)
Using remote shell commands to run an operation on another computer
Fortunately, AWS offers an additional option: Use the AWS Systems Manager Run Command:
AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. A managed instance is any Amazon EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager. Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale. You can use Run Command from the AWS console, the AWS Command Line Interface, AWS Tools for Windows PowerShell, or the AWS SDKs. Run Command is offered at no additional cost.
Administrators use Run Command to perform the following types of tasks on their managed instances: install or bootstrap applications, build a deployment pipeline, capture log files when an instance is terminated from an Auto Scaling group, and join instances to a Windows domain, to name a few.
Basically, it is an agent installed on the instance (or, for that matter, on any computer on the Internet), and commands sent to the computer are executed by the agent. In fact, the same command can be sent to hundreds of computers if desired.
The AWS Systems Manager Run Command can be triggered by an API call, such as a program using boto3.
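For instance, a minimal boto3 sketch might look like the following; the instance ID and shell command are placeholders, and the instance must already be configured as a Systems Manager managed instance:
import boto3

ssm = boto3.client("ssm")

# Send a shell command to the instance via Run Command.
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],  # placeholder instance ID
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["sed -i 's/old/new/' /srv/app/config.txt"]},
)
command_id = response["Command"]["CommandId"]

# Once the command has had time to run, fetch its output.
output = ssm.get_command_invocation(
    CommandId=command_id,
    InstanceId="i-0123456789abcdef0",
)
print(output["Status"], output["StandardOutputContent"])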
Unless you have a specific service running on that machine that allows you to modify the mentioned files, I would attempt to log onto the EC2 instance as I would onto any other machine on the network.
You can access an EC2 machine via SSH using the paramiko or pexpect libraries.
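For example, a minimal pexpect sketch, assuming key-based authentication and a host that is already in known_hosts (the key path, host, and command are placeholders):
import pexpect

# Spawn ssh and wait for the remote command to finish.
child = pexpect.spawn(
    'ssh -i /path/to/key.pem ec2-user@example-host.compute.amazonaws.com "ls /srv"',
    timeout=30,
)
child.expect(pexpect.EOF)        # the session ends when the command completes
print(child.before.decode())     # everything printed before EOF
child.close()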
If you want to execute a script inside an existing EC2 instance, you could use the reference from the existing answer here: Boto Execute shell command on ec2 instance
IMO, to be able to start a script inside the EC2 instance, the script should already be present on the instance.
So I'm currently working on a load balancing python script. In this script I will need to update a file on a server. My plan is to have my python script call a bash script.
In that bash script I'd like to ssh into the server, execute an awk command on a file, then logout.
I can currently ssh into this server manually, because I've set up an ssh key (using Google Cloud Platform). But when I try to run a bash script that only executes
ssh username@externalIP
I get the error: Permission denied (publickey)
What am I missing here?
Why not use paramiko to connect via SSH? You can specify a key, as shown in this gist.
By using this, you can easily set which commands to execute on the server.
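In case the gist is unavailable, here is a minimal sketch of a key-based paramiko connection; the host, username, key path, and awk command are placeholders for your own setup:
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept unknown hosts
client.connect(
    "203.0.113.10",                                      # placeholder external IP
    username="username",
    key_filename="/home/me/.ssh/google_compute_engine",  # placeholder key path
)
stdin, stdout, stderr = client.exec_command("awk '{print $1}' /path/to/file")
print(stdout.read().decode())
client.close()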
Probably your private key is in your home directory and Python is spawning the bash process as a limited user. Try changing your script to include the private key explicitly; if it still doesn't work, you will have to copy the key and change its permissions.
To explicitly add the private key:
ssh -i /path/to/private/key user@host
I created an EC2 server on Amazon AWS and installed the HappyBase Python library for working with HBase.
I also created an HBase job cluster in EMR.
Then I tried to run the script on the first server on EC2:
import happybase
connection = happybase.Connection('….us-west-2.compute.amazonaws.com')
connection.open()
print(connection.tables())
But I get an error that the server is not found. As the host I used the HBase cluster's public DNS. What do I need to configure to work with the database created in EMR from another EC2 server using Python happybase?
Thanks.
Did you start the Thrift server on your cluster's master node? You can do it with
$ ssh -i <your-key.pem> hadoop@<master-node-dns>
$ hbase-daemon.sh start thrift
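Once Thrift is up, the connection from the question should work; happybase talks to the Thrift server, which listens on port 9090 by default, so the master's security group must also allow inbound traffic on that port. A minimal sketch (the host is a placeholder):
import happybase

# Connect to the Thrift server on the EMR master node.
connection = happybase.Connection(
    "ec2-xx-xx-xx-xx.us-west-2.compute.amazonaws.com",  # placeholder master DNS
    port=9090,  # Thrift's default port
)
connection.open()
print(connection.tables())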
For this to work, HBase must be configured on your cluster. You have to choose the HBase job type if configuring a job from the visual interface.
In the old management console, ensure that the step Start HBase is present under the Steps tab, and the bootstrap action Install HBase is under Bootstrap Actions.
In the new console, in the Cluster Details window there is an Applications section; check that a row like
Applications:HBase 0.92.0
is present there.
When everything is done correctly, SSH to your master instance and check for the hbase-daemon script with
~$ which hbase-daemon.sh
/home/hadoop/bin/hbase-daemon.sh
I understand almost nothing about how EC2 works. I created an Amazon Web Services (AWS) account. Then I launched an EC2 instance.
And now I would like to execute a Python script on this instance, and I don't know how to proceed. Do I need to upload the code somewhere on the instance? Or to Amazon S3 and then link it to the instance?
Is there a guide that explains the possible uses of an instance? I feel like someone sitting at a flying saucer's dashboard without a user's guide.
Here's a very simple procedure to move your Python script from local to EC2 Instance and run it.
1. scp -i <filepath to Pem> <filepath to Py File> ec2-user@<Public DNS>.compute-1.amazonaws.com:<filepath in EC2 instance where you want your file to be>
2. cd to the directory in the EC2 instance containing the file, then run: python <Filename.py>
Here's a concrete example for those who like things shown step by step:
In your local directory, create a python script with the following code: print("Hello AWS")
Assuming you already have AWS set up and you want to run this script in EC2, you need to SCP (Secure Copy Protocol) your file to a directory in EC2. So here's an example:
- My filepath to pem is ~/Desktop/random.pem.
- My filepath to py file is ~/Desktop/hello_aws.py
- My public DNS is ec2-34-201-49-170.compute-1.amazonaws.com
- The ec2 directory where I want my script to be is in /home/ec2-user
- So the full command I run in my local terminal is:
scp -i ~/Desktop/random.pem ~/Desktop/hello_aws.py ec2-user@ec2-34-201-49-170.compute-1.amazonaws.com:/home/ec2-user
Now ssh to your EC2 instance, cd to /home/ec2-user (or wherever you put your file), and run python hello_aws.py.
You have a variety of options. You can browse through a large library of AMIs here.
You can import a vm, instructions are here.
This is a general article about AWS and python.
And in this article, the author takes you through a more advanced system with a combination of datastores in Python using the highly recommended Django framework.
Launch your instance through Amazon's Management Console -> Instance Actions -> Connect
(More details in the getting started guide)
Launch the Java-based SSH client
Plugins -> SCFTP File Transfer
Upload your files
run your files in the background (with '&' at the end or use nohup)
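For the last step, a typical invocation looks like this (myscript.py is a placeholder):
nohup python myscript.py > output.log 2>&1 &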
Be sure to select an AMI with python included; you can check by typing 'python' in the shell.
If your app requires any unorthodox packages, you'll have to install them.
Running scripts on Linux ec2 instances
I had to run a script on Amazon ec2 and learned how to do it. Even though the question was asked years back, I thought I would share how easy it is today.
Setting up EC2 and ssh-ing to ec2 host
Sign up and launch an EC2 instance with default settings (do not forget to save the certificate file that is generated while launching the instance).
Once the EC2 instance is up and running, give the certificate file the required permissions: chmod 400 /path/my-key-pair.pem (or .cer file)
Run the command ssh -i /path/my-key-pair.pem(.cer) USER@<Public DNS>. The USER value changes based on the operating system you launched (see the next paragraph for details), and the public DNS can be obtained on the EC2 instance page.
Use the ssh command to connect to the instance: you specify the private key (.pem) file and user_name@public_dns_name. The user name depends on the AMI:
For Amazon Linux, the user name is ec2-user.
For RHEL, the user name is ec2-user or root.
For Ubuntu, the user name is ubuntu or root.
For CentOS, the user name is centos.
For Fedora, the user name is ec2-user.
For SUSE, the user name is ec2-user or root.
Otherwise, if ec2-user and root don't work, check with your AMI provider.
Clone the script to EC2
In order to run scripts on EC2, I prefer storing the code on GitHub as a repo, or as a gist if you need to keep the code private, and cloning it into EC2.
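For example (the repository URL is a placeholder):
git clone https://github.com/<your-username>/<your-repo>.git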
This approach is easy and not error-prone.
Running the python script
I worked with a RHEL Linux instance, and Python was already installed, so I could run the Python script directly after ssh-ing to the host. Whether Python is preinstalled depends on the operating system you choose; refer to the AWS manuals if it is not installed already.
Reference: AWS Doc
Is Fabric suitable for setting up a new VPS from a provider like Linode or SliceHost?
The setup is explained in this SliceHost article.
The required actions are basically:
changing root password
creating a new user and group
add the group to the list of sudoers
set hostname
generate local SSH keys and securely upload the public key
set iptables
If fabric is not the tool, is there a better tool for this?
Thanks
Fabric would work very well for these tasks. Essentially anything you do over SSH can be automated with Fabric. It also allows you to upload and download files.
You would probably generate your local keys by invoking shell commands locally, but everything else is in Fabric's domain.
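As a rough sketch of what those tasks look like with the modern Fabric 2 API (the host, user, key path, and new user name are placeholders, and the exact commands depend on your distribution):
from fabric import Connection

# Connect to the fresh VPS as root with a local private key.
c = Connection(
    host="203.0.113.10",                               # placeholder VPS IP
    user="root",
    connect_kwargs={"key_filename": "/path/to/private/key"},
)

c.run('adduser --disabled-password --gecos "" deploy')    # create a new user
c.run("usermod -aG sudo deploy")                          # add the user to the sudo group
c.run("hostnamectl set-hostname myserver")                # set the hostname
c.put("/home/me/.ssh/id_rsa.pub", "/tmp/deploy_key.pub")  # upload the public key securely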