VPS setup with fabric - python

Is Fabric suitable for a new VPS setup, like Linode or SliceHost?
The setup is explained in this SliceHost article.
The required actions are basically:
change the root password
create a new user and group
add the group to the list of sudoers
set the hostname
generate local SSH keys and securely upload the public key
set up iptables
If fabric is not the tool, is there a better tool for this?
Thanks

Fabric would work very well for these tasks. Essentially anything you do over SSH can be automated with Fabric. It also allows you to upload and download files.
You would probably generate your local keys by invoking shell commands locally, but everything else is in Fabric's domain.
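For instance, a rough Fabric 1 sketch of those steps might look like the following; the host address, user and group names, passwords, hostname, and iptables rules file are all placeholders rather than values from the article:
from fabric.api import env, local, put, run

env.hosts = ["root@203.0.113.10"]  # the fresh VPS, still logging in as root

def bootstrap():
    run("echo 'root:NEW_ROOT_PASSWORD' | chpasswd")         # change the root password
    run("groupadd admin")                                    # create a new group
    run("useradd -m -g admin -s /bin/bash deploy")           # create a new user in it
    run("echo 'deploy:NEW_USER_PASSWORD' | chpasswd")
    run("echo '%admin ALL=(ALL) ALL' >> /etc/sudoers")       # let the group use sudo
    run("echo myvps > /etc/hostname && hostname myvps")      # set the hostname
    local("ssh-keygen -t rsa -N '' -f id_rsa")               # generate local keys
    run("mkdir -p /home/deploy/.ssh")
    put("id_rsa.pub", "/home/deploy/.ssh/authorized_keys")   # upload the public key
    run("chown -R deploy:admin /home/deploy/.ssh && chmod 600 /home/deploy/.ssh/authorized_keys")
    put("iptables.rules", "/etc/iptables.rules")             # pre-written firewall rules
    run("iptables-restore < /etc/iptables.rules")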

Related

SSH Tunnel Access

Good Day
I work for an ISP and we basically manage all our switches and routers via the CLI from a Jumpbox.
I would like to automate some of my work on these devices by writing Python scripts, etc.
However, this Jumpbox (Linux) is quite old and its Python version is outdated. I cannot add Ansible, Netmiko, etc. Plus, I'm not an admin for that box, so I can't upgrade it.
My question is, if I set up my own Linux VM with all the required tools, how would I be able to access these routers and switches from my local Linux VM?
I tried setting up a Local/Remote/Dynamic SSH Tunnel to the Jumpbox, but I always end up on the Jumpbox SSH session itself.
You can use the jumpbox as a bastion host. Copy your public keys to both hosts (the jumpbox and the devices) and in your inventory file use the ansible_ssh_common_args option to set it up, like this:
[switches]
switch-01 ansible_host=192.168.0.1 ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q user@ip-bastion"'
Note: you must be running Ansible version 2.
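The same bastion idea also works from plain Python if you would rather script the devices directly from your own VM; here is a minimal Paramiko sketch, where the jumpbox and device addresses, credentials, and the sample command are all placeholders:
import os
import paramiko

JUMPBOX = ("jumpbox.example.com", 22)   # placeholder jumpbox address
DEVICE = ("192.168.0.1", 22)            # placeholder switch/router address

# Connect to the jumpbox first
jump = paramiko.SSHClient()
jump.set_missing_host_key_policy(paramiko.AutoAddPolicy())
jump.connect(JUMPBOX[0], port=JUMPBOX[1], username="user",
             key_filename=os.path.expanduser("~/.ssh/id_rsa"))

# Open a direct-tcpip channel through the jumpbox to the device
channel = jump.get_transport().open_channel("direct-tcpip", DEVICE, ("127.0.0.1", 0))

# Connect to the device over that channel and run a command
device = paramiko.SSHClient()
device.set_missing_host_key_policy(paramiko.AutoAddPolicy())
device.connect(DEVICE[0], username="admin", password="secret", sock=channel)
stdin, stdout, stderr = device.exec_command("show version")
print(stdout.read().decode())

device.close()
jump.close()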
Best regards.

Run commands remotely on a GCP machine

I want to run commands remotely from a GCP instance to another one, using a Python script on a Debian machine.
I know that the gcloud ssh command can do that with the subprocess module, but I don't know how to make this faster, as it creates a new key each time I run the command. Is there a way to operate with a service account, for example, on which I could set up permissions and keys for each machine in my GCP project?
Create an SSH key on your "source" machine:
ssh-keygen
Next, add your public key on the SSH Keys tab of the Metadata page in the Google Cloud Console, under Settings in the Compute Engine section.
Then you should be able to log in to the other instance with
ssh [user@]other_instance_ip [optional command to execute]
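Once the key is in place, running a command from a Python script can be as simple as shelling out to ssh; in this sketch the user name, internal IP, and command are placeholders:
import subprocess

def run_remote(host, command, user="debian"):
    # Run a single command on another instance over SSH and return its output.
    result = subprocess.run(
        ["ssh", f"{user}@{host}", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Example with placeholder values:
print(run_remote("10.128.0.3", "uname -a"))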

How can I automate remote deployment in python?

I want to automate the remote deployment which currently I am doing manually.
The process includes
Make the tar ball from certain folders
SFTP to the remote server
Rename the old folders
Untar the new tar file
Restart apache
The remote system is on the intranet and has no access to the outside internet
I want to know how I can transfer the file from my Python script and then, when the transfer is complete, log in over SSH and do the rest. I am confused about how to achieve that. On localhost I can do all of this, but how can I do it on a remote host?
For quick-and-dirty work you can use Fabric (this is by no means to say that you cannot use Fabric to build a serious product).
For heavy configuration routines, you'd be better off picking a configuration management tool (e.g., Ansible).
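As a rough Fabric 1 sketch of the steps listed in the question (the host, paths, and service name are placeholders):
from fabric.api import env, local, put, run, sudo

env.hosts = ["deploy@intranet-server"]  # placeholder intranet host

def deploy():
    # 1. Make the tarball from certain folders (placeholder paths)
    local("tar czf release.tar.gz app/ static/")
    # 2. SFTP it to the remote server
    put("release.tar.gz", "/tmp/release.tar.gz")
    # 3. Rename the old folder out of the way
    run("mv /var/www/app /var/www/app.old")
    # 4. Untar the new tar file
    run("mkdir -p /var/www/app && tar xzf /tmp/release.tar.gz -C /var/www/app")
    # 5. Restart Apache
    sudo("service apache2 restart")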

How to run code in an Amazon EC2 instance?

I understand nearly nothing about how EC2 works. I created an Amazon Web Services (AWS) account. Then I launched an EC2 instance.
And now I would like to execute a Python script in this instance, and I don't know how to proceed. Is it necessary to load the code somewhere in the instance? Or into Amazon's S3 and link it to the instance?
Is there a guide that explains the possible uses of an instance? I feel like a man in front of a flying saucer's dashboard without a user's guide.
Here's a very simple procedure to move your Python script from your local machine to an EC2 instance and run it:
1. scp -i <filepath to pem> <filepath to py file> ec2-user@<Public DNS>.compute-1.amazonaws.com:<filepath in EC2 instance where you want your file to be>
2. cd to the directory in the EC2 instance containing the file, then type python <Filename.py> and it executes.
Here's a concrete example for those who like things shown step by step:
In your local directory, create a python script with the following code: print("Hello AWS")
Assuming you already have AWS set up and you want to run this script in EC2, you need to SCP (Secure Copy Protocol) your file to a directory in EC2. So here's an example:
- My filepath to pem is ~/Desktop/random.pem.
- My filepath to py file is ~/Desktop/hello_aws.py
- My public DNS is ec22-34-12-888
- The ec2 directory where I want my script to be is in /home/ec2-user
- So the full command I run in my local terminal is:
scp -i ~/Desktop/random.pem ~/Desktop/hello_aws.py ec2-user@ec2-34-201-49-170.compute-1.amazonaws.com:/home/ec2-user
Now SSH to your EC2 instance, cd to /home/ec2-user (or wherever you put your file) and run python hello_aws.py
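If you want to script those two steps instead of typing them, a small sketch reusing the same placeholder paths and public DNS could be:
import os
import subprocess

# Placeholders taken from the example above; substitute your own values.
key = os.path.expanduser("~/Desktop/random.pem")
script = os.path.expanduser("~/Desktop/hello_aws.py")
host = "ec2-user@ec2-34-201-49-170.compute-1.amazonaws.com"

# Copy the script up, then run it remotely.
subprocess.run(["scp", "-i", key, script, f"{host}:/home/ec2-user/"], check=True)
subprocess.run(["ssh", "-i", key, host, "python /home/ec2-user/hello_aws.py"], check=True)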
You have a variety of options. You can browse through a large library of AMIs here.
You can import a VM; instructions are here.
This is a general article about AWS and Python.
And in this article, the author takes you through a more advanced setup with a combination of datastores in Python, using the highly recommended Django framework.
Launch your instance through Amazon's Management Console -> Instance Actions -> Connect
(More details in the getting started guide)
Launch the Java based SSH CLient
Plugins -> SFTP File Transfer
Upload your files
run your files in the background (with '&' at the end or use nohup)
Be sure to select an AMI with Python included; you can check by typing 'python' in the shell.
If your app requires any unorthodox packages, you'll have to install them.
Running scripts on Linux ec2 instances
I had to run a script on Amazon ec2 and learned how to do it. Even though the question was asked years back, I thought I would share how easy it is today.
Setting up EC2 and ssh-ing to ec2 host
Sign up and launch an EC2 instance with default settings (do not forget to save the certificate file that is generated while launching the instance).
Once the instance is up and running, give the certificate file the required permissions: chmod 400 /path/my-key-pair.pem (or .cer file)
Run the command: ssh -i /path/my-key-pair.pem(.cer) USER@PUBLIC_DNS (USER changes based on the operating system you launched, refer to the paragraph below for more details; the public DNS can be obtained on the EC2 instance page)
Use the ssh command to connect to the instance. You specify the private key (.pem) file and user_name#public_dns_name. For Amazon Linux, the user name is ec2-user. For RHEL, the user name is ec2-user or root. For Ubuntu, the user name is ubuntu or root. For Centos, the user name is centos. For Fedora, the user name is ec2-user. For SUSE, the user name is ec2-user or root. Otherwise, if ec2-user and root don't work, check with your AMI provider.
Clone the script to EC2
In order to run the scripts on EC2, I would prefer storing the code on GitHub as a repo or as a gist (if you need to keep the code private) and cloning it into the EC2 instance.
The method mentioned above is very easy and not error-prone.
Running the python script
I worked with a RHEL Linux instance and Python was already installed, so I could run the Python script directly after SSH-ing to the host. It depends on the operating system you choose; refer to the AWS manuals if it's not installed already.
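As a sketch of that clone-and-run flow driven from your local machine (the key file, host, repository URL, and script name are all placeholders):
import subprocess

key = "/path/my-key-pair.pem"                                # placeholder key file
host = "ec2-user@ec2-198-51-100-7.compute-1.amazonaws.com"   # placeholder public DNS
remote_cmd = "git clone https://github.com/you/your-scripts.git && python your-scripts/script.py"

# SSH in, clone the repo, and run the script in one shot.
subprocess.run(["ssh", "-i", key, host, remote_cmd], check=True)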
Reference: AWS Doc

How to clone a mercurial repository over an ssh connection initiated by fabric when http authorization is required?

I'm attempting to use fabric for the first time and I really like it so far, but at a certain point in my deployment script I want to clone a mercurial repository. When I get to that point I get an error:
err: abort: http authorization required
My repository requires http authorization and fabric doesn't prompt me for the user and password. I can get around this by changing my repository address from:
https://hostname/repository
to:
https://user:password#hostname/repository
But for various reasons I would prefer not to go this route.
Are there any other ways in which I could bypass this problem?
Here are four options with various security trade-offs and requiring various amounts of sys admin mojo:
With newer versions of Mercurial you can put the password in the [auth] section of the local user's .hgrc file. The password will still be on disk in plaintext, but at least not in the URL.
Or
You could locally set up a HTTP proxy that presents as no-auth locally and does the auth for you when communicating with remote.
Or
If you're able to alter the configuration on the hosting server, you could set it (Apache?) to not require a user/pass when accessed from localhost, and then use an SSH tunnel to make the local machine look like it's coming from localhost when it accesses the server:
ssh -L 8080:localhost:80 user@hostname # run in background and leave running
and then have fabric connect to http://localhost:8080/repository
Or
Newer versions of Mercurial support client-side certificates for authentication, so you could configure your Apache to honor those for authorization/authentication and then tweak your local hg to provide the certificate.
Depending on your fabfile, you might be able to reframe the problem. Instead of doing a hg clone on the remote system you could do your mercurial commands on your local system, and then ship the artifact you've constructed across with fabric.
Specifically, you could clone the Mercurial repository using Fabric's local() commands and run an 'hg archive' command to prepare a tarball. Then you can use Fabric's put() to upload that tarball and Fabric's run() to unpack it in the correct location.
A code snippet for the clone, pack, put might look a bit like the following:
from fabric.api import lcd, local, put, run

def task():
    local("hg clone ssh://hg@host/repo tmpdir")
    with lcd("tmpdir"):
        local("hg archive ../repo.tgz")
    local("rm -rf tmpdir")
    put("repo.tgz", "/tmp/repo.tgz")
    run("tar xzf /tmp/repo.tgz -C /srv/app")  # unpack in the correct location (placeholder path)
