Trying to integrate paramiko and pyTelegramBotAPI - python

I am new to programming in Python. I am trying to create a Python script that connects to different Linux servers to extract data or check whether certain services are running. I can already send a command -> use paramiko to extract data -> send the data to Telegram. The problem is that I am trying to make the code shorter by keeping the functions in a separate file and calling them from the script, but I cannot get this to work. Here is the file (not executable) and the code:
File:
import paramiko
import time

def tx(message):
    host = "111.222.333.444"
    user = "user"
    password = "12345"
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(hostname=host, username=user, password=password)
    stdin, stdout, stderr = ssh.exec_command("sudo tail -1 /usr/local/bin/noc/respaldos/diatx.txt")
    time.sleep(.5)
    output = stdout.readlines()
    ssh.close()
    return output
Script:
import telebot
import paramiko
import time
import commands

TOKEN = "abcde"
bot = telebot.TeleBot(TOKEN)

@bot.message_handler(commands=['tx'])
commands.tx(message)
bot.send_message(message.chat.id, output)
bot.polling()
My intention is to create 20 functions for different data and checks, all inside the commands.py file. I have tried from commands import * as well, but that did not work either.

I see a few problems in your code.
First, I don't see message being defined anywhere.
Second, there is the way you use the bot.message_handler decorator.
A decorator returns a function object that replaces the function you decorate, so it has to be followed by a function definition:
@bot.message_handler(commands=['tx'])
def tx(message):
    commands.tx(message)
However, I don't know the telebot library, so I'm not sure how this is supposed to work in detail.
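That said, here is a minimal sketch of how the handler would typically be wired up (assuming commands.py sits next to the script and contains the tx() function from the question; handle_tx is just an illustrative name):

import telebot
import commands  # your own commands.py with tx() inside

TOKEN = "abcde"
bot = telebot.TeleBot(TOKEN)

@bot.message_handler(commands=['tx'])
def handle_tx(message):
    # telebot passes message in when a user sends /tx;
    # run the SSH check and send its output back to the same chat
    output = commands.tx(message)
    bot.send_message(message.chat.id, "".join(output))

bot.polling()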

Related

Why does my API call in my Python code not work when called using SSH in Lambda?

First off, I'm pretty new to AWS, and it took me a lot of trial and error to get my Lambda function to execute my Python script, which sits on an EC2 instance.
If I run my code manually through the command line on my EC2 instance, it works perfectly: it calls the requested API and saves down the data.
If I call my script through a Lambda function using SSH, it stops executing at the API call. The Lambda reports that everything ran, but it didn't; I get no output messages saying there was an exception, and nothing in the CloudWatch log either. I know it starts to execute my code, because if I put print statements before the API calls, I see them in the CloudWatch log.
Any ideas to help out a noob?
Here is my lambda code:
import time
import boto3
import json
import paramiko

def lambda_handler(event, context):
    ec2 = boto3.resource('ec2', region_name='eu-west-2')
    instance_id = 'removed_id'
    instance = ec2.Instance(instance_id)

    # Start the instance
    instance.start()

    s3_client = boto3.client('s3')
    # Download private key file from secure S3 bucket
    # and save it inside /tmp/ folder of lambda event
    s3_client.download_file('removed_bucket', 'SSEC2.pem', '/tmp/SSEC2.pem')

    # Allowing few seconds for the download to complete
    time.sleep(2)
    # Giving some time to start the instance completely
    time.sleep(60)

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    privkey = paramiko.RSAKey.from_private_key_file('/tmp/SSEC2.pem')
    # username is most likely 'ec2-user' or 'root' or 'ubuntu'
    # depending upon your ec2 AMI
    ssh.connect(
        instance.public_dns_name, username='ec2-user', pkey=privkey
    )

    print('Executing')
    stdin, stdout, stderr = ssh.exec_command(
        '/home/ec2-user/miniconda3/bin/python /home/ec2-user/api-calls/main.py')
    stdin.flush()
    data = stdout.read().splitlines()
    for line in data:
        print(line)
    ssh.close()

    # Stop the instance
    # instance.stop()

    return {
        'statusCode': 200,
        'body': json.dumps('Execution successful')
    }
Edit:
Okay, slight update: it's not falling over on the API call, it's actually stopping when it tries to open a config file, which is stored in "config/config.json". Now obviously this works in the EC2 environment when I'm executing manually, so this must have something to do with the environment in EC2 not being the same when the job is triggered from elsewhere?? Here is the exact code:

@staticmethod
def get_config():
    with open("config/config.json", "r") as read_file:
        data = json.load(read_file)
        return data

Problem solved. I need to use the full path names when executing the code remotely:

with open("/home/ec2-user/api-calls/config/config.json", "r") as read_file:
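An alternative to hard-coding absolute paths is to resolve the config file relative to the script itself, so it works no matter which working directory the remote shell starts in. A minimal sketch (assuming the config/ folder lives next to the script):

import json
import os

# Directory containing this script, independent of the caller's cwd
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

def get_config():
    # Build an absolute path to config/config.json next to this script
    config_path = os.path.join(BASE_DIR, "config", "config.json")
    with open(config_path, "r") as read_file:
        return json.load(read_file)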

How to automate user creation in vCenter using Python

I am pretty new to pyVmomi and vSphere automation.
I have been trying to automate user and group creation in vSphere, but could not locate the method in pyVmomi that would help me automate the process of user creation.
I already have a user created in vCenter (abc@xyz.local).
This user has administrative privileges.
Now, I want to create a session with the user abc@xyz.local and add new users to the vCenter 'users and groups'. Once the new users are created, I have to add these users to different groups.
All of this has to be done via automation using Python.
Is there a way to automate this?
Unfortunately, the SSO API is all private and unavailable through pyvmomi and the rest of the SDKs.
As @Kyle Ruddy says, it looks like pyvmomi does not support the SSO APIs. However, the golang alternative (govmomi) does. Govmomi also has a CLI called GOVC, which provides a nice wrapper to perform the following (and other things!):
Creating groups
Adding users to groups
Creating users
You could look at GOVC's source code and try to figure out the SOAP calls, but I think that would be more trouble than it's worth.
If you are open to the idea of launching bash commands from Python, then you could do the following:
import subprocess
import os

# Handy function for GOVC; assumes GOVC is on your $PATH
def govc_runner(command):
    my_env = os.environ.copy()
    # Admin user will need to perform the commands
    my_env["GOVC_USERNAME"] = "abc@xyz.local"
    my_env["GOVC_PASSWORD"] = "<ABC_PASSWORD>"
    my_env["GOVC_URL"] = "https://<VCENTER>"
    my_env["GOVC_INSECURE"] = "true"
    process = subprocess.Popen(command, env=my_env, shell=True,
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                               universal_newlines=True)  # decode output to str
    output, error = process.communicate()
    return output, error

# New group and user info
newUserUsername = "praseemol"
newUserPassword = "<PARASEEMOL_PASSWORD>"
newGroup = "prasGroup"

# Creating new group and user
govc_runner("govc sso.group.create " + newGroup)
govc_runner("govc sso.user.create -p '" + newUserPassword + "' '" + newUserUsername + "'")
govc_runner("govc sso.group.update -a " + newUserUsername + " " + newGroup)

# Check if it has worked
output, error = govc_runner("govc sso.user.id " + newUserUsername)
if newGroup in output:
    print("Yay, it worked:\n" + output)
else:
    print("Something went wrong :(")
Hope that helps!
You can automate shell (vCenter) execution, normally done over SSH through PuTTY, to create a user in the system domain of vCenter, and mimic the same with the paramiko library in Python.
Official docs to refer to for system domain user creation:
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.security.doc/GUID-4FBEA58E-9492-409B-B584-C18477F041D8.html
Command to be executed on the vCenter shell:
/usr/lib/vmware-vmafd/bin/dir-cli user create --account william --first-name william --last-name lam --user-password 'VMware1!'
Refer: https://williamlam.com/2015/05/vcenter-server-6-0-tidbits-part-9-creating-managing-sso-users-using-dir-cli.html
To connect to vCenter using paramiko, see:
How do you execute multiple commands in a single session in Paramiko? (Python)
Pick the answer by "This".
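Putting the two together, a rough paramiko sketch (the hostname, credentials, and user details are placeholders; the appliance's default shell may need to be switched to bash first, as described in the linked posts):

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("vcenter.example.com", username="root", password="<VCENTER_PASSWORD>")

# Run dir-cli on the vCenter appliance to create the SSO user
cmd = ("/usr/lib/vmware-vmafd/bin/dir-cli user create "
       "--account william --first-name william --last-name lam "
       "--user-password 'VMware1!'")
stdin, stdout, stderr = ssh.exec_command(cmd)
print(stdout.read().decode())
print(stderr.read().decode())
ssh.close()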
You can fetch the created user using PowerCLI commands:
Get-VIAccount
While using this, be sure to look for your created user in the system domain:
Get-VIAccount -Domain 'domain_name'
The default domain name is usually something like "vsphere.local".
You can also find your domain by connecting to vCenter with PuTTY, entering the shell, and running:
sso-config.sh -get_identity_sources
You will be able to read Sys_Domain: '......'
You can assign a role to the user using PowerCLI:
Get-VIPermission
If you can automate local user creation, let me know:
https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vcsa.doc/GUID-533AE852-A1F9-404E-8AC6-5D9FD65464E5.html

How to SSH and run commands in EC2 using boto3?

I want to be able to ssh into an EC2 instance, and run some shell commands in it, like this.
How do I do it in boto3?
This thread is a bit old, but since I've spent a frustrating afternoon discovering a simple solution, I might as well share it.
NB: This is not a strict answer to the OP's question, as it doesn't use ssh. But one point of boto3 is that you don't have to - so I think in most circumstances this would be the preferred way of achieving the OP's goal, since they can use their existing boto3 configuration trivially.
AWS' Run Command is built into botocore (so this should apply to both boto and boto3, as far as I know) but disclaimer: I've only tested this with boto3.
import boto3

def execute_commands_on_linux_instances(client, commands, instance_ids):
    """Runs commands on remote linux instances
    :param client: a boto/boto3 ssm client
    :param commands: a list of strings, each one a command to execute on the instances
    :param instance_ids: a list of instance_id strings, of the instances on which to execute the command
    :return: the response from the send_command function (check the boto3 docs for ssm client.send_command())
    """
    resp = client.send_command(
        DocumentName="AWS-RunShellScript",  # One of AWS' preconfigured documents
        Parameters={'commands': commands},
        InstanceIds=instance_ids,
    )
    return resp

# Example use:
ssm_client = boto3.client('ssm')  # Need your credentials here
commands = ['echo "hello world"']
instance_ids = ['an_instance_id_string']
execute_commands_on_linux_instances(ssm_client, commands, instance_ids)
For PowerShell commands on Windows instances, you'd use the alternative document:
DocumentName="AWS-RunPowerShellScript",
You can use the following code snippet to SSH to an EC2 instance and run a command, using paramiko alongside boto3.
import paramiko

key = paramiko.RSAKey.from_private_key_file('path/to/mykey.pem')
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Connect/ssh to an instance
try:
    # Here 'ubuntu' is the user name and 'instance_ip' is the public IP of the EC2 instance
    client.connect(hostname=instance_ip, username="ubuntu", pkey=key)

    # Execute a command(cmd) after connecting/ssh to an instance
    stdin, stdout, stderr = client.exec_command(cmd)
    print(stdout.read())

    # close the client connection once the job is done
    client.close()
except Exception as e:
    print(e)
Here is how I have done it:
import boto3
import paramiko

ec2 = boto3.resource('ec2')
instances = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
i = 0
for instance in instances:
    print(instance.id, instance.instance_type)
    i += 1
x = int(input("Enter your choice: "))
try:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    privkey = paramiko.RSAKey.from_private_key_file('address to .pem key')
    ssh.connect(instance.public_dns_name, username='ec2-user', pkey=privkey)
    stdin, stdout, stderr = ssh.exec_command('python input_x.py')
    stdin.flush()
    data = stdout.read().splitlines()
    for line in data:
        x = line.decode()
        #print(line.decode())
        print(x, i)
    ssh.close()
except Exception as e:
    print(e)
For the credentials, I have added the AWS CLI package; then, in the terminal, run
aws configure
and enter the credentials. All of them will be saved in the .aws folder; you can change the path too.
You can also use the kitten Python library for this, which is just a wrapper around boto3. You can also run the same command on multiple servers at the same time using this utility.
For example:
kitten run uptime ubuntu 18.105.107.20
You don't SSH from Python. You can use the boto3 module to interact with the EC2 instance.
Here you have the complete documentation of boto3 and the commands you can run with it.
Boto provided a way to SSH into EC2 instances programmatically using Paramiko and then run commands. Boto3 does not include this functionality. You could probably modify the boto code to work with boto3 without a huge amount of effort. Or you could look into using something like fabric or ansible, which provide a much more powerful way to remotely execute commands on EC2 instances.
Use boto3 to discover the instances and fabric to run commands on them, as in the sketch below.
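A rough sketch of that combination (Fabric 2.x API; the key path and user name are placeholders):

import boto3
from fabric import Connection

ec2 = boto3.resource('ec2')
running = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])

for instance in running:
    # SSH in with a key file and run a command on each running instance
    conn = Connection(host=instance.public_dns_name,
                      user='ec2-user',
                      connect_kwargs={'key_filename': '/path/to/mykey.pem'})
    result = conn.run('uptime', hide=True)
    print(instance.id, result.stdout.strip())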

How to establish a SSH connection via proxy using Fabric?

I am trying to establish an SSH connection between a Windows PC and a Linux server (Amazon EC2).
I decided to use Fabric API implemented using python.
I have Putty installed on the Windows PC.
My fabfile script looks like this:
import sys
from fabric.api import *

def testlive():
    print 'Test live ...'
    run("uptime")

env.use_ssh_config = False
env.host_string = "host.something.com"
env.user = "myuser"
env.keys_filename = "./private_openssh.key"
env.port = 22
env.gateway = "proxyhost:port"

testlive()
I am running Fabric from the same directory as the private key.
I am able to log in to this machine using PuTTY.
The problem: I am constantly asked for a login password for the specified user.
Based on other posts (here and here) I already tried:
pass the key file to env.keys_filename as a list
use username@host_string
use env.host instead of env.host_string
How do I properly configure Fabric to deal with a proxy server and an SSH private key file?
The following should work.
env.key_filename = "./private_openssh.key"
(notice the typo in your attempt)
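For completeness, the relevant settings with the typo fixed would look like this (keeping the rest of the fabfile from the question; the gateway string is still a placeholder):

env.use_ssh_config = False
env.host_string = "host.something.com"
env.user = "myuser"
env.key_filename = "./private_openssh.key"  # key_filename, not keys_filename
env.port = 22
env.gateway = "myuser@proxyhost:port"  # a normal Fabric host string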
Fabric's API is best avoided really, way too many bugs and issues (see issue tracker).
You can do what you want in Python with the following:
from __future__ import print_function
from pssh import ParallelSSHClient
from pssh.utils import load_private_key

client = ParallelSSHClient(['host.something.com'],
                           pkey=load_private_key('private_openssh.key'),
                           proxy_host='proxyhost',
                           proxy_port=<proxy port number>,
                           user='myuser',
                           proxy_user='myuser')

output = client.run_command('uname')
for line in output['host.something.com'].stdout:
    print(line)
ParallelSSH is available from pip as parallel-ssh.
PuTTYgen is what you will use to generate your SSH key; then upload the copied SSH key to your cloud management portal - see Joyent.
You will have to generate and authenticate a private key. To do so, you use PuTTYgen to generate SSH access with an RSA key, entering a password, a key comment, and confirming the key passphrase. Here is a step-by-step guide: SSH Access using RSA Key Authentication.

Python Google Voice

I am using the Google Voice API from here, and trying to send text messages from Python. However, whenever I try to log in using this code, I get something I do not expect:
from googlevoice import tests
from googlevoice import Voice
from googlevoice.util import input

def login():
    username, password = "xyz@gmail.com", "******"
    client = Voice.login(username, password)
    return client
Upon starting this code's parent program (a file that literally just says to run this sketch), I get this prompt:
Email Address:
If I enter an email address, it just freezes. Any help would be greatly appreciated.
I've read in a few places that Google Voice API support is coming to an end/has ended, and am wondering if this is why I'm getting an error... If so, are there any free alternatives that are Python compatible? I don't want to pay to text from my computer!
Somehow, this has made it work now:
from googlevoice import Voice
from googlevoice.util import input
import sys
import BeautifulSoup
import fileinput
import Listen  # A voice recognition script I wrote

def login():
    username, password = "xyz@gmail.com", "******"
    voice = Voice()
    client = voice.login(username, password)
    return client
The only thing different that I've done is change some of the libraries I've imported, but I can finally get past that "Email Address:" prompt and run the rest of my code. I have yet to test it by sending a text, though!
@Merlin2011 and @jknupp17, thank you so much for your suggestions!
