Ping server in Python without root permissions

I have a Python script that only works if my server is available, so before the script starts I want to ping the server, or rather check its availability.
There are already some related SO questions, e.g.:
pyping module:

import pyping

response = pyping.ping('Your IP')
if response.ret_code == 0:
    print("reachable")
else:
    print("unreachable")
ping process in Python:

import os

response = os.system("ping -c 1 " + hostname)
These answers work well, but only as the root user!
When I use these solutions as a regular user, I get the following error message:
ping: Lacking privilege for raw socket.
I need a solution that works for a regular user, because I run this script in a Jenkins job and do not have the option to run it as root.

Would trying to perform an HTTP HEAD request, assuming the machine has an HTTP server running, suffice?
from http.client import HTTPConnection  # Python 3

try:
    conn = HTTPConnection(host, port, timeout=timeout)
    conn.request("HEAD", "/")
    conn.close()
    # server must be up
except OSError:
    # server is not up, do other stuff
    pass
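If the machine does not run an HTTP server, a plain TCP connection to any port you know is open (SSH on 22, for example) also needs no special privileges. A minimal sketch; the host and port here are assumptions to adapt:

import socket

def is_reachable(host, port=22, timeout=3):
    # A normal TCP socket needs no root rights, unlike a raw ICMP socket
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False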

Related

Is there any way to test the WebLogic admin URL (t3/t3s) before connecting to it?

I'm using the following command to connect to WebLogic using WLST:
java weblogic.wlst core.py
Inside core.py I'm calling the following command to connect to the WebLogic admin, but sometimes the service URL becomes unresponsive and my script hangs occasionally because of this. Is there any way to give a timeout to this connect() method, or any other way to implement timeout functionality? I'd appreciate it if someone could shed some light on this. Thanks.
connect(username,password,t3://:)
In earlier WebLogic versions they provided the following ping functionality, but it was removed after 12.2.*:
java weblogic.Admin -url t3://localhost:7001 -username weblogic -password weblogic ping 3 100
This is a very common situation, where you can use Python's socket module to check whether the admin port is open, with the following snippet.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
AdminIP = '192.168.33.10'
result = sock.connect_ex((AdminIP, 7001))
if result == 0:
    print "AdminPort is open you can connect"
else:
    print "Admin Port is not yet open"
sock.close()
add your logic accordingly, HTH!
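If you also want the timeout behaviour the question asks about, one option is to poll the port with a deadline before calling connect(). A sketch under those assumptions (the timeout and interval values are placeholders, and the code avoids Python 3-only syntax since WLST embeds Jython 2):

import socket
import time

def wait_for_port(host, port, timeout=30, interval=2):
    # Poll host:port until it accepts TCP connections or the deadline passes
    deadline = time.time() + timeout
    while time.time() < deadline:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(interval)
        try:
            if sock.connect_ex((host, port)) == 0:
                return True
        finally:
            sock.close()
        time.sleep(interval)
    return False

Only call connect() once wait_for_port() has returned True.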

How to execute remote commands via SSH through authenticated HTTP Proxy?

I am posting both the question and the answer I found, in case it helps someone. The following were my minimum requirements:
1. Client machine is Windows 10 and remote server is Linux
2. Connect to remote server via SSH through HTTP Proxy
3. HTTP Proxy uses Basic Authentication
4. Run commands on remote server and display output
The purpose of the script was to log in to the remote server, run a Bash script (check.sh) present on the server, and display the result. The Bash script simply runs a list of commands showing the overall health of the server.
There have been numerous discussions here on how to use an HTTP proxy, or on running remote commands with Paramiko, but I could not find the combination of both.
from urllib.parse import urlparse
from http.client import HTTPConnection
from base64 import b64encode
import paramiko

# host details
host = "remote-server-IP"
port = 22

# proxy connection & socket definition
proxy_url = "http://uname001:passw0rd123@HTTP-proxy-server-IP:proxy-port"
url = urlparse(proxy_url)
http_con = HTTPConnection(url.hostname, url.port)
auth = b64encode(bytes(url.username + ':' + url.password, "utf-8")).decode("ascii")
headers = {'Proxy-Authorization': 'Basic %s' % auth}
http_con.set_tunnel(host, port, headers)
http_con.connect()
sock = http_con.sock

# ssh connection
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    ssh.connect(hostname=host, port=port, username='remote-server-uname',
                password='remote-server-pwd', sock=sock)
except paramiko.SSHException:
    print("Connection Failed")
    quit()

stdin, stdout, stderr = ssh.exec_command("./check.sh")
for line in stdout.readlines():
    print(line.strip())
ssh.close()
I would welcome any suggestions on the code, as I am a network analyst rather than a coder, but keen to learn and improve.
I do not think that your proxy code is correct.
For working proxy code, see How to ssh over http proxy in Python?, particularly the answer by @tintin.
As it seems that you need to authenticate to the proxy, after the CONNECT command, add a Proxy-Authorization header like:
Proxy-Authorization: Basic <credentials>
where <credentials> is the base64-encoded string username:password:
cmd_connect = "CONNECT {}:{} HTTP/1.1\r\nProxy-Authorization: Basic <credentials>\r\n\r\n".format(*target)
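Put together, a rough sketch of that manual tunnel, whose socket you would then hand to Paramiko via its sock argument. The proxy address, target and credentials below are hypothetical placeholders taken from the question:

import socket
from base64 import b64encode

proxy = ("HTTP-proxy-server-IP", 8080)   # hypothetical proxy host/port
target = ("remote-server-IP", 22)        # hypothetical SSH target
creds = b64encode(b"uname001:passw0rd123").decode("ascii")

sock = socket.create_connection(proxy)
cmd_connect = ("CONNECT {0}:{1} HTTP/1.1\r\n"
               "Proxy-Authorization: Basic {2}\r\n"
               "\r\n").format(target[0], target[1], creds)
sock.sendall(cmd_connect.encode("ascii"))

# Simplified read of the proxy's reply; real code should parse the headers fully
status_line = sock.recv(4096).decode("ascii", errors="replace").split("\r\n")[0]
if "200" not in status_line:
    raise IOError("Proxy refused CONNECT: " + status_line)

# sock is now a tunnel to the target, usable as
# paramiko.SSHClient().connect(..., sock=sock)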

Paramiko SSH failing with "Server '...' not found in known_hosts" when run on web server

I am trying to use Paramiko to make an SSH communication between 2 servers on a private network. The client server is a web server and the host server is going to be a "worker" server. The idea was to not open up the worker server to HTTP connections. The only communication that needs to happen, is the web server needs to pass strings to a script on the worker server. For this I was hoping to use Paramiko and pass the information to the script via SSH.
I set up a new user and created a test script in Python 3, which works when I run it from the command line from my own user's SSH session. I put the same code into my Django web app, thinking that it should work, since it tests OK from the command line, and I get the following error:
Server 'worker-server' not found in known_hosts
Now, I think I understand this error. When I ran the test script, the host key was saved to my own user's ~/.ssh/known_hosts, even though the connection itself uses a dedicated user created just for this one job. The Django app runs under a different user, which cannot find the saved host key because it has no access to that folder. As far as I can tell, the user Apache uses to execute the Django scripts doesn't have a home directory.
Is there a way I can add this known host in a way that the Django process can see it?
Script:
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('worker-server', 22, 'workeruser', 'workerpass')

code = "123wfdv"
survey_id = 111
stdin, stdout, stderr = client.exec_command(
    'python3 /path/to/test_script/test.py %s %s' % (code, survey_id))
print("ssh successful. Closing connection")
stdout = stdout.readlines()
client.close()
print("Connection closed")

output = ""
for line in stdout:
    output = output + line
if output != "":
    print(output)
else:
    print("There was no output for this command")
You can hard-code the host key in your Python code, using HostKeys.add:
import paramiko
from base64 import decodebytes
keydata = b"""AAAAB3NzaC1yc2EAAAABIwAAAQEA0hV..."""
key = paramiko.RSAKey(data=decodebytes(keydata))
client = paramiko.SSHClient()
client.get_host_keys().add('example.com', 'ssh-rsa', key)
client.connect(...)
This is based on my answer to:
Paramiko "Unknown Server".
To see how to obtain the fingerprint for use in the code, see my answer to:
Verify host key with pysftp.
If using pysftp, instead of Paramiko directly, see:
PySFTP failing with "No hostkey for host X found" when deploying Django/Heroku
Or, as you are connecting within a private network, you can give up on verifying host key altogether, using AutoAddPolicy:
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(...)
(This can be done only if you really do not need the connection to be secure)
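A middle ground, given that the Apache user has no home directory, is to keep a dedicated known_hosts file somewhere the Django process can read and load it explicitly with SSHClient.load_host_keys. The path below is an assumption; populate the file once with ssh-keyscan:

import paramiko

client = paramiko.SSHClient()
# Hypothetical path readable by the web server user; fill it once with:
#   ssh-keyscan worker-server > /etc/myapp/known_hosts
client.load_host_keys('/etc/myapp/known_hosts')
client.connect('worker-server', 22, 'workeruser', 'workerpass')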

Verify rabbitmq credentials are valid

I'd like to write a simple smoke test that runs after deployment to verify that the RabbitMQ credentials are valid. What's the simplest way to check that rabbitmq username/password/vhost are valid?
Edit: Preferably, check using a bash script. Alternatively, using a Python script.
As you haven't provided any details about language, etc.:
You could simply issue an HTTP GET request to the management API.
$ curl -i -u guest:guest http://localhost:15672/api/whoami
See RabbitMQ Management HTTP API
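The same management-API check can be done from Python without shelling out to curl. A minimal sketch, assuming Python 3 and the default management port; note that this validates the username and password but not the vhost:

import json
from base64 import b64encode
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def credentials_ok(user, password, host="localhost", port=15672):
    # GET /api/whoami returns the authenticated user's details
    auth = b64encode("{0}:{1}".format(user, password).encode()).decode("ascii")
    req = Request("http://{0}:{1}/api/whoami".format(host, port),
                  headers={"Authorization": "Basic " + auth})
    try:
        resp = urlopen(req)
        return json.loads(resp.read().decode("utf-8")).get("name") == user
    except HTTPError:
        return False  # 401 Unauthorized means bad credentials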
Here's a way to check using Python:
#!/usr/bin/env python
import socket
from kombu import Connection

host = "localhost"
port = 5672
user = "guest"
password = "guest"
vhost = "/"
url = 'amqp://{0}:{1}@{2}:{3}/{4}'.format(user, password, host, port, vhost)

with Connection(url) as c:
    try:
        c.connect()
    except socket.error:
        raise ValueError("Received socket.error, "
                         "rabbitmq server probably isn't running")
    except IOError:
        raise ValueError("Received IOError, probably bad credentials")
    else:
        print "Credentials are valid"
You could try with rabbitmqctl as well,
rabbitmqctl authenticate_user username password
and check the return code in Bash.
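If the smoke test itself lives in a Python script, the same return-code check can be driven through subprocess. This sketch assumes rabbitmqctl is on the PATH and the invoking user is allowed to run it:

import subprocess

def rabbitmq_user_ok(user, password):
    # rabbitmqctl exits with 0 when the username/password pair authenticates
    result = subprocess.run(
        ["rabbitmqctl", "authenticate_user", user, password],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0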
Using pika from Python:
>>> import pika
>>> URL = 'amqp://guest:guest@localhost:5672/%2F'
>>> parameters = pika.URLParameters(URL)
>>> connection = pika.BlockingConnection(parameters)
>>> connection.is_open
True
>>> connection.close()
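For a deployment smoke test you might wrap this in a function that distinguishes bad credentials from an unreachable broker. A sketch, assuming a recent pika whose exceptions module provides these classes:

import pika
from pika.exceptions import AMQPConnectionError, ProbableAuthenticationError

def smoke_test(url='amqp://guest:guest@localhost:5672/%2F'):
    # Returns 0 on success, 1 for bad credentials, 2 if the broker is unreachable
    try:
        connection = pika.BlockingConnection(pika.URLParameters(url))
    except ProbableAuthenticationError:
        return 1
    except AMQPConnectionError:
        return 2
    connection.close()
    return 0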

Slow Python HTTP server on localhost

I am experiencing some performance problems when creating a very simple Python HTTP server. The key issue is that performance varies depending on which client I use to access it, even though the server and all clients run on the local machine. For instance, a GET request issued from a Python script (urllib2.urlopen('http://localhost/').read()) takes just over a second to complete, which seems slow considering that the server is under no load. Running the GET request from Excel using MSXML2.ServerXMLHTTP also feels slow. However, requesting the data from Google Chrome or from RCurl, the curl add-in for R, yields an essentially instantaneous response, which is what I would expect.
Adding further to my confusion is that I do not experience any performance problems for any client when I am on my computer at work (the performance problems are on my home computer). Both systems run Python 2.6, although the work computer runs Windows XP instead of 7.
Below is my very simple server example, which simply returns 'Hello world' for any GET request.
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

class MyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        print("Just received a GET request")
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        self.wfile.write('Hello world')
        return

    def log_request(self, code=None, size=None):
        print('Request')

    def log_message(self, format, *args):
        print('Message')

if __name__ == "__main__":
    try:
        server = HTTPServer(('localhost', 80), MyHandler)
        print('Started http server')
        server.serve_forever()
    except KeyboardInterrupt:
        print('^C received, shutting down server')
        server.socket.close()
Note that in MyHandler I override the log_request() and log_message() functions. The reason is that I read that a fully-qualified domain name lookup performed by one of these functions might be a reason for a slow server. Unfortunately setting them to just print a static message did not solve my problem.
Also, notice that I have put in a print() statement as the first line of the do_GET() routine in MyHandler. The slowness occurs prior to this message being printed, meaning that none of the stuff that comes after it is causing a delay.
The request handler issues an inverse name lookup in order to display the client name in the log. My Windows 7 machine issues a first DNS lookup that fails with no delay, followed by two successive NetBIOS name queries to the HTTP client, and each one runs into a 2-second timeout: a 4-second delay!
Have a look at https://bugs.python.org/issue6085
Another fix that worked for me is to override BaseHTTPRequestHandler.address_string() in my request handler with a version that does not perform the name lookup:
def address_string(self):
    host, port = self.client_address[:2]
    # return socket.getfqdn(host)
    return host
Philippe
This does not sound like a problem with the code. A nifty way of troubleshooting an HTTP server is to connect to it with telnet on port 80. Then you can type something like:
GET /index.html HTTP/1.1
Host: www.blah.com
<enter> <enter>
and observe the server's response. See if you get a delay using this approach.
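The same probe can be scripted, which also lets you time exactly where the delay occurs. A rough sketch of the raw request:

import socket
import time

start = time.time()
sock = socket.create_connection(("127.0.0.1", 80), timeout=10)
sock.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
reply = sock.recv(4096)  # first chunk of the response is enough for a probe
print("first bytes arrived after %.2f s" % (time.time() - start))
sock.close()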
You may also want to turn off any firewalls to see if they are responsible for the slowdown.
Try using 127.0.0.1 in place of localhost. If that solves the problem, then that is a clue that the FQDN lookup may indeed be the cause.
Replacing localhost with 127.0.0.1 can solve the problem:)
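To confirm whether the name lookup is the culprit, you can time the same request both ways. A quick sketch that runs under either Python version (it assumes the server from the question is listening on port 80):

import time
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2, as in the question

for url in ("http://localhost/", "http://127.0.0.1/"):
    start = time.time()
    urlopen(url).read()
    print("%-22s %.3f s" % (url, time.time() - start))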
