I have a script that connects to localhost:8080 to run some commands on a dev_appserver instance. I use a combination of remote_api_stub and httplib.HTTPConnection. Before I make any calls to either API, I want to ensure that the server is actually running.
What would be a "best practice" way in Python to determine:
whether any web server is running on localhost:8080
whether dev_appserver is running on localhost:8080?
This should do it:
import httplib
NO_WEB_SERVER = 0
WEB_SERVER = 1
GAE_DEV_SERVER_1_0 = 2
def checkServer(host, port, try_only_ssl = False):
    hh = None
    connectionType = httplib.HTTPSConnection if try_only_ssl \
                     else httplib.HTTPConnection
    try:
        hh = connectionType(host, port)
        hh.request('GET', '/_ah/admin')
        resp = hh.getresponse()
        headers = resp.getheaders()
        if headers:
            if (('server', 'Development/1.0') in headers):
                return GAE_DEV_SERVER_1_0|WEB_SERVER
            return WEB_SERVER
    except httplib.socket.error:
        return NO_WEB_SERVER
    except httplib.BadStatusLine:
        if not try_only_ssl:
            # retry with SSL
            return checkServer(host, port, True)
    finally:
        if hh:
            hh.close()
    return NO_WEB_SERVER
print checkServer('scorpio', 22) # will print 0 for an ssh server
print checkServer('skiathos', 80) # will print 1 for an apache web server
print checkServer('skiathos', 8080) # will print 3, a GAE Dev Web server
print checkServer('no-server', 80) # will print 0, no server
print checkServer('www.google.com', 80) # will print 1
print checkServer('www.google.com', 443) # will print 1
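On Python 3, where httplib became http.client, a rough equivalent of the check above might look like this (my sketch, untested against a real dev_appserver; same return codes as above):

import http.client
import socket

NO_WEB_SERVER = 0
WEB_SERVER = 1
GAE_DEV_SERVER_1_0 = 2

def check_server(host, port, try_only_ssl=False):
    connection_type = http.client.HTTPSConnection if try_only_ssl \
                      else http.client.HTTPConnection
    conn = None
    try:
        conn = connection_type(host, port, timeout=5)
        conn.request('GET', '/_ah/admin')
        resp = conn.getresponse()
        # getheader() is case-insensitive, so there is no need to scan the full header list
        if resp.getheader('server') == 'Development/1.0':
            return GAE_DEV_SERVER_1_0 | WEB_SERVER
        return WEB_SERVER
    except socket.error:
        return NO_WEB_SERVER
    except http.client.BadStatusLine:
        if not try_only_ssl:
            return check_server(host, port, True)  # retry over SSL
        return NO_WEB_SERVER
    finally:
        if conn:
            conn.close()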
I have an Ant build script that does some work via remote_api. To verify the server is running, I just call curl and make sure it does not return an error.
<target name="-local-server-up">
    <!-- make sure local server is running -->
    <exec executable="curl" failonerror="true">
        <arg value="-s"/>
        <arg value="${local.host}${remote.api}"/>
    </exec>
    <echo>local server running</echo>
</target>
You could just use subprocess.call to do the same in Python (assuming you have curl on your machine), as sketched below.
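A minimal Python sketch of that idea (my own sketch; it assumes curl is on your PATH, and the host/path values are placeholders standing in for ${local.host} and ${remote.api}):

import subprocess

local_host = "http://localhost:8080"   # placeholder for ${local.host}
remote_api = "/_ah/remote_api"         # placeholder for ${remote.api}

# curl -s exits with a non-zero status if it cannot reach the server at all
server_up = subprocess.call(["curl", "-s", local_host + remote_api]) == 0
print("local server running" if server_up else "local server not reachable")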
I would go with something like this:
import httplib
GAE_DEVSERVER_HEADER = "Development/1.0"
def is_HTTP_server_running(host, port, just_GAE_devserver = False):
    conn = httplib.HTTPConnection(host, port)
    try:
        conn.request('HEAD', '/')
        return not just_GAE_devserver or \
               conn.getresponse().getheader('server') == GAE_DEVSERVER_HEADER
    except (httplib.socket.error, httplib.HTTPException):
        return False
    finally:
        conn.close()
tested with:
assert is_HTTP_server_running('yahoo.com','80') == True
assert is_HTTP_server_running('yahoo.com','80', just_GAE_devserver = True) == False
assert is_HTTP_server_running('localhost','8088') == True
assert is_HTTP_server_running('localhost','8088', just_GAE_devserver = True) == True
assert is_HTTP_server_running('foo','8088') == False
assert is_HTTP_server_running('foo','8088', just_GAE_devserver = True) == False
I am new to Python and Flask. When I run the following code in Spyder I get this message:
runfile('C:/Users/...../Desktop/Folders/..../BlockChain/Create Blockchain/Module 1 - Create a Blockchain/blockchain.py', wdir='C:/Users/...../Desktop/Folders/..../BlockChain/Create Blockchain/Module 1 - Create a Blockchain')
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
but when I request http://127.0.0.1:5000/get_chain in Postman, I get the following message:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
I am totally confused as to why. Here is my code:
import datetime
import hashlib
import json
from flask import Flask, jsonify, request
# Part 1 - Building a Blockchain
class Blockchain:
def __init__(self):
self.chain = []
self.create_block(proof = 1, previous_hash = '0')
def create_block(self, proof, previous_hash):
block = {'index': len(self.chain) + 1,
'timestamp': str(datetime.datetime.now()),
'proof': proof,
'previous_hash': previous_hash}
self.chain.append(block)
return block
def get_previous_block(self):
return self.chain[-1]
def proof_of_work(self, previous_proof):
new_proof = 1
check_proof = False
while check_proof is False:
hash_operation = hashlib.sha256(str(new_proof**2 - previous_proof**2).encode()).hexdigest()
if hash_operation[:4] == '0000':
check_proof = True
else:
new_proof += 1
return new_proof
def hash(self, block):
encoded_block = json.dumps(block, sort_keys = True).encode()
return hashlib.sha256(encoded_block).hexdigest()
def is_chain_valid(self, chain):
previous_block = chain[0]
block_index = 1
while block_index < len(chain):
block = chain[block_index]
if block['previous_hash'] != self.hash(previous_block):
return False
previous_proof = previous_block['proof']
proof = block['proof']
hash_operation = hashlib.sha256(str(proof**2 - previous_proof**2).encode()).hexdigest()
if hash_operation[:4] != '0000':
return False
previous_block = block
block_index += 1
return True
# Creating a Web App
app = Flask(__name__)
# Creating a Blockchain
blockchain = Blockchain()
# Mining a new block
@app.route('/mine_block', methods = ['GET'])
def mine_block():
previous_block = blockchain.get_previous_block()
previous_proof = previous_block['proof']
proof = blockchain.proof_of_work(previous_proof)
previous_hash = blockchain.hash(previous_block)
block = blockchain.create_block(proof, previous_hash)
response = {'message': 'Congratulations, you just mined a block!',
'index': block['index'],
'timestamp': block['timestamp'],
'proof': block['proof'],
'previous_hash': block['previous_hash']}
return jsonify(response), 200
# Getting the full Blockchain
@app.route('/get_chain', methods = ['GET'])
def get_chain():
response = {'chain': blockchain.chain,
'length': len(blockchain.chain)}
return jsonify(response), 200
# Checking if the Blockchain is valid
@app.route('/is_valid', methods = ['GET'])
def is_valid():
is_valid = blockchain.is_chain_valid(blockchain.chain)
if is_valid:
response = {'message': 'All good. The Blockchain is valid.'}
else:
response = {'message': 'Houston, we have a problem. The Blockchain is not valid.'}
return jsonify(response), 200
# Running the app
app.run(host='0.0.0.0', port=5000)
The 'is_xhr' method has been deprecated and removed, so you need to upgrade your Flask version.
Open Anaconda Prompt and type
pip install --upgrade Flask
Then restart your IDE
Your Flask version is not compatible with that; you need to upgrade Flask.
How to do it:
step 1: Open the Anaconda prompt.
step 2: Run conda install flask=1.0.0. Your Flask will be updated and the issue should be solved. If it still doesn't work, downgrade Werkzeug.
How to do that:
step 1: Run conda install werkzeug=0.16.1 in the Anaconda prompt.
Then restart Anaconda.
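To double-check which versions you actually ended up with (a quick sanity check, assuming both packages import cleanly):

import flask
import werkzeug
# both packages expose their version string
print(flask.__version__, werkzeug.__version__)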
I'm working on a simple Python SSH tunnel script, but I always get a "Could not resolve hostname" error, although it works if I run the command manually. This is my code:
#!/usr/bin/env python
import subprocess
import time
import tempfile
class TunnelSSH():
def __init__(self, ssh_user: str, ssh_password: str, ssh_host: str, ssh_port: int,
local_tunnel_port:int, remote_tunnel_host:str, remote_tunnel_port:int):
self.ssh_user = ssh_user
self.ssh_password = ssh_password
self.ssh_host = ssh_host
self.ssh_port = ssh_port
self.local_tunnel_port = local_tunnel_port
self.remote_tunnel_port = remote_tunnel_port
self.remote_tunnel_host = remote_tunnel_host
_socket_file = tempfile.NamedTemporaryFile()
_socket_file.close()
self.socket = _socket_file.name
self.connected = False
def start(self):
ssh_conection = ['ssh', '-CN',
f'"{self.ssh_user}:{self.ssh_password}"@{self.ssh_host} -p {self.ssh_port}',
f'-L {self.local_tunnel_port}:{self.remote_tunnel_host}:{self.remote_tunnel_port}',
f'-S {self.socket}',
'-o ExitOnForwardFailure=True'
]
if not self.connected:
status = subprocess.call(ssh_conection)
self._check_connection(status)
time.sleep(self.retry_sleep)
else:
raise Exception('Tunnel is open')
def stop(self):
if self.connected:
if self._send_control_command('exit') != 0:
raise Exception('SSH tunnel failed to exit')
self.connected = False
def _check_connection(self, status) -> None:
"""Check connection status and set connected to True is tunnel is open"""
if status != 0:
raise Exception(f'SSH tunnel failed status: {status}')
if self._send_control_command('check'):
raise Exception(f'SSH tunnel failed to check')
self.connected = True
def _send_control_command(self, ctl_cmd:str):
call = ['ssh',f'-S {self.socket}',f'-O {self.ctl_cmd}', f'-l {self.ssh_user}', f'{self.ssh_host}']
return subprocess.check_call(call)
if __name__ == "__main__":
tunnel = TunnelSSH(ssh_user='...',
ssh_password='...',
ssh_host='...',
ssh_port=...,
local_tunnel_port=...,
remote_tunnel_host='...',
remote_tunnel_port=...
)
retry = 10 # times
wait_for_retry = 5 #s
for i in range(retry):
print(f'Connection attempt: {i}')
try:
tunnel.start()
except Exception as err:
tunnel.stop()
print(err)
time.sleep(wait_for_retry)
print(f'Connected: {tunnel.connected}')
subprocess.call expects a list of arguments. When ssh_conection is built, several arguments end up glued together, so e.g. this part gets passed to ssh as a single argument:
'"{self.ssh_user}:{self.ssh_password}"@{self.ssh_host} -p {self.ssh_port}'
Fix: properly split the arguments:
...
ssh_conection = ['ssh', '-CN',
                 f'{self.ssh_user}:{self.ssh_password}@{self.ssh_host}',  # will be quoted automatically
                 '-p', f'{self.ssh_port}',
                 '-L', f'{self.local_tunnel_port}:{self.remote_tunnel_host}:{self.remote_tunnel_port}',
                 '-S', f'{self.socket}',
                 '-o', 'ExitOnForwardFailure=True'
                 ]
...
What hinted at the problem: the IP addresses are used directly, and a "cannot be resolved" error on an IP address means it is being interpreted as a symbolic name, which makes this easier to spot.
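To see the difference, here is a tiny illustrative sketch, not from the answer above, that just echoes argv back (it assumes a python3 executable on your PATH):

import subprocess

# One list element becomes exactly one argv entry, so '-p 22' arrives as a single argument:
subprocess.call(["python3", "-c", "import sys; print(sys.argv[1:])", "-p 22"])
# prints: ['-p 22']

# Split into separate elements, it arrives as '-p' followed by '22':
subprocess.call(["python3", "-c", "import sys; print(sys.argv[1:])", "-p", "22"])
# prints: ['-p', '22']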
I have a method that sends an email and I want to write a parametrized pytest test for it, but when I run my test it doesn't run, and the logs show that no tests ran. What's wrong with my test?
pytest scenario:
import pytest
import new_version
@pytest.mark.parametrize("email", ['my_email@gmail.com'])
def send_email(email):
    new_version.send_mail(email)
method that sends email:
# Imports
import smtplib
import time
# Language
error_title = "ERROR"
send_error = "I'm waiting {idle} and I will try again."
send_error_mail = "Mail invalid"
send_error_char = "Invalid character"
send_error_connection_1_2 = "Connection problem..."
send_error_connection_2_2 = "Gmail server is down or internet connection is unstable."
login_browser = "Please log in via your web browser and then try again."
login_browser_info = "That browser and this software should have the same IP connection the first time."
# Gmail ID for login
fromMail = "myemail@gmail.com"
fromPass = "pass"
# Some configurations
mailDelay = 15
exceptionDelay = 180
# SEND MAILS
def send_mail(email, thisSubject="Just a subject",
thisMessage="This is just a simple message..."):
# Who to send the mails to
mailTo = [
email
]
# If still have mails to send
while len(mailTo) != 0:
sendItTo = mailTo[0] # Remember which mail will be sent (debug purposes)
try:
# Connect to the server
server = smtplib.SMTP("smtp.gmail.com:587")
server.ehlo()
server.starttls()
# Sign In
server.login(fromMail, fromPass)
# Set the message
message = f"Subject: {thisSubject}\n{thisMessage}"
# Send one mail
server.sendmail(fromMail, mailTo.pop(0), message)
# Sign Out
server.quit()
# If there is a problem
except Exception as e:
# Convert the error to a string for some checks
e = str(e)
# Show me if...
if "The recipient address" in e and "is not a valid" in e:
print(f"\n>>> {send_error_mail} [//> {sendItTo}\n")
elif "'ascii'" in e and "code can't encode characters" in e:
print(f"\n>>> {send_error_char} [//> {sendItTo}\n")
elif "Please" in e and "log in via your web browser" in e:
print(f"\n>>> {login_browser}\n>>> - {login_browser_info}")
break
elif "[WinError 10060]" in e:
if "{idle}" in send_error:
se = send_error.split("{idle}");
seMsg = f"{se[0]}{exceptionDelay} sec.{se[1]}"
else:
seMsg = send_error
print(f"\n>>> {send_error_connection_1_2}\n>>> {send_error_connection_2_2}")
print(f">>> {seMsg}\n")
# Wait before trying again
waitTime = exceptionDelay - mailDelay
if waitTime <= 0:
waitTime = exceptionDelay
time.sleep(waitTime)
else:
if "{idle}" in send_error:
se = send_error.split("{idle}");
seMsg = f"{se[0]}{exceptionDelay} sec.{se[1]}"
else:
seMsg = send_error
print(f">>> {error_title} <<<", e)
print(f">>> {seMsg}\n")
# Wait before trying again
time.sleep(exceptionDelay)
# If are still mails wait before to send another one
if len(mailTo) != 0:
time.sleep(mailDelay)
logs:
platform darwin -- Python 3.7.5, pytest-5.3.5, py-1.8.1, pluggy-0.13.1 -- /Users/nikolai/Documents/Python/gmail/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/nikolai/Documents/Python/gmail
collected 0 items
Unless you change the discovery prefixes for Pytest, test functions must be named test_*:
import pytest
import new_version
@pytest.mark.parametrize("email", ['my_email@gmail.com'])
def test_send_email(email):
    new_version.send_mail(email)
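For reference, a minimal self-contained sketch (hypothetical file name test_send_email.py; by default pytest only collects files matching test_*.py) showing how the marker expands once the test_ prefix is in place:

import pytest

@pytest.mark.parametrize("email", ["first@example.com", "second@example.com"])
def test_email_contains_at_sign(email):
    # each parameter value is collected and run as a separate test case
    assert "@" in email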
How can I use one mailbox parser for a user across a few different threads? Each user has 2 or more companies whose mail should be parsed. In my example I always get errors.
class User():
services = {'company1': 'STOPED',
'company2': 'STOPED'}
def __init__(self, user_settings):
self.user_name = user_settings['User']
self.user_settings = user_settings
self.global_mailbox = None
def company1(self):
service_name = 'company1'
settings = self.user_settings['Company1']
gb_mail = False
def mailbox_check(mailbox):
nonlocal settings, service_name
mailbox.noop()
status, mails = mailbox.search(None, '(UNSEEN)', '(FROM "company1@cmp.com")')
....
if not settings['Mail Login']:
if self.global_mailbox:
mailbox = self.global_mailbox
gb_mail = True
else:
self.SET_GLOBAL_MAIL()
if not self.global_mailbox:
return
else:
mailbox = self.global_mailbox
else:
mail_host = 'imap.{0}'.format(settings['Mail Login'].split('@')[-1])
mail_login = settings['Mail Login']
mail_password = settings['Mail Password']
mailbox = imaplib.IMAP4_SSL(mail_host)
mailbox.sock.settimeout(43200)
mailbox.login(mail_login, mail_password)
mailbox.select('INBOX')
gb_mail = False
while self.services[service_name] != 'STOPED':
time.sleep(5)
new_orders = self.mailbox_check(mailbox)
if new_orders:
action()
if self.services[service_name] == 'STOPED':
break
def company2(self):
service_name = 'company2'
settings = self.user_settings['Company2']
gb_mail = False
def mailbox_check(mailbox):
nonlocal settings, service_name
mailbox.noop()
status, mails = mailbox.search(None, '(UNSEEN)', '(FROM "company2@cmp.com")')
....
if not settings['Mail Login']:
if self.global_mailbox:
mailbox = self.global_mailbox
gb_mail = True
else:
self.SET_GLOBAL_MAIL()
if not self.global_mailbox:
return
else:
mailbox = self.global_mailbox
else:
mail_host = 'imap.{0}'.format(settings['Mail Login'].split('@')[-1])
mail_login = settings['Mail Login']
mail_password = settings['Mail Password']
mailbox = imaplib.IMAP4_SSL(mail_host)
mailbox.sock.settimeout(43200)
mailbox.login(mail_login, mail_password)
mailbox.select('INBOX')
gb_mail = False
while self.services[service_name] != 'STOPED':
time.sleep(5)
new_orders = self.mailbox_check(mailbox)
if new_orders:
action()
if self.services[service_name] == 'STOPED':
break
def SET_GLOBAL_MAIL(self):
try:
gb_mail = self.user_settings['Global Mail']
mail_host = 'imap.{0}'.format(gb_mail['Login'].split('@')[-1])
mail_login = gb_mail['Login']
mail_password = gb_mail['Password']
mailbox = imaplib.IMAP4_SSL(mail_host)
mailbox.sock.settimeout(43200)
mailbox.login(mail_login, mail_password)
mailbox.select('INBOX')
self.global_mailbox = mailbox
except:
self.global_mailbox = None
pass
def START_ALL(self):
self.SET_GLOBAL_MAIL()
for service in self.services.keys():
self.services[service] = 'RUNNING'
threading.Thread(target=lambda: self.__getattribute__(service)(), name='{0} [{1} service thread]'.format(self.user_name, service), daemon=True).start()
>>>user = User(settings)
>>>user.START_ALL()
After a few seconds I got these errors:
imaplib.IMAP4.abort: command: SEARCH => socket error: EOF
imaplib.IMAP4.abort: socket error: EOF
imaplib.IMAP4.abort: command: NOOP => socket error: EOF
imaplib.IMAP4.abort: socket error: [WinError 10014] The system detected an
invalid pointer address in attempting to use a pointer argument in a call
ssl.SSLError: [SSL: SSLV3_ALERT_BAD_RECORD_MAC] sslv3 alert bad record mac (_ssl.c:2273)
If I create a new IMAP session for each thread, everything works fine, but Gmail has a limit on simultaneous IMAP connections, and a user may have more than 15 companies to parse. How do I set up one global mailbox for a user for all of his actions?
It does not really make sense to use the same IMAP connection for several conversations, because there is a single socket connection and a single server-side context. If one thread asks for mailbox1 and a second asks for mailbox2, the server will select the two mailboxes one after the other and stay in the last one.
That is not even counting the race conditions when two threads read from the same socket at the same time: each will get quasi-random partial data while the rest is read by the other thread. I am sorry, but this is not possible.
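If the real constraint is Gmail's cap on simultaneous IMAP connections, one workaround (my own sketch, not part of the answer above; host and credentials are placeholders) is to let a single dedicated thread own the connection and serve the other threads through a queue, so the socket is never shared:

import imaplib
import queue
import threading

def mail_worker(requests, host, login, password):
    # The only thread that ever touches the IMAP connection.
    box = imaplib.IMAP4_SSL(host)
    box.login(login, password)
    box.select('INBOX')
    while True:
        sender, reply = requests.get()
        if sender is None:   # shutdown signal
            break
        try:
            box.noop()
            status, mails = box.search(None, '(UNSEEN)', f'(FROM "{sender}")')
            reply.put(mails)
        except imaplib.IMAP4.error as exc:
            reply.put(exc)
    box.logout()

requests = queue.Queue()
threading.Thread(target=mail_worker,
                 args=(requests, 'imap.example.com', 'user@example.com', 'password'),
                 daemon=True).start()

# A company thread asks for a check and waits for its own answer:
reply = queue.Queue()
requests.put(('company1@cmp.com', reply))
new_mails = reply.get()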
I am trying to learn about Kapacitor user-defined functions (UDFs) from this URL: https://www.youtube.com/watch?v=LL8g4qiBCNo
Kapacitor starts and listens on HTTP port 9092 when I do not specify a Python UDF.
My [udf] section in kapacitor.conf looks like
[udf]
[udf.functions]
[udf.functions.geoSum]
prog = "/usr/bin/python"
args = ["-u", "/tmp/geo.py"]
timeout = "20s"
My Python UDF (geo.py) looks like the following:
import sys
from agent import Agent, Handler
import udf_pb2
class GeoSum(Handler):
def __init__(self):
self._field = ''
self.size = 0
def info(self):
response = udf_pb2.Response()
response.info.wants = udf_pb2.STREAM
response.info.provides = udf_pb2.STREAM
response.info.options['field'].valueTypes.append(udf_pb2.STRING)
response.info.options['size'].valueTypes.append(udf_pb2.INT)
response.info.options['magic'].valueTypes.extend([
udf_pb2.INT,
udf_pb2.DOUBLE,
udf_pb2.DURATION
])
return response
if __name__ == '__main__':
agent = Agent()
handler = GeoSum()
agent.handler = handler
print >> sys.stdout, "Starting GeoSum ..."
agent.start()
agent.wait()
print >> sys.stdout, "Stoping GeoSum ..."
With the above [udf] section, Kapacitor does not listen on HTTP port 9092.
I resolved the above issue by replacing the occurrences of
print >> sys.stdout
with
print >> sys.stderr
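Most likely this is because Kapacitor talks to a process-based UDF over the child's stdin/stdout, so any extra output written to stdout corrupts that protocol stream; diagnostics have to go to stderr. The corrected main block would then look roughly like this (same Python 2 style as the original):

if __name__ == '__main__':
    agent = Agent()
    handler = GeoSum()
    agent.handler = handler
    print >> sys.stderr, "Starting GeoSum ..."   # stderr keeps stdout free for the UDF protocol
    agent.start()
    agent.wait()
    print >> sys.stderr, "Stopping GeoSum ..."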