Stop pytest right at the start if condition not met - python

Is there any way of stopping the entire pytest run at a very early stage if some condition is not met? For example, if it is found that the Elasticsearch service is not running?
I have tried putting this in a common file which is imported by all test files:
import requests
import pytest

try:
    requests.get('http://localhost:9200')
except requests.exceptions.ConnectionError:
    msg = 'FATAL. Connection refused: ES does not appear to be installed as a service (localhost port 9200)'
    pytest.exit(msg)
... but tests are still being run for each file and each test within each file, and large amounts of error-related output are also produced.
Obviously what I am trying to do is stop the run at the very start of the collection stage.
Obviously too, I could write a script which checks any necessary conditions before calling pytest with any CLI parameters I might pass to it. Is this the only way to accomplish this?

Try using the pytest_configure initialization hook.
In your global conftest.py:
import requests
import pytest

def pytest_configure(config):
    try:
        requests.get('http://localhost:9200')
    except requests.exceptions.ConnectionError:
        msg = 'FATAL. Connection refused: ES does not appear to be installed as a service (localhost port 9200)'
        pytest.exit(msg)
Updates:
Note that the single argument of pytest_configure has to be named config!
Using pytest.exit makes it look nicer.
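If you would rather not import requests inside conftest.py, a stdlib-only variant of the same check is possible. This is a sketch, not the answer's code; the host, port, and timeout values are assumptions taken from the question:

```python
import socket

import pytest

def es_reachable(host="localhost", port=9200, timeout=1.0):
    # A plain TCP connection attempt; cheaper than a full HTTP round trip
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pytest_configure(config):
    if not es_reachable():
        pytest.exit('FATAL. Connection refused: ES does not appear to be '
                    'installed as a service (localhost port 9200)')
```

The effect is the same: pytest.exit aborts the session before collection runs.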

Yes, MrBeanBremen's solution also works, with the following code in conftest.py:
import requests
import pytest

@pytest.fixture(scope='session', autouse=True)
def check_es():
    try:
        requests.get('http://localhost:9200')
    except requests.exceptions.ConnectionError:
        msg = 'FATAL. Connection refused: ES does not appear to be installed as a service (localhost port 9200)'
        pytest.exit(msg)

Related

Kafka Producer Function is not producing messages to Kafka via Google Cloud Functions

I am using GCP with its Cloud Functions to execute web scrapers on a frequent basis. Also locally, my script is working without any problems.
I have a setup.py file in which I am initializing the connection to a Kafka Producer, which looks like this:
import os

from confluent_kafka import Producer

p = Producer(
    {
        "bootstrap.servers": os.environ.get("BOOTSTRAP.SERVERS"),
        "security.protocol": os.environ.get("SECURITY.PROTOCOL"),
        "sasl.mechanisms": os.environ.get("SASL.MECHANISMS"),
        "sasl.username": os.environ.get("SASL.USERNAME"),
        "sasl.password": os.environ.get("SASL.PASSWORD"),
        "session.timeout.ms": os.environ.get("SESSION.TIMEOUT.MS")
    }
)

def delivery_report(err, msg):
    """Called once for each message produced to indicate delivery result.
    Triggered by poll() or flush()."""
    print("Got here!")
    if err is not None:
        print("Message delivery failed: {}".format(err))
    else:
        print("Message delivered to {} [{}]".format(msg.topic(), msg.partition()))
    return "DONE."
I am importing this setup in main.py in which my scraping functions are defined. This looks similar to this:
from setup import p, delivery_report

def scraper():
    try:
        # I won't insert my whole scraper here since it's working fine ...
        print(scraped_data_as_dict)
        p.produce(topic, json.dumps(scraped_data_as_dict), callback=delivery_report)
        p.poll(0)
    except Exception as e:
        # Do sth else
        pass
The point here is: I am printing my scraped data in the console, but it doesn't do anything with the producer. It's not even logging a failed producer message (delivery_report) on the console. It's like my script is ignoring the producer command. Also, there are no error reports in the LOG of the Cloud Function. What am I doing wrong, since the function is doing something, except the important stuff? What do I have to be aware of when connecting Kafka with Cloud Functions?

kubernetes python client: block and wait for child pods to dissappear when deleting deployment

I'm looking to use the Kubernetes python client to delete a deployment, and then block and wait until all of the associated pods are deleted as well. A lot of the examples I'm finding recommend using the watch function, something like the following:
from kubernetes import client, watch
from kubernetes.client import AppsV1Api

api_client = client.ApiClient()
v1 = client.CoreV1Api()
w = watch.Watch()

try:
    # try to delete if exists
    AppsV1Api(api_client).delete_namespaced_deployment(namespace="default", name="mypod")
except Exception:
    # handle exception
    pass

# wait for all pods associated with deployment to be deleted.
for e in w.stream(
        v1.list_namespaced_pod, namespace="default",
        label_selector='mylabel=my-value',
        timeout_seconds=300):
    pod_name = e['object'].metadata.name
    print("pod_name", pod_name)
    if e['type'] == 'DELETED':
        w.stop()
        break
However, I see two problems with this.
If the pod is already gone (or if some other process deletes all pods before execution reaches the watch stream), then the watch will find no events and the for loop will get stuck until the timeout expires. Watch does not seem to generate activity if there are no events.
Upon seeing events in the event stream for the pod activity, how do I know all the pods got deleted? It seems fragile to count them.
I'm basically looking to replace the kubectl delete --wait functionality with a python script.
Thanks for any insights into this.
import json

from kubernetes.client.rest import ApiException

def delete_pod(pod_name):
    return v1.delete_namespaced_pod(name=pod_name, namespace="default")

def delete_pod_if_exists(pod_name):
    def run():
        delete_pod(pod_name)
    while True:
        try:
            run()
        except ApiException as e:
            has_deleted = json.loads(e.body)['code'] == 404
            if has_deleted:
                return
Maybe you can try it this way, handling exceptions based on your requirements:
def delete_deployment():
    """ Delete deployment """
    while True:
        try:
            deployment = api_client.delete_namespaced_deployment(
                name="deployment_name",
                namespace="deployment_namespace",
                body=client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=5),
            )
        except ApiException:
            break
    print("Deployment 'deployment_name' has been deleted.")
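Regarding the original --wait question: instead of relying on watch events (which never fire if the pods are already gone), a plain polling loop sidesteps both problems the asker raised. This is a sketch, not tested against a real cluster; `v1` is assumed to be a `CoreV1Api` client, and the label selector comes from the question:

```python
import time

def wait_for_pods_gone(v1, namespace="default",
                       label_selector="mylabel=my-value",
                       timeout=300, interval=2):
    # Re-list matching pods until none remain. An already-empty list ends
    # the wait immediately, and there is no event counting to get wrong.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        pods = v1.list_namespaced_pod(namespace=namespace,
                                      label_selector=label_selector)
        if not pods.items:
            return True
        time.sleep(interval)
    return False
```

Polling trades a little latency for robustness; it is also how clients commonly implement a --wait flag.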

Checking Servers with Motor(Mongodb & Tornado)

I need to create a function that checks to make sure the Mongo servers are running, using the ping function. I set up the clients like this (the config file has a dictionary with port numbers):
clientList = []
for value in configuration["mongodbServer"]:
    client = motor.motor_tornado.MotorClient('mongodb://localhost:{}'.format(value))
    clientList.append(client)
and then I run this function:
class MongoChecker(Checker):
    formatter = 'stashboard.formatters.MongoFormatter'

    def check(self):
        for x in clientList:
            if x.ping:
                return x.ping
and the error I get:
yielded unknown object MotorDatabase(Database(MongoClient([]), 'ping'))\n",
I think my issue is that I'm using the ping function wrong. I can't find any other documentation on it, or on any other feature that would check whether the servers are still running. If anyone knows of a better way to monitor the status using Motor, I'm open. Thanks!
First, there's no "ping" function. Hence MotorClient thinks you're trying to access the database named "ping". The database named "ping" is shown in the "unknown object" exception. For all MongoDB commands like "ping", just use MotorDatabase's command method.
Second, Motor is asynchronous. You must use Motor methods in a Tornado coroutine with the "yield" statement. For example:
@gen.coroutine
def check():
    try:
        result = yield client.admin.command({'ping': 1})
        print(result)
    except ConnectionFailure as exc:
        print(exc)
If you want to test this out synchronously, you can run the IOLoop just long enough for the coroutine to complete:
from pymongo.errors import ConnectionFailure
from tornado import gen
from tornado.ioloop import IOLoop
import motor.motor_tornado
client = motor.motor_tornado.MotorClient()
IOLoop.current().run_sync(check)
For an introduction to Tornado coroutines, see Refactoring Tornado Coroutines and the Tornado documentation.

suppress exceptions in twisted

I'm writing a little customized FTP server and I need to suppress printing exceptions (well, one specific type of exception) to the console, but I want the server to send "550 Requested action not taken: internal server error" or something like that to the client.
However, when I catch the exception using addErrback(), I don't see the exception in the console, but the client gets an OK status.
What could I do?
When you catch an error in an errback handler, you should inspect the type of the Failure and, based on the internal logic of your application, send the error as an FTP error message to the client.
twisted.protocols.ftp.FTP handles this with self.reply(ERROR_CODE, "description").
So your code could look something like this:
from twisted.protocols import ftp

MY_ERROR = ftp.REQ_ACTN_NOT_TAKEN

def failureCheck(failureInstance):
    # do some magic to establish if we should reply an Error to this failure
    return True

class myFTP(ftp.FTP):
    def myActionX(self):
        magicResult = self.doDeferredMagic()
        magicResult.addCallback(self.onMagicSuccess)
        magicResult.addErrback(self.onFailedMagic)

    def onFailedMagic(self, failureInstance):
        if failureCheck(failureInstance):
            self.reply(MY_ERROR, 'Add relevant failure information here')
        else:
            # do whatever other logic here
            pass

Python - How to check if Redis server is available

I'm developing a Python service (class) for accessing a Redis server. I want to know how to check whether the Redis server is running or not, and also handle the case where I'm not able to connect to it.
Here is a part of my code
import redis

rs = redis.Redis("localhost")
print(rs)
It prints the following
<redis.client.Redis object at 0x120ba50>
even if my Redis Server is not running.
I found that my Python code connects to the server only when I do a set() or get() with my redis instance.
So I don't want other services using my class to get an exception saying:
redis.exceptions.ConnectionError: Error 111 connecting localhost:6379. Connection refused.
I want to return a proper message/error code instead. How can I do that?
If you want to test redis connection once at startup, use the ping() command.
from redis import Redis
redis_host = '127.0.0.1'
r = Redis(redis_host, socket_connect_timeout=1) # short timeout for the test
r.ping()
print('connected to redis "{}"'.format(redis_host))
The command ping() checks the connection and if invalid will raise an exception.
Note - the connection may still fail after you perform the test so this is not going to cover up later timeout exceptions.
The official way to check redis server availability is ping ( http://redis.io/topics/quickstart ).
One solution is to subclass redis and do 2 things:
check for a connection at instantiation
write an exception handler in the case of no connectivity when making requests
As you said, the connection to the Redis server is only established when you try to execute a command on the server. If you do not want to go ahead without checking that the server is available, you can just send a trivial query to the server and check the response. Something like:
try:
    response = rs.client_list()
except redis.ConnectionError:
    # your error handling code here
    pass
There are already good solutions here, but here's my quick and dirty one for django_redis, which doesn't seem to include a ping function (though I'm using an older version of django and can't use the newest django_redis).
# assuming rs is your redis connection
def is_redis_available():
    # ... get redis connection here, or pass it in. up to you.
    try:
        rs.get(None)  # getting None returns None or throws an exception
    except (redis.exceptions.ConnectionError,
            redis.exceptions.BusyLoadingError):
        return False
    return True
This seems to work just fine. Note that if redis is restarting and still loading the .rdb file that holds the cache entries on disk, then it will throw the BusyLoadingError, though its base class is ConnectionError, so it's fine to just catch that.
You can also simply except on redis.exceptions.RedisError which is the base class of all redis exceptions.
Another option, depending on your needs, is to create get and set functions that catch the ConnectionError exceptions when setting/getting values. Then you can continue or wait or whatever you need to do (raise a new exception or just throw out a more useful error message).
This might not work well if you absolutely depend on setting/getting the cache values (for my purposes, if cache is offline for whatever we generally have to "keep going") in which case it might make sense to have the exceptions and let the program/script die and get the redis server/service back to a reachable state.
I have also come across a ConnectionRefusedError from the sockets library, when redis was not running, therefore I had to add that to the availability check.
r = redis.Redis(host='localhost', port=6379, db=0)

def is_redis_available(r):
    try:
        r.ping()
        print("Successfully connected to redis")
    except (redis.exceptions.ConnectionError, ConnectionRefusedError):
        print("Redis connection error!")
        return False
    return True

if is_redis_available(r):
    print("Yay!")
The Redis server connection can be checked by executing the ping command on the server.
>>> import redis
>>> r = redis.Redis(host="127.0.0.1", port="6379")
>>> r.ping()
True
Using the ping method, we can also handle reconnection, etc. To know the reason for an error in connecting, exception handling can be used as suggested in other answers:
try:
    is_connected = r.ping()
except redis.ConnectionError:
    # handle error
    pass
Use ping()
import redis
from redis import Redis

redis_host = 'localhost'
conn_pool = Redis(redis_host)
# Connection=Redis<ConnectionPool<Connection<host=localhost,port=6379,db=0>>>
try:
    conn_pool.ping()
    print('Successfully connected to redis')
except redis.exceptions.ConnectionError as r_con_error:
    print('Redis connection error')
    # handle exception
