Constantly hanging test run with the pytest-xdist plugin - python

We are using:
platform linux -- Python 3.9.5, pytest-6.2.5, py-1.10.0, pluggy-0.13.1
plugins: forked-1.4.0, xdist-2.5.0, pytest_check-1.0.4, teamcity-messages-1.29, anyio-3.3.4, testrail-2.9.1, dependency-0.5.1
When executing pytest with xdist on a remote Windows host using --dist=loadfile, the test run hangs.
The command:
python3 -m pytest -vv --dist=loadfile --tx ssh=admin@test-host-ip --rsyncdir /tmp/autotests_rsync C:\\users\\admin\\pyexecnetcache\\autotests_rsync\\autotests\\testsuite\\positive
The hang appears in a test that uses a waiter to get a value from a PostgreSQL DB via the SQLAlchemy ORM.
We're passing a value from the test suite to the following test:
start_time = Waiter.wait_new(
    lambda: DbTestData.get_session_records_column_by_record_id(
        DbTestData.start_time, record_id)[0][0],
    check_func=CheckFunctions.check_none,
    error_message=f"Error")
assert start_time is not None, f"Record start_time in db = {start_time}, expected not None"

def query(*args):
    session = SessionHolder.get_session()
    result = session.query(*args)
    session.commit()
    return result
which uses this waiter:
@staticmethod
def wait_new(func: Callable, check_func: Callable = CheckFunctions.check_empty, timeout_value: int = 20,
             timeout_interval: int = 1, error_message: str = ""):
    print(f"Func = {func}")
    value = waiter_exception
    exc_raise_if_fail = TestWaiterException()
    timeout = 0
    in_while = True
    Logger.utils_logger.debug(f"timeout_value = {timeout_value}, timeout_interval = {timeout_interval}")
    while in_while:
        print("in_while loop")
        try:
            print("Trying to execute func")
            value = func()
        except Exception as ex:
            print("Exception")
            if timeout == timeout_value:
                exc_raise_if_fail.with_traceback(sys.exc_info()[2])
                exc_raise_if_fail.txt += ": " + ex.args[0]
                in_while = False
            value = waiter_exception
            Logger.utils_logger.debug("Exception", exc_info=True)
        finally:
            print("Finally")
            Logger.utils_logger.debug(f"Current value: {value}")
            if (timeout > timeout_value) or (value != waiter_exception and not check_func(value)):
                print("Break")
                break
            else:
                print("Else")
                timeout += timeout_interval
                time.sleep(timeout_interval)
    if value == waiter_exception:
        Logger.utils_logger.critical(f"{exc_raise_if_fail.txt}, {error_message}")
        raise exc_raise_if_fail
    return value
It hangs permanently while executing the waiter, but only when we use the xdist plugin.
We also added close_all_sessions for the SQL queries, but it made no difference.
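It may be worth ruling out one xdist-specific cause (a hedged guess, not a confirmed diagnosis): each xdist worker is a separate process, so a SQLAlchemy engine or session created once in the parent and reused across processes can block forever on a dead connection. A minimal sketch of giving every worker its own engine, assuming a hypothetical SessionHolder.bind() helper and a placeholder connection URL:

import pytest
import sqlalchemy

@pytest.fixture(scope="session", autouse=True)
def db_engine_per_worker():
    # conftest.py is imported by every xdist worker, so this fixture runs
    # once per worker process and no connection is ever shared between them.
    engine = sqlalchemy.create_engine("postgresql://user:pass@db-host/db")  # placeholder URL
    SessionHolder.bind(engine)  # hypothetical helper to rebind the session factory
    yield
    engine.dispose()  # close this worker's pooled connections at teardown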

Related

pytest: TypeError: int() can't convert non-string with explicit base

def _get_trace(self) -> None:
    """Retrieves the stack trace via debug_traceTransaction and finds the
    return value, revert message and event logs in the trace.
    """
    # check if trace has already been retrieved, or the tx warrants it
    if self._raw_trace is not None:
        return
    self._raw_trace = []
    if self.input == "0x" and self.gas_used == 21000:
        self._modified_state = False
        self._trace = []
        return
    if not web3.supports_traces:
        raise RPCRequestError("Node client does not support `debug_traceTransaction`")
    try:
        trace = web3.provider.make_request(  # type: ignore
            "debug_traceTransaction", (self.txid, {"disableStorage": CONFIG.mode != "console"})
        )
    except (requests.exceptions.Timeout, requests.exceptions.ConnectionError) as e:
        msg = f"Encountered a {type(e).__name__} while requesting "
        msg += "`debug_traceTransaction`. The local RPC client has likely crashed."
        if CONFIG.argv["coverage"]:
            msg += " If the error persists, add the `skip_coverage` marker to this test."
        raise RPCRequestError(msg) from None
    if "error" in trace:
        self._modified_state = None
        self._trace_exc = RPCRequestError(trace["error"]["message"])
        raise self._trace_exc
    self._raw_trace = trace = trace["result"]["structLogs"]
    if not trace:
        self._modified_state = False
        return
    # different nodes return slightly different formats. its really fun to handle
    # geth/nethermind returns unprefixed and with 0-padding for stack and memory
    # erigon returns 0x-prefixed and without padding (but their memory values are like geth)
    fix_stack = False
    for step in trace:
        if not step["stack"]:
            continue
        check = step["stack"][0]
        if not isinstance(check, str):
            break
        if check.startswith("0x"):
            fix_stack = True
> c:\users\xxxx\appdata\local\programs\python\python310\lib\site-packages\brownie\network\transaction.py(678)_get_trace()
-> step["pc"] = int(step["pc"], 16)
(Pdb)
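For context on the TypeError in the title: the line pdb stops at converts step["pc"] with an explicit base, and int(x, 16) only accepts strings, so it raises exactly this error when the node has already returned pc as a plain int. A sketch of the guarded conversion, using the same field names as the trace above:

for step in trace:
    # erigon-style traces return "0x..."-prefixed strings; geth returns ints
    if isinstance(step["pc"], str):
        step["pc"] = int(step["pc"], 16)  # int(x, 16) raises TypeError on non-strings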
I am doing Patrick's Solidity course and ran into this error. I ended up copying and pasting his code:
def test_only_owner_can_withdraw():
    if network.show_active() not in LOCAL_BLOCKCHAIN_ENVIRONMENTS:
        pytest.skip("only for local testing")
    fund_me = deploy_fund_me()
    bad_actor = accounts.add()
    with pytest.raises(exceptions.VirtualMachineError):
        fund_me.withdraw({"from": bad_actor})
Pytest worked for my other tests; however, when I tried to run this one it wouldn't work.
OK, after looking at my scripts and contracts I found the issue. There was a problem with my .sol contract: instead of returning a variable, the retrieve function in the contract was returning an error message. It's fixed and working now.

How to add configuration settings for sasl.mechanism PLAIN (API) and GSSAPI (Kerberos) authentication in a Python script

Need some help setting the configuration for sasl.mechanism PLAIN (API) and GSSAPI (Kerberos) authentication.
We are using Confluent Kafka. There are two scripts: a Python script, and a bash script that calls the Python one. You can find the script below.
Thanks for the help in advance!
import json
import os
import string
import random
import socket
import uuid
import re
from datetime import datetime
import time
import hashlib
import math
import sys
from functools import cache
from confluent_kafka import Producer, KafkaError, KafkaException

topic_name = os.environ['TOPIC_NAME']
partition_count = int(os.environ['PARTITION_COUNT'])
message_key_template = json.loads(os.environ['KEY_TEMPLATE'])
message_value_template = json.loads(os.environ['VALUE_TEMPLATE'])
message_header_template = json.loads(os.environ['HEADER_TEMPLATE'])
bootstrap_servers = os.environ['BOOTSTRAP_SERVERS']
perf_counter_batch_size = int(os.environ.get('PERF_COUNTER_BATCH_SIZE', 100))
messages_per_aggregate = int(os.environ.get('MESSAGES_PER_AGGREGATE', 1))
max_message_count = int(os.environ.get('MAX_MESSAGE_COUNT', sys.maxsize))

def error_cb(err):
    """ The error callback is used for generic client errors. These
    errors are generally to be considered informational as the client will
    automatically try to recover from all errors, and no extra action
    is typically required by the application.
    For this example however, we terminate the application if the client
    is unable to connect to any broker (_ALL_BROKERS_DOWN) and on
    authentication errors (_AUTHENTICATION). """
    print("Client error: {}".format(err))
    if err.code() == KafkaError._ALL_BROKERS_DOWN or \
            err.code() == KafkaError._AUTHENTICATION:
        # Any exception raised from this callback will be re-raised from the
        # triggering flush() or poll() call.
        raise KafkaException(err)

def acked(err, msg):
    if err is not None:
        print("Failed to send message: %s: %s" % (str(msg), str(err)))

producer_configs = {
    'bootstrap.servers': bootstrap_servers,
    'client.id': socket.gethostname(),
    'error_cb': error_cb
}
# TODO: Need to support sasl.mechanism PLAIN (API) and GSSAPI (Kerberos) authentication.
# TODO: Need to support truststores for connecting to private DCs.
producer = Producer(producer_configs)

# generates a random value if it is not cached in the template_values dictionary
def get_templated_value(term, template_values):
    if term not in template_values:
        template_values[term] = str(uuid.uuid4())
    return template_values[term]

def fill_template_value(value, template_values):
    str_value = str(value)
    template_regex = '{{(.+?)}}'
    templated_terms = re.findall(template_regex, str_value)
    for term in templated_terms:
        str_value = str_value.replace(f"{{{{{term}}}}}", get_templated_value(term, template_values))
    return str_value

def fill_template(template, templated_terms):
    # TODO: Need to address metadata field, as it's treated as a string instead of a nested object.
    return {field: fill_template_value(value, templated_terms) for field, value in template.items()}

@cache
def get_partition(lock_id):
    bits = 128
    bucket_size = 2**bits / partition_count
    partition = (int(hashlib.md5(lock_id.encode('utf-8')).hexdigest(), 16) / bucket_size)
    return math.floor(partition)

sequence_number = int(time.time() * 1000)
sequence_number = 0
message_count = 0
producing = True
start_time = time.perf_counter()
aggregate_message_counter = 0
# cache for templated term values so that they match across the different templates
templated_values = {}
try:
    while producing:
        sequence_number += 1
        aggregate_message_counter += 1
        message_count += 1
        if aggregate_message_counter % messages_per_aggregate == 0:
            # reset templated values
            templated_values = {}
        else:
            for term in list(templated_values):
                if term not in ['aggregateId', 'tenantId']:
                    del templated_values[term]
        # Fill in templated field values
        message_key = fill_template(message_key_template, templated_values)
        message_value = fill_template(message_value_template, templated_values)
        message_header = fill_template(message_header_template, templated_values)
        ts = datetime.utcnow().isoformat()[:-3] + 'Z'
        message_header['timestamp'] = ts
        message_header['sequence_number'] = str(sequence_number)
        message_value['timestamp'] = ts
        message_value['sequenceNumber'] = sequence_number
        lock_id = message_header['lock_id']
        # partition by lock_id, since key could be random, but a given aggregate_id
        # should ALWAYS resolve to the same partition, regardless of key.
        partition = get_partition(lock_id)
        # Send message
        producer.produce(topic_name, partition=partition, key=json.dumps(message_key),
                         value=json.dumps(message_value), headers=message_header, callback=acked)
        if sequence_number % perf_counter_batch_size == 0:
            producer.flush()
            end_time = time.perf_counter()
            total_duration = end_time - start_time
            messages_per_second = perf_counter_batch_size / total_duration
            print(f'{messages_per_second} messages/second')
            # reset start time
            start_time = time.perf_counter()
        if message_count >= max_message_count:
            break
except Exception as e:
    print('ERROR: %s' % e)
    sys.exit(1)
finally:
    producer.flush()
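Regarding the two TODOs above producer_configs: with confluent-kafka, SASL settings are ordinary keys in the same config dict (they are librdkafka properties). A hedged sketch for both mechanisms; every hostname, principal, environment variable, and file path below is a placeholder, and note that librdkafka expects PEM files rather than JKS truststores:

# SASL/PLAIN over TLS (API-key style credentials); values are placeholders
plain_configs = {
    'bootstrap.servers': bootstrap_servers,
    'security.protocol': 'SASL_SSL',
    'sasl.mechanism': 'PLAIN',
    'sasl.username': os.environ['KAFKA_API_KEY'],       # assumed env var
    'sasl.password': os.environ['KAFKA_API_SECRET'],    # assumed env var
    'ssl.ca.location': '/etc/ssl/certs/ca-bundle.crt',  # PEM CA bundle ("truststore")
    'client.id': socket.gethostname(),
    'error_cb': error_cb,
}

# SASL/GSSAPI (Kerberos); requires a librdkafka build with GSSAPI support
kerberos_configs = {
    'bootstrap.servers': bootstrap_servers,
    'security.protocol': 'SASL_SSL',
    'sasl.mechanism': 'GSSAPI',
    'sasl.kerberos.service.name': 'kafka',                         # broker service principal name
    'sasl.kerberos.principal': 'client@EXAMPLE.COM',               # placeholder principal
    'sasl.kerberos.keytab': '/etc/security/keytabs/client.keytab', # placeholder keytab path
    'ssl.ca.location': '/etc/ssl/certs/ca-bundle.crt',
    'client.id': socket.gethostname(),
    'error_cb': error_cb,
}

producer = Producer(plain_configs)  # or Producer(kerberos_configs)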

Fall back to a normal function call if Celery is not active

What I need is a simple way to run a function synchronously if Celery is not active.
What I tried is:
is_celery_working returns False although Celery and Redis are both running (started with celery -A project worker -l debug and redis-server respectively). Also, get_celery_worker_status always returns an error status.
I am using Celery with Django.
from project.celery import app

def is_celery_working():
    result = app.control.broadcast('ping', reply=True, limit=1)
    return bool(result)  # True if at least one result

def sync_async(func):
    if is_celery_working():
        return func.delay
    else:
        return func

sync_async(some_func)(*its_args, **its_kwargs)
def get_celery_worker_status():
    error_key = 'error'
    try:
        from celery.task.control import inspect
        insp = inspect()
        d = insp.stats()
        if not d:
            d = {error_key: 'No running Celery workers were found.'}
    except IOError as e:
        from errno import errorcode
        msg = "Error connecting to the backend: " + str(e)
        if len(e.args) > 0 and errorcode.get(e.args[0]) == 'ECONNREFUSED':
            msg += ' Check that the RabbitMQ server is running.'
        d = {error_key: msg}
    except ImportError as e:
        d = {error_key: str(e)}
    return d

def sync_async(func):
    status = get_celery_worker_status()
    if 'error' not in status:
        return func.delay
    else:
        return func

sync_async(some_func)(*its_args, **its_kwargs)
Your simple is_celery_working function looks correct. If you're getting False, you may want to increase your timeout to 5 or 10 seconds using the optional timeout parameter.
def is_celery_working():
    result = app.control.broadcast('ping', reply=True, limit=1, timeout=5.0)
    return bool(result)  # True if at least one result

def sync_async(func, *args, **kwargs):
    try:
        func.delay(*args, **kwargs)
    except Exception as error:
        print('Celery not active', error)
        func(*args, **kwargs)
This simply catches the error raised when the Redis server is not working and falls back to a plain call. It worked fine for me, since I am assuming that if Redis is down then Celery is stopped as well.
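If you prefer the fallback attached at the call site once and for all, the same try/except can be wrapped in a small decorator. run_locally_on_failure below is a hypothetical name; calling a Celery task object directly runs its body synchronously in-process:

from functools import wraps

def run_locally_on_failure(task):
    # Hypothetical helper: try the broker first, fall back to a plain call.
    @wraps(task)
    def wrapper(*args, **kwargs):
        try:
            return task.delay(*args, **kwargs)
        except Exception as error:  # broker down, connection refused, etc.
            print('Celery not active, running synchronously:', error)
            return task(*args, **kwargs)
    return wrapper

result = run_locally_on_failure(some_func)(*its_args, **its_kwargs)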

Device twin reported properties are not updating properly with the Python SDK

I divided my software update process into stages (download, unzip, pre-install, install, post-install) and I set reported properties at every stage accordingly. But these properties do not update during the installation process (i.e. I cannot see the reported properties change in the device twin on the Azure portal); only at the end of the installation do I get the callback responses for all of the set "reported" properties.
I am working on a software update with Azure IoT using the Python device SDK.
For this I modified the sample given in the SDK (the iothub_client_sample_class.py file). I use the device twin to drive the update: I created a "desired" property for "software_version" in the device twin, and once that "desired" property changes, the software update process starts. The process performs various operations, so I divided it into stages and send a "reported" property for every stage to the IoT Hub. But these "reported" properties do not update in sequence in the device twin on the Azure portal.
import random
import time
import sys
import iothub_client
import json
from iothub_client import IoTHubClient, IoTHubClientError, IoTHubTransportProvider
from iothub_client import IoTHubMessage, IoTHubMessageDispositionResult, IoTHubError, DeviceMethodReturnValue
from iothub_client_args import get_iothub_opt, OptionError

# HTTP options
# Because it can poll "after 9 seconds" polls will happen effectively
# at ~10 seconds.
# Note that for scalability, the default value of minimumPollingTime
# is 25 minutes. For more information, see:
# https://azure.microsoft.com/documentation/articles/iot-hub-devguide/#messaging
TIMEOUT = 241000
MINIMUM_POLLING_TIME = 9

# messageTimeout - the maximum time in milliseconds until a message times out.
# The timeout period starts at IoTHubClient.send_event_async.
# By default, messages do not expire.
MESSAGE_TIMEOUT = 1000

RECEIVE_CONTEXT = 0
AVG_WIND_SPEED = 10.0
MIN_TEMPERATURE = 20.0
MIN_HUMIDITY = 60.0
MESSAGE_COUNT = 5
RECEIVED_COUNT = 0
TWIN_CONTEXT = 0
METHOD_CONTEXT = 0

# global counters
RECEIVE_CALLBACKS = 0
SEND_CALLBACKS = 0
BLOB_CALLBACKS = 0
TWIN_CALLBACKS = 0
SEND_REPORTED_STATE_CALLBACKS = 0
METHOD_CALLBACKS = 0

firstTime = True
hub_manager = None
PROTOCOL = IoTHubTransportProvider.MQTT
CONNECTION_STRING = "XXXXXX"
base_version = '1.0.0.000'
SUCCESS = 0

def downloadImage(url):
    # Code for downloading the package from url
    return 0

def unzipPackage():
    # code for unzipping the package
    return 0

def readPackageData():
    # code reading package data
    return 0

def pre_install():
    # code for installing dependencies
    return 0

def install():
    # code for installing main package
    return 0

def post_install():
    # code for verifying installation
    return 0

def start_software_update(url, message):
    global hub_manager
    print("Starting software update process!!!!")
    reported_state = "{\"updateStatus\":\"softwareUpdateinprogress\"}"
    hub_manager.send_reported_state(reported_state, len(reported_state), 1003)
    time.sleep(1)
    status = downloadImage(url)
    if status == SUCCESS:
        reported_state = "{\"updateStatus\":\"downloadComplete\"}"
        hub_manager.send_reported_state(reported_state, len(reported_state), 1004)
        print("Download Phase Done!!!")
        time.sleep(1)
    else:
        print("Download Phase failed!!!!")
        return False
    status = unzipPackage()
    if status == SUCCESS:
        reported_state = "{\"updateStatus\":\"UnzipComplete\"}"
        hub_manager.send_reported_state(reported_state, len(reported_state), 1005)
        print("Unzip Package Done!!!")
        time.sleep(1)
    else:
        print("Unzip package failed!!!")
        return False
    status = readPackageData()
    if status == SUCCESS:
        reported_state = "{\"updateStatus\":\"ReadPackageData\"}"
        hub_manager.send_reported_state(reported_state, len(reported_state), 1006)
        print("Reading package json data")
        time.sleep(1)
    else:
        print("Failed Reading package!!!")
        return False
    status = pre_install()
    if status == SUCCESS:
        reported_state = "{\"updateStatus\":\"PreInstallComplete\"}"
        hub_manager.send_reported_state(reported_state, len(reported_state), 1007)
        time.sleep(1)
        print("pre_install state successful!!!")
    else:
        print("pre_install failed!!!!")
        return False
    status = install()
    if status == SUCCESS:
        reported_state = "{\"updateStatus\":\"InstallComplete\"}"
        hub_manager.send_reported_state(reported_state, len(reported_state), 1008)
        time.sleep(1)
        print("install successful!!!")
    else:
        print("install failed!!!")
        return False
    status = post_install()
    if status == SUCCESS:
        reported_state = "{\"updateStatus\":\"SoftwareUpdateComplete\"}"
        hub_manager.send_reported_state(reported_state, len(reported_state), 1009)
        time.sleep(1)
        print("post install successful!!!")
    else:
        print("post install failed!!!")
        return False
    return True

def device_twin_callback(update_state, payload, user_context):
    global TWIN_CALLBACKS
    global firstTime
    global base_version
    print("\nTwin callback called with:\nupdateStatus = %s\npayload = %s\ncontext = %s" % (update_state, payload, user_context))
    TWIN_CALLBACKS += 1
    print("Total calls confirmed: %d\n" % TWIN_CALLBACKS)
    message = json.loads(payload)
    if not firstTime:
        if message["software_version"] != base_version:
            url = message["url"]
            status = start_software_update(url, message)
            if status:
                print("software Update Successful!!!")
            else:
                print("software Update Unsuccessful!!!")
    else:
        base_version = message["desired"]["software_version"]
        print("Set firstTime to false", base_version)
        firstTime = False

def send_reported_state_callback(status_code, user_context):
    global SEND_REPORTED_STATE_CALLBACKS
    print("Confirmation for reported state received with:\nstatus_code = [%d]\ncontext = %s" % (status_code, user_context))
    SEND_REPORTED_STATE_CALLBACKS += 1
    print(" Total calls confirmed: %d" % SEND_REPORTED_STATE_CALLBACKS)

class HubManager(object):
    def __init__(
            self,
            connection_string,
            protocol=IoTHubTransportProvider.MQTT):
        self.client_protocol = protocol
        self.client = IoTHubClient(connection_string, protocol)
        if protocol == IoTHubTransportProvider.HTTP:
            self.client.set_option("timeout", TIMEOUT)
            self.client.set_option("MinimumPollingTime", MINIMUM_POLLING_TIME)
        # set the time until a message times out
        self.client.set_option("messageTimeout", MESSAGE_TIMEOUT)
        # some embedded platforms need certificate information
        # self.set_certificates()
        self.client.set_device_twin_callback(device_twin_callback, TWIN_CONTEXT)

    def send_reported_state(self, reported_state, size, user_context):
        self.client.send_reported_state(
            reported_state, size,
            send_reported_state_callback, user_context)

def main(connection_string, protocol):
    global hub_manager
    try:
        print("\nPython %s\n" % sys.version)
        print("IoT Hub Client for Python")
        hub_manager = HubManager(connection_string, protocol)
        print("Starting the IoT Hub Python sample using protocol %s..." % hub_manager.client_protocol)
        reported_state = "{\"updateStatus\":\"waitingforupdate\"}"
        hub_manager.send_reported_state(reported_state, len(reported_state), 1002)
        while True:
            time.sleep(1)
    except IoTHubError as iothub_error:
        print("Unexpected error %s from IoTHub" % iothub_error)
        return
    except KeyboardInterrupt:
        print("IoTHubClient sample stopped")

if __name__ == '__main__':
    try:
        (CONNECTION_STRING, PROTOCOL) = get_iothub_opt(sys.argv[1:], CONNECTION_STRING)
    except OptionError as option_error:
        print(option_error)
        usage()
        sys.exit(1)
    main(CONNECTION_STRING, PROTOCOL)
Expected result: the "reported" property for every stage of the software update should update promptly in the device twin on the Azure portal.
Actual result: the "reported" property for each stage of the software update process is not updated until the whole process finishes.
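One hedged guess rather than a confirmed diagnosis: start_software_update runs inside device_twin_callback, which the SDK invokes on its own worker thread, so the long-running update (including all the sleep calls) can keep that thread busy, and the queued reported-state patches may only be transmitted and confirmed once the callback returns. A minimal sketch of handing the update off to a separate thread so the client thread stays free:

import threading

def device_twin_callback(update_state, payload, user_context):
    # ...parse the payload exactly as above, then hand off the slow work...
    message = json.loads(payload)
    url = message["url"]
    worker = threading.Thread(target=start_software_update, args=(url, message))
    worker.daemon = True  # do not block process exit on a stuck update
    worker.start()        # the callback returns immediately; patches can flow out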

Use Python to check if dev_appserver is running on localhost

I have a script that I use to connect to localhost:8080 to run some commands on a dev_appserver instance. I use a combination of remote_api_stub and httplib.HTTPConnection. Before I make any calls to either API I want to ensure that the server is actually running.
What would be a "best practice" way in Python to determine:
if any web server is running on localhost:8080
if dev_appserver is running on localhost:8080?
This should do it:
import httplib

NO_WEB_SERVER = 0
WEB_SERVER = 1
GAE_DEV_SERVER_1_0 = 2

def checkServer(host, port, try_only_ssl=False):
    hh = None
    connectionType = httplib.HTTPSConnection if try_only_ssl \
        else httplib.HTTPConnection
    try:
        hh = connectionType(host, port)
        hh.request('GET', '/_ah/admin')
        resp = hh.getresponse()
        headers = resp.getheaders()
        if headers:
            if ('server', 'Development/1.0') in headers:
                return GAE_DEV_SERVER_1_0 | WEB_SERVER
            return WEB_SERVER
    except httplib.socket.error:
        return NO_WEB_SERVER
    except httplib.BadStatusLine:
        if not try_only_ssl:
            # retry with SSL
            return checkServer(host, port, True)
    finally:
        if hh:
            hh.close()
    return NO_WEB_SERVER

print checkServer('scorpio', 22)          # will print 0, an ssh server
print checkServer('skiathos', 80)         # will print 1 for an apache web server
print checkServer('skiathos', 8080)       # will print 3, a GAE dev web server
print checkServer('no-server', 80)        # will print 0, no server
print checkServer('www.google.com', 80)   # will print 1
print checkServer('www.google.com', 443)  # will print 1
I have an ant build script that does stuff using remote_api. To verify the server is running, I just use curl and make sure it returns no error.
<target name="-local-server-up">
    <!-- make sure local server is running -->
    <exec executable="curl" failonerror="true">
        <arg value="-s"/>
        <arg value="${local.host}${remote.api}"/>
    </exec>
    <echo>local server running</echo>
</target>
You could use subprocess.call to do the same in Python (assuming you have curl on your machine).
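A sketch of that curl check from Python; the URL is a placeholder built from the question's host and port:

import subprocess

def local_server_up(url='http://localhost:8080/_ah/remote_api'):
    # `curl -s` exits non-zero when it cannot connect, mirroring ant's failonerror
    return subprocess.call(['curl', '-s', '-o', '/dev/null', url]) == 0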
I would go with something like this:
import httplib

GAE_DEVSERVER_HEADER = "Development/1.0"

def is_HTTP_server_running(host, port, just_GAE_devserver=False):
    conn = httplib.HTTPConnection(host, port)
    try:
        conn.request('HEAD', '/')
        return not just_GAE_devserver or \
            conn.getresponse().getheader('server') == GAE_DEVSERVER_HEADER
    except (httplib.socket.error, httplib.HTTPException):
        return False
    finally:
        conn.close()
tested with:
assert is_HTTP_server_running('yahoo.com', '80') == True
assert is_HTTP_server_running('yahoo.com', '80', just_GAE_devserver=True) == False
assert is_HTTP_server_running('localhost', '8088') == True
assert is_HTTP_server_running('localhost', '8088', just_GAE_devserver=True) == True
assert is_HTTP_server_running('foo', '8088') == False
assert is_HTTP_server_running('foo', '8088', just_GAE_devserver=True) == False
