I've been trying to figure out the best way to do something when my Fabric script fails (for example, send a Slack notification message via the slackbot Python module).
I've made an example where I try to do the above here:
fab_failtest.py
my_slackclient.py
You can run the above example by downloading both files to a directory, running pip install fabric slackbot, and then running:
fab --fabfile=fab_failtest.py fail_test1
or
fab --fabfile=fab_failtest.py fail_test2
(You also need a machine you can SSH to; in this example I have mrbluesky@elo with SSH open on port 22.)
fail_test1 uses try-except, so I can get the exception error info and so forth.
fail_test2 uses try-finally plus a simple boolean variable, so no exception info is available.
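For readers without the files, the two patterns look roughly like this (a hypothetical sketch, not the actual file contents; notify_slack and the failing command are stand-ins):

from fabric.api import task, local

# hypothetical stand-in for the slackbot call in my_slackclient.py
def notify_slack(message):
    pass

@task
def fail_test1():
    try:
        local('some-failing-command')
    except BaseException:
        # local() aborts via SystemExit, so catching BaseException keeps
        # the traceback available (sys.exc_info()) before re-raising
        notify_slack('task failed, traceback available')
        raise

@task
def fail_test2():
    succeeded = False
    try:
        local('some-failing-command')
        succeeded = True
    finally:
        if not succeeded:
            notify_slack('task failed, no exception info here')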
At first I thought I had it with the fail_test1 example, but I've seen it fail several times to send the Slack message on failure; I'm wondering if a race condition or something similar might be involved. I could switch to fail_test2 instead, but I'd really like to keep access to the stack trace as in fail_test1.
Is there a better way to do this, like something provided by Fabric that does exactly what I'm trying to accomplish in the example above?
I disagree with both of your approaches. I'm a strong believer that less code is better. What do I mean by that? A function should do what its name says, no more, no less. If you have to add a global handler like that, I would add it as a wrapper: Fabric functions are difficult enough to read, no need to add error handling to the mix. With that said:
import sys
import traceback
from fabric.api import task, settings, local, abort
from fabric.decorators import _wrap_as_new
from functools import wraps

HOST = 'elo'
PORT = 22

def alert_on_fail(func):
    @wraps(func)
    def decorated(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except:
            # TODO: add more code here
            exception_type, value, tb_msg = sys.exc_info()
            traceback_msg = traceback.format_exc()
            notify('something went wrong: ' + traceback_msg)
            abort('exiting error!!')
    return _wrap_as_new(func, decorated)

@task
@alert_on_fail
def fail_test(host=HOST, port=PORT):
    notify('fail test', msg_type='info')
    local('''python -c "raise Exception('foobar')"''')
    notify('script ran successfully', msg_type='success')  # this will never run because the function above crashed

@task
@alert_on_fail
def pass_test(host=HOST, port=PORT):
    notify('pass test', msg_type='info')
    local('whoami')
    notify('script ran successfully', msg_type='success')

def notify(msg, **kwargs):
    # DISREGARD THIS
    print 'sent to slack:', msg
Output:
$ fab fail_test
sent to slack: fail test
[localhost] local: python -c "raise Exception('foobar')"
Traceback (most recent call last):
File "<string>", line 1, in <module>
Exception: foobar
Fatal error: local() encountered an error (return code 1) while executing 'python -c "raise Exception('foobar')"'
Aborting.
sent to slack: something went wrong: Traceback (most recent call last):
File "/private/tmp/fabfile.py", line 21, in decorated
return func(*args, **kwargs)
File "/private/tmp/fabfile.py", line 34, in fail_test
local('''python -c "raise Exception('foobar')"''')
File "/usr/local/lib/python2.7/site-packages/fabric/operations.py", line 1198, in local
error(message=msg, stdout=out, stderr=err)
File "/usr/local/lib/python2.7/site-packages/fabric/utils.py", line 347, in error
return func(message)
File "/usr/local/lib/python2.7/site-packages/fabric/utils.py", line 53, in abort
sys.exit(msg)
SystemExit: local() encountered an error (return code 1) while executing 'python -c "raise Exception('foobar')"'
Fatal error: exiting error!!
Aborting.
exiting error!!
and:
$ fab pass_test
sent to slack: pass test
[localhost] local: whoami
buzzi
sent to slack: script ran successfully
Done.
You'll notice that the functions are now "easy" to read and "simple"; all the error-handling code has been moved somewhere else.
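For reference, the notify stub could post to Slack with something like the following sketch (an assumption, not part of the original: it uses the requests package and a Slack incoming-webhook URL taken from a SLACK_WEBHOOK_URL environment variable; the slackbot module from the question would work just as well):

import os
import requests

SLACK_WEBHOOK_URL = os.environ['SLACK_WEBHOOK_URL']  # assumed env var

def notify(msg, msg_type='info'):
    # post a plain-text message to a Slack incoming webhook
    requests.post(SLACK_WEBHOOK_URL, json={'text': '[%s] %s' % (msg_type, msg)})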
Related
I have a small Python 3 script like this:
import speedtest
# Speedtest
test = speedtest.Speedtest() # <--- line 4
test.get_servers()
best = test.get_best_server()
print(f"Found: {best['host']} located in {best['country']}")
The first time I run it, it works and everything is fine; it outputs:
Found: speedtest.witcom.cloud:8080 located in Germany
Happy days.
The second time (and subsequent times) that I run the script, I get this error:
Traceback (most recent call last):
File "/Users/zeth/Code/pinger/pinger.py", line 4, in <module>
test = speedtest.Speedtest()
File "/usr/local/lib/python3.9/site-packages/speedtest.py", line 1095, in __init__
self.get_config()
File "/usr/local/lib/python3.9/site-packages/speedtest.py", line 1127, in get_config
raise ConfigRetrievalError(e)
speedtest.ConfigRetrievalError: HTTP Error 403: Forbidden
When Googling around, I saw that I could also call this module straight from the command line, but just running this:
$ speedtest-cli
That gives me the same kind of error:
Retrieving speedtest.net configuration...
Cannot retrieve speedtest configuration
ERROR: HTTP Error 403: Forbidden
But if I run the direct CLI command speedtest-cli --secure (docs for the --secure flag), then it goes through and outputs this:
Retrieving speedtest.net configuration...
Testing from Deutsche Telekom AG (212.185.228.168)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by hotspot.koeln (Cologne) [3.44 km]: 28.805 ms
Testing download speed................................................................................
Download: 30.01 Mbit/s
Testing upload speed......................................................................................................
Upload: 8.68 Mbit/s
The question
I can't figure out how to change the Python line test = speedtest.Speedtest() to use the --secure flag (or HTTPS).
The documentation for speedtest-cli is scarce.
Other attempts
I found this solution: Python Speedtest facing problems with certification _ssl.c:1056, which suggests manually approving the certificates.
But in the directory /Volumes/Macintosh HD/Applications/ I don't have anything called Python 3.9; I have python3.9 installed via Homebrew, and I'm on a Mac.
I could do this:
test = speedtest.Speedtest(secure=True)
I looked into the source code myself, opening this file:
vim /usr/local/lib/python3.9/site-packages/speedtest.py
There I could see the function is defined like this:
class Speedtest(object):
    """Class for performing standard speedtest.net testing operations"""

    def __init__(self, config=None, source_address=None, timeout=10,
                 secure=False, shutdown_event=None):
        self.config = {}
        self._source_address = source_address
        self._timeout = timeout
        self._opener = build_opener(source_address, timeout)
        self._secure = secure
        ...
        ...
        ...
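Putting it together, the script from the question only needs the constructor call changed; secure=True is the Python-side equivalent of the CLI's --secure flag:

import speedtest

# Speedtest over HTTPS via the secure keyword shown in the source above
test = speedtest.Speedtest(secure=True)
test.get_servers()
best = test.get_best_server()
print(f"Found: {best['host']} located in {best['country']}")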
In my project, the scheduler raises this error when executing a job. Please help.
This is the error in the console when the program executes:
Error notifying listener
Traceback (most recent call last):
File "C:\Users\angel\project\venv\lib\site-packages\apscheduler\schedulers\base.py", line 836, in _dispatch_event
cb(event)
File "C:\Users\angel\project\venv\lib\site-packages\django_apscheduler\jobstores.py", line 53, in handle_submission_event
DjangoJobExecution.SENT,
File "C:\Users\angel\project\venv\lib\site-packages\django_apscheduler\models.py", line 157, in atomic_update_or_create
job_id=job_id, run_time=run_time
File "C:\Users\angel\project\venv\lib\site-packages\django\db\models\query.py", line 412, in get
(self.model._meta.object_name, num)
django_apscheduler.models.DjangoJobExecution.MultipleObjectsReturned: get() returned more than one DjangoJobExecution -- it returned 2!
This is my code:
class Command(BaseCommand):
    help = "Runs apscheduler."

    scheduler = BackgroundScheduler(timezone=settings.TIME_ZONE, daemon=True)
    scheduler.add_jobstore(DjangoJobStore(), "default")

    def handle(self, *args, **options):
        self.scheduler.add_job(
            delete_old_job_executions,
            'interval', seconds=5,
            id="delete_old_job_executions",
            max_instances=1,
            replace_existing=True
        )
        try:
            logger.info("Starting scheduler...")
            self.scheduler.start()
        except KeyboardInterrupt:
            logger.info("Stopping scheduler...")
            self.scheduler.shutdown()
            logger.info("Scheduler shut down successfully!")
Not sure if you're still having this issue. I had the same error and found your question. It turned out this happens only in the dev environment: because python3 manage.py runserver starts two processes by default, the code seems to register two job records and then finds two entries at the next run time.
With the --noreload option it starts only one scheduler thread and works well. As the name implies, though, it won't automatically reload changes you make:
python3 manage.py runserver --noreload
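If you'd rather keep auto-reload, another possible workaround (a sketch, assuming the scheduler is started from code that runserver imports, e.g. an AppConfig.ready hook) is to guard on Django's RUN_MAIN environment variable, which the autoreloader sets only in the child process that actually serves:

import os

from apscheduler.schedulers.background import BackgroundScheduler
from django.conf import settings
from django_apscheduler.jobstores import DjangoJobStore

def start_scheduler_once():
    # runserver's autoreloader runs two processes; RUN_MAIN is 'true' only
    # in the serving child, so the scheduler (and its job registrations)
    # start exactly once
    if os.environ.get('RUN_MAIN') != 'true':
        return
    scheduler = BackgroundScheduler(timezone=settings.TIME_ZONE, daemon=True)
    scheduler.add_jobstore(DjangoJobStore(), 'default')
    scheduler.start()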
Not sure if you're still having this issue. I think you can use the socket module for this.
I develop locally on Windows 10, which is a problem for using the RQ task queue: RQ only works on Linux systems because it requires the ability to fork processes. I'm trying to extend the flask-base project (https://github.com/hack4impact/flask-base/tree/master/app), which can use RQ. I came across https://github.com/michaelbrooks/rq-win and I love the idea of this repo; if I can get it working it will really simplify my life, since I develop on Windows 10 64-bit.
After installing this library, I can queue a job in my views by running something like:
@login_required
@main.route('/selected')
def selected():
    messages = 'abcde'
    j = get_queue().enqueue(render_png, messages, result_ttl=5000)
    return j.get_id()
This returns a job_code correctly.
I changed the code in manage.py to:
from rq_win import WindowsWorker

@manager.command
def run_worker():
    """Initializes a slim rq task queue."""
    listen = ['default']
    REDIS_URL = 'redis://localhost:6379'
    conn = Redis.from_url(REDIS_URL)
    with Connection(conn):
        # worker = Worker(map(Queue, listen))
        worker = WindowsWorker(map(Queue, listen))
        worker.work()
When I try to run it with:
$ python -u manage.py run_worker
09:40:44
09:40:44 *** Listening on default...
09:40:58 default: app.main.views.render_png('{"abcde"}') (8c1b6186-39a5-4daf-9c45-f60e4241cd1f)
...\lib\site-packages\rq\job.py:161: DeprecationWarning: job.status is deprecated. Use job.set_status() instead
  DeprecationWarning
09:40:58 ValueError: Unknown type <class 'redis.client.StrictPipeline'>
Traceback (most recent call last):
File "...\lib\site-packages\rq_win\worker.py", line 87, in perform_job
queue.enqueue_dependents(job, pipeline=pipeline)
File "...\lib\site-packages\rq\queue.py", line 322, in enqueue_dependents
for job_id in pipe.smembers(dependents_key)]
File "...\lib\site-packages\rq\queue.py", line 322, in <listcomp>
for job_id in pipe.smembers(dependents_key)]
File "...\lib\site-packages\rq\compat\__init__.py", line 62, in as_text
raise ValueError('Unknown type %r' % type(v))
ValueError: Unknown type <class 'redis.client.StrictPipeline'>
So, in summary, I think the jobs are being queued correctly within Redis. However, when the worker process tries to grab a job off the queue to process it, this error occurs. How can I fix this?
After some digging, it looks like the root of the error is here, where the job_id being sent to the as_text function is, somehow, a StrictPipeline object. However, I have been unable to replicate the error locally; can you post more of your code? Also, I would try reinstalling the redis, rq, and rq-win modules, and possibly try importing rq.compat.
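For background, here is a small sketch of the redis-py pipeline semantics the traceback points at: commands issued on a pipeline are buffered and return the pipeline itself, so iterating over pipe.smembers(...) directly (as the enqueue_dependents frame in the traceback does) yields pipeline objects instead of job ids:

from redis import StrictRedis

r = StrictRedis()
pipe = r.pipeline()

result = pipe.smembers('some_key')  # buffered: returns the pipeline, not a set
print(result is pipe)               # True

members, = pipe.execute()           # results only materialize here
print(members)                      # the real set of members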
Background
I have an existing sipp conf file that I launch like so:
sipp mysipdomain.net -sf ./testcall.conf -m 1 -s 12345 -i 10.1.1.1:5060
This runs just fine; it simulates a call in our test labs. But now I need to expand this test and make it part of a larger test script where I not only launch the SIPp test but also prove (via SIP trace) that it's hitting the right boxes.
I decided to wrap this SIPp call in Python. I just found https://github.com/SIPp/pysipp and am trying to see if I can write this entire test in Python. To start, I tried to run the same SIPp test using pysipp.
Problem / Question
I'm currently getting an error that says:
lab2:/tmp/jj/sipp_tests# python mvv_numeric.py
No handlers could be found for logger "pysipp"
Traceback (most recent call last):
File "mvv_numeric.py", line 6, in <module>
uac()
File "/usr/lib/python2.7/site-packages/pysipp-0.1.alpha-py2.7.egg/pysipp/agent.py", line 71, in __call__
raise_exc=raise_exc, **kwargs
File "/usr/lib/python2.7/site-packages/pluggy-0.3.1-py2.7.egg/pluggy.py", line 724, in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "/usr/lib/python2.7/site-packages/pluggy-0.3.1-py2.7.egg/pluggy.py", line 338, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/usr/lib/python2.7/site-packages/pluggy-0.3.1-py2.7.egg/pluggy.py", line 333, in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "/usr/lib/python2.7/site-packages/pluggy-0.3.1-py2.7.egg/pluggy.py", line 596, in execute
res = hook_impl.function(*args)
File "/usr/lib/python2.7/site-packages/pysipp-0.1.alpha-py2.7.egg/pysipp/__init__.py", line 250, in pysipp_run_protocol
finalize(cmds2procs, raise_exc=raise_exc)
File "/usr/lib/python2.7/site-packages/pysipp-0.1.alpha-py2.7.egg/pysipp/__init__.py", line 228, in finalize
raise SIPpFailure(msg)
pysipp.SIPpFailure: Some agents failed
'uac' with exit code 255 -> Command or syntax error: check stderr output
Code
Here's what the py script looks like:
1 import pysipp
2 uac = pysipp.client(destaddr=('mysipdomain.net', 5060))
3 uac.uri_username = '12345'
4 uac.auth_password = ''
5 uac.scen_file = './numeric.xml'
6 uac()
And the original SIPp "testcall.conf" has been renamed to "numeric.xml" and looks like this (I'm only including the first part because it's quite long; if you need to see something specific, please let me know and I will add it to this post):
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE scenario SYSTEM "sipp.dtd">
<scenario name="UAC with Media">
<send retrans="10000">
<![CDATA[
INVITE sip:[service]@[remote_ip]:[remote_port] SIP/2.0
Via: SIP/2.0/[transport] [local_ip]:[local_port];branch=[branch]
From: sipp <sip:sipp@[local_ip]:[local_port]>;tag=[pid]SIPpTag00[call_number]
To: [service] <sip:[service]@[remote_ip]:[remote_port]>
Call-id: [call_id]
CSeq: 1 INVITE
Contact: <sip:sipp@[local_ip]:[local_port]>
Allow: INVITE, ACK, BYE, CANCEL, OPTIONS, INFO, MESSAGE, SUBSCRIBE, NOTIFY, PRACK, UPDATE, REFER
User-Agent: PolycomVVX-VVX_300-UA/5.5.2.8571
Accept-Language: en
Supported: replaces,100rel
Allow-Events: conference,talk,hold
Max-Forwards: 70
Content-Type: application/sdp
Content-Length: [len]
I'm sure it's something simple I've overlooked. Any pointers would be appreciated.
EDIT:
I added debug-level logging and reran the Python script. In the logs I can now see what pysipp is actually attempting:
2018-01-31 14:40:32,715 MainThread [DEBUG] pysipp launch.py:63 : launching cmd:
"'/usr/bin/sipp' 'mysipdomain.net':'5060' -s '12345' -sn 'uac' -sf 'numeric.xml' -recv_timeout '5000' -r '1' -l '1' -m '1' -log_file '/tmp/uac_log_file' -screen_file '/tmp/uac_screen_file' -trace_logs -trace_screen "
So comparing that with the original command line I use to run sipp, I see the extra "-sn 'uac'".
I'm going to see about either getting my SIPp script to work with that flag, or googling to see if I can find other similar posts.
In the meantime, if you see my mistake, I'm all ears.
The problem here (as you noticed) is likely that pysipp.client() sets the -sn uac flag and sipp fails when given both -sn and -sf.
To see the actual error you can enable logging before running the client:
import pysipp
pysipp.utils.log_to_stderr("DEBUG")
uac = pysipp.client(destaddr=('mysipdomain.net', 5060))
uac.uri_username = '12345'
uac.auth_password = ''
uac.scen_file = './numeric.xml'
uac()
The hack is to simply do uac.scen_name = None, but the proper way is either to use pysipp.scenario() (docs here) and rename your numeric.xml so the file name contains uac (e.g. uac_numeric.xml), or to use pysipp.ua(scen_file=<path/to/numeric.xml>) instead.
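Concretely, the alternatives might look like this (a sketch reusing the values from the question; exactly which attributes the bare ua needs may vary with your setup):

import pysipp

# the hack: keep pysipp.client() but clear the default scenario name,
# so -sn is no longer passed alongside -sf
uac = pysipp.client(destaddr=('mysipdomain.net', 5060))
uac.scen_name = None
uac.uri_username = '12345'
uac.auth_password = ''
uac.scen_file = './numeric.xml'
uac()

# the proper way: a bare user agent built directly from the scenario file
uac = pysipp.ua(scen_file='./numeric.xml')
uac()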
To understand the problem: the client is currently applying a default scenario-name argument when really the user should be able to override it (though in that case there's no guarantee the user is actually sending client traffic, which makes the name client kind of pointless).
I'm trying to use Parallel Python in order to do some distributed benchmarking (essentially, coordinate and run some code on a set of machines from a central server). The code I had was working perfectly fine until I moved the functionality to a separate package. From then on, I keep getting ImportError: No module named some.module.pp_test.
My question is actually two-fold: has anyone ever come across this problem with pp, and if so, how do I solve it? I tried using dill (import dill), but it didn't help. Also, is there a good replacement for Parallel Python that doesn't require any additional infrastructure?
The exact error I get is:
RUNNING TEST
Waiting for hosts to finish booting....A fatal error has occured during the function execution
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/ppworker.py", line 86, in run
__args = pickle.loads(__sargs)
ImportError: No module named some.module.pp_test
Caught exception in the run phase 'NoneType' object is not iterable
Traceback (most recent call last):
File "test.py", line 5, in <module>
p.ping_pong()
File "/home/ubuntu/workspace/pp-test/some/module/pp_test.py", line 5, in ping_pong
a_test.run()
File "/home/ubuntu/workspace/pp-test/some/module/pp_test.py", line 27, in run
pong, hostname = ping()
TypeError: 'NoneType' object is not iterable
The code is structured this way:
pp-test/
    test.py
    some/
        __init__.py
        module/
            __init__.py
            pp_test.py
The test.py is implemented as:
from some.module.pp_test import MWE
p = MWE()
p.ping_pong()
While pp_test.py is:
class MWE():
    def ping_pong(self):
        print "RUNNING TEST "
        a_test = PPTester()
        a_test.run()

import pp
import time
from sys import stdout, exit

class PPTester(object):
    def run(self):
        try:
            ppservers = ('10.10.10.10', )
            time.sleep(5)
            job_server = pp.Server(0, ppservers=ppservers)
            stdout.write("Waiting for hosts to finish booting...")
            while len(job_server.get_active_nodes()) - 1 < len(ppservers):
                stdout.write(".")
                stdout.flush()
                time.sleep(1)

            ppmodules = ()
            pings = [(server, job_server.submit(self.run_pong, modules=ppmodules)) for server in ppservers]
            for server, ping in pings:
                pong, hostname = ping()
                print "Host ", hostname, " is alive!"
            print "All servers booted up, starting benchmarks..."
            job_server.print_stats()
        except Exception as e:
            print "Caught exception in the run phase", e
            raise
        pass

    def run_pong(self):
        import subprocess
        p = subprocess.Popen("hostname", stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
        (output, err) = p.communicate()
        p_status = p.wait()
        return "pong ", output
dill won't work with pp out of the box, because pp doesn't serialize Python objects -- it extracts the object's source code (like the inspect module in the standard library).
To enable pp to use dill (actually dill.source, which is inspect augmented by dill), you have to use a fork of pp called ppft. ppft installs as pp (i.e. it imports with import pp), but it has much stronger source inspection, so it can automatically "serialize" most Python objects and track down their dependencies.
Get ppft here: https://github.com/uqfoundation
ppft is also pip-installable and Python 3.x compatible.
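A minimal usage sketch (assuming ppft is installed; it keeps pp's API, so the server setup from the question carries over unchanged):

# pip install ppft  -- it still imports under the name pp
import pp

def pong():
    import socket
    return 'pong', socket.gethostname()

job_server = pp.Server()           # local workers only in this sketch
job = job_server.submit(pong)      # ppft ships the source via dill.source
print(job())                       # ('pong', '<hostname>')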