pysipp - Trying to use it with existing sipp conf file - python

Background
I have an existing sipp conf file that I launch like so:
sipp mysipdomain.net -sf ./testcall.conf -m 1 -s 12345 -i 10.1.1.1:5060
This runs just fine. It simulates a call in our test labs. But now I need to expand this test into part of a larger test script where I not only launch the sipp test, but also prove (via SIP trace) that it's hitting the right boxes.
I decided to wrap this sipp call in Python. I just found https://github.com/SIPp/pysipp and am trying to see if I can write this entire test in Python. To start, I tried to run the same sipp test using pysipp.
Problem / Question
I'm currently getting an error that says:
lab2:/tmp/jj/sipp_tests# python mvv_numeric.py
No handlers could be found for logger "pysipp"
Traceback (most recent call last):
File "mvv_numeric.py", line 6, in <module>
uac()
File "/usr/lib/python2.7/site-packages/pysipp-0.1.alpha-py2.7.egg/pysipp/agent.py", line 71, in __call__
raise_exc=raise_exc, **kwargs
File "/usr/lib/python2.7/site-packages/pluggy-0.3.1-py2.7.egg/pluggy.py", line 724, in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "/usr/lib/python2.7/site-packages/pluggy-0.3.1-py2.7.egg/pluggy.py", line 338, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/usr/lib/python2.7/site-packages/pluggy-0.3.1-py2.7.egg/pluggy.py", line 333, in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "/usr/lib/python2.7/site-packages/pluggy-0.3.1-py2.7.egg/pluggy.py", line 596, in execute
res = hook_impl.function(*args)
File "/usr/lib/python2.7/site-packages/pysipp-0.1.alpha-py2.7.egg/pysipp/__init__.py", line 250, in pysipp_run_protocol
finalize(cmds2procs, raise_exc=raise_exc)
File "/usr/lib/python2.7/site-packages/pysipp-0.1.alpha-py2.7.egg/pysipp/__init__.py", line 228, in finalize
raise SIPpFailure(msg)
pysipp.SIPpFailure: Some agents failed
'uac' with exit code 255 -> Command or syntax error: check stderr output
Code
Here's what the py script looks like:
import pysipp
uac = pysipp.client(destaddr=('mysipdomain.net', 5060))
uac.uri_username = '12345'
uac.auth_password = ''
uac.scen_file = './numeric.xml'
uac()
And the original sipp "testcall.conf" has been renamed to "numeric.xml" and looks like this (I'm only including the first part because it's quite long; if you need to see something specific, please let me know and I will add to this post):
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE scenario SYSTEM "sipp.dtd">
<scenario name="UAC with Media">
<send retrans="10000">
<![CDATA[
INVITE sip:[service]@[remote_ip]:[remote_port] SIP/2.0
Via: SIP/2.0/[transport] [local_ip]:[local_port];branch=[branch]
From: sipp <sip:sipp@[local_ip]:[local_port]>;tag=[pid]SIPpTag00[call_number]
To: [service] <sip:[service]@[remote_ip]:[remote_port]>
Call-id: [call_id]
CSeq: 1 INVITE
Contact: <sip:sipp@[local_ip]:[local_port]>
Allow: INVITE, ACK, BYE, CANCEL, OPTIONS, INFO, MESSAGE, SUBSCRIBE, NOTIFY, PRACK, UPDATE, REFER
User-Agent: PolycomVVX-VVX_300-UA/5.5.2.8571
Accept-Language: en
Supported: replaces,100rel
Allow-Events: conference,talk,hold
Max-Forwards: 70
Content-Type: application/sdp
Content-Length: [len]
I'm sure it's something simple I've overlooked. Any pointers would be appreciated.
EDIT:
I added debug-level logging and reran the Python script. In the logs I can now see what pysipp is actually attempting:
2018-01-31 14:40:32,715 MainThread [DEBUG] pysipp launch.py:63 : launching cmd:
"'/usr/bin/sipp' 'mysipdomain.net':'5060' -s '12345' -sn 'uac' -sf 'numeric.xml' -recv_timeout '5000' -r '1' -l '1' -m '1' -log_file '/tmp/uac_log_file' -screen_file '/tmp/uac_screen_file' -trace_logs -trace_screen "
So comparing that with the original command line I use to run sipp, I see the extra "-sn 'uac'".
Going to see about either getting my SIPp script to work with that flag or ... googling to see if I can find other similar posts.
In the meantime, if you see my mistake, I'm all ears.

The problem here (as you noticed) is likely that pysipp.client() sets the -sn uac flag and sipp fails when given both -sn and -sf.
To see the actual error you can enable logging before running the client:
import pysipp
pysipp.utils.log_to_stderr("DEBUG")
uac = pysipp.client(destaddr=('mysipdomain.net', 5060))
uac.uri_username = '12345'
uac.auth_password = ''
uac.scen_file = './numeric.xml'
uac()
The quick hack is to simply set uac.scen_name = None, but the proper way to do this is either to use pysipp.scenario() (docs here) and rename your numeric.xml so that uac appears in the file name (i.e. uac_numeric.xml), or to use pysipp.ua(scen_file=<path/to/numeric.xml>) instead.
To understand the problem: the client is currently applying a default scenario name argument (which becomes -sn uac) when really the user should be able to override it (though in that case there'd be no guarantee that the user is actually sending client traffic, which renders the name client kind of pointless).
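For illustration, here is a minimal sketch of both workarounds. Option 1 is exactly the hack described above; Option 2 assumes pysipp.ua() accepts the same destaddr keyword as pysipp.client(), which is an assumption, not something confirmed here.

import pysipp

pysipp.utils.log_to_stderr("DEBUG")

# Option 1: keep pysipp.client() but drop the default scenario name,
# so only -sf (not the conflicting -sn uac) ends up on the sipp command line.
uac = pysipp.client(destaddr=('mysipdomain.net', 5060))
uac.uri_username = '12345'
uac.auth_password = ''
uac.scen_file = './numeric.xml'
uac.scen_name = None   # the hack: prevents "-sn uac" from being passed
uac()

# Option 2: a plain user agent driven only by the scenario file.
# Assumption: ua() takes the same destaddr keyword as client().
uac2 = pysipp.ua(scen_file='./numeric.xml',
                 destaddr=('mysipdomain.net', 5060))
uac2()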

Related

CS50 Finance - NameError: name 'self' is not defined

I am getting a 500 Internal Server Error when running Flask, with the following error message:
NameError: name 'self' is not defined
Yesterday my code worked fine and I didn't make any changes. The error message lists python files which were already imported in the distribution code. Maybe something changed in the background?
192.168.234.116 - - [10/Jul/2019 11:15:56] "GET / HTTP/1.0" 500 -
Error on request:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/werkzeug/serving.py", line 303, in run_wsgi
execute(self.server.app)
File "/usr/local/lib/python3.7/site-packages/werkzeug/serving.py", line 291, in execute
application_iter = app(environ, start_response)
File "/home/ubuntu/environment/pset8/finance/application.py", line 13, in <module>
app = Flask(__name__)
File "/usr/local/lib/python3.7/site-packages/cs50/flask.py", line 54, in _after
self.wsgi_app = ProxyFix(self.wsgi_app, x_proto=1)
NameError: name 'self' is not defined
It's a CS50 bug (regression).
I've submitted [GitHub]: cs50/python-cs50 - Added the 1st (required) argument (self) to flask.Flask's initializer, which was closed (because it was incomplete - I was too rushed when submitting it and missed one spot), and [GitHub]: Fix missing self arguments in Flask __init__ replacement was created and merged. I'm not sure when it will be available on PyPI (so you can simply pip install it), but you could download the sources from GitHub and overwrite yours.
As an alternative, you could download the patch and apply the changes locally. Check [SO]: Run/Debug a Django application's UnitTests from the mouse right click context menu in PyCharm Community Edition? (@CristiFati's answer) (Patching utrunner section) for how to apply patches (basically, every line that starts with one "+" sign goes in, and every line that starts with one "-" sign goes out). Or (given that the change is more than trivial), you could:
Open the flask.py file (in your case: "/usr/local/lib/python3.7/site-packages/cs50/flask.py") in a text editor
Go to line #54 (in your case, in mine it's 52)
Replace the current content (do not alter leading SPACEs), as shown in the snippet after this list:
For the current line (def _after(*args, **kwargs):) by: def _after(self, *args, **kwargs):
For the next one (_before(*args, **kwargs)) by: _before(self, *args, **kwargs)
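In other words, the two lines in cs50/flask.py end up looking like this (a sketch; the exact line numbers and surrounding code may differ in your installed version):

# inside cs50/flask.py, around lines 52-54 (surrounding code omitted)
def _after(self, *args, **kwargs):      # was: def _after(*args, **kwargs):
    _before(self, *args, **kwargs)      # was: _before(*args, **kwargs)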
#EDIT0:
As @kaczifant already experienced, the fix is already available for download: pip3 install cs50 --upgrade
The bug was fixed by the CS50 staff.
Select CS50 IDE > Log Out and then log back in at ide.cs50.io. Alternatively, you can run:
sudo pip3 install cs50 --upgrade

Pdb error while debugging python app with serverless launched offline

I have a problem debugging my view function with
import pdb; pdb.set_trace()
placed inside it and serverless launched as
> sls offline start
in the console.
Namely, when making the corresponding GET request I receive the following error:
Python: > /.../handler.py(88)get_results()
-> request_params = event.query_params
Python: (Pdb)
Python: 2019-02-20 18:37:43,648 [ERROR] | ...
Traceback (most recent call last):
...
File ".../handler.py", line 88, in get_results
...
File "/usr/lib/python3.6/bdb.py", line 51, in trace_dispatch
return self.dispatch_line(frame)
File "/usr/lib/python3.6/bdb.py", line 70, in dispatch_line
if self.quitting: raise BdbQuit
bdb.BdbQuit
Google suggests that the problem is the inability of the serverless process to read from stdin, but I don't know how to handle this.
Any suggestions?
I found a solution here https://stackoverflow.com/a/26975795/4388451:
In one terminal, create two fifos:
mkfifo fifo_stdin
mkfifo fifo_stdout
In the same terminal, open stdout in the background and write to stdin:
cat fifo_stdout & cat > fifo_stdin
In the Python code, create the pdb object and use it (a fuller sketch follows after these steps):
import pdb
mypdb = pdb.Pdb(stdin=open('fifo_stdin', 'r'), stdout=open('fifo_stdout', 'w'))
....
mypdb.set_trace()
In another terminal, run the Python code from the folder where the fifos were placed (or, in the first step, place the fifos in the folder with the Python code).
Now I am able to use pdb in the first console!
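Putting those pieces together, a minimal sketch of what the handler from the question might look like (the function name, event parsing, and fifo paths are placeholders/assumptions, not the actual code from the question):

import pdb

def get_results(event, context):
    # Attach pdb to the fifos created earlier with mkfifo, since the
    # serverless-offline process doesn't give the handler a usable stdin.
    mypdb = pdb.Pdb(stdin=open('fifo_stdin', 'r'),
                    stdout=open('fifo_stdout', 'w'))
    mypdb.set_trace()

    # ... the rest of the view logic from the question ...
    request_params = event.get('queryStringParameters')  # placeholder
    return {"statusCode": 200}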
PS
It is useful to use the --noTimeout option while debugging: sls offline --noTimeout

Perforce (P4) Python API complains about too many locks

I wrote an application that opens several subprocesses, which initiate connections individually to a Perforce server. After a while I get this error message in almost all of these child-processes:
Traceback (most recent call last):
File "/Users/peter/Desktop/test_app/main.py", line 76, in p4_execute
p4.run_login()
File "/usr/local/lib/python3.7/site-packages/P4.py", line 665, in run_login
return self.run("login", *args, **kargs)
File "/usr/local/lib/python3.7/site-packages/P4.py", line 611, in run
raise e
File "/usr/local/lib/python3.7/site-packages/P4.py", line 605, in run
result = P4API.P4Adapter.run(self, *flatArgs)
P4.P4Exception: [P4#run] Errors during command execution( "p4 login" )
[Error]: "Fatal client error; disconnecting!
Operation 'client-SetPassword' failed.
Too many trys to get lock /Users/peter/.p4tickets.lck."
Does anyone have any idea what could cause this? I open my connections properly and double-checked at all source locations that I disconnect from the server properly via disconnect.
Only deleting the .p4tickets.lck file manually works, until the error comes back after a few seconds.
The relevant code is here:
https://swarm.workshop.perforce.com/projects/perforce_software-p4/files/2018-1/support/ticket.cc#200
https://swarm.workshop.perforce.com/projects/perforce_software-p4/files/2018-1/sys/filetmp.cc#147
I can't see that there's any code path where the ticket.lck file would fail to get cleaned up without throwing some other error.
Is there anything unusual about the home directory where the tickets file lives? Like, say, it's on a network filer with some latency and some kind of backup process? Or maybe one that doesn't properly enforce file locks between all these subprocesses you're spawning?
How often are your scripts running "p4 login" to refresh and re-write the ticket? Many times a second? If you change them to not do that (e.g. only login if there's not already a ticket) does the problem persist?
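To illustrate that last suggestion, here is a rough sketch with P4Python that only logs in when there is no valid ticket yet (the server address, user, and password are placeholders, not taken from the question):

from P4 import P4, P4Exception

p4 = P4()
p4.port = "perforce:1666"   # placeholder
p4.user = "peter"           # placeholder
p4.connect()
try:
    # "p4 login -s" succeeds only if a valid ticket already exists,
    # so the tickets file isn't rewritten on every run.
    p4.run("login", "-s")
except P4Exception:
    p4.password = "secret"  # placeholder
    p4.run_login()

# ... run the actual commands ...
p4.disconnect()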

Why is Python pyusb usb.core access denied due to permissions and why won't the rules.d fix it?

I have a USB device which I'm trying to talk to using pyusb 1.0.2 on linux (Linux tpad 4.15.0-38-generic #41~16.04.1-Ubuntu SMP Wed Oct 10 20:16:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux). Running Python 3.5 I get the following error (full trace at bottom of this post):
usb.core.USBError: [Errno 13] Access denied (insufficient permissions)
Mounting the USB device, using lsusb, and then inspecting it with:
udevadm info -a -p $(udevadm info -q path -n /dev/bus/usb/006/011)
shows
looking at device '/devices/pci0000:00/0000:00:10.0/usb6/6-1':
KERNEL=="6-1"
SUBSYSTEM=="usb"
DRIVER=="usb"
ATTR{authorized}=="1"
ATTR{avoid_reset_quirk}=="0"
ATTR{bConfigurationValue}=="1"
ATTR{bDeviceClass}=="00"
ATTR{bDeviceProtocol}=="00"
ATTR{bDeviceSubClass}=="00"
ATTR{bMaxPacketSize0}=="8"
ATTR{bMaxPower}=="98mA"
ATTR{bNumConfigurations}=="1"
ATTR{bNumInterfaces}==" 1"
ATTR{bcdDevice}=="0111"
ATTR{bmAttributes}=="80"
ATTR{busnum}=="6"
ATTR{configuration}==""
ATTR{devnum}=="11"
ATTR{devpath}=="1"
ATTR{idProduct}=="0001"
ATTR{idVendor}=="17a4"
ATTR{ltm_capable}=="no"
ATTR{manufacturer}=="Concept2"
ATTR{maxchild}=="0"
ATTR{product}=="Concept2 Performance Monitor 3 (PM3)"
ATTR{quirks}=="0x0"
ATTR{removable}=="unknown"
ATTR{serial}=="300118412"
ATTR{speed}=="12"
ATTR{urbnum}=="12"
ATTR{version}==" 1.10"
So I wrote a rule in /etc/udev/rules.d/10-local.rules like this (FYI -- I've also tried "user1", which is the user Python shows it is running under, and I've tried both ":=" and "="):
SUBSYSTEMS=="usb", ATTRS{idVendor}=="17a4", ATTRS{idProduct}=="0001", GROUP:="users", MODE="0777"
Then I ran udevadm test /devices/pci0000:00/0000:00:10.0/usb6/6-1
which shows:
calling: test
version 229
This program is for debugging only, it does not run any program
specified by a RUN key. It may show incorrect results, because
some values may be different, or not available at a simulation run.
=== trie on-disk ===
tool version: 229
file size: 7049340 bytes
header size 80 bytes
strings 1759644 bytes
nodes 5289616 bytes
Load module index
timestamp of '/etc/systemd/network' changed
timestamp of '/lib/systemd/network' changed
Parsed configuration file /lib/systemd/network/99-default.link
Created link configuration context.
timestamp of '/etc/udev/rules.d' changed
Reading rules file: /etc/udev/rules.d/10-local.rules
Reading rules file: /lib/udev/rules.d/40-crda.rules
[removed long list of rule files...]
Skipping empty file: /etc/udev/rules.d/99-usbftdi.rules
rules contain 393216 bytes tokens (32768 * 12 bytes), 33403 bytes strings
25688 strings (211409 bytes), 22263 de-duplicated (181432 bytes), 3426 trie nodes used
GROUP 100 /etc/udev/rules.d/10-local.rules:1
MODE 0777 /etc/udev/rules.d/10-local.rules:1
value '[dmi/id]sys_vendor' is 'LENOVO'
value '[dmi/id]sys_vendor' is 'LENOVO'
IMPORT builtin 'usb_id' /lib/udev/rules.d/50-udev-default.rules:13
IMPORT builtin 'hwdb' /lib/udev/rules.d/50-udev-default.rules:13
MODE 0664 /lib/udev/rules.d/50-udev-default.rules:41
PROGRAM 'mtp-probe /sys/devices/pci0000:00/0000:00:10.0/usb6/6-1 6 11' /lib/udev/rules.d/69-libmtp.rules:1923
starting 'mtp-probe /sys/devices/pci0000:00/0000:00:10.0/usb6/6-1 6 11'
'mtp-probe /sys/devices/pci0000:00/0000:00:10.0/usb6/6-1 6 11'(out) '0'
Process 'mtp-probe /sys/devices/pci0000:00/0000:00:10.0/usb6/6-1 6 11' succeeded.
handling device node '/dev/bus/usb/006/011', devnum=c189:650, mode=0664, uid=0, gid=100
set permissions /dev/bus/usb/006/011, 020664, uid=0, gid=100
setting mode of /dev/bus/usb/006/011 to 020664 failed: Operation not permitted
setting owner of /dev/bus/usb/006/011 to uid=0, gid=100 failed: Operation not permitted
ACTION=add
BUSNUM=006
DEVNAME=/dev/bus/usb/006/011
DEVNUM=011
DEVPATH=/devices/pci0000:00/0000:00:10.0/usb6/6-1
DEVTYPE=usb_device
DRIVER=usb
ID_BUS=usb
ID_MODEL=Concept2_Performance_Monitor_3__PM3_
ID_MODEL_ENC=Concept2\x20Performance\x20Monitor\x203\x20\x28PM3\x29
ID_MODEL_FROM_DATABASE=Performance Monitor 3
ID_MODEL_ID=0001
ID_REVISION=0111
ID_SERIAL=Concept2_Concept2_Performance_Monitor_3__PM3__300118412
ID_SERIAL_SHORT=300118412
ID_USB_INTERFACES=:030000:
ID_VENDOR=Concept2
ID_VENDOR_ENC=Concept2
ID_VENDOR_FROM_DATABASE=Concept2
ID_VENDOR_ID=17a4
MAJOR=189
MINOR=650
PRODUCT=17a4/1/111
SUBSYSTEM=usb
TYPE=0/0/0
USEC_INITIALIZED=3850171749
Unload module index
Unloaded link configuration context.
However, after doing this, Python still reports access errors for the USB device.
Traceback (most recent call last):
File "/home/user1/PycharmProjects/PyRow/statshow.py", line 22, in <module>
erg = pyrow.pyrow(ergs[0])
File "/home/user1/PycharmProjects/PyRow/pyrow.py", line 61, in __init__
usb.util.claim_interface(erg, INTERFACE)
File "/home/user1/PycharmProjects/camera/venv/lib/python3.5/site-packages/usb/util.py", line 205, in claim_interface
device._ctx.managed_claim_interface(device, interface)
File "/home/user1/PycharmProjects/camera/venv/lib/python3.5/site-packages/usb/core.py", line 102, in wrapper
return f(self, *args, **kwargs)
File "/home/user1/PycharmProjects/camera/venv/lib/python3.5/site-packages/usb/core.py", line 159, in managed_claim_interface
self.managed_open()
File "/home/user1/PycharmProjects/camera/venv/lib/python3.5/site-packages/usb/core.py", line 102, in wrapper
return f(self, *args, **kwargs)
File "/home/user1/PycharmProjects/camera/venv/lib/python3.5/site-packages/usb/core.py", line 120, in managed_open
self.handle = self.backend.open_device(self.dev)
File "/home/user1/PycharmProjects/camera/venv/lib/python3.5/site-packages/usb/backend/libusb1.py", line 786, in open_device
return _DeviceHandle(dev)
File "/home/user1/PycharmProjects/camera/venv/lib/python3.5/site-packages/usb/backend/libusb1.py", line 643, in __init__
_check(_lib.libusb_open(self.devid, byref(self.handle)))
File "/home/user1/PycharmProjects/camera/venv/lib/python3.5/site-packages/usb/backend/libusb1.py", line 595, in _check
raise USBError(_strerror(ret), ret, _libusb_errno[ret])
usb.core.USBError: [Errno 13] Access denied (insufficient permissions)
I'm out of ideas on how to fix this. Are there any suggestions?
Using this answer I found I could correct the permissions problem using the plugdev group.
So in a terminal I used
groups
groups [myuserid]
and verified plugdev was there and the user was part of that group. Then I put the following line in /etc/udev/rules.d/10-local.rules
SUBSYSTEMS=="usb", ENV{DEVTYPE}=="usb_device",
ATTRS{idVendor}=="17a4", ATTRS{idProduct}=="0001", GROUP="plugdev",
MODE="0777"
I'm not sure if the devtype is needed, or whether 0777 or 0666 is proper, but this is what worked.
After making the changes I also ran the following commands to reset the rules for the system:
sudo udevadm control --reload
sudo udevadm trigger
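After reloading the rules and re-plugging the device, a quick pyusb check can confirm the permissions took effect before running the full application. This is only a sketch, using the vendor/product IDs from the udevadm output in the question:

import usb.core
import usb.util

# Concept2 PM3 IDs taken from the udevadm output above
dev = usb.core.find(idVendor=0x17a4, idProduct=0x0001)
if dev is None:
    raise SystemExit("device not found - is it plugged in?")

# Opening the device and claiming interface 0 is where "Errno 13" showed up
usb.util.claim_interface(dev, 0)
print("interface claimed OK")
usb.util.release_interface(dev, 0)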
Given that this is the first result on Google, and that I don't see the answer that fixed this issue for me, I figured I'd post.
The udev rules do not allow trailing comments on a rule line and seem to fail pretty much silently, so if you tried adding comments to keep track of your devices, you'll have broken the permissions for those devices.
SUBSYSTEMS=="usb", ENV{DEVTYPE}=="usb_device", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="faf0", GROUP="plugdev", MODE="0777"
will work, but
SUBSYSTEMS=="usb", ENV{DEVTYPE}=="usb_device", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="faf0", GROUP="plugdev", MODE="0777" # thorlabs KB101 motor controller
will not

Decoupled process from terminal still outputs Traceback to the terminal

While testing an application I made using a REST API, I discovered this behaviour which I don't understand.
Let's start by reproducing a similar error as follows -
In file call.py -
Note that this file has code that manifests itself visually, for example a GUI that runs forever. Here I am just showing you a representation and deliberately making it raise an exception to demonstrate the issue. Making a GET request and then trying to parse the result as JSON will raise a JSONDecodeError.
import requests
from time import sleep
sleep(3)
uri = 'https://google.com'
r = requests.get(uri)
response_dict = r.json()
Since I want to run this as a daemon process, I decouple this process from the terminal that started it using the following trick -
In file start.py -
import subprocess
import sys
subprocess.Popen(["python3", "call.py"])
sys.exit(0)
And then I execute python3 start.py
It apparently decouples the process because if there are no exceptions the visual manifestation runs perfectly.
However in case of an exception I immediately see this output in the terminal, even though I got a new prompt after calling python3 start.py -
$ python3 start.py
$ Traceback (most recent call last):
File "call.py", line 7, in <module>
response_dict = r.json()
File "/home/walker/.local/lib/python3.6/site-packages/requests/models.py", line 896, in json
return complexjson.loads(self.text, **kwargs)
File "/usr/lib/python3/dist-packages/simplejson/__init__.py", line 518, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/lib/python3/dist-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
simplejson.errors.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Now, I understand that all exceptions MUST be handled in the program itself, and I have done that since this strange issue, but what is not clear to me is why this happened at all in the first place.
It doesn't happen if I quit the terminal and restart it (the visual manifestation gets stuck in case of a traceback, and there is no output on any terminal, as expected).
Why is a decoupled process behaving this way?
NOTE: Decoupling is imperative to me. It is imperative that the GUI run as a background or daemon process and that the terminal that spawns it is freed from it.
by "decoupled", I assume you mean you want the stdout/stderr to go to /dev/null? assuming that's what you mean, that's not what you've told your code to do
from the docs:
stdin, stdout and stderr specify the executed program’s standard input, standard output and standard error file handles, respectively. Valid values are PIPE, DEVNULL, an existing file descriptor (a positive integer), an existing file object, and None.
With the default settings of None, no redirection will occur; the child’s file handles will be inherited from the parent.
You therefore probably want to be doing:
from subprocess import Popen, DEVNULL
Popen(["python3", "call.py"], stdin=DEVNULL, stdout=DEVNULL, stderr=DEVNULL)
Based on the OP's comment, I think they might be after a tool like GNU screen or tmux. Terminal multiplexers like these allow you to create a virtual terminal that you can disconnect from and reconnect to as needed. These answers (https://askubuntu.com/a/220880/106239 and https://askubuntu.com/a/8657/106239) have examples for tmux and screen respectively.
