I am trying to run a simple pjsip application in daemon mode. I have combined this library with Python Twisted. The script works fine when I run it in a shell and can make calls. But when I use it with Twisted's Application framework, I get the following error.
Object: {Account <sip:192.168.0.200:5060>}, operation=make_call(), error=Unknown error from audio driver (PJMEDIA_EAUD_SYSERR)
Most of the example applications from the documentation do not run in daemon mode - pjsip examples.
It looks like even pjsua doesn't run in the background - pjsua
I am wondering whether it works in the background at all. I don't understand exactly what "Unknown error" means here. Is there a better way to debug this?
The architecture of my application is as follows -
Start the pjsip lib, initialize it, create a transport, and create a userless account.
Create a UDP protocol which listens for incoming requests.
Once the app gets a request, it makes a call to a particular SIP URI.
Everything goes well when I run the app with listenUDP and reactor.run(), but when I try the typical Twisted application setup - twistd (either listenUDP or UDPServer) - the above error pops up.
Am I doing anything wrong? Any info will be welcome.
Thank you.
This issue was resolved after I set the sound device. (For a headless/daemon process, selecting the null sound device - set_null_snd_dev() in the legacy pjsua Python API - avoids opening real audio hardware, which is what PJMEDIA_EAUD_SYSERR points at.)
I am new to RabbitMQ. All the RabbitMQ tutorials in Python/PHP say that on the receiver side you run
php receiver.php
or
python receiver.py
but how can we do this in production?
If we have to run the above command in production, we either have to append & at the end or use nohup, which is not a good idea.
How to implement rabbitmq receiver in production server in php/python?
Consumers/receivers tend to be managed by a process controller. Either init.d or systemd can work. What I've seen used a lot more are tools like http://supervisord.org/, http://godrb.com/, or https://mmonit.com/.
In production you ideally want not only something that makes sure a process is running, but also that the logs are separated and rotated, and some amount of monitoring so you notice when a process is constantly restarting at boot or otherwise. Those tools are better suited to that than running by hand.
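For example, a minimal supervisord program entry for the tutorial's Python receiver might look like this (the program name and paths are assumptions):

```ini
[program:rabbitmq-receiver]
command=/usr/bin/python /srv/app/receiver.py
autostart=true                           ; start together with supervisord
autorestart=true                         ; restart the consumer if it dies
startretries=3
stdout_logfile=/var/log/receiver.out.log ; logs separated and rotated for you
stderr_logfile=/var/log/receiver.err.log
```

supervisorctl start/stop/status rabbitmq-receiver then replaces the nohup/& juggling.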
I wanted to do a Stack inquiry before I do a deep dive tomorrow and start trying out every option under the sun. We have an application that we've embedded Python inside with pybind11. In the current implementation, application events trigger an XML-RPC call to a remote server, and it's working fine.
We are looking, however, to extend this functionality by either adding a websocket server, connecting into Kafka or MQ or some other sort of functionality - allowing for both incoming and outgoing messages.
Where I'm running into design questions is that, as I understand it, I have one interpreter, which means I likely (by default) have one thread available.
I'm trying to figure out what approach gives me the following:
Application events in C++ will be allowed to call into the Python interpreter and fire off events
Events received by the python interpreter will call back into C++
I'm pretty sure this is easy to set up one way or the other, but I'm getting confused about how to handle both things at once.
I can set up a websocket server easily enough - but if my server is actively running:
start_server = websockets.serve(counter, "localhost", 6789)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
Am I still going to be able to make additional calls into the interpreter, or is Python going to be locked up?
In summary:
When using embedded Python - if I have an actively running process (say, listening for messages on a websocket) - am I still able to make calls into the Python interpreter to do additional things, or am I blocked?
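One common pattern (a sketch, under the assumption that you can dedicate a thread to asyncio) is to run the event loop in its own thread and have the C++-triggered entry points submit coroutines to it with asyncio.run_coroutine_threadsafe; the loop keeps serving while foreign threads hand it work. The handle_event/fire_event names below are hypothetical:

```python
import asyncio
import threading

# Run the asyncio loop in a dedicated thread so calls arriving from the
# embedding C++ application never have to block on it.
loop = asyncio.new_event_loop()

def _run_loop():
    asyncio.set_event_loop(loop)
    loop.run_forever()

threading.Thread(target=_run_loop, daemon=True).start()

async def handle_event(payload):
    # Placeholder for work done on the loop (e.g. pushing to a websocket).
    await asyncio.sleep(0)
    return f"handled {payload}"

def fire_event(payload):
    # Entry point the C++ side would call (e.g. exposed via pybind11).
    # Safe to call from any thread; returns a concurrent.futures.Future.
    return asyncio.run_coroutine_threadsafe(handle_event(payload), loop)

# A C++-triggered call while the loop keeps running:
result = fire_event("app-event").result(timeout=5)
print(result)  # handled app-event

loop.call_soon_threadsafe(loop.stop)
```

The websocket server from the question would simply be another coroutine scheduled on the same loop; the interpreter is not "locked up" as long as callers go through run_coroutine_threadsafe rather than touching the loop directly.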
I have a node.js server running on a Raspberry Pi 3 B+. (I'm using node because I need the capabilities of a bluetooth library that works well).
Once the node server picks up a message from a bluetooth device, I want it to fire off an event/command/call to a different python script running on the same device.
What is the best way to do this? I've looked into spawning child processes and running the script in them, but that seems messy... Additionally, should I set up a socket between them and stream data through it? I imagine this is done often, what is the consensus solution?
Running a child process is how you would run a python script. That's how you do it from nodejs or any other program (besides a python program).
There are dozens of options for communicating between the python script and the nodejs program. The simplest would be stdin/stdout which are automatically set up for you when you create the child process, but you could also give the nodejs app a local http server that the python script could communicate with or vice versa.
Or, set up a regular socket between the two.
If, as you now indicate in a comment, your python script is already running, then you may want to use a local http server in the nodejs app and the python script can just send an http request to that local http server whenever it has some data it wants to pass to the nodejs app. Or, if you primarily want data to flow the opposite direction, you can put the http server in the python app and have the nodejs server send data to the python app.
If you want good bidirectional capabilities, then you could also set up a socket.io connection between the two and then you can easily send messages either way at any time.
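The local-HTTP option above can be sketched entirely in Python with the standard library (in the real setup one of the two ends would be the node app; the URL, port choice, and JSON shape here are assumptions):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Receiving end of the "local HTTP server" option. In the real setup this
# side could be the node app (an express route) instead of Python.
class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"received": payload}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Sending end: POST a JSON message whenever there is data to hand over.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/",
    data=json.dumps({"data": "ble-reading"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
print(reply)  # {'received': {'data': 'ble-reading'}}

server.shutdown()
```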
I've got a small application (https://github.com/tkoomzaaskz/cherry-api) and I would like to integrate it with Travis. In fact, Travis is probably not important here. My question is how I can configure a build/job to execute the following sequence:
start the server that serves the application
run tests
close the server (which means close the build)
The application is written in Python/CherryPy (a basic webapp framework). On my localhost I do it using two consoles: one runs the server and the other runs the tests - it's pretty easy and works fine. But when I want to execute all this in the CI environment, I run into trouble - I'm unable to regain control after the server is started, because the server process waits for requests... and waits... and waits... and the tests are never run (https://travis-ci.org/tkoomzaaskz/cherry-api/builds/10855029 - this build is infinite). Additionally, I don't know how to close the server. This is my .travis.yml:
before_script: python src/hello.py
script: nosetests
src/hello.py starts the built-in CherryPy server (listening on localhost:8080). I know I can move it to the background by adding the &: before_script: python src/hello.py & - but then I have to find the process ID in the CI environment and kill the process, which seems like a very dirty solution, and I guess there's something better than that.
I'd appreciate any hints on how I can configure this.
edit: I've configured this dirty run-in-the-background-then-kill-the-process approach in this file. The build passes now. Still, I think it's ugly...
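One less ugly shape for this, sketched in Python rather than shell: a small wrapper script that owns the server as a child process, waits until the port actually accepts connections, runs the tests, and terminates the child through the handle it already holds - no PID hunting. Here python -m http.server stands in for python src/hello.py, and the nosetests call is left commented out so the sketch is self-contained:

```python
import socket
import subprocess
import sys
import time

# Start the server as a child process we own
# (stand-in for `python src/hello.py`).
server = subprocess.Popen([sys.executable, "-m", "http.server", "8080"])
try:
    deadline = time.time() + 30
    while True:
        try:
            # Poll until the server actually accepts connections.
            socket.create_connection(("127.0.0.1", 8080), timeout=1).close()
            break
        except OSError:
            if time.time() > deadline:
                raise RuntimeError("server never came up")
            time.sleep(0.2)
    # Now it is safe to run the suite, e.g.:
    # exit_code = subprocess.call(["nosetests"])
    exit_code = 0
finally:
    server.terminate()  # no PID lookup: we hold the process handle
    server.wait(timeout=10)
```

In .travis.yml this collapses the two-step setup into a single entry like script: python run_tests.py (run_tests.py being whatever you name the wrapper).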
I need a reliable way to check whether a Twisted-based server, started via twistd (and a TAC file), was started successfully. It may fail because some network options are set up wrong. Since I cannot access the twistd log (it goes to /dev/null, because I don't need the log clutter twistd produces), I need to find out whether the server started successfully from within a launch script which wraps the twistd call.
The launch-script is a Bash script like this:
#!/usr/bin/bash
twistd \
--pidfile "myservice.pid" \
--logfile "/dev/null" \
--python \
myservice.tac
All I found on the net are hacks using ps or the like. I don't like that approach, because I don't think it's reliable.
So I'm wondering: is there a way to access the internals of Twisted and get all currently running Twisted applications? That way I could query the currently running apps for the name of my Twisted application (as I named it in the TAC file).
I'm also thinking about not using the twistd executable and instead implementing a Python-based launch script which embeds the twistd functionality, like the answer to this question shows, but I don't know if that helps me get the status of the server.
So my question is simply: is there a reliable, not-ugly way to tell whether a Twisted server started with twistd came up successfully, when twistd logging is disabled?
You're explicitly specifying a PID file. twistd will write its PID into that file. You can check the system to see if there is a process with that PID.
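That check can be scripted without resorting to ps: read the PID that twistd wrote and probe the process with signal 0, which tests existence without actually signalling. A sketch (the pidfile name matches the launch script above):

```python
import errno
import os

def twistd_started(pidfile="myservice.pid"):
    """Return True if the PID recorded by twistd points at a live process."""
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
    except (OSError, ValueError):
        return False  # no pidfile yet, or garbage in it
    try:
        os.kill(pid, 0)  # signal 0: existence check only, nothing is sent
    except OSError as e:
        # EPERM means the process exists but belongs to another user.
        return e.errno == errno.EPERM
    return True

print(twistd_started())  # False unless myservice.pid exists here
```

Note this only proves the daemon process is alive, not that it bound its ports; combine it with one of the other approaches if you need the stronger guarantee.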
You could also re-enable logging with a custom log observer which only logs your startup event and discards all other log messages. Then you can watch the log for the startup event.
Another possibility is to add another server to your application which exposes the internals you mentioned. Then try connecting to that server and looking around to see what you wanted to see (just the fact that the server is running seems like a good indication that the process started up properly, though). If you make it a manhole server then you get the ability to evaluate arbitrary Python code, which lets you inspect any state in the process you want.
You could also just have your application code write out an extra state file that explicitly indicates successful startup. Make sure you delete it before starting the application and you'll have a fine indicator of success vs failure.
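A sketch of that state-file handshake (the file name is made up; the launch-script half and the application half are shown together for brevity):

```python
import os

STATE_FILE = "myservice.started"  # hypothetical marker path

# Launch-script half: clear any stale marker before starting twistd.
if os.path.exists(STATE_FILE):
    os.remove(STATE_FILE)

# Application half: once startup has succeeded (e.g. the listening port
# is bound), write the marker so the wrapper can see it.
with open(STATE_FILE, "w") as f:
    f.write(str(os.getpid()))

# Launch-script half again: poll for the marker to decide success.
print(os.path.exists(STATE_FILE))  # True
```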