I am running a script that works with sockets, and it requires sudo to run. However, inside the script I call another script that must not be run as sudo.
Here is the code:
import subprocess
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#s.settimeout(5.0)
host = '192.168.1.148'
port = 1022
s.bind((host, port))
s.listen(5)
while True:
    c, addr = s.accept()
    subprocess.call("python bluetooth2.py", shell=True)
    print 'got connection from', addr
    c.send('Thank you for connecting')
    #c.settimeout(5.0)
    c.recv(1022)
    c.close()
bluetooth2.py starts pulseaudio, which for some reason ends up running as root and therefore doesn't work. Any help greatly appreciated!
Here is what the bluetooth2.py script looks like for reference (the one that calls pulseaudio):
import time
import pexpect
from sh import bluetoothctl
import subprocess

mac = "C8:84:47:26:E6:3C"
print ("stuck here")
#bluetoothctl("connect", mac)

def connect():
    child = pexpect.spawn('bluetoothctl')
    child.sendline('power on')
    child.sendline('agent on')
    child.sendline('default-agent')
    child.sendline('pair C8:84:47:26:E6:3C')
    time.sleep(1)
    child.sendline('trust C8:84:47:26:E6:3C')
    time.sleep(1)
    child.sendline('connect C8:84:47:26:E6:3C')
    print("connecting...")
    time.sleep(5)

subprocess.call("pulseaudio --start", shell=True)
subprocess.call("pacmd set-default-sink bluez_sink.C8_84_47_26_E6_3C", shell=True)
subprocess.call("aplay /home/pi/bleep_01.wav", shell=True)
Solution: run PulseAudio system-wide for all users.
Add the lines below to the /etc/systemd/system/pulseaudio.service file and save it:
[Unit]
Description=PulseAudio system server
[Service]
Type=notify
ExecStart=pulseaudio --daemonize=no --system --realtime --log-target=journal
[Install]
WantedBy=multi-user.target
Enable and start the service, then check its status:
sudo systemctl --system enable pulseaudio.service
sudo systemctl --system start pulseaudio.service
sudo systemctl --system status pulseaudio.service
Edit the client configuration /etc/pulse/client.conf and set the following:
default-server = /var/run/pulse/native
autospawn = no
Add root to the pulse-access group:
sudo adduser root pulse-access
And finally, reboot the system.
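Alternatively, if running PulseAudio system-wide is not desirable, the outer (root) script can drop privileges just for the child script. A minimal sketch, assuming the regular desktop user is pi (the user name is an assumption, and PulseAudio may also need that user's session environment, e.g. XDG_RUNTIME_DIR, to be reachable):

import subprocess

# Run bluetooth2.py as the unprivileged user "pi" instead of root.
# Since the parent already runs as root, sudo -u does not prompt for a password.
subprocess.call(["sudo", "-u", "pi", "python", "bluetooth2.py"])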
How to run a server in python?
I have already tried:
python -m SimpleHTTPServer
python -m HTTPServer
but it says:
invalid syntax
Can someone help me?
Thanks!
You can use this command in cmd or a terminal:
python -m SimpleHTTPServer <port_number> # Python 2.x
Python 3.x
python3 -m http.server # Python 3x
By default, this will run the contents of the directory on a local web server, on port 8000. You can go to this server by going to the URL localhost:8000 in your web browser.
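If you would rather start the server from a Python script instead of the command line, a minimal sketch using only the standard library (Python 3) looks like this:

# Serve the current directory on port 8000 using the standard library.
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler)
print("Serving the current directory on http://localhost:8000")
server.serve_forever()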
I have made a remote access program that uses the socket module. If you want to copy the code, that's fine. EDIT: You will need to run it from a cmd file containing a line like "python (filename).py", followed by a line containing "pause".
#SERVER
import os
import socket
s = socket.socket()
host = socket.gethostname()
port = 8080
s.bind((host, port))
print("Server started at: ", host)
s.listen(1)
conn,addr = s.accept()
print(addr, "connected")
#CLIENT
import os
import socket
s = socket.socket()
port = 8080
host = "YOUR DESKTOP ID"  # Your server should say it, e.g. "Server started at: Desktop-123456"
s.connect((host, port))
First of all, to clarify some things:
This Python script works perfectly on my Windows machine (without Docker).
I am also using virtualenv on my local machine.
While running on my machine, I can easily connect to the socket server from my Android phone (WebSocket tester app).
So now I am trying to run this websocket script (Flask & SocketIO) with Docker on my Ubuntu server in the cloud (DigitalOcean).
My Docker commands for deploying this:
docker build -t websocketserver .
docker run -d -p 5080:8000 --restart always --name my_second_docker_running websocketserver
The script runs fine, BUT when I try to connect to it (from my phone), I see errors when I run: docker logs --tail 500 my_second_docker_running
The error is:
Traceback (most recent call last):
File "/opt/company/project/venv/lib/python3.8/site-packages/gunicorn/workers/sync.py", line 134, in handle
self.handle_request(listener, req, client, addr)
File "/opt/company/project/venv/lib/python3.8/site-packages/gunicorn/workers/sync.py", line 175, in handle_request
respiter = self.wsgi(environ, resp.start_response)
TypeError: __call__() takes 1 positional argument but 3 were given
My requirements.txt:
Flask==1.1.1
Flask-SocketIO==3.0.1
aiohttp-cors==0.7.0
asyncio==3.4.3
gunicorn==20.0.4
My dockerfile:
FROM ubuntu:latest
MAINTAINER raxor2k "xxx.com"
RUN apt-get update -y
#RUN apt-get install -y python3-pip build-essential python3-dev
RUN apt-get install -y build-essential python3-dev python3-venv
COPY . /app
WORKDIR /app
RUN python3 -m venv /opt/company/project/venv
RUN /opt/company/project/venv/bin/python -m pip install -r requirements.txt
#ENTRYPOINT ["gunicorn"]
ENTRYPOINT ["/opt/company/project/venv/bin/gunicorn"]
CMD ["main:app", "-b", "0.0.0.0"]
and finally, my main.py file:
from aiohttp import web
import socketio
import aiohttp_cors
import asyncio
import asyncio as aio
import logging

# creates a new Async Socket IO Server
sio = socketio.AsyncServer()
# Creates a new aiohttp web application
app = web.Application()
sio.attach(app)

# AIOSerial now logs! uncomment below for debugging
logging.basicConfig(level=logging.DEBUG)

async def index(request):
    with open('index.html') as f:
        print("Somebody entered the server from the browser!")
        return web.Response(text=f.read(), content_type='text/html')

@sio.on("android-device")
async def message(sid, data):
    print("message: ", data)

@sio.on("device-id")
async def message(sid, android_device_id):
    print("DEVICE ID: ", android_device_id)

@sio.on("disconnected-from-socket")
async def message(sid, disconnected_device):
    print("Message from client: ", disconnected_device)

async def send_message_to_client():
    print("this method got called!")
    await sio.emit("SuperSpecialMessage", {"Message from server:": "MESSAGE FROM SENSOR"})

# We bind our aiohttp endpoint to our app router
cors = aiohttp_cors.setup(app)
app.router.add_get('/', index)

# We kick off our server
if __name__ == '__main__':
    print("websocket server is running!")
    the_asyncio_loop = asyncio.get_event_loop()
    run_the_websocket = asyncio.gather(web.run_app(app))
    run_both_loops_together = asyncio.gather(run_the_websocket)
    results = the_asyncio_loop.run_until_complete(run_both_loops_together)
Could someone please help me solve this issue? Could perhaps someone here try running this code to see if you get the same error?
I decided to follow this example instead: https://github.com/miguelgrinberg/Flask-SocketIO
It works pretty much the same as my code and everything is fine now.
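For context, the original error most likely comes from Gunicorn's default sync worker expecting a WSGI callable, while aiohttp's web.Application is not one, which is why __call__() is invoked with arguments it does not accept. The Flask-SocketIO route avoids that mismatch; a minimal sketch along the lines of the linked example (module, host, port and event names here are illustrative):

# Minimal Flask-SocketIO server, roughly following the linked example.
# The event name "android-device" mirrors the one used in main.py above.
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on("android-device")
def handle_android_device(data):
    print("message:", data)

if __name__ == "__main__":
    # Under Gunicorn, Flask-SocketIO needs a compatible worker
    # (e.g. eventlet or gevent); see the project's README for details.
    socketio.run(app, host="0.0.0.0", port=8000)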
TL;DR: Why does Scapy's sniff not run at reboot from systemd?
I have the following code running on my RPI3 that specifically looks for network requests. This uses the built-in ETH0 wifi:
monitorConnections.py
def arp_detect(pkt):
    print("Starting ARP detect")
    logging.debug('Starting ARP detect')
    if pkt.haslayer(ARP):
        if pkt[ARP].op == 1:  # network request
            PHONE_name = "Unknown"
            PHONE_mac_address = ""
            if pkt[ARP].hwsrc in known_devices.keys():
                print ("Known Phone Detected")
                logging.debug('Known Phone Detected')
                # Grab name and mac address
                PHONE_mac_address = pkt[ARP].hwsrc
                PHONE_name = known_devices[PHONE_mac_address]
                print ('Hello ' + PHONE_name)
                logging.debug('Hello ' + PHONE_name)
            else:
                # Grab mac address, log these locally
                print ("Unknown Phone Detected")
                logging.debug('Unknown Phone Detected')
                PHONE_mac_address = pkt[ARP].hwsrc
                print (pkt[ARP].hwsrc)

print("Start!")
print (sniff(prn=arp_detect, filter="arp", store=0))
When I run this via the command
python2 monitorConnections.py
This runs as designed; however, I have been trying to put it in a daemon, conscious that it needs to run after the network connection has been established. I have the following settings in my service:
MonitorConnections.service
[Unit]
Description=Monitor Connections
Wants=network-online.target
After=network.target network-online.target sys-subsystem-net-devices-wlan0.device sys-subsystem-net-devices-eth0.device
[Service]
Type=simple
ExecStart=/usr/bin/python2 -u monitorConnections.py
ExecStop=pkill -9 /usr/bin/autossh
WorkingDirectory=/home/pi/Shared/MonitorPhones
Restart=always
User=root
StandardOutput=console
StandardError=console
[Install]
WantedBy=multi-user.target
In order to find the services that I need my script to run after, I ran this command:
systemctl list-units --no-pager
This gave me the following units to add to my service under 'After' - these correspond to the network devices (I imagine!):
sys-subsystem-net-devices-wlan0.device
sys-subsystem-net-devices-eth0.device
As far as I can tell, this is running successfully. When I save everything and run the following:
sudo systemctl daemon-reload
sudo systemctl restart monitorConnections
This kickstarts the script beautifully. I have then set my script to run at reboot like so:
sudo systemctl enable monitorConnections
After a reboot, I can see that it runs the print statement "Start!", but it does not seem to run anything within the sniff call. However, when running:
sudo systemctl -l status monitorConnections
I can see that the script is active - so it has not errored!
My question: why is it that Scapy's sniff does not seem to run at reboot? Have I missed something out?
I'm honestly at the end of my wits as to what is wrong - any help about this would be greatly appreciated!
The RPI3's wifi driver does not have monitor mode. After weeks of debugging, this was narrowed down as the issue. I hope this helps someone else.
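For anyone debugging a similar setup, one quick sanity check is to log which interface Scapy actually selects when the script is launched by systemd, since it can differ from the one picked up in an interactive shell (a diagnostic sketch, not the fix found above):

# Log Scapy's view of the interfaces when the service starts (diagnostic only).
import logging
from scapy.all import conf, get_if_list

logging.basicConfig(level=logging.DEBUG)
logging.debug("Scapy default interface: %s", conf.iface)
logging.debug("Interfaces visible to Scapy: %s", get_if_list())
# Passing iface="wlan0" (or "eth0") to sniff(...) pins the capture to a specific interface.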
I want to run a Python script on a CentOS server:
#!/usr/bin/env python
import socket
try:
    import thread
except ImportError:
    import _thread as thread  # Py3K changed it.

class Polserv(object):
    def __init__(self):
        self.numthreads = 0
        self.tidcount = 0
        self.port = 843
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.sock.bind(('100.100.100.100', self.port))
        self.sock.listen(5)

    def run(self):
        while True:
            thread.start_new_thread(self.handle, self.sock.accept())

    def handle(self, conn, addr):
        self.numthreads += 1
        self.tidcount += 1
        tid = self.tidcount
        while True:
            data = conn.recv(2048)
            if not data:
                conn.close()
                self.numthreads -= 1
                break
            #if "<policy-file-request/>\0" in data:
            conn.sendall(b"<?xml version='1.0'?><cross-domain-policy><allow-access-from domain='*' to-ports='*'/></cross-domain-policy>")
            conn.close()
            self.numthreads -= 1
            break
            #conn.sendall(b"[#%d (%d running)] %s" % (tid, self.numthreads, data))

Polserv().run()
I'm using $ python flashpolicyd.py and it works fine...
The question is: how do I keep this script running even after I close the terminal (console)?
I offer two recommendations:
supervisord
1) Install the supervisor package (more verbose instructions here):
sudo apt-get install supervisor
2) Create a config file for your daemon at /etc/supervisor/conf.d/flashpolicyd.conf:
[program:flashpolicyd]
directory=/path/to/project/root
environment=ENV_VARIABLE=example,OTHER_ENV_VARIABLE=example2
command=python flashpolicyd.py
autostart=true
autorestart=true
3) Restart supervisor to load your new .conf
supervisorctl update
supervisorctl restart flashpolicyd
systemd (if currently used by your Linux distro)
[Unit]
Description=My Python daemon
[Service]
Type=simple
ExecStart=/usr/bin/python3 /opt/project/main.py
WorkingDirectory=/opt/project/
Environment=API_KEY=123456789
Environment=API_PASS=password
Restart=always
RestartSec=2
[Install]
WantedBy=sysinit.target
Place this file into /etc/systemd/system/my_daemon.service and enable it using systemctl daemon-reload && systemctl enable my_daemon && systemctl start my_daemon --no-block.
To view logs:
systemctl status my_daemon
I use this code to daemonize my applications. It allows you to start/stop/restart the script using the following commands:
python myscript.py start
python myscript.py stop
python myscript.py restart
In addition to this I also have an init.d script for controlling my service. This allows you to automatically start the service when your operating system boots up.
Here is a simple example to get you going. Simply move your code inside a class, and call it from the run function inside MyDaemon.
import sys
import time
from daemon import Daemon

class YourCode(object):
    def run(self):
        while True:
            time.sleep(1)

class MyDaemon(Daemon):
    def run(self):
        # Or simply merge your code with MyDaemon.
        your_code = YourCode()
        your_code.run()

if __name__ == "__main__":
    daemon = MyDaemon('/tmp/daemon-example.pid')
    if len(sys.argv) == 2:
        if 'start' == sys.argv[1]:
            daemon.start()
        elif 'stop' == sys.argv[1]:
            daemon.stop()
        elif 'restart' == sys.argv[1]:
            daemon.restart()
        else:
            print "Unknown command"
            sys.exit(2)
        sys.exit(0)
    else:
        print "usage: %s start|stop|restart" % sys.argv[0]
        sys.exit(2)
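Note that the Daemon base class imported above is not part of the standard library; it presumably refers to the classic double-fork daemonization recipe. A compact sketch of what such a base class might provide (simplified, POSIX-only, error handling omitted):

# Compact sketch of a generic Daemon base class (double-fork recipe, simplified).
import atexit
import os
import signal
import sys

class Daemon(object):
    def __init__(self, pidfile):
        self.pidfile = pidfile

    def daemonize(self):
        if os.fork() > 0:
            sys.exit(0)              # exit the first parent
        os.setsid()                  # detach from the controlling terminal
        if os.fork() > 0:
            sys.exit(0)              # exit the second parent
        atexit.register(lambda: os.remove(self.pidfile))
        with open(self.pidfile, 'w') as f:
            f.write(str(os.getpid()))

    def start(self):
        self.daemonize()
        self.run()

    def stop(self):
        with open(self.pidfile) as f:
            pid = int(f.read().strip())
        os.kill(pid, signal.SIGTERM)

    def restart(self):
        self.stop()
        self.start()

    def run(self):
        # Subclasses override run() with the actual work.
        raise NotImplementedError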
Upstart
If you are running an operating system that is using Upstart (e.g. CentOS 6) - you can also use Upstart to manage the service. If you use Upstart you can keep your script as is, and simply add something like this under /etc/init/my-service.conf
start on started sshd
stop on runlevel [!2345]
exec /usr/bin/python /opt/my_service.py
respawn
You can then use start/stop/restart to manage your service.
e.g.
start my-service
stop my-service
restart my-service
A more detailed example of working with upstart is available here.
Systemd
If you are running an operating system that uses Systemd (e.g. CentOS 7) you can take a look at the following Stackoverflow answer.
My non-Pythonic approach would be to use the & suffix. That is:
python flashpolicyd.py &
To stop the script:
killall flashpolicyd.py
Also, combining the & suffix with disown removes the job from the shell's job table, so it keeps running after the terminal is closed:
python flashpolicyd.py & disown
First, import the os module in your app, get the app's PID with os.getpid(), and save it to a file. For example:
import os
pid = os.getpid()
op = open("/var/us.pid","w")
op.write("%s" % pid)
op.close()
Then create a shell script in /etc/init.d:
/etc/init.d/servername
#!/bin/sh
PATHAPP="/etc/bin/userscript.py"
PIDAPP="/var/us.pid"

case $1 in
    start)
        echo "starting"
        python $PATHAPP &
        ;;
    stop)
        echo "stopping"
        PID=$(cat $PIDAPP)
        kill $PID
        ;;
esac
Now you can start and stop your app with the commands below:
service servername stop
service servername start
or
/etc/init.d/servername stop
/etc/init.d/servername start
For my Python script, I use...
To START the Python script:
start-stop-daemon --start --background --pidfile $PIDFILE --make-pidfile --exec $DAEMON
To STOP the Python script:
PID=$(cat $PIDFILE)
kill -9 $PID
rm -f $PIDFILE
P.S.: sorry for poor English, I'm from CHILE :D
I wrote a Python XML-RPC server for my web application. The problem is that whenever I start the server from a shell and then exit, the XML-RPC server stops as well. I tried executing the server script from another file, thinking that it would continue to run in the background, but that didn't work. Here's the code used to start the server:
import SimpleXMLRPCServer

host = 'localhost'
port = 8000
server = SimpleXMLRPCServer.SimpleXMLRPCServer((host, port))
server.register_function(getList)
server.serve_forever()
In the shell I just run python MyXmlrpcServer.py to start the server.
What do I do to be able to start a server and keep it running?
@warwaruk makes a useful suggestion; Twisted XML-RPC is simple and robust. However, if you simply want to run and manage a Python process in the 'background', take a look at Supervisord. It is a simple process management system.
$ pip install supervisor
$ echo_supervisord_conf > /etc/supervisord.conf
Edit that config file to add a definition of your process thus...
[program:mycoolproc]
directory=/path/to/my/script/dir
command=python MyXmlrpcServer.py
Start supervisord and start your process
$ supervisord
$ supervisorctl start mycoolproc
Better to use Twisted to create an XML-RPC server. That way you will not need to write your own server, it is very flexible, and you will be able to run it in the background using twistd:
#!/usr/bin/env python
import time, datetime, os, sys
from twisted.web import xmlrpc, server
from twisted.internet import reactor

class Worker(xmlrpc.XMLRPC):
    def xmlrpc_test(self):
        print 'test called!'

port = 1235
r = Worker(allowNone=True)

if __name__ == '__main__':
    print 'Listening on port', port
    reactor.listenTCP(port, server.Site(r))
    reactor.run()
else:
    # run the worker as a twistd service application: twistd -y xmlrpc_server.py --no_save
    from twisted.application import service, internet
    application = service.Application('xmlrpc_server')
    internet.TCPServer(port, server.Site(r)).setServiceParent(application)