Importing a Python script that contains serial initialization into a Flask app - python

I want to import some functions from a script, let's call it controller.py, into a Flask app that exposes them as a web service. Let's call the Flask app api.py.
The problem is that controller.py contains a pyserial initialization at module level.
controller.py:
import serial

ser = serial.Serial('COM31', 9600, timeout=2)

def serial_function(foo):
    ser.write(foo)
    reply = ser.read()
    return reply
api.py:
from flask import Flask
import controller as cont

app = Flask(__name__)

@app.route('/function/<foo>', methods=['GET'])
def do_function(foo):
    data = cont.serial_function(foo)
    return data

if __name__ == '__main__':
    app.run('0.0.0.0', 80, True)
But I got this error:
raise SerialException("could not open port %s: %s" % (self.portstr, ctypes.WinError()))
serial.serialutil.SerialException: could not open port COM31: [Error 5] Access is denied.
It seems that Flask is trying to import controller.py over and over again, and the serial port gets reinitialized each time.
Is there some way I can achieve what I'm trying to do as described above?

The main problem with your code is that the Serial object is created directly at the root level of your module. When Python imports the module, it loads and interprets the file, executing all code found at that root level. (You can refer to this excellent answer if you want to dig in more: What does if __name__ == "__main__": do?)
Moreover, in debug mode Flask starts two processes (one that monitors the source code and, when it changes, restarts the second one, which is the one that actually handles the requests), and in production mode you may start many more threads or processes when launching the server. As a result, your module is imported at least twice => conflict when opening the serial port.
A possible solution would be to remove the initialization of the serial port from your module and use the context manager syntax in your method:
def serial_function(foo):
    with serial.Serial('COM31', 9600, timeout=2) as ser:
        ser.write(foo)
        reply = ser.read()
    return reply
This way, you will open (and close) your serial port at each read.
But, you'll still have to deal with concurrent access if you have multiple clients making requests to your webserver simultaneously.
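If needed, a minimal sketch of one way to handle that (the module-level lock is an assumption for illustration, not part of the original code) would be to serialize access with a threading.Lock:
import threading
import serial

serial_lock = threading.Lock()  # hypothetical module-level lock guarding the port

def serial_function(foo):
    # only one request at a time may open and use the port
    with serial_lock:
        with serial.Serial('COM31', 9600, timeout=2) as ser:
            ser.write(foo)
            return ser.read()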
EDIT:
If, as you say in the comments, you need to open your serial port only once, you'll need to encapsulate it in a dedicated object (probably using a Singleton pattern) that is responsible for opening the serial port if it is not already open:
class SerialProxy:
    def __init__(self):
        self.__serial = None

    def serial_function(self, foo):
        if self.__serial is None:
            self.__serial = serial.Serial('COM31', 9600, timeout=2)
        self.__serial.write(foo)
        reply = self.__serial.read()
        return reply
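For example, a minimal sketch of how api.py might use such a proxy (the module-level instance, encoding foo to bytes, and disabling the reloader are illustrative assumptions, not part of the original answer):
from flask import Flask
from controller import SerialProxy  # assumes SerialProxy lives in controller.py

app = Flask(__name__)
proxy = SerialProxy()  # created once per process; the port is opened lazily on first use

@app.route('/function/<foo>', methods=['GET'])
def do_function(foo):
    return proxy.serial_function(foo.encode())

if __name__ == '__main__':
    # disabling the reloader avoids a second process re-importing the module
    app.run('0.0.0.0', 80, debug=True, use_reloader=False)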

Related

How to connect to a bluetooth profile using dbus APIs

I have a python3 script that successfully opens an RFCOMM socket to a server using old-style Bluetooth. I'm trying to accomplish the same thing using dbus, which, from what I've read, is the way you're supposed to use Bluetooth on Linux these days. (This is a proof-of-concept for significant changes to be made to a Linux app written in C.)
When I run the script below I see this:
connecting...
ex from ConnectProfile(): g-io-error-quark: GDBus.Error:org.bluez.Error.NotAvailable: Operation currently not available (36)
onPropertiesChanged( org.bluez.Device1 {'Connected': True} [] )
onPropertiesChanged( org.bluez.Device1 {'ServicesResolved': True} [] )
onPropertiesChanged( org.bluez.Device1 {'ServicesResolved': False, 'Connected': False} [] )
Note that the property changes happen after the call to ConnectProfile fails. I've seen suggestions that I should be opening an RFCOMM socket from inside the property-changed callback, taking advantage of the moment when the connection is open. But server-side (I'm using the excellent bluez-rfcomm-example on github) dbus/bluez takes care of creating the socket: you just get passed a file descriptor. I'm expecting ConnectProfile to work similarly, but can't find any examples.
How should I modify my new_style() function so that it gives me a working socket?
Thanks,
--Eric
#!/usr/bin/env python3

# for new_style()
from pydbus import SystemBus
from gi.repository import GLib

# for old_style()
import bluetooth

PROFILE = 'b079b640-35fe-11e5-a432-0002a5d5c51b'
ADDR = 'AA:BB:CC:DD:EE:FF'

# Works fine. But you're supposed to use dbus these days
def old_style():
    service_matches = bluetooth.find_service(uuid=PROFILE, address=ADDR)
    if len(service_matches):
        first_match = service_matches[0]
        port = first_match['port']
        host = first_match['host']
        sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
        sock.connect((host, port))
        while True:
            data = input()
            if not data:
                break
            sock.send(data)
        sock.close()

# Does not work. First an exception fires:
#   g-io-error-quark: GDBus.Error:org.bluez.Error.NotAvailable: Operation currently not available (36)
# then onPropertiesChanged lists stuff -- after the failure, not during the connection attempt.
def new_style():
    nucky = SystemBus().get('org.bluez', '/org/bluez/hci0/dev_' + ADDR.replace(':', '_'))

    # Callback: (s, a{sv}, as)
    nucky.onPropertiesChanged = lambda p1, p2, p3: print('onPropertiesChanged(', p1, p2, p3, ')')

    def try_connect():
        print('connecting...')
        try:
            nucky.ConnectProfile(PROFILE)
        except Exception as ex:
            print('ex from ConnectProfile():', ex)

    GLib.timeout_add(250, try_connect)
    GLib.MainLoop().run()

if False:
    old_style()
else:
    new_style()
(Added later)
Let me clarify my question. On a Linux box I'm running a bluez-rfcomm-example server that I modified to use a custom Service UUID. It probably creates a service record, but on the client (Android) side these three lines of Java are enough to get a connected socket to it (assuming the server has bluetooth mac AA:BB:CC:DD:EE:FF and the two are paired):
BluetoothDevice remote = BluetoothAdapter.getDefaultAdapter().getRemoteDevice( "AA:BB:CC:DD:EE:FF" );
BluetoothSocket socket = remote.createRfcommSocketToServiceRecord( MY_SERVICE_UUID );
socket.connect();
Is there a way to do this on Linux using dbus/bluez that is remotely close to this simple? I'm assuming Device1/ConnectProfile(UUID) is what I want -- that it's the same thing as createRfcommSocketToServiceRecord() -- but that assumption might be totally wrong! Should this even be possible from Linux using bluez/dbus? Or should I stick with the older methods?
Thanks, and sorry for the vague initial question.
--Eric
The code below works to get and use an RFCOMM socket for a connection to a remote service specified by a UUID. The answer I accepted, by ukBaz, included all I needed, but I didn't understand enough background to make sense of it immediately. I was right that calling ConnectProfile() was the way to start, but I missed two things:
Providing a Profile on the calling side was necessary for two reasons. First, it provides the callback by which you get hold of the socket. Second, without it -- without the NewConnection method specifically -- the connection fails (ConnectProfile() returns an error).
I needed to make the ConnectProfile() call on a background thread. The callback comes in on the GLib loop's main thread, so ConnectProfile(), which doesn't return until the connection succeeds or fails, mustn't block that thread!
It's possible that different Bluetooth connection types require subtly different machinations, but for RFCOMM socket connections anyway this does the trick.
#!/usr/bin/env python3

import socket, threading
from pydbus import SystemBus
from gi.repository import GLib
import dbus, dbus.service, dbus.mainloop.glib

CUSTOM_UUID = 'b079b640-35fe-11e5-a432-0002a5d5c51b'
ADDR = 'AA:BB:CC:DD:EE:FF'
PATH = '/org/neednt/match/remote'

class Profile(dbus.service.Object):
    @dbus.service.method("org.bluez.Profile1",
                         in_signature="oha{sv}", out_signature="")
    def NewConnection(self, path, fd, properties):
        print('NewConnection: fd:', fd)
        try:
            self.socket = socket.socket(fileno=fd.take())
            print('got socket:', self.socket)
            self.socket.send(b"You there?")
        except Exception as ex:
            print('ex:', ex)

def connect_thread_main():
    print('connect_thread_main()...')
    SystemBus().get('org.bluez', '/org/bluez/hci0/dev_' + ADDR.replace(':', '_')) \
        .ConnectProfile(CUSTOM_UUID)

dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
bus = dbus.SystemBus()
Profile(bus, PATH)  # registered on the bus as a side-effect, apparently

dbus.Interface(bus.get_object("org.bluez", "/org/bluez"),
               "org.bluez.ProfileManager1") \
    .RegisterProfile(PATH, CUSTOM_UUID, {})

threading.Thread(target=connect_thread_main).start()
GLib.MainLoop().run()
There is a good (if slightly dated now) blog post comparing pybluez with using Python 3 sockets:
https://blog.kevindoran.co/bluetooth-programming-with-python-3/
If you want to do it with the BlueZ D-Bus API then the key documentation is:
https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc/profile-api.txt
And the BlueZ example is at:
https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/test/test-profile
Creating this with pydbus has some issues as documented at: https://github.com/LEW21/pydbus/issues/54

Python socket closed before all data have been consumed by remote

I am writing a Python module which communicates with a Go program through Unix sockets. The client (the Python module) writes data to the socket and the server consumes it.
# Simplified version of the code used
outputStream = socket.socket(socketfamily, sockettype, protocol)
outputStream.connect(socketaddress)
outputStream.setblocking(True)
outputStream.sendall(message)
....
outputStream.close()
My issue is that the Python client tends to finish and close the socket before the data has actually been read by the server, which leads to a "broken pipe, connection reset by peer" error on the server side. Whatever I do, as far as the Python code is concerned everything has been sent, so the calls to send(), sendall() and select() are all successful...
Thanks in advance
EDIT: I can't use shutdown because of mac OS
EDIT2: I also tried to remove the timeout and call setblocking(True) but it doesn't change anything
EDIT3: After reading this issue http://bugs.python.org/issue6774 it seems that the documentation is unnecessarily scary, so I restored the shutdown, but I still have the same issue:
# Simplified version of the code used
outputStream = socket.socket(socketfamily, sockettype, protocol)
outputStream.connect(socketaddress)
outputStream.settimeout(5)
outputStream.sendall(message)
....
outputStream.shutdown(socket.SHUT_WR)
outputStream.close()
IMHO this is best done with an asynchronous I/O library/framework. Here's such a solution using circuits:
The server echoes what it receives to stdout, and the client opens a file and sends it to the server, waiting for the send to complete before closing the socket and terminating. This is done with a mixture of async I/O and coroutines.
server.py:
from circuits import Component
from circuits.net.sockets import UNIXServer

class Server(Component):
    def init(self, path):
        UNIXServer(path).register(self)

    def read(self, sock, data):
        print(data)

Server("/tmp/server.sock").run()
client.py:
import sys

from circuits import Component, Event
from circuits.net.sockets import UNIXClient
from circuits.net.events import connect, close, write

class done(Event):
    """done Event"""

class sendfile(Event):
    """sendfile Event"""

class Client(Component):
    def init(self, path, filename, bufsize=8192):
        self.path = path
        self.filename = filename
        self.bufsize = bufsize
        UNIXClient().register(self)

    def ready(self, *args):
        self.fire(connect(self.path))

    def connected(self, *args):
        self.fire(sendfile(self.filename, bufsize=self.bufsize))

    def done(self):
        raise SystemExit(0)

    def sendfile(self, filename, bufsize=8192):
        try:
            with open(filename, "r") as f:
                while True:
                    try:
                        yield self.call(write(f.read(bufsize)))
                    except EOFError:
                        break
        finally:
            self.fire(close())
            self.fire(done())

Client(*sys.argv[1:]).run()
In my testing of this it behaves exactly as I expect it to, with no errors, and the server gets the complete file before the client closes the socket and shuts down.
After a discussion with a colleague familiar with C sockets (in CPython the socket module is a wrapper around the C socket API), he pointed me to this: http://ia600609.us.archive.org/22/items/TheUltimateSo_lingerPageOrWhyIsMyTcpNotReliable/the-ultimate-so_linger-page-or-why-is-my-tcp-not-reliable.html (for the record, that's how it is done in PHP's internals).
TL;DR: shutdown + quick poll + close, or ioctl(SIOCOUTQ) on Linux.
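A minimal sketch of that graceful-close pattern on the client side (the half-close plus waiting for the peer's EOF is an assumed application of the linked advice, not code from the article):
import socket

def send_then_close(sock, message):
    sock.sendall(message)
    # half-close the writing side so the server sees end-of-stream
    sock.shutdown(socket.SHUT_WR)
    # wait for the peer to close its side: recv() returning b'' means the
    # server has consumed the stream and shut the connection down
    while sock.recv(4096):
        pass
    sock.close()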

How to determine from a python application if X server/X forwarding is running?

I'm writing a Linux application which uses PyQt4 for its GUI and which will only be used during remote sessions (ssh -XY / VNC).
So sometimes it may occur that a user forgets to run ssh with X forwarding parameters, or X forwarding is unavailable for some reason. In this case the application crashes badly (unfortunately I am forced to use an old C++ library wrapped into Python, and it completely messes up the user's current session if the application crashes).
I cannot use anything else, so my idea is to check whether X forwarding is available before loading that library. However, I have no idea how to do that.
I usually use xclock to check if my session has X forwarding enabled, but using xclock sounds like a big workaround.
ADDED
If possible I would like to use a different approach than creating an empty PyQt window and catching an exception.
Check to see that the $DISPLAY environment variable is set - if they didn't use ssh -X, it will be empty (instead of containing something like localhost:10).
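A minimal sketch of that check (the helper name is just for illustration):
import os

def x_forwarding_available():
    # ssh -X/-Y normally sets DISPLAY to something like "localhost:10.0";
    # if it is missing or empty, no X forwarding is in place
    return bool(os.environ.get('DISPLAY'))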
As mentioned before, you can check the DISPLAY environment variable:
>>> os.environ['DISPLAY']
'localhost:10.0'
If you're so inclined, you could actually connect to the display port to see that sshd is listening on it:
import os
import socket

def tcp_connect_to_display():
    # get the display from the environment
    display_env = os.environ['DISPLAY']

    # parse the display string
    display_host, display_num = display_env.split(':')
    display_num_major, display_num_minor = display_num.split('.')

    # calculate the port number
    display_port = 6000 + int(display_num_major)

    # attempt a TCP connection
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.connect((display_host, display_port))
    except socket.error:
        return False
    finally:
        sock.close()
    return True
This relies on standard X configuration using ports 6000 + display number.
Similar to your xclock solution, I like to run xdpyinfo and see if it returns an error.
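For example, a small sketch of that check using the standard library (the helper name is illustrative):
import subprocess

def x_display_available():
    # xdpyinfo exits with a non-zero status when it cannot connect to the display
    result = subprocess.run(['xdpyinfo'],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0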
The X server can be checked with Tkinter as in the following example (but there should be a similar way with PyQt4):
import time
import sys

try:
    import Tkinter as tk
except ImportError:
    import tkinter as tk

while True:
    try:
        root = tk.Tk()
        break
    except tk.TclError as e:
        if "$DISPLAY" in str(e):
            print("$DISPLAY not set. Exiting.")
            sys.exit(1)
        print("Waiting for X server to start...")
        time.sleep(1)

print("X server running")
root.destroy()
sys.exit(0)
This checks the $DISPLAY setting, the X server process and the related xhost authorization (but uses Tkinter instead of PyQt4).

Start simple web server and launch browser simultaneously in Python

I want to start a simple web server locally, then launch a browser with a URL it has just started serving. This is something like what I'd like to write:
from wsgiref.simple_server import make_server
import webbrowser
srv = make_server(...)
srv.blocking = False
srv.serve_forever()
webbrowser.open_new_tab(...)
try:
    srv.blocking = True
except KeyboardInterrupt:
    pass
print 'Bye'
The problem is, I couldn't find a way to set a blocking option for the wsgiref simple server. By default it's blocking, so the browser would be launched only after I stopped the server. If I launch the browser first, the request is not handled yet. I'd prefer to use an HTTP server from the standard library, not an external one like Tornado.
You either have to spawn a thread for the server, so you can continue with your control flow, or you have to use two Python processes.
Untested code, but you should get the idea:
class ServerThread(threading.Thread):
    def __init__(self, port):
        threading.Thread.__init__(self)

    def run(self):
        srv = make_server(...)
        srv.serve_forever()

if '__main__' == __name__:
    ServerThread().start()
    webbrowser.open_new_tab(...)
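A more complete sketch along the same lines (the port, the demo_app handler and the URL are illustrative choices, not from the original answer):
import threading
import time
import webbrowser
from wsgiref.simple_server import make_server, demo_app

PORT = 8000  # illustrative choice

def main():
    srv = make_server('127.0.0.1', PORT, demo_app)
    # serve requests on a background thread so the main thread can continue
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    webbrowser.open_new_tab('http://127.0.0.1:%d/' % PORT)
    try:
        # keep the main thread alive until the user interrupts
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        srv.shutdown()
        print('Bye')

if __name__ == '__main__':
    main()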

How to implement a hub in Python

Dear all, I need to implement a TCP server in Python which receives some data from a client and then sends this data to another client. I've tried many different implementations but I haven't been able to make it work. Any help would be really appreciated.
Below is my code:
import SocketServer
import sys
import threading

buffer_size = 8182
ports = {'toserver': int(sys.argv[1]), 'fromserver': int(sys.argv[2])}

class ConnectionHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        # I need to send the data received from the client connected to port 'toserver'
        # to the client connected to port 'fromserver' - see variable 'ports' above
        pass

class TwoWayConnectionServer(threading.Thread):
    def __init__(self):
        self.to_server = SocketServer.ThreadingTCPServer(("", ports['toserver']), ConnectionHandler)
        self.from_server = SocketServer.ThreadingTCPServer(("", ports['fromserver']), ConnectionHandler)
        threading.Thread.__init__(self)

    def run(self):
        while (1):
            self.to_server.handle_request()
            self.from_server.handle_request()

def serve_non_blocking():
    server = TwoWayConnectionServer()
    server.run()

if __name__ == '__main__':
    serve_non_blocking()
See the Twisted tutorial, and also twisted.protocols.portforward. I think the portforward module does something slightly different from what you want (it opens an outgoing connection to the destination port rather than waiting for the second client to connect), but you should be able to work from there.
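For reference, a minimal sketch of that portforward approach (ports and destination are illustrative; as noted, it connects out to the destination rather than waiting for a second client):
from twisted.internet import reactor
from twisted.protocols import portforward

# forward every connection made to port 9001 to a service listening on port 9002
reactor.listenTCP(9001, portforward.ProxyFactory('localhost', 9002))
reactor.run()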
Can you be more specific about what you tried and what didn't work? There are lots of ways to do this. Probably the easiest would be to use the socket library - maybe looking at some examples will help:
http://docs.python.org/library/socket.html#example
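For instance, a minimal sketch of such a hub with the plain socket module (the ports and buffer size are illustrative; it accepts one sending client and one receiving client, then relays bytes between them):
import socket

TO_PORT = 9001    # the sending client connects here (illustrative)
FROM_PORT = 9002  # the receiving client connects here (illustrative)
BUFFER_SIZE = 8192

def accept_one(port):
    # listen on the given port and return the first accepted connection
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('', port))
    srv.listen(1)
    conn, _ = srv.accept()
    srv.close()
    return conn

def relay():
    sender = accept_one(TO_PORT)
    receiver = accept_one(FROM_PORT)
    while True:
        data = sender.recv(BUFFER_SIZE)
        if not data:
            break
        receiver.sendall(data)
    sender.close()
    receiver.close()

if __name__ == '__main__':
    relay()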
