Is it possible -- other than by using something like a .txt/dummy file -- to pass a value from one program to another?
I have a program that uses a .txt file to pass a starting value to another program. I update the value in the file between each of the launches (I run the program ten times, essentially simultaneously). Doing this is fine, but I would like to have the 'child' program report back to the 'mother' program when it is finished, and also report back which files it found to download.
Is it possible to do this without using eleven files (one for each instance of child-to-mother reporting, plus one file for mother-to-child)? I am talking about completely separate programs, not classes or functions or anything like that.
To operate efficiently, and not be waiting around for hours for everything to complete, I need the 'child' program to run ten times and get things done MUCH faster. Thus I run the child program ten times and give each instance a separate range to check through.
Both programs run fine, but I would like to get them to report back and forth to each other, hopefully without using file 'transmission' to accomplish the task, especially on the child-to-mother side of the data transfer.
'Mother' program...currently
import os
import sys
import subprocess
import time

os.chdir('/media/')

# find highest download video
Hival = open("Highest.txt", "r")
Histr = Hival.read()
Hival.close()
HiNext = str(int(Histr) + 1)

# setup download #1
NextVal = open("NextVal.txt", "w")
NextVal.write(HiNext)
NextVal.close()

# call download #1
procs = []
proc = subprocess.Popen(['python', 'test.py'])
procs.append(proc)
time.sleep(2)

# setup downloads #2-11
Histr2 = int(Histr) / 10000
Histr2 = Histr2 + 1
for i in range(10):
    Hiint = str(Histr2) + "0000"
    NextVal = open("NextVal.txt", "w")
    NextVal.write(Hiint)
    NextVal.close()
    proc = subprocess.Popen(['python', 'test.py'])
    procs.append(proc)
    time.sleep(2)
    Histr2 = Histr2 + 1

for proc in procs:
    proc.wait()
'Child' program
import urllib
import os
from Tkinter import *
import time

root = Tk()
root.title("Audiodownloader")
root.geometry("200x200")
app = Frame(root)
app.grid()

os.chdir('/media/')
Fileval = open('NextVal.txt', 'r')
Fileupdate = Fileval.read()
Fileval.close()

Fileupdate = int(Fileupdate)
Filect = Fileupdate / 10000
Filect2 = str(Filect) + "0009"
Filecount = int(Filect2)

while Fileupdate <= Filecount:
    root.title(Fileupdate)
    url = 'http://www.yourfavoritewebsite.com/audio/encoded/' + str(Fileupdate) + '.mp3'
    urllib.urlretrieve(url, str(Fileupdate) + '.mp3')
    statinfo = os.stat(str(Fileupdate) + '.mp3')
    if statinfo.st_size < 10000L:
        os.remove(str(Fileupdate) + '.mp3')
    time.sleep(.01)
    Fileupdate = Fileupdate + 1
    root.update_idletasks()
I'm trying to convert the original VB6 program over to Linux and make it much easier to use at the same time, hence the missing .mainloop() call. This was my first real attempt at anything in Python, hence the lack of functions or classes. I'm trying to come back and finish this up after 1.5 months of doing nothing with it, mostly because I didn't know how. From the research I did a little while ago, this is WAY over my head. I have never done anything with threads, sockets, or client/server interaction, so I'm completely lost in this case. Googling anything on it just brings me right back here to Stack Overflow.
Yes, I want 10 copies of the program running at the same time, to save time. I could do without the GUI if the child could report back to the 'mother', so the mother could print on screen the current value being searched, and also report when it is finished and which files it downloaded successfully (versus downloaded and then erased for being too small). I would use the successful-download information to update Highest.txt for the next time the program is run.
I think this clarifies things MUCH better... that, or I don't understand the nature of client/server interaction. :) The only reason time.sleep is in the program is to try to make sure the file gets written before the next instance of the child program is started. I didn't know for sure what kind of timing issues I might run into, so I included those lines for safety.
This can be implemented with a simple client/server topology using the multiprocessing library. Using your mother/child terminology:
server.py
from multiprocessing.connection import Listener

# client
def child(conn):
    while True:
        msg = conn.recv()
        # this just echoes the value back; replace with your custom logic
        conn.send(msg)

# server
def mother(address):
    serv = Listener(address)
    while True:
        client = serv.accept()
        child(client)

mother(('', 5000))
client.py
from multiprocessing.connection import Client
c = Client(('localhost', 5000))
c.send('hello')
print('Got:', c.recv())
c.send({'a': 123})
print('Got:', c.recv())
Run with
$ python server.py
$ python client.py
When you talk about using a .txt file to pass information between programs, we first need to know what language you're using.
To my knowledge, both Java and Python make this viable, although it can be laborious depending on the amount of information you want to pass.
In Python, you can use the standard library for reading and writing txt files, and to schedule execution you can use APScheduler.
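For example, here is a minimal sketch of that idea using APScheduler 3.x (the file name and the three-second interval are made up for illustration):
from apscheduler.schedulers.blocking import BlockingScheduler

def check_file():
    # read the value the other program wrote and act on it
    with open('NextVal.txt') as f:
        value = f.read().strip()
    print('Current value: ' + value)

scheduler = BlockingScheduler()
scheduler.add_job(check_file, 'interval', seconds=3)  # re-check the file every 3 seconds
scheduler.start()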
The problem statement is as follows:
I am working with Abaqus, a program for analyzing mechanical problems. It is basically a standalone Python interpreter with its own objects etc. Within this program, I run a python script to set up my analysis (so this script can be modified). It also contains a method which has to be executed when an external signal is received. These signals come from the main script that I am running in my own Python engine.
For now, I have the following workflow:
The main script sets a boolean to True when the Abaqus script has to execute a specific function, and pickles this boolean into a file. The Abaqus script regularly checks this file to see whether the boolean has been set to true. If so, it does an analysis and pickles the output, so that the main script can read this output and act on it.
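Roughly, the current signalling looks like this (a simplified sketch; the actual file names and sleep interval differ):
# main script: request an analysis by pickling a flag
import pickle
with open('trigger.pkl', 'wb') as f:
    pickle.dump(True, f)

# Abaqus script: periodically check whether the flag has been set
import os
import pickle
import time
while True:
    if os.path.exists('trigger.pkl'):
        with open('trigger.pkl', 'rb') as f:
            run_requested = pickle.load(f)
        if run_requested:
            # ... run the analysis and pickle the output for the main script ...
            break
    time.sleep(1.0)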
I am looking for a more efficient way to signal the other process to start the analysis, since there is a lot of unnecessary checking going on right now. Data exchange via pickle is not an issue for me, but a more efficient solution is certainly welcome.
Search results always give me solutions based on subprocess or the like, which are for two processes started from within the same interpreter. I have also looked at ZeroMQ, since it is supposed to achieve things like this, but I think it is overkill and I would prefer a pure-Python solution. Both interpreters are running Python 2.7 (although different versions).
Edit:
Like @MattP, I'll add this statement of my understanding:
Background
I believe that you are running a product called abaqus. The abaqus product includes a linked-in python interpreter that you can access somehow (possibly by running abaqus python foo.py on the command line).
You also have a separate python installation, on the same machine. You are developing code, possibly including numpy/scipy, to run on that python installation.
These two installations are different: they have different binary interpreters, different libraries, different install paths, etc. But they live on the same physical host.
Your objective is to enable the "plain python" programs, written by you, to communicate with one or more scripts running in the "Abaqus python" environment, so that those scripts can perform work inside the Abaqus system, and return results.
Solution
Here is a socket based solution. There are two parts, abqlistener.py and abqclient.py. This approach has the advantage that it uses a well-defined mechanism for "waiting for work." No polling of files, etc. And it is a "hard" API. You can connect to a listener process from a process on the same machine, running the same version of python, or from a different machine, or from a different version of python, or from ruby or C or perl or even COBOL. It allows you to put a real "air gap" into your system, so you can develop the two parts with minimal coupling.
The server part is abqlistener. The intent is that you would copy some of this code into your Abaqus script. The abq process would then become a server, listening for connections on a specific port number, and doing work in response. Sending back a reply, or not. Et cetera.
I am not sure if you need to do setup work for each job. If so, that would have to be part of the connection. This would just start ABQ, listen on a port (forever), and deal with requests. Any job-specific setup would have to be part of the work process. (Maybe send in a parameter string, or the name of a config file, or whatever.)
The client part is abqclient. This could be moved into a module, or just copy/pasted into your existing non-ABQ program code. Basically, you open a connection to the right host:port combination, and you're talking to the server. Send in some data, get some data back, etc.
This stuff is mostly scraped from example code on-line. So it should look real familiar if you start digging into anything.
Here's abqlistener.py:
# The below usage example is completely bogus. I don't have abaqus, so
# I'm just running python2.7 abqlistener.py [options]

usage = """
abacus python abqlistener.py [--host 127.0.0.1 | --host mypc.example.com ] \\
                             [ --port 2525 ]

Sets up a socket listener on the host interface specified (default: all
interfaces), on the given port number (default: 2525). When a connection
is made to the socket, begins processing data.
"""

import argparse

parser = argparse.ArgumentParser(description='Abacus listener',
                                 add_help=True,
                                 usage=usage)
parser.add_argument('-H', '--host', metavar='INTERFACE', default='',
                    help='Interface IP address or name, or (default: empty string)')
parser.add_argument('-P', '--port', metavar='PORTNUM', type=int, default=2525,
                    help='port number of listener (default: 2525)')

args = parser.parse_args()

import SocketServer
import json

class AbqRequestHandler(SocketServer.BaseRequestHandler):
    """Request handler for our socket server.

    This class is instantiated whenever a new connection is made, and
    must override `handle(self)` in order to handle communicating with
    the client.
    """

    def do_work(self, data):
        "Do some work here. Call abaqus, whatever."
        print "DO_WORK: Doing work with data!"
        print data
        return {'desc': 'low-precision natural constants', 'pi': 3, 'e': 3}

    def handle(self):
        # Allow the client to send a 1kb message (file path?)
        self.data = self.request.recv(1024).strip()
        print "SERVER: {} wrote:".format(self.client_address[0])
        print self.data
        result = self.do_work(self.data)
        self.response = json.dumps(result)
        print "SERVER: response to {}:".format(self.client_address[0])
        print self.response
        self.request.sendall(self.response)

if __name__ == '__main__':
    print args
    server = SocketServer.TCPServer((args.host, args.port), AbqRequestHandler)
    print "Server starting. Press Ctrl+C to interrupt..."
    server.serve_forever()
And here's abqclient.py:
usage = """
python2.7 abqclient.py [--host HOST] [--port PORT]
Connect to abqlistener on HOST:PORT, send a message, wait for reply.
"""
import argparse
parser = argparse.ArgumentParser(description='Abacus listener',
add_help=True,
usage=usage)
parser.add_argument('-H', '--host', metavar='INTERFACE', default='',
help='Interface IP address or name, or (default: empty string)')
parser.add_argument('-P', '--port', metavar='PORTNUM', type=int, default=2525,
help='port number of listener (default: 2525)')
args = parser.parse_args()
import json
import socket
message = "I get all the best code from stackoverflow!"
print "CLIENT: Creating socket..."
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print "CLIENT: Connecting to {}:{}.".format(args.host, args.port)
s.connect((args.host, args.port))
print "CLIENT: Sending message:", message
s.send(message)
print "CLIENT: Waiting for reply..."
data = s.recv(1024)
print "CLIENT: Got response:"
print json.loads(data)
print "CLIENT: Closing socket..."
s.close()
And here's what they print when I run them together:
$ python2.7 abqlistener.py --port 3434 &
[2] 44088
$ Namespace(host='', port=3434)
Server starting. Press Ctrl+C to interrupt...
$ python2.7 abqclient.py --port 3434
CLIENT: Creating socket...
CLIENT: Connecting to :3434.
CLIENT: Sending message: I get all the best code from stackoverflow!
CLIENT: Waiting for reply...
SERVER: 127.0.0.1 wrote:
I get all the best code from stackoverflow!
DO_WORK: Doing work with data!
I get all the best code from stackoverflow!
SERVER: response to 127.0.0.1:
{"pi": 3, "e": 3, "desc": "low-precision natural constants"}
CLIENT: Got response:
{u'pi': 3, u'e': 3, u'desc': u'low-precision natural constants'}
CLIENT: Closing socket...
References:
argparse, SocketServer, json, socket are all "standard" Python libraries.
To be clear, my understanding is that you are running Abaqus/CAE via a Python script as an independent process (let's call it abq.py), which checks for, opens, and reads a trigger file to determine if it should run an analysis. The trigger file is created by a second Python process (let's call it main.py). Finally, main.py waits to read the output file created by abq.py. You want a more efficient way to signal abq.py to run an analysis, and you're open to different techniques to exchange data.
As you mentioned, subprocess or multiprocessing might be an option. However, I think a simpler solution is to combine your two scripts, and optionally use a callback function to monitor the solution and process your output. I'll assume there is no need to have abq.py constantly running as a separate process, and that all analyses can be started from main.py whenever it is appropriate.
Let main.py have access to the Abaqus Mdb. If it's already built, you open it with:
mdb = openMdb(FileName)
A trigger file is not needed if main.py starts all analyses. For example:
if SomeCondition:
    j = mdb.Job(name=MyJobName, model=MyModelName)
    j.submit()
    j.waitForCompletion()
Once complete, main.py can read the output file and continue. This is straightforward if the data file was generated by the analysis itself (e.g. .dat or .odb files). On the other hand, if the output file is generated by some code in your current abq.py, then you can probably just include that code in main.py instead.
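For example, here is a rough sketch of reading results from the output database once the job has finished (the step name, field name, and slice below are placeholders, not part of your model):
from odbAccess import openOdb

odb = openOdb(path=MyJobName + '.odb')
lastFrame = odb.steps['Step-1'].frames[-1]      # placeholder step name
displacement = lastFrame.fieldOutputs['U']      # e.g. the displacement field
for value in displacement.values[:5]:           # first few values, for illustration
    print value.nodeLabel, value.data
odb.close()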
If that doesn't provide enough control, instead of the waitForCompletion method you can add a callback function to the monitorManager object (which is automatically created when you import the abaqus module: from abaqus import *). This allows you to monitor and respond to various messages from the solver, such as COMPLETED, ITERATION, etc. The callback function is defined like:
def onMessage(jobName, messageType, data, userData):
    if messageType == COMPLETED:
        # do stuff
        pass
    else:
        # other stuff
        pass
which is then registered with the monitorManager before the job is submitted:
monitorManager.addMessageCallback(jobName=MyJobName,
                                  messageType=ANY_MESSAGE_TYPE,
                                  callback=onMessage,
                                  userData=MyDataObj)
j = mdb.Job(name=MyJobName, model=MyModelName)
j.submit()
One of the benefits of this approach is that you can pass a Python object in as the userData argument. This could potentially be your output file, or some other data container. You could probably process the output data within the callback function as well - for example, access the Odb and get the data, then do any manipulation as needed, without needing the external file at all.
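As a rough sketch of that idea (the dictionary and its keys are made up for illustration):
results = {'messages': [], 'done': False}

def onMessage(jobName, messageType, data, userData):
    # record every message and flag completion so the main script can react
    userData['messages'].append(messageType)
    if messageType == COMPLETED:
        userData['done'] = True

monitorManager.addMessageCallback(jobName=MyJobName,
                                  messageType=ANY_MESSAGE_TYPE,
                                  callback=onMessage,
                                  userData=results)
j = mdb.Job(name=MyJobName, model=MyModelName)
j.submit()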
I agree with the answer, except for some minor syntax problems.
Defining instance variables inside the handler is a no-no, not to mention that they are not being defined in any sort of __init__() method. Subclass TCPServer and define your instance variables in TCPServer.__init__(). Everything else will work the same.
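For illustration, a minimal sketch of that suggestion applied to the listener above (the jobs_done attribute is just an example of server-level state):
import SocketServer

class AbqServer(SocketServer.TCPServer):
    def __init__(self, server_address, handler_class):
        SocketServer.TCPServer.__init__(self, server_address, handler_class)
        # server-level state lives here, not on the per-connection handler
        self.jobs_done = 0

class AbqRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024).strip()
        # handlers reach shared state through self.server
        self.server.jobs_done += 1
        self.request.sendall('job %d: %s' % (self.server.jobs_done, data))

if __name__ == '__main__':
    server = AbqServer(('', 2525), AbqRequestHandler)
    server.serve_forever()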
The issue
I am trying to play with a Bluefruit LE Friend dongle (Adafruit).
They provide a python library to communicate with it.
Unfortunately, the example they provide doesn't work:
user#server# python ../../low_level.py
Using adapter: user-VirtualBox
Disconnecting any connected UART devices...
Searching for UART device...
Connecting to device...
Discovering services...
Traceback (most recent call last):
File "../../low_level.py", line 106, in <module>
ble.run_mainloop_with(main)
File "build/bdist.linux-x86_64/egg/Adafruit_BluefruitLE/bluez_dbus/provider.py", line 118, in _user_thread_main
File "../../low_level.py", line 70, in main
device.discover([UART_SERVICE_UUID], [TX_CHAR_UUID, RX_CHAR_UUID])
File "build/bdist.linux-x86_64/egg/Adafruit_BluefruitLE/bluez_dbus/device.py", line 106, in discover
File "build/bdist.linux-x86_64/egg/Adafruit_BluefruitLE/bluez_dbus/device.py", line 137, in advertised
File "/usr/lib/python2.7/uuid.py", line 134, in __init__
raise ValueError('badly formed hexadecimal UUID string')
ValueError: badly formed hexadecimal UUID string
What I tried
I added a line in /usr/lib/python2.7/uuid.py to print the uuid value when the exception is raised; I get the value 1800. I can't find where this value comes from.
I can't find the file build/bdist.linux-x86_64/egg/Adafruit_BluefruitLE/bluez_dbus/device.py to debug with that one…
Here's the code, from their GitHub repo:
# Example of low level interaction with a BLE UART device that has an RX and TX
# characteristic for receiving and sending data. This doesn't use any service
# implementation and instead just manipulates the services and characteristics
# on a device. See the uart_service.py example for a simpler UART service
# example that uses a high level service implementation.
# Author: Tony DiCola
import logging
import time
import uuid

import Adafruit_BluefruitLE


# Enable debug output.
#logging.basicConfig(level=logging.DEBUG)

# Define service and characteristic UUIDs used by the UART service.
UART_SERVICE_UUID = uuid.UUID('6E400001-B5A3-F393-E0A9-E50E24DCCA9E')
TX_CHAR_UUID = uuid.UUID('6E400002-B5A3-F393-E0A9-E50E24DCCA9E')
RX_CHAR_UUID = uuid.UUID('6E400003-B5A3-F393-E0A9-E50E24DCCA9E')

# Get the BLE provider for the current platform.
ble = Adafruit_BluefruitLE.get_provider()


# Main function implements the program logic so it can run in a background
# thread. Most platforms require the main thread to handle GUI events and other
# asyncronous events like BLE actions. All of the threading logic is taken care
# of automatically though and you just need to provide a main function that uses
# the BLE provider.
def main():
    # Clear any cached data because both bluez and CoreBluetooth have issues with
    # caching data and it going stale.
    ble.clear_cached_data()

    # Get the first available BLE network adapter and make sure it's powered on.
    adapter = ble.get_default_adapter()
    adapter.power_on()
    print('Using adapter: {0}'.format(adapter.name))

    # Disconnect any currently connected UART devices. Good for cleaning up and
    # starting from a fresh state.
    print('Disconnecting any connected UART devices...')
    ble.disconnect_devices([UART_SERVICE_UUID])

    # Scan for UART devices.
    print('Searching for UART device...')
    try:
        adapter.start_scan()
        # Search for the first UART device found (will time out after 60 seconds
        # but you can specify an optional timeout_sec parameter to change it).
        device = ble.find_device(service_uuids=[UART_SERVICE_UUID])
        if device is None:
            raise RuntimeError('Failed to find UART device!')
    finally:
        # Make sure scanning is stopped before exiting.
        adapter.stop_scan()

    print('Connecting to device...')
    device.connect()  # Will time out after 60 seconds, specify timeout_sec parameter
                      # to change the timeout.

    # Once connected do everything else in a try/finally to make sure the device
    # is disconnected when done.
    try:
        # Wait for service discovery to complete for at least the specified
        # service and characteristic UUID lists. Will time out after 60 seconds
        # (specify timeout_sec parameter to override).
        print('Discovering services...')
        device.discover([UART_SERVICE_UUID], [TX_CHAR_UUID, RX_CHAR_UUID])

        # Find the UART service and its characteristics.
        uart = device.find_service(UART_SERVICE_UUID)
        rx = uart.find_characteristic(RX_CHAR_UUID)
        tx = uart.find_characteristic(TX_CHAR_UUID)

        # Write a string to the TX characteristic.
        print('Sending message to device...')
        tx.write_value('Hello world!\r\n')

        # Function to receive RX characteristic changes. Note that this will
        # be called on a different thread so be careful to make sure state that
        # the function changes is thread safe. Use Queue or other thread-safe
        # primitives to send data to other threads.
        def received(data):
            print('Received: {0}'.format(data))

        # Turn on notification of RX characteristics using the callback above.
        print('Subscribing to RX characteristic changes...')
        rx.start_notify(received)

        # Now just wait for 30 seconds to receive data.
        print('Waiting 60 seconds to receive data from the device...')
        time.sleep(60)
    finally:
        # Make sure device is disconnected on exit.
        device.disconnect()


# Initialize the BLE system. MUST be called before other BLE calls!
ble.initialize()

# Start the mainloop to process BLE events, and run the provided function in
# a background thread. When the provided main function stops running, returns
# an integer status code, or throws an error the program will exit.
ble.run_mainloop_with(main)
This code depends on D-Bus, specifically the 'properties' offered by it. I would say that the problem is here:
uuids = self._props.Get(_INTERFACE, 'UUIDs')
https://github.com/adafruit/Adafruit_Python_BluefruitLE/blob/master/Adafruit_BluefruitLE/bluez_dbus/device.py#L131
Probably you either didn't properly install D-Bus, or there's been another issue (linking, configuration, a version mismatch, etc.). When you try to get the 'UUIDs' property, you get invalid strings, which are then passed down to uuid.UUID() here:
return [uuid.UUID(str(x)) for x in uuids]
and you get the error you've seen. I would print out the value of the uuids array for debugging purposes, see what its elements actually are, and proceed accordingly.
Good luck!
Well, you have two ways to fix it.
The right way: read more about this and configure your Adafruit Bluetooth device properly.
The bad way: just comment out this if-statement check
if len(hex) != 32:
    raise ValueError('badly formed hexadecimal UUID string')
in /usr/lib64/python2.7/uuid.py. The problem comes from the length of the hex parameter: it needs to be a 128-bit UUID such as 6e400001b5a3f393e0a9e50e24dcca9e, not a 16-bit value such as 0x1800.
To see which GATT parameters you need to configure, just put a
print "this is value of hex", hex
in /usr/lib64/python2.7/uuid.py before the previously mentioned if statement.
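For reference, 16-bit Bluetooth UUIDs such as 0x1800 are shorthand for full 128-bit UUIDs built on the Bluetooth base UUID; here is a small sketch of the expansion (the helper name is made up):
import uuid

def expand_16bit_uuid(short):
    # 16-bit assigned numbers map onto the Bluetooth base UUID
    # 0000xxxx-0000-1000-8000-00805f9b34fb
    return uuid.UUID('0000%04x-0000-1000-8000-00805f9b34fb' % short)

print expand_16bit_uuid(0x1800)   # -> 00001800-0000-1000-8000-00805f9b34fb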
I'm having trouble getting my Windows 7 laptop to talk to a Newport CONEX-PP motion controller. I've tried python (Spyder/Anaconda) and a serial port streaming program called Termite and in either case the results are the same: no response from the device. The end goal is to communicate with the controller using python.
The controller connects to my computer via a USB cable they sold me that is explicitly for use with this device. The connector has a pair of lights that blink when the device receives data (red) or sends data (green). There is also a packaged GUI program that comes with the device, which seems to work fine. I haven't tried every button, but the ones I have tried have the expected result.
The documentation for accessing this device is next to non-existent. The CD in the box has one way to connect to it and the webpage linked above has a different way. The first way (CD from the box) creates a hierarchy of modules that ends in a module it does not recognize (this is a code snippet provided by Newport):
import sys
sys.path.append(r'C:\Newport\MotionControl\CONEX-PP\Bin')
import clr
clr.AddReference("Newport.CONEXPP.CommandInterface")
from CommandInterfaceConexPP import *
import System
instrument="COM5"
print 'Instrument Key=>', instrument
myPP = ConexPP()
ret = myPP.OpenInstrument(instrument)
print 'OpenInstrument => ', ret
result, response, errString = myPP.SR_Get(1)
That last line returns:
Traceback (most recent call last):
File "< ipython-input-2-5d824f156d8f >", line 2, in
result, response, errString = myPP.SR_Get(1)
TypeError: No method matches given arguments
I'm guessing this is because the various module references are screwy in some way. But I don't know, I'm relatively new to python and the only time I have used it for serial communication the example files provided by the vendor simply worked.
The second way to communicate with the controller is via the visa module (the CONEX_SMC_common module imports the visa module):
import sys
sys.path.append(r'C:\Newport\NewportPython')
class CONEX(CONEXSMC):
    def __init__(self):
        super(CONEX, self).__init__()
        device_key = 'com5'
        self.connect = self.rm.open_resource(device_key, baud_rate=57600, timeout=2000,
                                             data_bits=8, write_termination='\r\n',
                                             read_termination='\r\n')

mine = CONEX()
mine.connect.read()
That last mine.connect.read() command returns:
VisaIOError: VI_ERROR_TMO (-1073807339): Timeout expired before operation completed.
If, instead, I write to the port mine.connect.write('VE') the light on the connector flashes red as if it received some data and returns:
(4L, <StatusCode.success: 0>)
If I ask for the dictionary of the "mine" object mine.__dict__, I get:
{'connect': <'SerialInstrument'(u'ASRL5::INSTR')>,
'device_key': u'ASRL5::INSTR',
'list_of_devices': (u'ASRL5::INSTR',),
'rm': )>}
The ASRL5::INSTR resource for VISA is at least related to the controller, because when I unplug the device from the laptop it disappears and the GUI program will stop working.
Maybe there is something simple I'm missing here. I have NI VISA installed and I'm not just running with the DLL that comes from the website. Oh, I found a Github question / answer with this exact problem but the end result makes no sense, the thread is closed after hgrecco tells him to use "open_resource" which is precisely what I am using.
Results with Termite are the same, I can apparently connect to the controller and get the light to flash red, but it never responds, either through Termite or by performing the requested action.
I've tried pySerial too:
import serial
ser = serial.Serial('com5')
ser.write('VE\r\n')
ser.read()
Python just waits there forever, I assume because I haven't set a timeout limit.
So, if anyone has any experience with this particular motion controller, Newport devices or with serial port communication in general and can shed some light on this problem I'd much appreciate it. After about 7 hours on this I'm out of ideas.
After coming back at this with fresh eyes and finding this GitHub discussion, I decided to give pySerial another shot, because neither of the other methods in my question is working yet. The following code works:
import serial
ser = serial.Serial('com5',baudrate=115200,timeout=1.0,parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE, bytesize=serial.EIGHTBITS)
ser.write('1TS?\r\n')
ser.read(10)
and returns
'1TS000033\r'
The string is 9 characters long, so my arbitrarily chosen 10 character read ended up picking up one of the termination characters.
The problem is that the Python files that come with the device, or that are available on the website, are at best incomplete and shouldn't be trusted for anything. The GUI manual has the required baud rate. I used Termite to figure out the stop bit settings - or at least a combination that works.
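As a side note, reading up to the line terminator avoids guessing the response length; here is a small sketch of that, assuming pySerial 3.x:
import serial

ser = serial.Serial('com5', baudrate=115200, timeout=1.0,
                    parity=serial.PARITY_NONE,
                    stopbits=serial.STOPBITS_ONE,
                    bytesize=serial.EIGHTBITS)
ser.write('1TS?\r\n')
reply = ser.read_until(b'\r\n')   # read up to and including the terminator
print(reply.strip())              # e.g. '1TS000033'
ser.close()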
3.5 years later...
Here is a gist with a class that supports Conex-CC
It took me hours to solve this!
My device is a Conex-CC, not a PP, but it seems to be the same idea.
For me, the serial solution didn't work because there was absolutely no response from the serial port, either through the code or by direct TeraTerm access.
So I was trying to adapt your code to my device (because for the Conex-CC, even the code you were trying was not given!).
It is important to say that import clr relies on pip install pythonnet, not pip install clr, which installs something related to colors instead.
After getting the same error as you, I searched for this Pythonnet error and found this answer, which led me to the final solution:
import clr
# We assume Newport.CONEXCC.CommandInterface.dll is copied to our folder
clr.AddReference("Newport.CONEXCC.CommandInterface")
from CommandInterfaceConexCC import *
instrument="COM4"
print('Instrument Key=>', instrument)
myCC = ConexCC()
ret = myCC.OpenInstrument(instrument)
print('OpenInstrument => ', ret)
response = 0
errString = ''
result, response, errString = myCC.SR_Get(1, response, errString)
print('Positive SW Limit: result=%d,response=%.2f,errString=\'%s\''%(result,response,errString))
myCC.CloseInstrument()
And here is the result I've got:
Instrument Key=> COM4
OpenInstrument => 0
Positive SW Limit: result=0,response=25.00,errString=''
For the Conex-CC, serial connections are possible using both pyvisa
import pyvisa
rm = pyvisa.ResourceManager()
inst = rm.open_resource('ASRL6::INSTR',baud_rate=921600, write_termination='\r\n',read_termination='\r\n')
pos = inst.query('01PA?').strip()
and serial
import serial

ser = serial.Serial(port='com6', baudrate=921600, bytesize=8, parity='N', stopbits=1, xonxoff=True)
ser.write('01PA?\r\n'.encode('ascii'))   # commands are terminated with CR LF
ser.read_until(b'\r\n')
All the commands are according to the manual
I have been trying to work with the standard GPS (gps.py) module in python 2.6. This is supposed to act as a client and read GPS Data from gpsd running in Ubuntu.
According to the documentation on client design from the GPSD webpage (GPSD Client HOWTO), I should be able to use the following code (slightly modified from the example) to get the latest GPS readings (lat/long is what I am mainly interested in):
from gps import *
session = gps() # assuming gpsd running with default options on port 2947
session.stream(WATCH_ENABLE|WATCH_NEWSTYLE)
report = session.next()
print report
If I repeatedly use next(), it gives me buffered values from the bottom of the queue (from when the session was started), and not the LATEST GPS reading. Is there a way to get more recent values using this library? In a way, can I seek the stream to the latest values?
Has anyone got a code example using this library to poll the GPS and get the value I am looking for?
Here is what I am trying to do:
start the session
Wait for user to call the gps_poll() method in my code
Inside this method read the latest TPV (Time Position Velocity) report and return lat long
Go back to waiting for user to call gps_poll()
What you need to do is regularly poll session.next() - the issue here is that you're dealing with a serial interface, so you get results in the order they were received. It's up to you to maintain a current_value that holds the latest retrieved value.
If you don't poll the session object, eventually your UART FIFO will fill up and you won't get any new values anyway.
Consider using a thread for this, don't wait for the user to call gps_poll(), you should be polling and when the user wants a new value they use 'get_current_value()' which returns current_value.
Off the top of my head it could be something as simple as this:
import threading
import time
from gps import *

class GpsPoller(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.session = gps(mode=WATCH_ENABLE)
        self.current_value = None

    def get_current_value(self):
        return self.current_value

    def run(self):
        try:
            while True:
                self.current_value = self.session.next()
                time.sleep(0.2)  # tune this, you might not get values that quickly
        except StopIteration:
            pass

if __name__ == '__main__':
    gpsp = GpsPoller()
    gpsp.start()
    # gpsp now polls every .2 seconds for new data, storing it in self.current_value
    while 1:
        # In the main thread, every 5 seconds print the current value
        time.sleep(5)
        print gpsp.get_current_value()
The above answers are very inefficient and overly complex for anyone using modern versions of gpsd and needing data at only specific times, instead of streaming.
Most GPSes send their position information at least once per second. Presumably since many GPS-based applications desire real-time updates, the vast majority of gpsd client examples I've seen use the above method of watching a stream from gpsd and receiving realtime updates (more or less as often as the gps sends them).
However, if (as in the OP's case) you don't need streaming information but just need the last-reported position whenever it's requested (i.e. via user interaction or some other event), there's a much more efficient and simpler method: let gpsd cache the latest position information, and query it when needed.
The gpsd JSON protocol has a ?POLL; request, which returns the most recent GPS information that gpsd has seen. Instead of having to iterate over the backlog of gps messages, and continually read new messages to avoid full buffers, you can send a ?WATCH={"enable":true} message at the start of the gpsd session, and then query the latest position information whenever you need it with ?POLL;. The response is a single JSON object containing the most recent information that gpsd has seen from the GPS.
If you're using Python3, the easiest way I've found is to use the gpsd-py3 package available on pypi. To connect to gpsd, get the latest position information, and print the current position:
import gpsd
gpsd.connect()
packet = gpsd.get_current()
print(packet.position())
You can repeat the gpsd.get_current() call whenever you want new position information, and behind the scenes the gpsd package will execute the ?POLL; call to gpsd and return an object representing the response.
Doing this with the built-in gps module isn't terribly straightforward, but there are a number of other Python clients available, and it's also rather trivial to do with anything that can perform socket communication, including this example using telnet:
$ telnet localhost 2947
Trying ::1...
Connected to localhost.
Escape character is '^]'.
{"class":"VERSION","release":"3.16","rev":"3.16","proto_major":3,"proto_minor":11}
?WATCH={"enable":true}
{"class":"DEVICES","devices":[{"class":"DEVICE","path":"/dev/pts/10","driver":"SiRF","activated":"2018-03-02T21:14:52.687Z","flags":1,"native":1,"bps":4800,"parity":"N","stopbits":1,"cycle":1.00}]}
{"class":"WATCH","enable":true,"json":false,"nmea":false,"raw":0,"scaled":false,"timing":false,"split24":false,"pps":false}
?POLL;
{"class":"POLL","time":"2018-03-02T21:14:54.873Z","active":1,"tpv":[{"class":"TPV","device":"/dev/pts/10","mode":3,"time":"2005-06-09T14:34:53.280Z","ept":0.005,"lat":46.498332203,"lon":7.567403907,"alt":1343.165,"epx":24.829,"epy":25.326,"epv":78.615,"track":10.3788,"speed":0.091,"climb":-0.085,"eps":50.65,"epc":157.23}],"gst":[{"class":"GST","device":"/dev/pts/10","time":"1970-01-01T00:00:00.000Z","rms":0.000,"major":0.000,"minor":0.000,"orient":0.000,"lat":0.000,"lon":0.000,"alt":0.000}],"sky":[{"class":"SKY","device":"/dev/pts/10","time":"2005-06-09T14:34:53.280Z","xdop":1.66,"ydop":1.69,"vdop":3.42,"tdop":3.05,"hdop":2.40,"gdop":5.15,"pdop":4.16,"satellites":[{"PRN":23,"el":6,"az":84,"ss":0,"used":false},{"PRN":28,"el":7,"az":160,"ss":0,"used":false},{"PRN":8,"el":66,"az":189,"ss":45,"used":true},{"PRN":29,"el":13,"az":273,"ss":0,"used":false},{"PRN":10,"el":51,"az":304,"ss":29,"used":true},{"PRN":4,"el":15,"az":199,"ss":36,"used":true},{"PRN":2,"el":34,"az":241,"ss":41,"used":true},{"PRN":27,"el":71,"az":76,"ss":42,"used":true}]}]}
?POLL;
{"class":"POLL","time":"2018-03-02T21:14:58.856Z","active":1,"tpv":[{"class":"TPV","device":"/dev/pts/10","mode":3,"time":"2005-06-09T14:34:53.280Z","ept":0.005,"lat":46.498332203,"lon":7.567403907,"alt":1343.165,"epx":24.829,"epy":25.326,"epv":78.615,"track":10.3788,"speed":0.091,"climb":-0.085,"eps":50.65,"epc":157.23}],"gst":[{"class":"GST","device":"/dev/pts/10","time":"1970-01-01T00:00:00.000Z","rms":0.000,"major":0.000,"minor":0.000,"orient":0.000,"lat":0.000,"lon":0.000,"alt":0.000}],"sky":[{"class":"SKY","device":"/dev/pts/10","time":"2005-06-09T14:34:53.280Z","xdop":1.66,"ydop":1.69,"vdop":3.42,"tdop":3.05,"hdop":2.40,"gdop":5.15,"pdop":4.16,"satellites":[{"PRN":23,"el":6,"az":84,"ss":0,"used":false},{"PRN":28,"el":7,"az":160,"ss":0,"used":false},{"PRN":8,"el":66,"az":189,"ss":45,"used":true},{"PRN":29,"el":13,"az":273,"ss":0,"used":false},{"PRN":10,"el":51,"az":304,"ss":29,"used":true},{"PRN":4,"el":15,"az":199,"ss":36,"used":true},{"PRN":2,"el":34,"az":241,"ss":41,"used":true},{"PRN":27,"el":71,"az":76,"ss":42,"used":true}]}]}
Adding my two cents.
For whatever reason, my Raspberry Pi would continue to execute a thread and I'd have to hard reset the Pi.
So I've combined synthesizerpatel's answer and an answer I found on Dan Mandel's blog here.
My gps_poller class looks like this:
import os
from gps import *
from time import *
import time
import threading

class GpsPoller(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)
        self.session = gps(mode=WATCH_ENABLE)
        self.current_value = None
        self.running = True

    def get_current_value(self):
        return self.current_value

    def run(self):
        try:
            while self.running:
                self.current_value = self.session.next()
        except StopIteration:
            pass
And the code in use looks like this:
from gps_poll import *
if __name__ == '__main__':
    gpsp = GpsPoller()
    try:
        gpsp.start()
        while True:
            os.system('clear')
            report = gpsp.get_current_value()
            # print report
            try:
                if report.keys()[0] == 'epx':
                    print report['lat']
                    print report['lon']
                    time.sleep(.5)
            except (AttributeError, KeyError):
                pass
            time.sleep(0.5)
    except (KeyboardInterrupt, SystemExit):
        print "\nKilling Thread.."
        gpsp.running = False
        gpsp.join()
        print "Done.\nExiting."
You can also find the code here: Here and Here
I know it's an old thread, but just for everyone's understanding, you can also use the pyembedded Python library for this.
pip install pyembedded
from pyembedded.gps_module.gps import GPS
import time

gps = GPS(port='COM3', baud_rate=9600)

while True:
    print(gps.get_lat_long())
    time.sleep(1)
https://pypi.org/project/pyembedded/