I'm currently using Popen to send instructions to a utility (can-utils... the cansend command in particular) via the command line.
The entire function looks like this.
def _CANSend(self, register, value, readWrite='write'):
    """send a CAN frame"""
    queue = self.CANbus.queue
    cobID = hex(0x600 + self.nodeID)  # assign nodeID
    indexByteLow, indexByteHigh, indexByteHigher, indexByteHighest = _bytes(register['index'], register['objectDataType'])
    subIndex = hex(register['subindex'])
    valueByteLow, valueByteHigh, valueByteHigher, valueByteHighest = _bytes(value, register['objectDataType'])
    io = hex(COMMAND_SPECIFIER[readWrite])
    frame = ["cansend", self.formattedCANBus, "-i", cobID, io,
             indexByteLow, indexByteHigh, subIndex,
             valueByteLow, valueByteHigh, valueByteHigher, valueByteHighest, "0x00"]
    Popen(frame, stdout=PIPE)
    a = queue.get()
    queue.task_done()
    return a
I was running into some issues when trying to send frames in rapid succession (the Popen line is what actually executes the command that sends the frame), and found that the Popen line was taking somewhere on the order of 35 ms to execute... every other line took less than 2 us.
So... what might be a better way to invoke cansend (which, again, is part of the can-utils package; _CANSend is the Python function above that calls it) more rapidly?
I suspect that most of that time is due to the overhead of forking every time you run cansend. To get rid of it, you'll want an approach that doesn't have to create a new process for each send.
According to this blog post, SocketCAN is supported by Python 3.3. It should let your program create and use CAN sockets directly. That's probably the direction you'll want to go.
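For illustration, here is a minimal sketch of sending a raw CAN frame through the socket API (assuming Python 3.3+ on Linux with a can0 interface already up; the frame packing follows the kernel's struct can_frame, and the ID/payload below are hypothetical stand-ins for your SDO frame):

import socket
import struct

CAN_FRAME_FMT = "=IB3x8s"  # can_id, can_dlc, 3 pad bytes, 8 data bytes

def build_frame(can_id, data):
    """Pack an ID and up to 8 data bytes into a raw CAN frame."""
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b'\x00'))

s = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
s.bind(('can0',))  # one socket, reused for every send -- no fork per frame
s.send(build_frame(0x600 + 1, b'\x40\x00\x10\x00\x00\x00\x00\x00'))

Since the socket stays open, each send is just a syscall instead of a fork/exec of cansend.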
Related
Is it possible -- other than by using something like a .txt/dummy file -- to pass a value from one program to another?
I have a program that uses a .txt file to pass a starting value to another program. I update the value in the file in between starting the program each time I run it (ten times, essentially simultaneously). Doing this is fine, but I would like to have the 'child' program report back to the 'mother' program when it is finished, and also report back what files it found to download.
Is it possible to do this without using eleven files to do it (that's one for each instance of the 'child' to 'mother' reporting, and one file for the 'mother' to 'child')? I am talking about completely separate programs, not classes or functions or anything like that.
To operate efficiently, and not be waiting around for hours for everything to complete, I need the 'child' program to run ten times and get things done MUCH faster. Thus I run the child program ten times and give each one a separate range to check through.
Both programs run fine, but I would like to get them to run/report back and forth with each other, and hopefully not be using file 'transmission' to accomplish the task, especially on the child-to-mother side of the data transfer.
'Mother' program...currently
import os
import sys
import subprocess
import time

os.chdir('/media/')

# find highest downloaded video
Hival = open("Highest.txt", "r")
Histr = Hival.read()
Hival.close()
HiNext = str(int(Histr) + 1)

# setup download #1
NextVal = open("NextVal.txt", "w")
NextVal.write(HiNext)
NextVal.close()

# call download #1
procs = []
proc = subprocess.Popen(['python', 'test.py'])
procs.append(proc)
time.sleep(2)

# setup download #2-11
Histr2 = int(Histr) / 10000
Histr2 = Histr2 + 1
for i in range(10):
    Hiint = str(Histr2) + "0000"
    NextVal = open("NextVal.txt", "w")
    NextVal.write(Hiint)
    NextVal.close()
    proc = subprocess.Popen(['python', 'test.py'])
    procs.append(proc)
    time.sleep(2)
    Histr2 = Histr2 + 1

for proc in procs:
    proc.wait()
'Child' program
import urllib
import os
from Tkinter import *
import time

root = Tk()
root.title("Audiodownloader")
root.geometry("200x200")
app = Frame(root)
app.grid()

os.chdir('/media/')
Fileval = open('NextVal.txt', 'r')
Fileupdate = Fileval.read()
Fileval.close()

Fileupdate = int(Fileupdate)
Filect = Fileupdate / 10000
Filect2 = str(Filect) + "0009"
Filecount = int(Filect2)
while Fileupdate <= Filecount:
    root.title(Fileupdate)
    url = 'http://www.yourfavoritewebsite.com/audio/encoded/' + str(Fileupdate) + '.mp3'
    urllib.urlretrieve(url, str(Fileupdate) + '.mp3')
    statinfo = os.stat(str(Fileupdate) + '.mp3')
    if statinfo.st_size < 10000L:
        os.remove(str(Fileupdate) + '.mp3')
    time.sleep(.01)
    Fileupdate = Fileupdate + 1
    root.update_idletasks()
I'm trying to convert the original VB6 program over to Linux and make it much easier to use at the same time, hence the missing .mainloop. This was my first real attempt at anything in Python at all, hence the lack of defs or classes. I'm trying to come back and finish this up after 1.5 months of doing nothing with it, mostly due to not knowing how. In researching it a little while ago I found this is WAY over my head. I haven't ever done anything with threads/sockets/client/server interaction, so I'm completely lost here. Google anything on it and I just get brought right back here to Stack Overflow.
Yes, I want 10 copies of the program running at the same time, to save time. I could do without the GUI interface if it were possible for the program to report back to the 'mother', so the mother could print on screen the current value being searched, as well as have the child report back when it's finished and which files it downloaded successfully (versus downloaded and then erased for being too small). I would use the successful-download information to update Highest.txt for the next time the program is run.
I think this may clarify things MUCH better... that, or I don't understand the nature of using server/client interaction. :) The only reason time.sleep is in the program was to try to make sure the files could get written before the next instance of the child program ran. I didn't know what kind of timing issues I might run into, so I included those lines for safety.
This can be implemented with a simple client/server topology using the multiprocessing library. Using your mother/child terminology:
server.py
from multiprocessing.connection import Listener

# per-connection handler (the "child" side of the conversation)
def child(conn):
    while True:
        try:
            msg = conn.recv()
        except EOFError:
            return  # this child hung up; go back to accepting the next one
        # this just echoes the value back; replace with your custom logic
        conn.send(msg)

# server
def mother(address):
    serv = Listener(address)
    while True:
        client = serv.accept()
        child(client)

mother(('', 5000))
client.py
from multiprocessing.connection import Client
c = Client(('localhost', 5000))
c.send('hello')
print('Got:', c.recv())
c.send({'a': 123})
print('Got:', c.recv())
Run with
$ python server.py
$ python client.py
When you talk about using a txt file to pass information between programs, it matters what language you're using.
In my experience with Java and Python this is viable, though laborious, depending on how much information you need to pass around.
In Python, you can use the standard library to read and write the txt file, and to schedule execution you can use APScheduler.
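For instance, a minimal sketch of the scheduling side (assuming APScheduler 3.x; check_next_val is a hypothetical job that re-reads the shared file):

from apscheduler.schedulers.blocking import BlockingScheduler

def check_next_val():
    # hypothetical job: re-read the value the other program wrote
    with open('NextVal.txt') as f:
        print(f.read().strip())

sched = BlockingScheduler()
sched.add_job(check_next_val, 'interval', seconds=10)  # run every 10 seconds
sched.start()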
I'm trying to start a dbus timer from python.
At the moment I was able to launch it through this script:
import dbus
from subprocess import call
def scheduleWall(time, message):
    call(['systemd-run --on-active=' + str(time) + ' --unit=scheduled-message --description="' + message + '" wall "' + message + '"'], shell=True)
I'd like to stop using call and try StartTransientUnit instead, but I wasn't able to understand the format of the call at all! I'm rather new to D-Bus and Python.
def scheduleWall(time, message):
    try:
        bus = dbus.SystemBus()
        systemd1 = bus.get_object("org.freedesktop.systemd1", "/org/freedesktop/systemd1")
        manager = dbus.Interface(systemd1, 'org.freedesktop.systemd1.Manager')
        obj = manager.StartTransientUnit('scheduled-message.timer', 'fail', [????], [????])
    except:
        pass
Is StartTransientUnit the right method to call? How should I call it?
TL;DR: stick to systemd-run :)
I don’t think StartTransientUnit is quite the right method – you need to create two transient units, after all: the timer unit, and the service unit that it will start (which will run wall later). Perhaps you can use StartTransientUnit for the timer, but at least not for the service. You also need to set all the properties that the two units need (OnActiveSec= for the timer, ExecStart= for the service, probably some more…) – you can see how systemd-run does it by running busctl monitor org.freedesktop.systemd1 and then doing systemd-run --on-active=1s /bin/true in another terminal. (The main calls seem to be UnitNew and JobNew.)
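For what it's worth, an untested sketch of the timer half. The signature StartTransientUnit(name, mode, properties, aux) comes from systemd's D-Bus documentation, which also says aux is currently unused; over D-Bus, timer delays are microsecond properties (OnActiveUSec rather than OnActiveSec=). Treat all of this as an assumption to verify against the busctl output:

import dbus

bus = dbus.SystemBus()
systemd1 = bus.get_object('org.freedesktop.systemd1', '/org/freedesktop/systemd1')
manager = dbus.Interface(systemd1, 'org.freedesktop.systemd1.Manager')

# properties are (name, value) pairs; OnActiveUSec is assumed to be the
# microsecond counterpart of OnActiveSec= in systemd's D-Bus API
props = [
    ('Description', dbus.String('scheduled message')),
    ('OnActiveUSec', dbus.UInt64(30 * 10**6)),  # fire 30 s after activation
]
# 'fail' = error out if a unit with this name already exists;
# the aux-units argument is documented as unused, so pass []
manager.StartTransientUnit('scheduled-message.timer', 'fail', props, [])

Even then, a matching scheduled-message.service has to come from somewhere, which is exactly why sticking with systemd-run is attractive.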
I’ll admit, to me this seems rather complicated, and if systemd-run already exists to do the job for you, why not use it? The only change I would make is to eliminate the shell part and pass an array of arguments instead of a single space-separated string, with something like this (untested):
subprocess.run(['systemd-run', '--on-active', str(time), '--unit', 'scheduled-message', '--description', message, 'wall', message])
I'm looking for a way to determine that a script I wrote, packed by PyInstaller, is the only copy of itself running - so that it can quit if it finds itself open already.
I'd also like to implement an argument to kill all currently running versions of the .exe. Killing them one by one by simple list of PIDs associated with the .exe isn't an option since I could accidentally kill my own process before finishing.
It would be best if I could use only Win32 APIs, as this script is sometimes called by services, which makes many subprocess.Popen calls unfriendly; I don't want to have to go through UAC spoofing. However, sometimes the .exe is invoked by the Windows Scheduler or by user-land programs.
My current version of finding processes uses win32pdh. I'm not exactly sure where to attribute this, though it's very close to the first example from here: http://www.programcreek.com/python/example/51184/win32pdh.OpenQuery
def get_win_processes():
    win32pdh.EnumObjects(None, None, win32pdh.PERF_DETAIL_WIZARD)
    junk, instances = win32pdh.EnumObjectItems(None, None, 'Process', win32pdh.PERF_DETAIL_WIZARD)
    proc_dict = {}
    for instance in instances:
        if proc_dict.has_key(instance):
            proc_dict[instance] = proc_dict[instance] + 1
        else:
            proc_dict[instance] = 0
    proc_ids = []
    for instance, max_instances in proc_dict.items():
        for inum in xrange(max_instances + 1):
            hq = win32pdh.OpenQuery()  # initializes the query handle
            try:
                path = win32pdh.MakeCounterPath((None, 'Process', instance, None, inum, 'ID Process'))
                counter_handle = win32pdh.AddCounter(hq, path)  # convert counter path to counter handle
                try:
                    win32pdh.CollectQueryData(hq)  # collects data for the counter
                    type, val = win32pdh.GetFormattedCounterValue(counter_handle, win32pdh.PDH_FMT_LONG)
                    proc_ids.append((instance, val))
                except win32pdh.error, e:
                    pass
                win32pdh.RemoveCounter(counter_handle)
            except win32pdh.error, e:
                pass
            win32pdh.CloseQuery(hq)
    return proc_ids
However, this returns two processes per copy: one is the guardian process for PyInstaller, the other is the actual instance of the program. Furthermore, it doesn't indicate which is the currently-running guardian and which the child.
Example output when exe is 'wcdo.exe' and there are two copies running:
(u'wcdo', 11700)
(u'wcdo', 8748)
(u'wcdo', 4152)
(u'wcdo', 9308)
Thanks!
You could query wmic and check how the processes are related...
C:\>wmic process where name="webserver2.exe" get processid,parentprocessid,commandline
CommandLine                 ParentProcessId  ProcessId
webserver2.exe --scheduled  3136             2212
webserver2.exe --scheduled  2212             6004
Here:
3136 is cmd.exe
2212 the 'pyInstaller wrapper' (because it appears both as a process and as a parent)
6004 the application itself
Using PDH seems like overkill; it is slow and quite inflexible for identifying processes on Windows.
Calling 'wmic' through subprocess and parsing the output is done in a few lines, as sketched below.
Additionally there is a format flag that controls how the wmic output is presented (csv, xml, ...).
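An untested sketch of that idea (the CSV column order — Node, CommandLine, ParentProcessId, ProcessId — assumes wmic sorts the requested properties alphabetically, so verify it on your machine):

import subprocess

def get_wmic_processes(exe_name):
    out = subprocess.check_output(
        ['wmic', 'process', 'where', 'name="%s"' % exe_name,
         'get', 'processid,parentprocessid,commandline', '/format:csv'])
    procs = []
    for line in out.splitlines():
        parts = line.strip().split(',')
        # skip the header and blank lines; keep (parent_pid, pid) pairs
        if len(parts) >= 4 and parts[-1].isdigit():
            procs.append((int(parts[-2]), int(parts[-1])))
    return procs

for ppid, pid in get_wmic_processes('wcdo.exe'):
    print pid, ppid

A PID that shows up in the parent column of another row is the PyInstaller wrapper; the other is the application itself.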
Btw. you could try to create your exe with py2exe, which does not use a wrapper application.
Not sure if it is relevant to identify how the application was started, but you could add a special command-line argument to your Windows Scheduler task so it runs wcdo.exe --scheduled.
I am writing a little Python script that parses the input from a QR reader (which is seen as a keyboard by the system).
At the moment I am using raw_input() but this function waits for an EOF/end-of-line symbol in order to submit the received string to the program.
I am wondering if there is a way to continuously parse the input string and not just in chunks limited by a line end.
In practice:
- is there a way in Python to asynchronously and continuously parse console input?
- is there a way to change raw_input() (or an equivalent function) to look for another character as the signal to submit the string read into the program?
It seems like you're generally trying to solve two problems:
Read input in chunks
Parse that input asynchronously
For the first part, it will vary greatly based on the specifics of the input function you're calling, but for standard input you could use something like
sys.stdin.read(1)
As for parsing asynchronously, there are a number of approaches you could take. A plain Python read loop is synchronous, so you will have to introduce some form of concurrency. Manually spawning a worker process with the subprocess library is one option. You could also use Redis or some lightweight job queue to push input chunks onto, and have them read and processed by another background script. Finally, gevent is a very popular coroutine-based library for spawning asynchronous tasks. Using gevent, this whole setup would look something like this:
import sys
import gevent

class QRLoader(object):
    def __init__(self):
        self.data = []

    def add_data(self, data):
        self.data.append(data)
        # if self.data constitutes a full QR code,
        # hand it off to a greenlet for processing
        gevent.spawn(parse_async)

def parse_async():
    # do something with qr_loader.data
    pass

qr_loader = QRLoader()

while True:
    data = sys.stdin.read(1)
    if data:
        qr_loader.add_data(data)
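One caveat with this sketch: a plain sys.stdin.read(1) blocks the whole process, so the spawned greenlet only gets to run between reads. If that matters, gevent's monkey patching or its file wrapper (gevent.fileobject.FileObject) can make the stdin read cooperative — treat that as a pointer to check against the gevent docs rather than a guarantee.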
I am trying to make a program (in Python) that, as I write, writes to a file and opens in a certain window that I have already created. I have looked all around for a viable solution, but it would seem that multi-threading may be the only option.
I was hoping that when the autorun option is "activated" it will:
while 1:
    wbuffer = textview.get_buffer()
    text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
    openfile = open(filename, "w")
    openfile.write(text)
    openfile.close()
I am using PyGTK and have a textview window, but when I get the buffer it sits forever.
I am thinking that I need to multi-thread it and queue it, so one thread will be writing the buffer while it is being queued.
My source is here. (I think the statement is at line 177.)
Any help is much appreciated. :)
and here is the function:
def autorun(save):
    filename = None
    chooser = gtk.FileChooserDialog("Save File...", None,
                                    gtk.FILE_CHOOSER_ACTION_SAVE,
                                    (gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL,
                                     gtk.STOCK_SAVE, gtk.RESPONSE_OK))
    response = chooser.run()
    if response == gtk.RESPONSE_OK:
        filename = chooser.get_filename()
    filen = filename
    addr = filename
    addressbar.set_text("file://" + filename)
    web.open(addr)
    chooser.destroy()
    wbuffer = textview.get_buffer()
    while 1:
        text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
        time.sleep(1)
        openfile = open(filename, "w")
        openfile.write(text)
        openfile.close()
Though it's not easy to see exactly what your GTK code not included here is doing, the main problem is that control needs to be returned to the GTK main loop; otherwise the program will hang.
So if you have a long-running task (like the eternal loop here), you need to thread it. The catch is that the thread must exit cleanly when the main program quits, so you'll have to redesign a bit around that. Also, threading with GTK needs to be initialized correctly (look here).
However, I don't think you need threading. Instead, you could connect the changed signal of your TextBuffer to a function that writes the buffer to the target file (if the user has put the program in autorun mode). A problem with this is if the buffer gets large or the program slow, in which case you should consider threading the callback of the changed signal. Either way, you need to make sure you don't get into a situation where save requests pile up because the user types faster than the computer saves. Takes some design thought.
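A minimal sketch of the signal approach (assuming PyGTK, and that textview and filename already exist in scope as they do in your code):

def on_buffer_changed(wbuffer):
    # called by GTK on every edit; runs in the main loop, so keep it quick
    text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
    openfile = open(filename, "w")
    openfile.write(text)
    openfile.close()

textview.get_buffer().connect('changed', on_buffer_changed)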
So, finally, the easier solution: you may not want to save the buffer on every keypress. In that case, you could run the save function (which could look like your first code block without the loop) on a timeout instead. Just don't make the timeout too short.
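For example (a sketch assuming the same textview and filename; gobject.timeout_add schedules the callback on the GTK main loop):

import gobject

def periodic_save():
    wbuffer = textview.get_buffer()
    text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
    openfile = open(filename, "w")
    openfile.write(text)
    openfile.close()
    return True  # returning True keeps the timeout running

gobject.timeout_add(1000, periodic_save)  # save at most once a second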