I have a module written in SystemVerilog that dumps the contents of an SRAM into a file. I would like to read from this file and use the data in a separate program written in Python, but in real time. I don't have much control over the writing from the Verilog side. Is it possible to somehow coordinate the two sets of reads and writes? Currently, when the Python side reads from the file, there is a (seemingly) random number inserted at the start of every line, and that throws off the parsing. I assume these prefixes only appear when the two programs are reading and writing at the same time, because if I run them both very slowly it works fine.
from tkinter import *   # on Python 2 this would be "from Tkinter import *"

# WIDTH and HEIGHT are defined elsewhere in my code
window = Tk()
canvas = Canvas(window, width=WIDTH, height=HEIGHT, bg="#000000")
canvas.pack()
img = PhotoImage(width=WIDTH, height=HEIGHT)
canvas.create_image((WIDTH/2, HEIGHT/2), image=img, state="normal")

def redraw():
    fp = open('test_data.txt', 'r')
    lines = fp.readlines()
    for i in range(len(lines)):
        # do stuff
        pass
    fp.close()
    window.after(35, redraw)

window.after(35, redraw)
mainloop()
This is the code doing the reading.
Any suggestions are appreciated.
Reading and writing a file from multiple processes is likely to be unpredictable.
If you are running on a Unix-like system, you could use mkfifo to make a file-like object which you can write to and read from simultaneously and the data will stay in the correct order.
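For the Unix case, here is a minimal sketch of the Python (reader) side, assuming the simulator writes lines into the FIFO; the FIFO path and the parse() helper below are placeholders:

import os, errno

fifo_path = 'sram_dump.fifo'        # placeholder name for the FIFO
try:
    os.mkfifo(fifo_path)            # create it if it does not exist yet (Unix only)
except OSError as e:
    if e.errno != errno.EEXIST:
        raise
# opening a FIFO for reading blocks until the writer (the simulation) opens its end
with open(fifo_path, 'r') as fifo:
    for line in fifo:               # lines arrive complete and in order
        parse(line)                 # placeholder for your own parsing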
On Windows you need a named pipe, which you could create from Python and then connect to by opening it as a normal file in SystemVerilog (I believe!):
http://docs.activestate.com/activepython/2.4/pywin32/win32pipe.html
I would suggest using the VPI to access the contents of the SRAM directly from the simulation. This also opens the possibility of dynamically adjusting your stimulus (e.g. sending data until a FIFO is full) rather than relying on files for input/output.
Since you're using Python you could look into Cocotb, an open-source Python cosimulation framework. Basically you can use the Python 'dot' notation to traverse the design hierarchy and pull out values:
# Pull out the values from the simulation
for index in range(len(dut.path.through.hierarchy.ram)):
    val = dut.path.through.hierarchy.ram[index].value.integer
    # do stuff
I created a quick, simplistic example on EDA Playground: http://www.edaplayground.com/s/57/565
Disclaimer: I'm one of the Cocotb developers.
You can use a pipe. In this example a Windows command-line program (netstat) writes to the pipe, and the script then prints the output it reads from the same pipe:
import subprocess

p = subprocess.Popen("netstat", shell=False, stdout=subprocess.PIPE)
while True:
    out = p.stdout.readline()
    if out == b'' and p.poll() is not None:
        break
    if out != b'':
        print(out.decode('ascii', 'backslashreplace'))
    else:
        break
Related
I would like to reduce the size of some PDF files. There are many ways to do so, but most of them don't work for my purposes. For example, pdftk, cpdf, and pdfoptsize all fail to reduce the sizes of my files. Ghostscript can reduce the file size, but only at an unacceptable cost in terms of legibility of the figures. There seem to be some great APIs for size reduction, but I don't want to pay. So I would like to automate the "Reduce File Size" option in Acrobat, which works well. Is there a way to do this in Python or from the command line?
I am running Windows 10 with Acrobat DC; I also have access to Acrobat X. I can set up a "Batch Processing" job in Acrobat, but even then, I would need to run it from Python or from the command line.
I can use the Acrobat API from Python, but I don't see how to use it to run the "Reduce File Size" command. I can set the PDSaveCollectGarbage flag, but it doesn't help. Here is a minimal example of a Python script that opens and resaves a file -- it illustrates the extent of my knowledge in this area:
import os
from win32com.client.dynamic import Dispatch

src = os.path.abspath('original.pdf')

PDSaveFull = 0x01
PDSaveCollectGarbage = 0x20
SAVEFLAG = PDSaveFull | PDSaveCollectGarbage

app = None
try:
    app = Dispatch("AcroExch.AVDoc")
    if app.Open(src, src):
        pddoc = app.GetPDDoc()
        pddoc.Save(SAVEFLAG, os.path.abspath('./new.pdf'))
except Exception as e:
    print(str(e))
finally:
    if app is not None:
        app.Close(-1)
I have a Python script that takes a long time to run.
I placed print-outs throughout the script to observe its progress.
As this script runs other programs, some of which print many messages, it is not feasible to print directly to the screen.
Therefore, I am using a report file:
f_report = open(os.path.join("//shared_directory/projects/work_area/", 'report.txt'), 'w')
To which I print my messages:
f_report.write(" "+current_image+"\n")
However, when I look at the file while the script is running, I do not see the messages. They appear only when the program finishes and closes the file, making my approach useless for monitoring on-going progress.
What should I do to make Python write the messages to the report file in real time?
Many thanks.
You should use the flush() function to write to the file immediately:
f_report.write(" "+current_image+"\n")
f_report.flush()
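If the report lives on a network share (as the path above suggests) and you want to be extra sure the data actually reaches the file, you can also follow flush() with os.fsync(); a minimal sketch using the same f_report and current_image from the question:

import os

f_report.write(" " + current_image + "\n")
f_report.flush()                 # push Python's internal buffer out to the OS
os.fsync(f_report.fileno())      # ask the OS to commit the data to storage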
try this:
newbuffer = 0
f_report = open(os.path.join("//shared_directory/projects/work_area/", 'report.txt'), 'w', newbuffer)
It sets up a zero-size buffer, which pushes the OS to write the content to the file "immediately". Different OSes may behave differently, but in general the content will be flushed out right away. (Note that in Python 3, unbuffered mode is only allowed for binary files; for a text file, pass buffering=1 to get line buffering instead.)
I am making a game in Python 2.7 for fun and am trying to make a map to go along with it. I am using file I/O to read and write the map, and I also have Notepad++ set to silent update; however, I can only see the changes once my program has fully run, and I want to view the file as it is updated.
I have this code which I am testing with:
from time import sleep

map = open('C:\Users\Ryan\Desktop\Codes\Python RPG\Maps\map.txt', 'r+')
map.truncate()
print "file deleted"
sleep(1)

worldMap = open('C:\Users\Ryan\Desktop\Codes\Python RPG\Maps\worldMap.txt', 'r')
for line in worldMap:
    map.write(line)
print "file updated"

worldMap.close()
map.close()
Any help is greatly appreciated :)
By default Python uses buffered I/O. This means that written data is stored in memory before actually written to the file. Calling file's flush method causes the data to be written to the file.
I have a data acquisition system that produces ASCII data. The data is acquired over USB with serial communication protocol (virtual serial, as the manufacturer of the box claims). I have a Python program/script that uses PySerial with a PySide GUI that plots the acquired data and saves it to HDF5 files. I'm having a very weird problem and I don't know how to tackle it. I wish you guys could help me and provide advice on how you would debug this problem.
How the problem shows up: if I use software like Eltima Data Logger, the acquired data looks fine. However, if I use my software (with PySerial), some chunks of the data seem to be missing. What's weird is that the missing chunks don't fit the way I read: I read line by line, yet what goes missing is 100-byte or 64-byte chunks that sometimes include newlines! I know what's missing because the device buffers the data on an SD card before sending it to the computer. This made me believe for a long time that the hardware had a problem, until I used Eltima, which showed that the data is being acquired fine.
The following is the configuration of Eltima:
My configuration:
This whole thing is running in a QThread.
The following are the methods I use in my code (with some minor polishing to make them reusable here):
self.obj = serial.Serial()
self.obj.port = instrumentName
self.obj.baudrate = 115200
self.obj.bytesize = serial.EIGHTBITS
self.obj.parity = serial.PARITY_ODD
self.obj.stopbits = serial.STOPBITS_ONE
self.obj.timeout = 1
self.obj.xonxoff = False
self.obj.rtscts = False
self.obj.dsrdtr = False
self.obj.writeTimeout = 2
self.obj.open()
The algorithm I use for reading is this: I have a loop that looks for a specific header line, and once it is found, it keeps pushing lines into a buffer until a specific end line is found; that batch of data is then processed. Following is my code:
try:
    # keep reading until a header line is found that indicates the beginning of a batch of data
    while not self.stopped:
        self.line = self.readLine()
        self.writeDumpFileLine(self.line)
        if self.line == DataBatch.d_startString:
            print("Acquiring batch, line by line...")
            self.dataStrQueue.append(self.line)
            break

    # after the header line, keep reading until a specific string is found
    while not self.stopped:
        self.line = self.readLine()
        self.writeDumpFileLine(self.line)
        self.dataStrQueue.append(self.line)
        if self.line == DataBatch.d_endString:
            break

except Exception as e1:
    print("Exception while trying to read. Error: " + str(e1))
The self.writeDumpFileLine() method takes the line from the device and dumps it into a file directly, before any processing, for debugging purposes. These dump files have confirmed the problem of missing chunks.
The implementation of self.readLine() is quite simple:
def readLine(self):
    lineData = decodeString(self.obj.readline())
    lineData = lineData.replace(acquisitionEndlineChar, "")
    return lineData
I would like to point out that I also have an implementation that pulls thousands of lines and parses them based on inWaiting(), and that method has the same problem!
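For illustration, here is a simplified, standalone sketch of that style of reading (the port name, framing, and parsing below are placeholders, not my exact code):

import serial

ser = serial.Serial('COM3', 115200, timeout=1)   # port name is a placeholder
buffered = b''
while True:                                      # loop until you decide to stop
    n = ser.inWaiting()                          # bytes already received by the driver
    if n:
        buffered += ser.read(n)                  # pull the whole buffered chunk at once
        lines = buffered.split(b'\n')
        buffered = lines.pop()                   # keep the trailing partial line for later
        for raw in lines:
            print(raw.decode('ascii', 'replace'))    # stand-in for the real parsing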
Now I'm starting to wonder: Is it PySerial? What else could be causing this problem?
Thank you so much for any efforts. If you require any additional information, please ask!
UPDATE:
Actually I have just confirmed that the problem can be reproduced by getting the system to lag a little bit. I use PyCharm to program this software, and while the program is running, if I press Ctrl+S to save, the GUI of PyCharm freezes a little bit (and hence its terminal). Repeating this many times causes the problem in a reproducible manner!!!!
Despite my obviously beginner-level Python skills, I've got a script that pulls a line of data from a 2,000-row CSV file, reads key parameters, outputs a buffer CSV file organized as an N-by-2 rectangle, and uses the subprocess module to call the external program POVCALLC.EXE, which takes a CSV file organized that way as input. The relevant portion of the code is shown below. I THINK that subprocess or one of its methods should allow me to interact with the external program, but I am not quite sure how, or indeed whether this is the module I need.
In particular, when POVCALLC.EXE starts, it first asks for the input file, which in this case is buffer.csv. It then asks for several additional parameters, including the name of an output file, which come from outside the snippet below. It then starts computing results and asks for further user input, including several carriage returns. Obviously, I would prefer to automate this interaction for the 2,000 rows in the original CSV.
Am I on the right track with subprocess, or should I be looking elsewhere to automate this interaction with the external executable?
Many thanks in advance!
# Begin inner loop to fetch Lorenz curve data for each survey
for i in range(int(L_points_number)):
    index = 3 * i
    line = []
    P = L_points[index]
    line.append(P)
    L = L_points[index + 1]
    line.append(L)
    with open('buffer.csv', 'a', newline='') as buffer:
        writer = csv.writer(buffer, delimiter=',')
        P = 1
        line.append(P)
        L = 1
        line.append(L)
        writer.writerow(line)
    subprocess.call('povcallc.exe')
    # TODO: CALL povcallc and compute results
    # TODO: USE Regex to interpret results and append them to
    #       output file
If your program expects these arguments on the standard input (e.g. after running POVCALLC you type csv filenames into the console), you could use subprocess.Popen() [see https://docs.python.org/3/library/subprocess.html#subprocess.Popen ] with stdin redirection (stdin=PIPE), and use the returned object to send data to stdin.
It would look something like this:
my_proc = subprocess.Popen('povcallc.exe', stdin=subprocess.PIPE,
                           universal_newlines=True)
my_proc.communicate(input="my_filename_which_is_expected_by_the_program.csv\n")
You can also use the tuple returned by communicate() to check the program's stdout and stderr (see the link to the docs for more).
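For example, here is a sketch that also captures the output and feeds the later prompts; the answers sent to stdin (the output file name and the blank lines standing in for the carriage returns) are placeholders you would adapt to the program's actual prompts:

import subprocess

proc = subprocess.Popen('povcallc.exe',
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        universal_newlines=True)
# each "\n" acts like pressing Enter at one of the program's prompts
answers = "buffer.csv\noutput.csv\n\n\n"
out, err = proc.communicate(input=answers)
print(out)   # the captured console output, ready for your regex post-processing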