I am using Mac OS X 10.9 and Python 2.7.
I made an app using py2app which basically checks the save time of a file continuously:
it monitors any changes made to the file in a while loop, with a small sleep in each iteration.
The process should never stop, but it exits with error 32 (Broken pipe) after 15-20 minutes.
How can I resolve this?
try:
    while True:
        app_log.debug("while true")
        time.sleep(5)
        configProp.read(propfile)
        fileNameList = configProp.sections()
        if len(fileNameList) != 0:
            app_log.debug("fileNameList is not zero")
            for i in range(0, len(fileNameList)):
                tempnameinfile = configProp.options(fileNameList[i])
                openTimeLive = configProp.get(fileNameList[i], "openTimeLive")
                openTimeLive = float(openTimeLive)
                openTime = float(openTime)
                configureTime = 3600 * float(configureTime)
                monitorTime = float(openTimeLive + configureTime)
                if monitorTime > time.time():
                    lastSavedTime = os.path.getmtime(str(tempname))
                    app_log.debug(lastSavedTime)
                    aa = abs((float(openTime)) - (float(lastSavedTime)))
                    if abs(aa) > 1:
                        app_log.debug("file modified")
                        t = ThreadClass(fileNameList[i])
                        # t.setDaemon(True)
                        t.start()
                        time.sleep(5)
                        configProp.set(fileNameList[i], str(tempnameinfile[0]), lastSavedTime)
                        with open(propfile, 'wb') as propFile:
                            configProp.write(propFile)
                        app_log.debug("completed")
except Exception as e:
    app_log.error(e)
    print e
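For context, stripped down to its essence the polling idea looks like this (a minimal sketch with illustrative names; the real app reads the file list from a config file as above):
import logging
import os
import time

logging.basicConfig(filename='watcher.log', level=logging.DEBUG)

def watch(path, interval=5):
    """Poll one file's modification time and log whenever it changes."""
    last_mtime = os.path.getmtime(path)
    while True:
        time.sleep(interval)
        mtime = os.path.getmtime(path)
        if abs(mtime - last_mtime) > 1:  # saved since the last check
            logging.debug("file modified: %s", path)
            last_mtime = mtime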
I searched a lot of different questions and answers here, but I did not find a general approach for:
reading from stdout and stderr whatever is available - up to the last byte currently available (even if it is not a \n)
depending upon the information read, writing something to stdin
the command line tool will react to this stdin input and write (much later) something/nothing
starting again from the beginning - or leaving the loop if the process has finished, capturing its return code
Most examples findable here write only ONCE to stdin and read only ONCE (before/afterwards) from stdout and/or stderr.
My intention is to "weave" reading from stdout and/or stderr and writing to stdin!
Here is an example:
starting a command line tool (finally with parameters) - e. g. python3.exe
reading always from the stdout and stderr
e. g. read everything and after reading >>> from stdout
write print('Hello World.')\n
e. g. read everything (Hello World.\n) and after reading >>> from stdout
write x = [6, 0]\n
e. g. read everything and after reading >>> from stdout
write y = x[0] / x[1]\n
e. g. read everything ( ... ZeroDivisionError: division by zero on stdout/stderr)
...
I tried to solve it with this example found on the internet (after other failed attempts):
# Example #27
# from https://www.programcreek.com/python/example/85342/asyncio.create_subprocess_shell
# Source project: Python-Journey-from-Novice-to-Expert, author: PacktPublishing,
# file: 07_processes.py, license: MIT
import asyncio
import sys

async def read_from_pipe(pipe, buf, timeout_sec):
    while True:
        try:
            pipe_byte = await asyncio.wait_for(pipe.read(1), timeout_sec)
        except asyncio.TimeoutError:
            break  # nothing more currently available on that pipe
        else:
            if len(pipe_byte) == 1:
                buf.append(pipe_byte[0])
            else:
                pipe_byte = b'\n'  # in case of end of file: fake end of line
            if pipe_byte == b'\n':
                return len(buf)

async def run_script(version):
    process = await asyncio.create_subprocess_shell(
        r'C:\Programs\Python\Python38-32\python.exe',
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
        stdin=asyncio.subprocess.PIPE,
    )
    if version == 0:
        # Write a simple Python script to the interpreter
        process.stdin.write(b'\n'.join((
            b'import math',
            b'x = 2 ** 8',
            b'y = math.sqrt(x)',
            b'z = math.sqrt(y)',
            b'print("x: %d" % x)',
            b'print("y: %d" % y)',
            b'print("z: %d" % z)',
            b'for i in range(int(z)):',
            b'    print("i: %d" % i)',
        )))
        # Make sure the stdin is flushed asynchronously
        await process.stdin.drain()
        # And send the end of file so the Python interpreter will
        # start processing the input. Without this the process will
        # stall forever.
        process.stdin.write_eof()
        # Fetch the lines from the stdout asynchronously
        async for out in process.stdout:
            # Decode the output from bytes and strip the whitespace
            # (newline) at the right
            print(out.decode('utf-8').rstrip())
        # Wait for the process to exit
        await process.wait()
    elif version == 1:
        cmds = [b'import math',
                b'x = 2 ** 8',
                b'y = math.sqrt(x)',
                b'z = math.sqrt(y)',
                # b'q = z / 0',
                b'print("x: %d" % x)',
                b'print("y: %d" % y)',
                b'print("z: %d" % z)',
                b'for i in range(int(z)):',
                b'    print("i: %d" % i)',
                b'exit(0)',
                ]
        idx = 0
        while True:
            stdout_buf = bytearray(b'')
            out_read = await read_from_pipe(process.stdout, stdout_buf, 0.5)
            if out_read:
                print(f'stdout[{out_read}]: {stdout_buf.decode("ascii")}\n')
            stderr_buf = bytearray(b'')
            err_read = await read_from_pipe(process.stderr, stderr_buf, 0.5)
            if err_read:
                print(f'stderr[{err_read}]: {stderr_buf.decode("ascii")}\n')
            if idx < len(cmds):
                current_cmd = cmds[idx].decode('ascii')
                print(f'writing command at index {idx}: "{current_cmd}"')
                process.stdin.write(cmds[idx])
                process.stdin.write(b'\n')
                await process.stdin.drain()
                process.stdin.write_eof()  # tried with/without this line; afterwards the program hangs
                idx += 1
            else:
                break
        await process.wait()

if sys.platform == "win32":
    codepage = 'cp437'
    loop = asyncio.ProactorEventLoop()  # for subprocess pipes on Windows
    asyncio.set_event_loop(loop)
else:
    codepage = 'utf-8'
    loop = asyncio.get_event_loop()

version = 1  # version = 0 runs, but does not alternate between reading stdout/stderr and writing to stdin!
returncode = loop.run_until_complete(run_script(version))
print(f'done with return code = {returncode}.')
Currently it doesn't read anything from stdout or stderr.
And after the entries in cmds have been written, the program hangs too.
Ultimately it should run under Linux.
How do I write the program correctly?
Is python3.exe a "too special" command line tool, and is it the root cause of these problems?
Hint:
This example and the solution do not have to be performant at all. The intended command line tool to control is quite slow (overall execution 20 s to 20 min). Multithreading and multiprocessing are not really required, unless needed for a (simplified) working solution.
I found out that python3.exe is a bit too special to control. It is better to use e. g. cmd /S on Windows (I read /bin/bash for Linux) - this works now:
# Example #27
# from https://www.programcreek.com/python/example/85342/asyncio.create_subprocess_shell
# Source project: Python-Journey-from-Novice-to-Expert, author: PacktPublishing,
# file: 07_processes.py, license: MIT
import asyncio
import sys

async def read_from_pipe(pipe, buf, timeout_sec):
    while True:
        try:
            pipe_byte = await asyncio.wait_for(pipe.read(1), timeout_sec)
        except asyncio.TimeoutError:
            return len(buf)  # no more bytes currently available on that pipe
        else:
            if len(pipe_byte) == 1:
                buf.append(pipe_byte[0])
            else:
                return len(buf)  # end of pipe reached

async def run_script():
    process = await asyncio.create_subprocess_shell(
        'cmd /S',
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
        stdin=asyncio.subprocess.PIPE,
    )
    cmds = [b'dir P*C*S*.*',
            b'echo %temp%',
            b'exit']
    idx = 0
    while True:
        stdout_buf = bytearray(b'')
        out_read = await read_from_pipe(process.stdout, stdout_buf, 0.5)
        if out_read:
            print(f'stdout[{out_read}]: {stdout_buf.decode("ascii")}\n')
        stderr_buf = bytearray(b'')
        err_read = await read_from_pipe(process.stderr, stderr_buf, 0.5)
        if err_read:
            print(f'stderr[{err_read}]: {stderr_buf.decode("ascii")}\n')
        if idx < len(cmds):
            current_cmd = cmds[idx].decode('ascii')
            print(f'writing command at index {idx}: "{current_cmd}"')
            process.stdin.write(cmds[idx])
            process.stdin.write(b'\n')
            await process.stdin.drain()
            idx += 1
        if process.returncode is not None:
            print(f'return code = {process.returncode}')
            return process.returncode

if sys.platform == "win32":
    codepage = 'cp437'
    loop = asyncio.ProactorEventLoop()  # for subprocess pipes on Windows
    asyncio.set_event_loop(loop)
else:
    codepage = 'utf-8'
    loop = asyncio.get_event_loop()

returncode = loop.run_until_complete(run_script())
print(f'done with return code = {returncode}.')
The output on my computer is:
PS C:\Git\ownPythonRepository\Python\CliTap> c:; cd 'c:\Git\ownPythonRepository\Python\CliTap'; & 'C:\Programs\Python\Python38-32\python.exe' 'c:\Users\BitLauncher\.vscode\extensions\ms-python.python-2022.14.0\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher' '63136' '--' 'c:\Git\ownPythonRepository\Python\CliTap\PythonConsoleSandbox.py'
stdout[137]: Microsoft Windows [Version 10.0.11111.2222]
(c) Microsoft Corporation. All rights reserved.
C:\Git\ownPythonRepository\Python\CliTap>
stdout[340]: dir P*C*S*.*
Volume in drive C is What
Volume Serial Number is 9999-9999
Directory of C:\Git\ownPythonRepository\Python\CliTap
2022-09-26 23:52 2,365 PythonConsoleSandbox.py
1 File(s) 2,365 bytes
0 Dir(s) 99,999,999,999 bytes free
C:\Git\ownPythonRepository\Python\CliTap>
writing command at index 1: "echo %temp%"
stdout[93]: echo %temp%
C:\Users\BitLau~1\AppData\Local\Temp
C:\Git\ownPythonRepository\Python\CliTap>
writing command at index 2: "exit"
stdout[5]: exit
return code = 1
done with return code = 1.
PS C:\Git\ownPythonRepository\Python\CliTap>
That's it - now I will be able to write specific commands to stdin depending upon stdout and/or stderr... great. Later I can improve it with multithreading :-) if needed.
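Since the goal is ultimately Linux, the same pattern should carry over with /bin/bash as the controlled shell. A minimal self-contained sketch (the shell commands are illustrative, and the 0.5 s timeout is an assumption carried over from above, not a tuned value):
import asyncio

async def read_from_pipe(pipe, buf, timeout_sec):
    # Same helper as above: read single bytes until timeout or EOF.
    while True:
        try:
            pipe_byte = await asyncio.wait_for(pipe.read(1), timeout_sec)
        except asyncio.TimeoutError:
            return len(buf)
        if len(pipe_byte) == 1:
            buf.append(pipe_byte[0])
        else:
            return len(buf)

async def run_script():
    process = await asyncio.create_subprocess_shell(
        '/bin/bash',
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
        stdin=asyncio.subprocess.PIPE,
    )
    cmds = [b'ls *.py', b'echo $HOME', b'exit']
    idx = 0
    while True:
        # Drain whatever is currently available on both pipes.
        for name, pipe in (('stdout', process.stdout), ('stderr', process.stderr)):
            buf = bytearray()
            n = await read_from_pipe(pipe, buf, 0.5)
            if n:
                print(f'{name}[{n}]: {buf.decode("utf-8")}')
        if idx < len(cmds):
            process.stdin.write(cmds[idx] + b'\n')
            await process.stdin.drain()
            idx += 1
        if process.returncode is not None:
            return process.returncode

print(asyncio.run(run_script()))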
I've researched many pool.map questions on SO and still can't seem to find anything that hints at my issue.
I have if __name__ == '__main__' in every .py file. I have freeze_support() in each .py that contains import multiprocessing, yet I am still at a loss for what is happening. I've moved the freeze_support() around in my code, with the same unsuccessful results.
Script A calls Script B, and Script B calls Script C (where the multiprocessing happens). Locally this scenario works perfectly, but when I load it onto a Windows Server 2008 machine, strange things start happening.
On the server I can see the first iterable printed to the interpreter, but then it jumps back to Script B and keeps processing. There are 51 other items in the list for Script C.
Script B Code:
if not arcpy.Exists(MergedDataFC):
    ScriptC.intersect_main(input1, input2)
if not arcpy.Exists(MergedDataSHP):
    shpList = arcpy.ListFields("*.shp")  # output of multiprocess
    # Merge all shapefiles into single shapefile
    # Being executed before the multiprocess finishes all 52 items
Script C Code:
import multiprocessing as mp
import sys
from functools import partial
from time import strftime, localtime

import arcpy
# `log` is the project's own logging helper, defined elsewhere

def intersect_main(input1, input2):
    try:
        mp.freeze_support()
        # Create a list of states for input1 polygons
        log.log("Creating Polygon State list...")
        fldList = arcpy.ListFields(input1)
        flds = [fld.name for fld in fldList]
        idList = []
        with arcpy.da.SearchCursor(input1, flds) as cursor:
            for row in cursor:
                idSTATE = row[flds.index("STATE")]
                idList.append(idSTATE)
        idList = set(idList)
        log.log("There are " + str(len(idList)) + " States (polygons) to process.")
        log.log("Sending to pool")
        # declare number of cores to use, use 1 less than the max
        cpuNum = mp.cpu_count() - 1
        # Create the pool object
        pool = mp.Pool(processes=cpuNum)
        # Fire off list to worker function.
        # res is a list built from whatever the worker function returns
        log.log("Entering intersectWork")
        # Pass the worker itself, with input1/input2 bound via partial,
        # rather than calling it; pool.map supplies each idSTATE
        res = pool.map(partial(intersectWork, input1, input2), idList)
        pool.close()
        pool.join()
        # If an error has occurred, report it
        if False in res:
            log.log("A worker failed!")
            log.log(strftime('[%H:%M:%S]', localtime()))
            raise Exception
        else:
            log.log("Finished multiprocessing!")
            log.log(strftime('[%H:%M:%S]', localtime()))
    except Exception, e:
        tb = sys.exc_info()[2]
        # Geoprocessor threw an error
        log.log("An error occurred on line " + str(tb.tb_lineno))
        log.log(str(e))

def intersectWork(input1, input2, idSTATE):
    try:
        if idSTATE == None:
            query = "STATE IS NULL"
            idSTATE = 'pr'
        else:
            query = "STATE = '" + idSTATE + "'"
        DEMOlayer = arcpy.MakeFeatureLayer_management(input1, "input1_" + idSTATE)
        log.log(query)
        arcpy.SelectLayerByAttribute_management(DEMOlayer, "NEW_SELECTION", query)
        # Do the Intersect
        outFC = r'C:/EclipseWorkspace' + '/INTER_' + idSTATE.upper() + '.shp'
        strIntersect = str(DEMOlayer) + ";" + str(input2)
        arcpy.Intersect_analysis(strIntersect, outFC, "ALL", "", "LINE")
        return True
    except:
        # Some error occurred, so return False
        log.log(arcpy.GetMessage(2))
        return False

if __name__ == '__main__':
    intersect_main(input1, input2)
Edit
All the data on the server is stored locally; there is no across-network processing.
The issue was that the full paths to the data weren't being properly passed into pool.map() on the server from the previous modules. I had to add all the file paths under the import statements. It doesn't look very elegant, but it's working.
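For illustration, the workaround amounts to something like this at the top of Script C (the paths here are hypothetical): on Windows, each pool worker re-imports the module, so values defined right under the imports are recreated in every worker process.
import multiprocessing as mp
import arcpy

# Hypothetical absolute paths, hard-coded under the imports so every
# spawned worker process resolves the same local data on the server.
input1 = r'C:\LocalData\DEMO_polygons.shp'
input2 = r'C:\LocalData\DEMO_lines.shp'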
I had code that was running successfully but took too long to run, so I decided to try to parallelize it.
Here is a simplified version of the code:
import multiprocessing as mp
import os
import sys
import time

import numpy as np

output = mp.Queue()

def calcSum(Nstart, Nstop, output):
    pid = os.getpid()
    density = 0
    for s in range(Nstart, Nstop):
        file_name = 'model' + str(s) + '.pdb'
        file = 'modelMap' + str(pid) + '.dat'
        # does something with the contents of the pdb file
        # creates another file by using some other library:
        someVar.someFunc(file_name=file)
        # uses a function to read the file
        density += readFile(file)
        os.remove(file)
        print pid, s
    output.put(density)

if __name__ == '__main__':
    t0 = time.time()
    snapshots = int(sys.argv[1])
    cpuNum = int(sys.argv[2])
    rangeSet = np.zeros((cpuNum)) + snapshots // cpuNum
    for i in range(snapshots % cpuNum):
        rangeSet[i] += 1
    processes = []
    for c in range(cpuNum):
        na, nb = (np.sum(rangeSet[:c]) + 1, np.sum(rangeSet[:c + 1]))
        processes.append(mp.Process(target=calcSum, args=(int(na), int(nb), output)))
    for p in processes:
        p.start()
    print 'now i''m here'
    results = [output.get() for p in processes]
    print 'now i''m there'
    for p in processes:
        p.join()
    print 'think i''l stay around'
    t1 = time.time()
    print len(results)
    print (t1 - t0)
I run this code with the command python run.py 10 4.
This code successfully prints pid and s in the loop in calcSum. I can also see in the terminal that two CPUs are at 100%. What happens is that finally pid 5 and pid 10 are printed, then the CPU usage drops to zero, and nothing happens. None of the subsequent print statements work, and the script still looks like it's running in the terminal. I'm guessing that the processes do not exit. Is that the case? How can I fix it?
Here's the complete output:
$ python run.py 10 4
now im here
9600
9601
9602
9603
9602 7
9603 9
9601 4
9600 1
now im there
9602 8
9600 2
9601 5
9603 10
9600 3
9601 6
At that point I have to force termination with Ctrl+C.
A few other notes:
if I comment os.remove(file) out, I can see the created files in the directory
unfortunately, I cannot bypass the part in which a file is created and then read, within calcSum
EDIT: At first, switching output.get() and p.join() worked, but after some other edits to the code it no longer does. I have updated the code above.
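For reference, a minimal sketch of the ordering that avoids this kind of deadlock: the multiprocessing docs warn that a process which has put items on a Queue may not terminate until those items are consumed, so the queue must be drained before join().
import multiprocessing as mp

def worker(n, output):
    # The child may block at exit until this item is consumed by the parent.
    output.put(n * n)

if __name__ == '__main__':
    output = mp.Queue()
    processes = [mp.Process(target=worker, args=(c, output)) for c in range(4)]
    for p in processes:
        p.start()
    # Drain the queue BEFORE joining, otherwise join() can wait forever.
    results = [output.get() for _ in processes]
    for p in processes:
        p.join()
    print(results)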
First of all, I am a complete noobie when it comes to Python. Actually, I started reading about it this morning when I needed to use it, so sorry if the code is a disaster.
I'd like to get this done:
communication via serial between two devices. The device where the Python program is running has to listen for data sent by the other device and store it in a file. But after every 30 seconds of received data, it has to send a command to the other device telling it to stop sending and begin a scan that takes 10 seconds.
This is the code I've written. It prints Opening connection... continuously.
from serial import Serial, SerialException
from threading import Timer
import time

MOVE_TIME = 30.0
SCAN_TIME = 10.0
DEVICE_ADDRESS = '/dev/ttyACM0'
BAUD_RATE = 9600

while True:
    try:
        print("Opening connection...")
        ser = Serial(DEVICE_ADDRESS, BAUD_RATE)
        break
    except SerialException:
        print("No device attached")

def scan():
    print("Scanning...")
    timeout = time.time() + SCAN_TIME
    while True:
        # Some code I haven't thought of yet
        if time.time() > timeout:
            ser.write(b'r')  # command to start
            break

def send_stop_command():
    print("Sending stop command")
    ser.write(b's')  # command to stop
    scan()

t = Timer(MOVE_TIME + SCAN_TIME, send_stop_command)
t.start()

filename = time.strftime("%d-%m-%Y_%H:%M:%S") + ".txt"
while True:
    data = ser.readline()
    try:
        with open(filename, "ab") as outfile:
            outfile.write(data)
    except IOError:
        print("Data could not be written")
I'm writing code used to send orders to an AVR. I send several pieces of information, but between each write I have to wait for an answer (I have to wait for the robot to reach a point in the coordinate system). As I read in the documentation, readline() should read at least until the timeout, but as soon as I send the first coordinate, readline() immediately returns:
SerialException: device reports readiness to read but returned no data (device disconnected?)
When I put a sleep() between each write() in the for loop, everything works fine. I tried to use inWaiting(), but it still does not work. Here is an example of how I used it:
for i in chemin_python:
    self.serieInstance.ecrire("goto\n" + str(float(i.x)) + '\n' + str(float(-i.y)) + '\n')
    while self.serieInstance.inWaiting():
        pass
    lu = self.serieInstance.readline()
    lu = lu.split("\r\n")[0]
    reponse = self.serieInstance.file_attente.get(lu)
    if reponse != "FIN_GOTO":
        log.logger.debug("Erreur asservissement (goto) : " + reponse)
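As a side note on the inWaiting() loop above: inWaiting() returns the number of bytes already received, so waiting for a reply would need the negated condition, roughly like this sketch (assuming the underlying pyserial object is accessible):
import time

def wait_for_reply(ser, poll_interval=0.01):
    # inWaiting() returns the number of buffered input bytes; block until
    # at least one byte has arrived, sleeping instead of busy-spinning.
    while not ser.inWaiting():
        time.sleep(poll_interval)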
Here is a snippet showing how to use serial in Python:
# s, command, timeout, t, TIMEOUT, OK and ERROR are defined in the surrounding code
s.write(command)
st = ''
initTime = time.time()
while True:
    st += s.readline()
    if timeout and (time.time() - initTime > t):
        return TIMEOUT
    if st != ERROR:
        return OK
    else:
        return ERROR
This method allows you to control the timeout for gathering the data of each line separately from the timeout for waiting on additional lines.
def serial_com(self, cmd):
    '''Serial communications: send a command; get a response'''
    # open serial port (com_port is defined elsewhere in the class/module)
    try:
        serial_port = serial.Serial(com_port, baudrate=115200, timeout=1)
    except serial.SerialException as e:
        print("could not open serial port '{}': {}".format(com_port, e))
        return []
    # write to serial port
    cmd += '\r'
    serial_port.write(cmd.encode('utf-8'))
    # read response from serial port
    lines = []
    while True:
        line = serial_port.readline()
        lines.append(line.decode('utf-8').rstrip())
        # wait for new data after each line
        timeout = time.time() + 0.1
        while not serial_port.inWaiting() and timeout > time.time():
            pass
        if not serial_port.inWaiting():
            break
    # close the serial port
    serial_port.close()
    return lines
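A hypothetical call site, assuming the method lives on a class that defines com_port and talks to a device that answers each command (the command string depends entirely on the attached device):
# Hypothetical usage from within the owning class
response = self.serial_com('ID?')
for line in response:
    print(line)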