I have created a data transfer program using Python and the pyserial module. I am currently using it to transfer a text file over a radio device between a Raspberry Pi and my computer. The problem is that the file I am trying to send, which contains 5000 lines of text and is 93.0 KB in size, takes quite a while to send: about a full minute, to be exact. I need this to be done within seconds. Here is the code; I am sure there are many optimizations that could be made to the file reading and such that would increase the transfer speed. My radio device has a data rate of 250 kbps, which is obviously not being reached. Any help would be greatly appreciated.
Code to send (located on the Raspberry Pi):
def s_file():
    print 'start'
    readline = lambda: iter(lambda: ser.read(1), "\n")
    name = "".join(readline())
    print name
    file_loc = directory_name + name
    sleep(1)
    print('Waiting for command from client to send file...')
    while "".join(readline()) != "<<SENDFILE>>":
        pass
    with open(file_loc) as FileObj:
        for lines in FileObj:
            ser.write(lines)
    ser.write("\n<<EOF>>\n")
    print 'done'
Code to receive (on my laptop):
def r_f_bird(self):  # send command to bird to start func
    if ser_open == True:
        readline = lambda: iter(lambda: ser.read(1), "\n")
        NAME = self.tb2.get()
        ser.write('/' + NAME)
        print NAME
        sleep(0.5)
        ser.write('\n<<SENDFILE>>\n')
        start = clock()
        with open(str(NAME), "wb") as outfile:
            while True:
                line = "".join(readline())
                if line == "<<EOF>>":
                    break
                print >> outfile, line
        elapsed = clock() - start
        print elapsed
        ser.flush()
    else:
        pass
Perhaps the overhead of ser.read(1) is slowing things down. It seems like you have a \n at the end of each line, so try using pySerial's readline() method rather than rolling your own. Changing line = "".join(readline()) to line = ser.readline() ought to do it. You will also need to change your loop end condition to == "<<EOF>>\n".
You may also need to add a ser.flush() on the writing side.
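For illustration, here is a minimal sketch of what the receiving loop might look like with that change (assuming ser is the already-open Serial object from the question and the sender still terminates the transfer with "<<EOF>>"):

with open(str(NAME), "wb") as outfile:
    while True:
        line = ser.readline()      # reads up to and including the '\n'
        if line == "<<EOF>>\n":    # note the trailing newline in the comparison
            break
        outfile.write(line)        # the line already ends in '\n', so no print >> needed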
mainpage() displays my front page, and upload() is supposed to display the page where you enter the file you wish to store with the server:
def upload():
    path_name = raw_input("Enter your file directory")
    open_file = open(path_name, 'rb').read()
    name_split = path_name.split("\\")[-1].split('.')
    at = 1
    s.send("SAVE-" + username + "\\" + "".join(name_split[:-1]) + "." + str(at) + "." + name_split[-1] + "-")
    while open_file:
        current = open_file[:1024]
        print current
        open_file = open_file[1024:]
        s.send(current)

def mainpage():
    global R2
    R2 = Tk()
    gg = "white"
    g = "blue"
    R2.geometry('720x720')
    R2.title(username + " Dropbox")
    R2.resizable(width=False, height=False)
    logoutbt = Button(R2, text="Logout", width=10, height=2, bg=g, fg=gg, font="5", relief=RAISED, overrelief=RIDGE, command=deslogout)
    upload = Button(R2, text="Upload", width=10, height=2, bg=g, fg=gg, font="5", relief=RAISED, overrelief=RIDGE, command=desupload)
    retrieve = Button(R2, text="Retreive", width=10, height=2, bg=g, fg=gg, font="5", relief=RAISED, overrelief=RIDGE, command=desretreive)
    logoutbt.place(x=220, y=500)
    retrieve.place(x=350, y=500)
    upload.place(x=480, y=500)
    R2.mainloop()

open(path_name, 'rb').close()
Now, when I add the command mainpage() to return back to my main page after sending the file to the server, the server gets stuck in an infinite loop.
Server code:
if message[0] == "SAVE":
    if not os.path.exists("C:\Heights\Documents\Projects\HomeWork\Project\Server1\\Files\\" + message[1].split("\\")[0]):
        os.makedirs("C:\Heights\Documents\Projects\HomeWork\Project\Server1\\Files\\" + message[1].split("\\")[0])
    file = open("C:\Heights\Documents\Projects\HomeWork\Project\Server1\\Files\\" + message[1], "wb")
    content = ""
    while True:
        data = current_socket.recv(1024)
        if not data:
            break
        content += data
    file.write(content)
    file.close()
The file reaches the server fine when I don't try to return, but the moment I add that one extra line, the server doesn't exit the loop where it receives the file content. Also, if I try to get a response from the server when it's done writing all the data down, the client and the server both get stuck.
Python's socket.recv(...) inherits its semantics from the Unix recv(2) function, and as stated in the recv(2) man page:
If no messages are available at the socket, the receive calls wait for a message to arrive, unless the socket is nonblocking
Therefore, since current_socket is blocking, your server simply hangs on the line data = current_socket.recv(1024) after the whole file has been read into the content variable, and stays there until the socket on the client side is properly closed.
To avoid that:
On the client side, send your file size in bytes before sending any of its contents:
import struct
...
file_len_bytes = struct.pack('!i', len(open_file))
s.send(file_len_bytes)
while open_file:
    ....
On the server side, read the file size first and then use it to check whether the whole file has been received:
import struct
...
file_len_bytes = ""
while len(file_len_bytes) < 4:
    file_len_bytes += current_socket.recv(1)
file_len = struct.unpack("!i", file_len_bytes[:4])[0]
content = ""
bytes_read = 0
while bytes_read < file_len:
    data = current_socket.recv(1024)
    bytes_read += len(data)
    content += data
Firstly, as a general rule, waiting for the socket to return nothing (an empty string or the like) is a bad idea. In Python a socket only returns empty data once the other side has closed its socket, so if there is a problem, or the socket is not closed properly for any reason, the socket.recv method may hang indefinitely.
Secondly, I see that you intend to instantiate your Tkinter app more than once.
This is bad practice, and you should consider just hiding your main window instead.
Hope this helps.
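For illustration, here is a minimal sketch of the hide-the-main-window approach (Python 2 Tkinter; the window and callback names are made up, not taken from your code):

from Tkinter import Tk, Toplevel

root = Tk()  # created exactly once, at startup

def open_upload_window():
    root.withdraw()              # hide the main window instead of creating a new Tk()
    upload_win = Toplevel(root)  # secondary window for the upload form
    # ... build the upload widgets on upload_win here ...

    def close_upload():
        upload_win.destroy()
        root.deiconify()         # bring the main window back

    upload_win.protocol("WM_DELETE_WINDOW", close_upload)

root.mainloop()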
Now getting output, yet it is wrong. (Post modified to reflect progress.)
I have been reading the documentation as well as following links from this site. I was able to find a script that works in reading data from my Arduino serial output.
It is as follows:
import time
import serial

# configure the serial connection (the parameters differ depending on the device you are connecting to)
ser = serial.Serial(
    port='/dev/ttyACM0',
    baudrate=115200,
    parity=serial.PARITY_ODD,
    stopbits=serial.STOPBITS_TWO,
    bytesize=serial.SEVENBITS
)

ser.isOpen()

print 'Enter your commands below.\r\nInsert "exit" to leave the application.'

input = 1
while 1:
    # get keyboard input
    input = raw_input(">> ")
    # Python 3 users
    # input = input(">> ")
    if input == 'exit':
        ser.close()
        exit()
    else:
        # send the characters to the device
        # (note that I append a \r\n carriage return and line feed to the characters - this is requested by my device)
        ser.write(input + '\r\n')
        out = ''
        # let's wait one second before reading output (give the device time to answer)
        time.sleep(1)
        while ser.inWaiting() > 0:
            out += ser.read(1)
        if out != '':
            print ">>" + out
The Python output window opens and the code is executed. I am able to type 'exit' to exit, or 'read', and a ton of lines from the Arduino serial monitor are then populated in the Python output window.
What I would like is for the output from the Arduino to be constantly populated in the Python output window. (Later I will try to plot the data using Matplotlib.)
The code I used to attempt to read everything from the Arduino is as follows:
import time
import serial

# configure the serial connection (the parameters differ depending on the device you are connecting to)
ser = serial.Serial(
    port='/dev/ttyACM0',
    baudrate=115200,
    parity=serial.PARITY_ODD,
    stopbits=serial.STOPBITS_TWO,
    bytesize=serial.SEVENBITS
)

ser.isOpen()

input = 1
while True:
    # first things first, let's wait for data prior to reading
    time.sleep(1)
    if ser.inWaiting() > 0:
        myArduinoData = ser.read()
        print myArduinoData
However, when I used the above code the Python execution hung and I got no output from the serial monitor. This has now been corrected with the code above and the community's help.
The new problem is that the output only gives a single-digit value, not the two- to three-digit values shown in the Arduino serial monitor.
Thanks to @jalo I was able to get the values in question by specifying a byte count in my inWaiting and ser.read calls. The change is as follows:
if ser.inWaiting() > 4:
    myArduinoData = ser.read(4)
Thank you.
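For completeness, a line-oriented alternative might be even simpler, assuming the Arduino terminates each reading with a newline (that is an assumption on my part, not something stated above):

import serial

ser = serial.Serial(port='/dev/ttyACM0', baudrate=115200, timeout=1)

while True:
    line = ser.readline()   # reads up to '\n', or returns what it has once the 1 s timeout expires
    if line:
        print line.strip()  # the full multi-digit value, regardless of how many bytes it is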
EDIT: I think I've figured out a solution using subprocess.Popen with separate .py files for each stream being monitored. It's not pretty, but it works.
I'm working on a script to monitor a streaming site for several different accounts and to record when they are online. I am using the livestreamer package to download a stream when it comes online, but the problem is that the program will only record one stream at a time. I have the program loop through a list and, if a stream is online, start recording with subprocess.call(["livestreamer", ...]). The problem is that once the program starts recording, it stops going through the loop and doesn't check or record any of the other livestreams. I've tried using Process and Thread, but none of these seem to work. Any ideas?
Code below. The asterisks are not literally part of the code.
import os, urllib.request, time, subprocess, datetime, random

status = {
    "********": False,
    "********": False,
    "********": False
}

def gen_name(tag):
    return stuff  # << bunch of unimportant code stuff here

def dl(tag):
    subprocess.call(["livestreamer", "********.com/" + tag, "best", "-o", ".\\tmp\\" + gen_name(tag)])

def loopCheck():
    while True:
        for tag in status:
            data = urllib.request.urlopen("http://*******.com/" + tag + "/").read().decode()
            if data.find(".m3u8") != -1:
                print(tag + " is online!")
                if status[tag] == False:
                    status[tag] = True
                    dl(tag)
            else:
                print(tag + " is offline.")
                status[tag] = False
        time.sleep(15)

loopCheck()
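Since subprocess.call blocks until livestreamer exits, the loop cannot continue while a recording is running. As a rough sketch of the non-blocking Popen idea mentioned in the edit (not the author's exact solution; the recordings dict is my own bookkeeping), dl() could start the process and return immediately:

import subprocess

recordings = {}  # tag -> Popen handle for the running recording

def dl(tag):
    # Popen returns right away, so loopCheck() keeps checking the other streams
    recordings[tag] = subprocess.Popen(
        ["livestreamer", "********.com/" + tag, "best", "-o", ".\\tmp\\" + gen_name(tag)]
    )

def reap_finished():
    # call this once per loop pass to mark streams whose recording has ended
    for tag, proc in list(recordings.items()):
        if proc.poll() is not None:  # process has exited
            status[tag] = False
            del recordings[tag]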
I am new to Python. I need to create a door.lock file that contains the current date and time, and I need to overwrite this file every x minutes with the new current date and time. I'm using this as a pseudo lock file that lets me test, on each run of the software, whether the software crashed and how long ago it crashed. My issue is that I can't seem to overwrite the file; so far I have only failed at creating and/or appending to it. I created the following as a test:
from datetime import datetime, timedelta

ending = False
LOCK_FILENAME = "door.lock"  # The lock file
LOCK_FILE_UPDATE = True
MINS_LOCK_FILE_UPDATE = 1  # the (x) time in minutes to write to lock file
NEXT_LOCK_FILE_UPDATE = datetime.now()

lock_file = open(LOCK_FILENAME, "w")
now = datetime.now()
NOW_STRING1 = str(now.strftime("%Y-%m-%d_%a_%H:%M"))
lock_file.write(NOW_STRING1)
print "First Now String"
print NOW_STRING1

# ==============================================================================
# Main Loop:
while ending is False:
    # ==========================================================================
    # Check if it is time to do a LOCK FILE time update
    now = datetime.now()
    NOW_STRING1 = str(now.strftime("%Y-%m-%d_%a_%H:%M"))
    if LOCK_FILE_UPDATE:  # if LOCK_FILE_UPDATE is set to True in DM settings
        if NEXT_LOCK_FILE_UPDATE <= datetime.now():
            lock_file.write(NOW_STRING1)
            print NOW_STRING1
            NEXT_LOCK_FILE_UPDATE = datetime.now() + timedelta(minutes=MINS_LOCK_FILE_UPDATE)
Will someone pinpoint my error(s) for me? TIA
When I cat the above file, door.lock, it is empty.
You need to flush the buffer to the file. You can do that with a close() and a re-open before the next write.
lock_file.close()
...
lock_file = open(LOCK_FILENAME, "a")
If you are logging events, you'd be better off using a logger instead of a plain text file.
The solution from @MAC will work, except that it will append, and it seems you don't want that. So just open again with the 'w' option, or better yet, use the 'w+' option so the file is truncated (which, from what I gather, is what you want) and can also be read.
Also, keep in mind that your changes won't be written out until you close the file (having said that, consider opening and closing inside your loop instead).
lock_file = open(LOCK_FILENAME, "w+")
now = datetime.now()
NOW_STRING1 = str(now.strftime("%Y-%m-%d_%a_%H:%M"))
lock_file.write(NOW_STRING1)
# your loop and so on ...
lock_file.close()
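As a sketch of the open-inside-the-loop variant suggested above, so that each update truncates the file and is flushed to disk straight away (the time.sleep is my addition, just to avoid spinning the CPU):

from datetime import datetime, timedelta
import time

NEXT_LOCK_FILE_UPDATE = datetime.now()

while True:
    if NEXT_LOCK_FILE_UPDATE <= datetime.now():
        now_string = datetime.now().strftime("%Y-%m-%d_%a_%H:%M")
        with open(LOCK_FILENAME, "w") as lock_file:  # 'w' truncates, so the old timestamp is overwritten
            lock_file.write(now_string)              # leaving the with-block closes and flushes the file
        NEXT_LOCK_FILE_UPDATE = datetime.now() + timedelta(minutes=MINS_LOCK_FILE_UPDATE)
    time.sleep(1)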
So I have two Python 3.2 processes that need to communicate with each other. Most of the information that needs to be communicated consists of standard dictionaries. Named pipes seemed like the way to go, so I made a Pipe class that can be instantiated in both processes. This class implements a very basic protocol for getting information around.
My problem is that sometimes it works, sometimes it doesn't. There seems to be no pattern to this behavior except the place where the code fails.
Here are the bits of the Pipe class that matter. Shout if you want more code:
class Pipe:
    """
    there are a bunch of constants set up here. I dont think it would be useful to include them. Just think like this: Pipe.WHATEVER = 'WHATEVER'
    """
    def __init__(self, sPath):
        """
        create the fifo. if it already exists just associate with it
        """
        self.sPath = sPath
        if not os.path.exists(sPath):
            os.mkfifo(sPath)
        self.iFH = os.open(sPath, os.O_RDWR | os.O_NONBLOCK)
        self.iFHBlocking = os.open(sPath, os.O_RDWR)

    def write(self, dMessage):
        """
        write the dict to the fifo
        if dMessage is not a dictionary then there will be an exception here. There never is
        """
        self.writeln(Pipe.MESSAGE_START)
        for k in dMessage:
            self.writeln(Pipe.KEY)
            self.writeln(k)
            self.writeln(Pipe.VALUE)
            self.writeln(dMessage[k])
        self.writeln(Pipe.MESSAGE_END)

    def writeln(self, s):
        os.write(self.iFH, bytes('{0} : {1}\n'.format(Pipe.LINE_START, len(s)+1), 'utf-8'))
        os.write(self.iFH, bytes('{0}\n'.format(s), 'utf-8'))
        os.write(self.iFH, bytes(Pipe.LINE_END + '\n', 'utf-8'))

    def readln(self):
        """
        look for LINE_START, get line size
        read until LINE_END
        clean up
        return string
        """
        iLineStartBaseLength = len(self.LINE_START) + 3  # '{0} : '
        try:
            s = os.read(self.iFH, iLineStartBaseLength).decode('utf-8')
        except:
            return Pipe.READLINE_FAIL
        if Pipe.LINE_START in s:
            # get the length of the line
            sLineLen = ''
            while True:
                try:
                    sCurrent = os.read(self.iFH, 1).decode('utf-8')
                except:
                    return Pipe.READLINE_FAIL
                if sCurrent == '\n':
                    break
                sLineLen += sCurrent
            try:
                iLineLen = int(sLineLen.strip(string.punctuation + string.whitespace))
            except:
                raise Exception('Not a valid line length: "{0}"'.format(sLineLen))
            # read the line
            sLine = os.read(self.iFHBlocking, iLineLen).decode('utf-8')
            # read the line terminator
            sTerm = os.read(self.iFH, len(Pipe.LINE_END + '\n')).decode('utf-8')
            if sTerm == Pipe.LINE_END + '\n':
                return sLine
            return Pipe.READLINE_FAIL
        else:
            return Pipe.READLINE_FAIL

    def read(self):
        """
        read from the fifo, make a dict
        """
        dRet = {}
        sKey = ''
        sValue = ''
        sCurrent = None

        def value_flush():
            nonlocal dRet, sKey, sValue, sCurrent
            if sKey:
                dRet[sKey.strip()] = sValue.strip()
            sKey = ''
            sValue = ''
            sCurrent = ''

        if self.message_start():
            while True:
                sLine = self.readln()
                if Pipe.MESSAGE_END in sLine:
                    value_flush()
                    return dRet
                elif Pipe.KEY in sLine:
                    value_flush()
                    sCurrent = Pipe.KEY
                elif Pipe.VALUE in sLine:
                    sCurrent = Pipe.VALUE
                else:
                    if sCurrent == Pipe.VALUE:
                        sValue += sLine
                    elif sCurrent == Pipe.KEY:
                        sKey += sLine
        else:
            return Pipe.NO_MESSAGE
It sometimes fails here (in readln):
try:
    iLineLen = int(sLineLen.strip(string.punctuation + string.whitespace))
except:
    raise Exception('Not a valid line length: "{0}"'.format(sLineLen))
It doesn't fail anywhere else.
An example error is:
Not a valid line length: "KE 17"
The fact that it's intermittent says to me that it's due to some kind of race condition; I'm just struggling to figure out what it might be. Any ideas?
EDIT: added stuff about the calling processes
How the Pipe is used: it is instantiated in process A and process B by calling the constructor with the same path. Process A will then intermittently write to the Pipe, and process B will try to read from it. At no point do I ever try to make the thing act as a two-way channel.
Here is a longer-winded explanation of the situation. I've been trying to keep the question short, but I think it's about time I gave up on that. Anyhoo, I have a daemon and a Pyramid process that need to play nice. There are two Pipe instances in use: one that only Pyramid writes to, and one that only the daemon writes to. The stuff Pyramid writes is really short; I have experienced no errors on this pipe. The stuff that the daemon writes is much longer; this is the pipe that's giving me grief. Both pipes are implemented in the same way. Both processes only write dictionaries to their respective Pipes (if this were not the case then there would be an exception in Pipe.write).
The basic algorithm is: Pyramid spawns the daemon, and the daemon loads a crazy object hierarchy of doom with vast RAM consumption. Pyramid sends POST requests to the daemon, which then does a whole bunch of calculations and sends data to Pyramid so that a human-friendly page can be rendered. The human can then respond to what's in the hierarchy by filling in HTML forms and suchlike, thus causing Pyramid to send another dictionary to the daemon, and the daemon to send back a dictionary response.
So: only one pipe has exhibited any problems, the problem pipe has a lot more traffic than the other one, and it is guaranteed that only dictionaries are written to either.
EDIT: in response to the question and comment
Before you tell me to take out the try...except stuff, read on.
The fact that the exception gets raised at all is what is bothering me. iLineLen = int(stuff) looks to me like it should always be passed a string that looks like an integer. This is the case only most of the time, not all of it. So if you feel the urge to comment about how it's probably not an integer, please, please don't.
To paraphrase my question: Spot the race condition and you will be my hero.
EDIT: a little example:
process_1.py:
oP = Pipe(some_path)
while 1:
    oP.write({'a': 'foo', 'b': 'bar', 'c': 'erm...', 'd': 'plop!', 'e': 'etc'})
process_2.py:
oP = Pipe(same_path_as_before)
while 1:
    print(oP.read())
After playing around with the code, I suspect the problem is coming from how you are reading the file.
Specifically, lines like this:
os.read(self.iFH, iLineStartBaseLength)
That call doesn't necessarily return iLineStartBaseLength bytes - it might consume "LI", then return READLINE_FAIL and retry. On the second attempt, it will get the remainder of the line, and somehow end up giving the non-numeric string to the int() call.
The unpredictability likely comes from how the fifo is being flushed - if it happens to flush when the complete line is written, all is fine. If it flushes when the line is half-written, weirdness.
At least in the hacked-up version of the script I ended up with, the oP.read() call in process_2.py often got a different dict to the one sent (where the KEY might bleed into the previous VALUE and other strangeness).
I might be mistaken, as I had to make a bunch of changes to get the code running on OS X, and further while experimenting. My modified code here
Not sure exactly how to fix it, but... with the json module or similar, the protocol/parsing can be greatly simplified; newline-separated JSON data is much easier to parse:
import os
import time
import json
import errno

def retry_write(*args, **kwargs):
    """Like os.write, but retries until EAGAIN stops appearing
    """
    while True:
        try:
            return os.write(*args, **kwargs)
        except OSError as e:
            if e.errno == errno.EAGAIN:
                time.sleep(0.5)
            else:
                raise

class Pipe(object):
    """FIFO based IPC based on newline-separated JSON
    """

    ENCODING = 'utf-8'

    def __init__(self, sPath):
        self.sPath = sPath
        if not os.path.exists(sPath):
            os.mkfifo(sPath)
        self.fd = os.open(sPath, os.O_RDWR | os.O_NONBLOCK)
        self.file_blocking = open(sPath, "r", encoding=self.ENCODING)

    def write(self, dmsg):
        serialised = json.dumps(dmsg) + "\n"
        dat = bytes(serialised.encode(self.ENCODING))
        # This blocks until data can be read by other process.
        # Can just use os.write and ignore EAGAIN if you want
        # to drop the data
        retry_write(self.fd, dat)

    def read(self):
        serialised = self.file_blocking.readline()
        return json.loads(serialised)
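A quick usage sketch with this JSON-based Pipe class (the FIFO path is made up for illustration):

# process_1.py
oP = Pipe('/tmp/example_fifo')
oP.write({'a': 'foo', 'b': 'bar'})

# process_2.py
oP = Pipe('/tmp/example_fifo')
print(oP.read())  # -> {'a': 'foo', 'b': 'bar'}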
Try getting rid of the try:, except: blocks and seeing what exception is actually being thrown.
So replace your sample with just:
iLineLen = int(sLineLen.strip(string.punctuation+string.whitespace))
I bet it'll now throw a ValueError, and it's because you're trying to cast "KE 17" to an int.
You'll need to strip more than string.whitespace and string.punctuation if you're going to cast the string to an int.
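For what it's worth, a quick interpreter session (purely illustrative) shows why stripping can't rescue a mis-framed read: strip() only removes characters from the ends of the string, so "KE 17" is left untouched and the cast fails.

>>> import string
>>> "KE 17".strip(string.punctuation + string.whitespace)
'KE 17'
>>> int("KE 17")
Traceback (most recent call last):
  ...
ValueError: invalid literal for int() with base 10: 'KE 17'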