I can't figure out why my code is not saving the readings from the ADC and GPS receiver to the file I open in the first line of the code. It saves only one record from each of the ADC and the GPS receiver.
this is my code:
import MDM

f = open("cord+adc.txt", 'w')

def getADC():
    res = MDM.send('AT#ADC?\r', 0)
    res = MDM.receive(100)
    if(res.find('OK') != -1):
        return res
    else:
        return ""

def AcquiredPosition():
    res = MDM.send('AT$GPSACP\r', 0)
    res = MDM.receive(30)
    if(res.find('OK') != -1):
        tmp = res.split("\r\n")
        res = tmp[1]
        tmp = res.split(" ")
        return tmp[1]
    else:
        return ""

while (1):
    cordlist = []
    adclist = []
    p = AcquiredPosition()
    res = MDM.receive(60)
    cordlist.append(p)
    cordlist.append("\r\n")
    f.writelines(cordlist)
    q = getADC()
    res = MDM.receive(60)
    adclist.append(q)
    adclist.append("\r\n")
    f.writelines(adclist)
and this is the file called "cord+adc.txt":
174506.000,2612.7354N,05027.5971E,1.0,23.1,3,192.69,0.18,0.09,191109,07
#ADC: 0
If there is another way to write my code, please advise me, or just point out the error in the code above.
Thanks for any suggestions.
You have two problems here. First, you are not closing your file. Second, and bigger: your while loop will run forever (or until something else goes wrong in your program) because it has no terminating condition. You are looping while 1 but never explicitly breaking out of the loop. I assume that when AcquiredPosition() returns an empty string you want the loop to terminate, so I added if not p: break after the call to the function; if it returns an empty string, the loop terminates and the file is closed thanks to the with statement. You should restructure your while loop like below:
with open("cord+adc.txt", 'w') as f:
    while (1):
        cordlist = []
        adclist = []
        p = AcquiredPosition()
        if not p:
            break
        res = MDM.receive(60)
        cordlist.append(p)
        cordlist.append("\r\n")
        f.writelines(cordlist)
        q = getADC()
        res = MDM.receive(60)
        adclist.append(q)
        adclist.append("\r\n")
        f.writelines(adclist)
Because you never explicitly flush() or close() your file, there's no guarantee at all about what will wind up in it. You should probably flush() it after each packet, and you must explicitly close() it when you wish your program to exit.
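For example, a minimal sketch of that pattern, reusing f and AcquiredPosition() from the question:

while 1:
    p = AcquiredPosition()
    f.writelines([p, "\r\n"])
    f.flush()  # force this record out of the buffer and into the file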
If your modem connection is a socket, make sure the socket is functioning by calling getADC() and AcquiredPosition() directly from the interactive interpreter: just wrap the while(1) loop in a function (main() is the common practice), then import the module from the interactive prompt.
Your example is missing the initialization of the socket object, MDM. Make sure it is correctly set up to the appropriate address, with code like:
import socket
MDM = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
MDM.connect((HOST, PORT))
If MDM doesn't refer to a TCP socket, you can still try calling the mentioned methods interactively.
I don't see you closing the file anywhere. Add this as the last line of your code:
f.close()
That should contribute to fixing your problem. I don't know much about sockets, etc., so I can't help you there.
When you write a line to a file, it is actually buffered in memory first (this is the C way of handling files). When the buffer reaches its maximum size, or when you close the file, the buffer is emptied into the specified file.
From the explanation so far, I hope you don't have too scary an image of file manipulation. The simplest way to solve this problem is to flush the buffer's content to the file: after flush() executes and the buffer is empty, all the content is safely saved to your file. Of course it would be a good thing to close the file as well, but in an infinite loop that's hardly possible. You could hardcode an event, pass it to the function, and close the file when the infinite loop stops; just a suggestion of course, the flush() call should do the trick.
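As a sketch of that event idea, assuming the question's AcquiredPosition() and an already-open file object (stop_logging and log_records are made-up names):

import threading

stop_logging = threading.Event()  # set this from elsewhere to end the loop

def log_records(f):
    while not stop_logging.is_set():
        f.write(AcquiredPosition() + "\r\n")
        f.flush()  # each record reaches the file even before close()
    f.close()  # runs once the event is set and the loop exits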
Related
I'm building a simple server-client app using sockets. Right now I am trying to get my client to print to the console only when it receives a specific message (actually, when it doesn't receive a specific message), but for some reason it behaves inconsistently: every other time I run it, it goes through the other branch in my code. Sometimes it works as it should, and then it randomly breaks for a couple of runs.
Here is the code on my client side:
def post_checker(client_socket):
    response = client_socket.recv(1024)
    #check if response is "NP" for a new post from another user
    if response == "NP":
        new_response = client_socket.recv(1024)
        print new_response
    else: #print original message being sent
        print response
where post_checker is called in the main function simply as post_checker(client_socket). Basically, sometimes I get "NPray" printed to my console (when the client only expects to receive the username "ray") and other times it prints correctly.
Here is the server code correlated to this
for sublist in user_list:
    client_socket.send("NP")
    client_socket.send(sublist[1] + " ")
where user_list is a nested list and sublist[1] is the username I wish to print out on the client side.
What's going on here?
The nature of your problem is that TCP is a streaming protocol. The bufsize in recv(bufsize) is a maximum size. The recv function will always return data when available, even if not all of the bytes have been received.
See the documentation for details.
This causes problems when you've only sent half the bytes, but you've already started processing the data. I suggest you take a look at the "recvall" concept from this site or you can also consider using UDP sockets (which would solve this problem but may create a host of others as UDP is not a guaranteed protocol).
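As a rough sketch of the recvall idea (recvall is a helper recipe, not part of the socket module; it assumes both sides agree on the exact message length n, for example via a length prefix):

def recvall(sock, n):
    # Keep calling recv() until exactly n bytes have arrived; on a TCP
    # stream, recv() may legally return fewer bytes than requested.
    chunks = []
    received = 0
    while received < n:
        chunk = sock.recv(n - received)
        if not chunk:
            raise EOFError('socket closed with %d bytes still expected' % (n - received))
        chunks.append(chunk)
        received += len(chunk)
    return ''.join(chunks)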
You may also want to let the Python standard library handle some of the underlying framework for you; consider using a SocketServer.
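For instance, a minimal SocketServer sketch (SocketServer is the Python 2 module name, socketserver in Python 3; the handler class and port are made up for illustration), framing each message with a trailing newline:

import SocketServer

class PostHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # rfile is a buffered file-like wrapper around the socket, so
        # readline() gives newline-based message framing for free
        line = self.rfile.readline().strip()
        print line

server = SocketServer.TCPServer(('127.0.0.1', 9999), PostHandler)
server.serve_forever()

A client that connects and sends "ray\n" then gets the whole name delivered in one readline() call, regardless of how the bytes were split on the wire.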
buffer = []

def recv(sock):
    global buffer
    message = b""
    while True:
        if not (b"\r\n" in b"".join(buffer)):
            chunk = sock.recv(1024)
            if not chunk:
                break
            buffer.append(chunk)
        concat = b"".join(buffer)
        if (b"\r\n" in concat):
            message = concat[:concat.index(b"\r\n")]
            concat = concat[concat.index(b"\r\n") + 2:]
            buffer = [concat]
            break
    return message

def send(sock, data):
    sock.send(data + b"\r\n")
I have tested this and, in my opinion, it works perfectly.
My use case: I have two scripts that exchange data quickly, and every so often one buffer would receive more than it should and the messages would run together. With this code, any excess that arrives is kept buffered, and it keeps receiving until there is a newline in the data; then it splits at the newline, saves the remainder for the next call, and returns the message cleanly separated.
(I translated this, so please excuse me if anything is wrong or misunderstood.)
I am trying to make a program (in Python) that writes to a file as I type and opens in a certain window that I have already created. I have looked all around for a viable solution, but it would seem that multi-threading may be the only option.
I was hoping that when the autorun option is "activated" it will:
while 1:
    wbuffer = textview.get_buffer()
    text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
    openfile = open(filename,"w")
    openfile.write(text)
    openfile.close()
I am using PyGTK and have a textview window, but when I get the buffer, it just sits there forever. I am thinking that I need to multi-thread it and use a queue, so one thread writes out the buffer while it is being queued.
my source is here. (I think the statement is at line 177.)
any help is much appreciated. :)
and here is the function:
def autorun(save):
    filename = None
    chooser = gtk.FileChooserDialog("Save File...", None,
                                    gtk.FILE_CHOOSER_ACTION_SAVE,
                                    (gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL,
                                     gtk.STOCK_SAVE, gtk.RESPONSE_OK))
    response = chooser.run()
    if response == gtk.RESPONSE_OK:
        filename = chooser.get_filename()
    filen = filename
    addr = (filename)
    addressbar.set_text("file://" + filename)
    web.open(addr)
    chooser.destroy()
    wbuffer = textview.get_buffer()
    while 1:
        text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
        time.sleep(1)
        openfile = open(filename,"w")
        openfile.write(text)
        openfile.close()
Though it's not too easy to see exactly what your GTK code not included here is doing, the main problem is that control needs to be returned to the GTK main loop; otherwise the program will hang.
So if you have a long process (like this eternal one here), then you need to thread it. The problem is that you need the thread to exit nicely when the main program quits, so you'll have to redesign a bit around that. Also, threading with gtk needs to be initialized correctly (look here).
However, I don't think you need threading. Instead, you could connect the changed signal of your TextBuffer to a function that writes the buffer to the target file (if the user has put the program in autorun mode). A problem with this arises if the buffer gets large or the program slow, in which case you should consider threading the callback of the changed signal. This solution requires making sure that save requests don't get stacked on top of each other because the user types faster than the computer saves. It takes some design thought.
So, finally, the easier solution: you may not want the buffer saved on every key press. In that case, you could run the save function (which could look like your first code block without the loop) on a timeout instead, as sketched below. Just don't make the timeout too short.
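A minimal sketch of the timeout approach, reusing textview and filename from the question (it assumes the PyGTK main loop is running and the gobject module is available):

import gobject

def autosave():
    # Runs inside the GTK main loop, so no extra thread is needed
    wbuffer = textview.get_buffer()
    text = wbuffer.get_text(wbuffer.get_start_iter(), wbuffer.get_end_iter())
    openfile = open(filename, "w")
    openfile.write(text)
    openfile.close()
    return True  # returning True keeps the timeout scheduled

gobject.timeout_add(2000, autosave)  # save every two seconds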
I want to write an SSH communicator class in Python. I have a Telnet communicator class, and the SSH class should provide functions like the Telnet one. The Telnet communicator has read_until and read_very_eager functions.
read_until : Read until a given string is encountered or until timeout.
read_very_eager : Read everything that's possible without blocking in I/O (eager).
I couldn't find these functions for SSH communicator. Any idea?
You didn't state it in the question, but I am assuming you are using Paramiko as per the tag.
read_until: Read until a given string is encountered or until timeout.
This seems like a very specialized function for a particular high level task. I think you will need to implement this one. You can set a timeout using paramiko.Channel.settimeout and then read in a loop until you get either the string you want or a timeout exception.
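A rough sketch of such a loop, with read_until as a hypothetical helper name (Channel.settimeout and Channel.recv do exist in Paramiko; note that the timeout here applies to each recv call, not the whole read):

import socket

def read_until(channel, expected, timeout=10.0):
    # Hypothetical helper mirroring telnetlib's read_until: accumulate
    # output until `expected` appears or a recv() times out.
    channel.settimeout(timeout)
    buf = ''
    while expected not in buf:
        try:
            chunk = channel.recv(1024)
        except socket.timeout:
            break  # timed out waiting for more data
        if not chunk:
            break  # channel closed
        buf += chunk
    return buf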
read_very_eager: Read everything that's possible without blocking in I/O (eager).
Paramiko doesn't directly provide this, but it does provide primitives for non-blocking I/O and you can easily put this in a loop to slurp in everything that's available on the channel. Have you tried something like this?
channel.setblocking(0)  # non-blocking: recv() raises socket.timeout when no data is pending
resultlist = []
while True:
    try:
        chunk = channel.recv(1024)
    except socket.timeout:
        break
    resultlist.append(chunk)
return ''.join(resultlist)
Hi there, I was searching for a solution to the same problem. I think it might help you.
One observation; tell me if you find a solution: I don't get any output if I remove the sixth line, session.recv_exit_status(). I was actually printing it to check the status, and later found that recv_exit_status() must be called for this code to work.
import paramiko, sys

trans = paramiko.Transport((host, 22))
trans.connect(username = user, password = passwd)
session = trans.open_channel("session")
session.exec_command('grep -rE print .')
session.recv_exit_status()
while session.recv_ready():
    temp = session.recv(1024)
    print temp
1. read_until: search the received output for the data you are looking for and break the loop.
2. read_very_eager: use the code above.
I'm trying to use a UNIX named pipe to output statistics of a running service. I intend to provide an interface similar to /proc, where one can see live stats by catting a file.
I'm using code similar to this in my Python program:
while True:
    f = open('/tmp/readstatshere', 'w')
    f.write('some interesting stats\n')
    f.close()
/tmp/readstatshere is a named pipe created by mknod.
I then cat it to see the stats:
$ cat /tmp/readstatshere
some interesting stats
It works fine most of the time. However, if I cat the entry several times in quick succession, sometimes I get multiple lines of some interesting stats instead of one. Once or twice it has even gone into an infinite loop, printing that line forever until I killed it. The only fix I've found so far is to add a delay of about 500 ms after f.close() to prevent this issue.
I'd like to know why exactly this happens and if there is a better way of dealing with it.
Thanks in advance
A pipe is simply the wrong solution here. If you want to present a consistent snapshot of the internal state of your process, write that to a temporary file and then rename it to the "public" name. This will prevent all issues that can arise from other processes reading the state while you're updating it. Also, do NOT do that in a busy loop, but ideally in a thread that sleeps for at least one second between updates.
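A minimal sketch of that write-then-rename idea (publish_stats is a made-up name; os.rename is atomic on POSIX as long as source and destination are on the same filesystem):

import os
import tempfile

def publish_stats(text, public_path='/tmp/readstatshere'):
    # Hypothetical helper: write the snapshot to a temp file in the same
    # directory, then atomically swap it into place so readers always see
    # either the old snapshot or the new one, never a partial write.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(public_path))
    with os.fdopen(fd, 'w') as f:
        f.write(text)
    os.rename(tmp_path, public_path)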
What about a UNIX socket instead of a pipe?
In this case, you can react on each connect by providing fresh data just in time.
The only downside is that you cannot cat the data; you'll have to create a new socket handle and connect() to the socket file.
MYSOCKETFILE = '/tmp/mysocket'

import socket
import os

try:
    os.unlink(MYSOCKETFILE)
except OSError:
    pass

s = socket.socket(socket.AF_UNIX)
s.bind(MYSOCKETFILE)
s.listen(10)
while True:
    s2, peeraddr = s.accept()
    s2.send('These are my actual data')
    s2.close()
Program querying this socket:
MYSOCKETFILE = '/tmp/mysocket'

import socket
import os

s = socket.socket(socket.AF_UNIX)
s.connect(MYSOCKETFILE)
while True:
    d = s.recv(100)
    if not d:
        break
    print d
s.close()
I think you should use FUSE. It has Python bindings; see http://pypi.python.org/pypi/fuse-python/. This allows you to compose answers to questions formulated as POSIX filesystem system calls.
Don't write to an actual file. That's not what /proc does. Procfs presents a virtual (non-disk-backed) filesystem which produces the information you want on demand. You can do the same thing, but it'll be easier if it's not tied to the filesystem. Instead, just run a web service inside your Python program, and keep your statistics in memory. When a request comes in for the stats, formulate them into a nice string and return them. Most of the time you won't need to waste cycles updating a file which may not even be read before the next update.
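A minimal sketch of that idea using Python 2's BaseHTTPServer (the stats dict, the handler name, and the port are made up for illustration):

import BaseHTTPServer

stats = {'requests_served': 0, 'uptime_seconds': 0}  # updated elsewhere in your program

class StatsHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        # Format the in-memory stats on demand; nothing touches disk
        body = '\n'.join('%s: %s' % item for item in stats.items()) + '\n'
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

BaseHTTPServer.HTTPServer(('127.0.0.1', 8000), StatsHandler).serve_forever()

Instead of cat /tmp/readstatshere you would then run curl http://127.0.0.1:8000/.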
You need to unlink the pipe after you issue the close. I think this is because of a race condition: the pipe can be opened for reading again before cat finishes, so the reader sees more data and reads it out too, leading to multiple copies of "some interesting stats".
Basically you want something like:
import os

while True:
    os.mkfifo(the_pipe)
    f = open(the_pipe, 'w')
    f.write('some interesting stats')
    f.close()
    os.unlink(the_pipe)
Update 1: call to mkfifo
Update 2: as noted in the comments, there is a race condition in this code as well with multiple consumers.
Let's say I want to read a line from a socket, using the standard socket module:
def read_line(s):
    ret = ''
    while True:
        c = s.recv(1)
        if c == '\n' or c == '':
            break
        else:
            ret += c
    return ret
What exactly happens in s.recv(1)? Will it issue a system call each time? I guess I should add some buffering, anyway:
For best match with hardware and network realities, the value of bufsize should be a relatively small power of 2, for example, 4096.
http://docs.python.org/library/socket.html#socket.socket.recv
But it doesn't seem easy to write efficient and thread-safe buffering. What if I use file.readline()?
# does this work well, is it efficiently buffered?
s.makefile().readline()
If you are concerned with performance and you control the socket completely (you are not passing it into a library, for example), then try implementing your own buffering in Python; string.find, string.split and the like can be amazingly fast.
def linesplit(socket):
    buffer = socket.recv(4096)
    buffering = True
    while buffering:
        if "\n" in buffer:
            (line, buffer) = buffer.split("\n", 1)
            yield line + "\n"
        else:
            more = socket.recv(4096)
            if not more:
                buffering = False
            else:
                buffer += more
    if buffer:
        yield buffer
If you expect the payload to consist of lines that are not too huge, that should run pretty fast, and it avoids jumping through too many layers of function calls unnecessarily. I'd be interested in knowing how this compares to file.readline() or using socket.recv(1).
The recv() call is handled directly by calling the C library function.
It will block waiting for the socket to have data. In reality it will just let the recv() system call block.
file.readline() is an efficient buffered implementation. It is not thread-safe, because it presumes it's the only one reading the file (for example, by buffering upcoming input).
If you are using the file object, every time read() is called with a positive argument, the underlying code will recv() only the amount of data requested, unless it's already buffered.
It would be buffered if:
1. you had called readline(), which reads a full buffer
2. the end of the line was before the end of the buffer
thus leaving data in the buffer. Otherwise the buffer is generally not overfilled.
The goal of the question is not clear. If you need to check whether data is available before reading, you can select() or set the socket to non-blocking mode with s.setblocking(False); then reads will raise an error instead of blocking when there is no waiting data.
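For example, a small sketch of the select() variant, assuming an already-connected socket s:

import select

# Wait up to one second for s to become readable before calling recv(),
# so the read never blocks indefinitely.
readable, _, _ = select.select([s], [], [], 1.0)
if readable:
    data = s.recv(4096)
else:
    data = ''  # nothing arrived within the timeout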
Are you reading one file or socket with multiple threads? I would put a single worker on reading the socket and feeding received items into a queue for handling by other threads.
I suggest consulting the Python socket module source and the C source that makes the system calls.
def buffered_readlines(pull_next_chunk, buf_size=4096):
    """
    pull_next_chunk is a callable that accepts one positional argument
    max_len (e.g. socket.recv or file().read) and returns a string up to
    max_len long, or an empty one when there is nothing left to read.

    >>> for line in buffered_readlines(socket.recv, 16384):
    ...     print line
    ...
    >>> # the following code won't read the whole file into memory
    ... # before splitting it into lines like file's .readlines method
    ... # does. It also won't block until a FIFO file is closed.
    ...
    >>> for line in buffered_readlines(open('huge_file').read):
    ...     # process it on a per-line basis
    ...
    >>>
    """
    chunks = []
    while True:
        chunk = pull_next_chunk(buf_size)
        if not chunk:
            if chunks:
                yield ''.join(chunks)
            break
        if '\n' not in chunk:
            chunks.append(chunk)
            continue
        chunk = chunk.split('\n')
        if chunks:
            yield ''.join(chunks + [chunk[0]])
        else:
            yield chunk[0]
        for line in chunk[1:-1]:
            yield line
        if chunk[-1]:
            chunks = [chunk[-1]]
        else:
            chunks = []