Raspberry Pi: Unable to handle kernel paging request at virtual address - python

I am having an issue with a Python script running on a Raspberry Pi. The script initially runs perfectly fine, but after a certain period of time (typically more than an hour) the computer either freezes or shuts down. I am not sure if this is a software or a hardware issue. The only clue I have so far is the following error message, which appeared once when the computer froze:
[9798.371860] Unable to handle kernel paging request at virtual address e50b405c
How should this message be interpreted? What would be a good way to keep debugging the code? Any help is welcome, since I am fairly new to programming and have run out of ideas on how to troubleshoot this issue.
Here is also some background on what the Python code intends to do (not sure if it makes a difference). In short, every other second it reads the temperature through a sensor, creates a JSON file and saves it, sends this JSON object through urllib to a web API, receives a new JSON file, changes switches based on the data in this file, sleeps for 2 seconds, and repeats this process. A rough sketch of the loop follows.
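For context, the loop is roughly the following shape (the URL, read_sensor() and set_switches() below are stand-ins for my real code, written against Python 3's urllib):

import json
import time
import urllib.request

API_URL = "https://example.com/api/temperature"   # stand-in URL

def read_sensor():
    # stand-in for the real temperature sensor read
    return 21.5

def set_switches(state):
    # stand-in for the GPIO switching driven by the API response
    pass

while True:
    reading = {"temperature": read_sensor(), "timestamp": time.time()}
    with open("reading.json", "w") as f:          # save the JSON file locally
        json.dump(reading, f)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:     # send it to the web API
        set_switches(json.load(resp))             # act on the JSON that comes back
    time.sleep(2)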
Thanks!

Related

Pyserial - Embedded Systems

I am not expecting code here, but rather hoping to draw on the knowledge of the folks out there.
I have a Python script which uses pyserial for serial communication with a microcontroller unit (MCU). My MCU has 128 bytes of RAM and internal memory. I use the ser.write command to write to the MCU, and the MCU responds with data, which I read using the ser.read command.
The question here is: it was working excellently until last week. Since yesterday, I have been able to do the serial communication only in the morning. After a while, when I read the data, the MCU responds with a "NONE" message. When I read the data the next day, it works fine. The strange thing is that I have HyperTerminal installed, and it communicates properly with the MCU and reads the data. So I was hoping someone out there has faced this problem before.
I am using threads in my Python program, just to check whether running the program multiple times with threads is causing the problem. To my knowledge, threads should only affect the memory of my PC, not the MCU.
I rebooted my computer and also the MCU, and I still have this problem.
Note: PyCharm is giving me the answers I mentioned in the question. If I do the same thing in IDLE, it gives me completely different answers.
So, ultimately you're looking for advice on how to debug this sort of time-dependent problem.
Somehow, state is getting created somewhere in your computer, your Python process, or the microcontroller that affects things. (It's also theoretically possible that an external environmental factor is involved. For example, if your microcontroller has a real-time clock, you could actually have a time-of-day-dependent bug. That seems less likely than the other possibilities.)
First, try restarting your Python program. If that fixes things, then you know some state created inside Python or your program is causing the problem.
Update your question with this information.
If that doesn't fix it, try rebooting your computer. If that fixes things, then you can strongly suspect that some state in your computer is affecting things.
If none of that works, try rebooting the microcontroller. If rebooting both the PC and the microcontroller doesn't fix things, include that in your question, as it is very interesting data.
Examples of state that can get created:
Flow control: the microcontroller could be sending XOFF, clearing clear-to-send, or otherwise indicating that it does not want data.
Flow control in the other direction: your PC could be sending XOFF, clearing request-to-send, or otherwise indicating that it doesn't want data.
Your program gets pyserial into a confused state, either because of a bug in your code or in pyserial.
Serial port configuration: the serial port settings could be getting messed up (the sketch at the end of this answer reopens the port with everything set explicitly).
HyperTerminal could do various things to clear flow control state or reconfigure the serial port.
If restarting Python doesn't fix the problem, threading is very unlikely to be your issue. If restarting Python fixes the problem, threading may be an issue.
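As a concrete starting point, here is a minimal pyserial sketch that reopens the port with every setting spelled out and flow control explicitly disabled; the port name, baud rate, and command are assumptions to replace with your own:

import serial

ser = serial.Serial(
    port="COM3",              # assumption: substitute your actual port
    baudrate=9600,            # assumption: substitute your MCU's baud rate
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=2,                # so ser.read() can never block forever
    xonxoff=False,            # software (XON/XOFF) flow control off
    rtscts=False,             # hardware (RTS/CTS) flow control off
)
ser.reset_input_buffer()      # discard stale bytes left by an earlier session
ser.reset_output_buffer()

ser.write(b"STATUS\n")        # hypothetical command for your MCU
print(ser.read(64))           # read up to 64 bytes of the response
ser.close()

If the "NONE" responses stop when the port is opened this way, that points at flow control or port configuration as the state that was getting stuck.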

Python - Plot.ly - MySQL real-time streaming visualization

Hope you're well and thanks for reading.
I've been revisiting an old project, leveraging plotly to stream data out of MySQL, with Python in between the two. I've never had great luck with plot.ly (which I'm sure relates more to my understanding than their platform); streams/iframes seem to stall over time, and I am not adept enough to troubleshoot it completely.
My current symptom is this: Plots arbitrarily stall - I'm pushing data, but the iframe isn't updating.
The current solution is: Refresh the browser every X minutes.
The solution works, but it's aggravating, because I don't understand why the visual is stalling in the first place (is it me, is it them, etc.).
As I was reviewing some of the documentation, specifically this link:
https://plot.ly/streaming/
I noticed they call out that you should NOT continually open and close streams, and that heartbeats should be sent every so often to keep things alive/fresh.
Here's what I'm currently calling every 10 minutes:
pullData(mysql)
format data
open(plotly.stream1)
write data to plotly.stream1
close(plotly.stream1)
open(plotly.stream2)
write data to plotly.stream2
close(plotly.stream2)
Based on what I am reading, it sounds like I should actually execute the script once on startup, keep the streams open, and heartbeat() them every 15 or so seconds between actual write() calls, like this:
open(plotly.stream1)
open(plotly.stream2)
every 10 minutes:
    pullData(mysql)
    format data
    write data to plotly.stream1
    write data to plotly.stream2
while not pulling and writing:
    every 15 seconds:
        heartbeat(plotly.stream1)
        heartbeat(plotly.stream2)
if error:
    close(plotly.stream1)
    close(plotly.stream2)
Please excuse the pseudo-mess; I'm just trying to convey an idea. Does anyone have any advice? I started down my original path of opening, writing, and closing based on the streaming example, but that's a one-time write. The other example is a constant stream of data. I'm somewhere in between those two.
Furthermore: is this train of thought even related to the iframe not refreshing? Part of me believes the symptom is unrelated to my idea; the data is getting to plot.ly fine, and it's my session that's expiring, or the iframe "connection" that's going stale. If the symptom is unrelated, at least I'll have made my source code a bit cleaner and more appropriate.
Any advice is greatly appreciated!
Thanks
-justin
Plotly will close a stream that is inactive for more than 60 seconds. You must send a newline down the streaming channel (a heartbeat) to keep it open. I recommend every 30 seconds.
Your first code example may not work as expected because the client-side websocket (which connects the plot to our system) may close when your first source stream (the stream that connects your script to our system) exits. When you disconnect a source stream, a signal is sent to our system that lets it know your stream is now inactive. If a new source stream does not reconnect quickly, we close the connected clients' websockets.
Now, when your script gets more data and opens a new stream, it will successfully stream data to our system, but the client-side websocket, now closed, will not pass the data to the plot. We cache a certain number of points for you behind the scenes, so that when you refresh the page the websocket reconnects and you get the last n points (where n is set by max-points in the API call).
This is why sending the heartbeat is important: it keeps the source stream open, and that in turn ensures that all the connected clients keep their websockets open.
This isn't necessarily the most robust behaviour for a streaming platform to have, and we will likely make it better in the future. For now, though, you will likely see better results by implementing the code in your second example.
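To make that concrete, here is a rough sketch of the write-plus-heartbeat loop using the legacy plotly.plotly streaming API (the stream token and the get_data() helper below are placeholders, not your actual values):

import time
import plotly.plotly as py   # legacy streaming API

def get_data():
    # placeholder for your MySQL pull and formatting step
    return [time.time()], [42]

stream = py.Stream("your_stream_token")   # placeholder token
stream.open()
try:
    last_write = 0
    while True:
        if time.time() - last_write >= 600:   # fresh data every 10 minutes
            x, y = get_data()
            stream.write(dict(x=x, y=y))
            last_write = time.time()
        else:
            stream.heartbeat()                # newline keep-alive between writes
        time.sleep(30)                        # well under the 60-second cutoff
finally:
    stream.close()                            # mirrors the "if error" branch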
Hope that helped!

pythonw.exe creates network activity when running any script

When I run any Python script, even one that doesn't contain any code or imports that could access the internet, two pythonw.exe processes pop up in my Resource Monitor under network activity. One of them is always sending more than receiving, while the other has the same activity but with the sending and receiving amounts reversed. The amount of overall activity depends on the file size, regardless of how many lines are commented out. Even a blank .py document creates network activity of about 200 kb/s. The activity drops from its peak, which is as high as 15,000 kb/s for a file with 10,000 lines, to around zero after about 20 seconds, and then the processes quit on their own. The actual script has finished running long before the network processes stop.
Because the activity depends on file size, I'm suspicious that every time I run a Python script, the whole thing is being transmitted to a server somewhere else in the world.
Is this something that could be built into Python, a virus that's infecting my computer, or just something Python is supposed to do and it's innocent activity?
Even if you don't have an answer, if you could check whether this activity affects your own installation of Python, that would be great. Thanks!
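If it helps anyone check, one way to see exactly which remote addresses the pythonw.exe processes are talking to is something like this (psutil is a third-party package, and this is just my rough approach, not anything official):

import psutil   # pip install psutil

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and "python" in proc.info["name"].lower():
        for conn in proc.connections(kind="inet"):
            # raddr is the remote address the process is connected to
            print(proc.pid, conn.laddr, conn.raddr, conn.status)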
EDIT:
Peter Wood: to start the process, just run any Python script from the editor; it runs on its own, at least for me. I'm on 2.7.8.
Robert B, I think you may be right, but why would the communication continue after the script has finished running?

GoPro Hero 3 - Streaming video over wifi

I recently acquired a GoPro Hero 3. It's working fine, but when I attempt to stream live video/audio it glitches every now and then.
Initially I just used VLC to open the m3u8 file; however, when that was glitchy I downloaded the Android app and attempted to stream over that.
It was a little better on the app.
I used Wireshark, and I think the cause is that it's simply not transferring/buffering fast enough. I tried just grabbing everything with wget in a loop; it got through 3 loops before it either caught up (possible, but I don't think so, though I may double-check that) or fell behind and hence timed out/hung.
There is also a delay in the image, but I can live with that.
I have tried lowering the resolution/frame rate, but I'm not sure it is actually doing anything, as I can't tell any difference; I think those may just be the settings for recording on the GoPro. Either way, it didn't work.
Essentially, I am looking for any possible methods of removing this 'glitchiness'.
My current plan is to attempt writing something in Python to get the files over UDP (no TCP overhead).
I'll just add a few more details/symptoms:
The GoPro is using the Apple m3u8 streaming format.
At any one time there are 16 .ts files in the folder (26 Kb each).
These get overwritten in a loop (circular buffer).
When I stream in VLC:
Approx. 1 s delay; it streams fine for ~0.5 s, stops for a little less than that, then repeats.
What I think is happening is that the file it's trying to transfer gets overwritten, which causes it to time out.
Over the Android app:
Less delay and shorter 'timeouts', but still there.
I want to write a Python script to try to get a continuous image; a rough sketch of what I mean is below. The files are small enough that they should fit in a single UDP packet (I think... 65 KB-ish, right?).
Is there anything I could change in terms of the wifi settings on my laptop to improve it too? I.e., somehow dedicate it to that?
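For what it's worth, here is the sort of polling loop I have in mind, over plain HTTP for now before trying anything UDP-based (the camera address and playlist name are guesses on my part):

import time
import urllib.request

BASE = "http://10.5.5.9:8080/live/"   # guess at the GoPro's streaming address
PLAYLIST = "amba.m3u8"                # guess at the playlist file name

seen = set()
while True:
    playlist = urllib.request.urlopen(BASE + PLAYLIST).read().decode()
    for line in playlist.splitlines():
        if line.endswith(".ts") and line not in seen:
            data = urllib.request.urlopen(BASE + line).read()
            with open(line.split("/")[-1], "wb") as f:   # grab it before it is overwritten
                f.write(data)
            seen.add(line)
    # the camera reuses its 16 file names, so real code would track the
    # playlist's #EXT-X-MEDIA-SEQUENCE value instead of bare names
    time.sleep(0.25)   # poll faster than the circular buffer turns over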
Thanks,
Stephen
I've been working on creating a GoPro API recently for Node.js and found the device very glitchy too. It's much more stable after installing the latest GoPro firmware (3.0.0).
As for streaming, I couldn't get around the wifi latency and went for a record-and-copy approach.

Long-running Python script

I have an application with the following parts:
client -> nginx -> uwsgi (Python)
Some Python scripts can run for a long time (2-6 minutes). After the script finishes, I need to return the content to the client, but the connection breaks with a "504 Gateway Timeout" error. What can I use in my case to avoid this error?
So, is your goal to reduce the run time of the scripts, or to keep them from timing out? Browsers are going to give up on a 6-minute request no matter what you try.
Perhaps try doing the work on the server, and then polling for progress with AJAX requests?
Or, if possible, try optimizing the scripts. For example, if you have some horribly slow SQL stuff going on, try cleaning that up.
Otherwise, without more information, a more specific answer is hard to give.
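To make the polling idea concrete, here is a minimal sketch using Flask purely for illustration; your stack is uwsgi, so the details will differ, and long_task() is a stand-in for the slow script:

import threading
import time
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
jobs = {}   # job id -> state; a real app would use a database or task queue

def long_task(job_id):
    time.sleep(180)          # stand-in for the 2-6 minute script
    jobs[job_id] = "done"

@app.route("/start", methods=["POST"])
def start():
    job_id = str(uuid.uuid4())
    jobs[job_id] = "running"
    threading.Thread(target=long_task, args=(job_id,)).start()
    return jsonify(id=job_id), 202     # respond immediately, before the work is done

@app.route("/status/<job_id>")
def status(job_id):
    # the client polls this endpoint with AJAX until the state is "done"
    return jsonify(state=jobs.get(job_id, "unknown"))

The browser gets an instant 202 response with a job id and then polls /status/<id> every few seconds, so no single request lives long enough to hit the 504.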
I once set up a system where the "main page" contained an iframe which showed the output of the long-running program as text/plain. I think the handler for the iframe content was a Python CGI script which emitted all the headers and then the program output line by line, under an Apache server.
I don't know whether this would work under your configuration.
This heavily depends on your server setup (i.e. how easy it is to push data back to the client), but is it possible, while running your lengthy application, to periodically send some "null" content (e.g. plain newlines, assuming your output is HTML) so that the browser thinks this is just a slow connection and not a stalled one?
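Along those lines, here is a rough WSGI sketch, with run_long_job() as a made-up stand-in for the slow work; note that nginx's response buffering (uwsgi_buffering) would also have to be off for the padding to reach the browser as it is sent:

import threading
import time

def run_long_job():
    time.sleep(180)              # stand-in for the 2-6 minute script
    return "<p>done</p>"

def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html")])
    result = []
    worker = threading.Thread(target=lambda: result.append(run_long_job()))
    worker.start()
    while worker.is_alive():
        worker.join(timeout=10)  # wait a bit, then emit keep-alive padding
        yield b"\n"              # harmless in HTML; resets proxy/browser timeouts
    yield (result[0] if result else "<p>job failed</p>").encode()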
