Why is Spark's show() function very slow?

I have
df.select("*").filter(df.itemid==itemid).show()
and that never terminated, however if I do
print df.select("*").filter(df.itemid==itemid)
It prints in less than a second. Why is this?

That's because select and filter are lazy transformations: they only build up the execution plan and don't touch the data. When you call show, which is an action, Spark actually executes those instructions. If it isn't terminating, I'd review the logs to see if there are any errors or connection issues. Or maybe the result is simply too large; try taking only 5 rows to see if that comes back quickly.
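For instance, passing a row count to show keeps Spark from materializing more rows than needed. A minimal sketch using the DataFrame from the question:

# show(5) is still an action, but Spark only pulls enough partitions to produce 5 rows
df.select("*").filter(df.itemid == itemid).show(5)

# printing the DataFrame object never touches the data; it only displays
# the schema, which is why the question's print returns instantly
print(df.select("*").filter(df.itemid == itemid))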

This usually happens if you don't have enough available memory on your machine. Free up some memory and try again.

Related

"IOStream.flush timed out" errors when multithreading

I am new to Python programming and having a problem with a multithreaded program (using the "threading" module) that runs fine at the beginning, but after a while starts repeatedly printing "IOStream.flush timed out" errors.
I am not even sure how to debug such an error, because I don't know what line is causing it. I read a bit about this error and saw that it might be related to memory consumption, so I tried profiling my program using a memory profiler on the Spyder IDE. Nothing jumped out at me, however (although I admit that I am not sure what to look for when it comes to Python memory leaks).
A few more observations:
I have an outer loop that runs my function over a large number of files. The files are just number data with the same formatting (they are quite large though and there is download latency, which is why I have made my application multithreaded so that each thread works on different files). If I have a long list of files, the problem occurs. If I shorten the list, the program concludes without problem. I am not sure why that is, although if it is some kind of memory leak then I would assume when I run the program longer, the problem grows until it reaches some kind of memory limit.
Normally, I use 128 threads in the program. If I reduce the number of threads to 48 or less, the program works fine and completes correctly. So clearly the problem is caused by multithreading (I'm using the "threading" module). This makes it a bit trickier to debug and figure out what is causing the problem. It seems something around 64 threads starts causing problems.
The program never explicitly crashes out. Once it gets to the point where it has this error, it just keeps repeatedly printing "IOStream.flush timed out". I have to close the Spyder IDE to stop it (Restart kernel doesn't work).
Right before this error happens, the program appears to stall. At least no more "prints" happen to the console (the various threads are all printing debug information to the screen). The last lines printed are standard debugging/status print statements that usually work when the number of threads is reduced or the number of files to process is decreased.
I have no idea how to debug this and get to the bottom of the problem. Any suggestions on how to get to the bottom of this would be much appreciated. Thanks in advance!
Specs:
Python 3.8.8
Spyder 4.2.5
Windows 10
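Since the failures only appear above roughly 64 threads, one common mitigation is to cap concurrency with a worker pool instead of spawning one thread per file. A minimal sketch, where process_file and the file names are placeholders for the question's download-and-parse work:

from concurrent.futures import ThreadPoolExecutor

def process_file(path):
    # placeholder for downloading and parsing one data file
    print(f"processing {path}")

file_list = ["data_001.txt", "data_002.txt", "data_003.txt"]  # placeholder names

# 48 workers is the count the question reports as reliable; the pool
# reuses them across the whole file list instead of creating 128 threads
with ThreadPoolExecutor(max_workers=48) as pool:
    list(pool.map(process_file, file_list))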

Implementing a dead man's switch for running code

I have some Python code that's a bit buggy. It runs in an infinite loop and I expect it to print something about once every 50 milliseconds, but sometimes it hangs and stops printing or even outright segfaults. Unfortunately, the problem seems to be fairly rare, so I've had trouble nailing down the exact cause of the issue. I want to have the code up and running while I debug the problem, so while I try to figure it out, I'd like to create a dead man's switch that runs my code, but stops if the code doesn't print anything in a certain time frame (say, 5 seconds) or exits and finally executes a command to notify me that something went wrong (e.g. 'spd-say terminated').
I put this into the terminal for my first attempt at this:
python alert.py; spd-say terminated;
Unfortunately, it didn't seem to work - at this point I realized that the code was not only crashing but also hanging (and I'm also not sure whether this would even work if the code crashes). Unfortunately, I'm not very familiar with bash yet (I assume that's what I'm using when I run stuff in the terminal), and I'm not sure how I could set up something like what I want. I'm also open to using other things besides bash to do this if it would be particularly difficult to do for some reason.
What would be the best way to implement what I want to do?
You could run two Python programs with a pipe between them. On one side, your buggy script writes something to the pipe at least every 5 seconds. On the receiving end, a very simple watchdog script checks how long it has been since it last received anything; if that time exceeds 5 seconds, it runs your notification command.
This way you decouple your watchdog from your buggy script.
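A minimal sketch of that watchdog in Python, assuming a Unix-like system (select on pipes does not work on Windows); alert.py and the spd-say call are taken from the question:

import select
import subprocess
import sys

TIMEOUT = 5  # seconds of silence before declaring the script dead

# -u disables output buffering so prints reach the pipe immediately
proc = subprocess.Popen([sys.executable, "-u", "alert.py"],
                        stdout=subprocess.PIPE)

while True:
    # wait up to TIMEOUT seconds for the script to print something
    ready, _, _ = select.select([proc.stdout], [], [], TIMEOUT)
    if ready:
        line = proc.stdout.readline()
        if line:  # output arrived in time; pass it through and keep watching
            sys.stdout.buffer.write(line)
            sys.stdout.flush()
            continue
    # timeout expired, or stdout closed because the script crashed or exited
    proc.kill()
    subprocess.run(["spd-say", "terminated"])
    break

Because the watchdog is a separate process, it keeps working whether the monitored script hangs, exits cleanly, or segfaults.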

Debug a Python program which seems paused for no reason

I am writing a Python program to analyze log files. So basically I have about 30000 medium-size log files and my Python script is designed to perform some simple (line-by-line) analysis of each log file. Roughly it takes less than 5 seconds to process one file.
So once I set up the processing, I just left it there, and when I came back about 14 hours later, my Python script had simply paused right after analyzing one log file; it seems it never wrote the analysis output for that file to the file system, and that's it. No further progress.
I checked the memory usage, and it seems fine (less than 1 GB). I also tried writing to the file system (a touch test), and that works as normal. So my question is: how should I proceed to debug this issue? Could anyone share some thoughts? I hope this is not too general. Thanks.
You may use the standard library's trace module (trace or track Python statement execution) and/or pdb (The Python Debugger).
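For example, trace can be run from the command line; it prints each line as it executes (very slow, but the last line printed shows exactly where the script stopped). The script name here is a placeholder:

python -m trace --trace analyze_logs.py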
Try this tool https://github.com/khamidou/lptrace with the command:
sudo python lptrace -p <process_id>
It will print every Python function your program invokes and may help you see where your program is stuck or caught in an infinite loop.
If it does not output anything, your program is probably stuck, so try
pstack <process_id>
to check the stack trace and find out where it is stuck. The output of pstack is C frames, but you can likely still find something useful there to solve your problem.
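An alternative that needs no external tools is the standard library's faulthandler module: enable it at the top of the script before the long run, and it periodically dumps every thread's stack, so the final dump pinpoints where the hang happened. A minimal sketch:

import faulthandler
import sys

# dump every thread's stack to stderr every 60 seconds; if the script
# hangs, the last dump shows the exact line it is stuck on
faulthandler.dump_traceback_later(60, repeat=True, file=sys.stderr)

# ... the log-processing loop goes here ...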

Is it possible to see what a Python process is doing?

I created a Python script that grabs some info from various websites. Is it possible to analyze how long it takes to download the data and how long it takes to write it to a file?
I am interested in knowing how much it could improve by running it on a better PC (it is currently running on a crappy old laptop).
If you just want to know how long a process takes to run, the time command is pretty handy. Just run time <command> and it will report how long it took, broken down into a few categories: wall-clock time, system/kernel time, and user-space time. This won't tell you which parts of the program are taking up the time; you can look at a profiler if you want/need that kind of information.
That said, as Barmar said, if you aren't doing much processing of the sites you are grabbing, the laptop is probably not going to be a limiting factor.
You can always store the system time in a variable before a block of code you want to test, store it again afterwards, and then compare the two.
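A minimal sketch of that idea with time.perf_counter; download_data and write_results are placeholders for the scraping and file-writing parts of your script:

import time

def download_data():
    ...  # placeholder: fetch the info from the websites

def write_results():
    ...  # placeholder: write the data to a file

start = time.perf_counter()
download_data()
downloaded = time.perf_counter()
write_results()
finished = time.perf_counter()

print(f"download took {downloaded - start:.2f}s")
print(f"writing took {finished - downloaded:.2f}s")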

How do I load up python in the shell from Vim, but not use it right away?

I'm doing TDD, but the system I'm working with takes 6 seconds to get through boilerplate code. This code is not part of my work, nor of my tests (it's Autodesk Maya's headless/batch/CLI Python mode). I've talked to support, and there's no way around the load time, so I thought maybe I could load and initialize Python first in the background, as I code, and then my mapping would simply run the nosetests inside of that when I'm ready. My tests take something like 0.01 seconds, so this should feel instantaneous, which would really help the red/green/refactor cycle.
In short, instead of firing off /path/to/mayapy /path/to/runtests.py /current/buffer/path, Vim would just fire up /path/to/mayapy with the boilerplate stuff from runtests.py, then somehow hold onto that running instance. When I hit my mapping, it would send into that running instance the call to nosetest with the current buffer's path (and then fire up another instance to hold onto while waiting for the next run). How do I hold onto the running instance and call into it later? I'm even considering having a chain of 2 or 3, for the times when I make minor mistakes and rerun 2 seconds later.
Vim-ipython, the excellent work of Paul Ivanov, is an interface between vim and ipython sessions (demo video). This may relieve you of some of the boilerplate of sending buffers to python and waiting on results.
I'm not entirely sure this is exactly what you want, but with a bit of Python and Vim glue code it may be a good step in the right direction. I'm guessing you'd need to do a bit of experimentation to get a workflow you're happy with.
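If vim-ipython doesn't fit, the underlying idea can be sketched with a named pipe: a long-running mayapy process pays the 6-second startup once, then runs nosetests each time Vim writes a buffer path into the pipe. A rough sketch under Unix assumptions; the FIFO path and file names are hypothetical:

# warm_runner.py, started once with: /path/to/mayapy warm_runner.py
import os
import nose

FIFO = "/tmp/test_requests"  # hypothetical path
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

# ... Maya's slow boilerplate/initialization from runtests.py runs here, once ...

while True:
    with open(FIFO) as f:  # blocks until Vim writes a path
        for path in f:
            path = path.strip()
            if path:
                # run just the tests in the buffer Vim sent over
                nose.run(argv=["nosetests", path])

From Vim, a mapping could then trigger a run with something like :!echo % > /tmp/test_requests.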
