When I run any Python script, even one containing no code or imports that could access the internet in any way, two pythonw.exe processes pop up in my Resource Monitor under network activity. One of them is always sending more than receiving, while the other shows the same activity with the send/receive amounts reversed. The amount of overall activity depends on the file size, regardless of how many lines are commented out: even a blank .py file creates network activity of about 200 kB/s, and a file with 10,000 lines peaks as high as 15,000 kB/s. The activity drops from its peak to around zero after about 20 seconds, and then the processes quit on their own. The actual script finishes running long before the network activity stops.
Because the activity depends on file size, I suspect that every time I run a Python script the whole file is being transmitted to a server somewhere else in the world.
Is this something that could be built into Python, a virus that's infecting my computer, or just something Python is supposed to do and it's innocent activity?
Even if you don't have an answer, it would be great if you could check whether this activity affects your own installation of Python. Thanks!
EDIT:
Peter Wood: to start the process, just run any Python script from the editor; it runs on its own, at least for me. I'm on 2.7.8.
Robert B, I think you may be right, but why would the communication continue after the script has finished running?
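For anyone who wants to check their own machine, here is a minimal sketch using the third-party psutil package (my own suggestion, not part of the original setup) that lists the open network connections of every pythonw.exe process; run it while a script is executing to see what those processes are actually talking to:

import psutil

# Walk all processes and report network connections for any pythonw.exe.
for proc in psutil.process_iter(["pid", "name"]):
    if proc.info["name"] == "pythonw.exe":
        try:
            for conn in proc.connections(kind="inet"):
                print(proc.info["pid"], conn.laddr, conn.raddr, conn.status)
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            print(proc.info["pid"], "could not inspect")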
I am running a multi-process (and multi-threaded) Python script on Debian Linux. One of the processes repeatedly crashes after 5 or 6 days. It is always the process handling the same, unique workload that crashes. There are no entries in syslog about the crash; the process simply disappears silently. It behaves completely normally and produces normal results, then suddenly stops.
How can I instrument the rogue process? Increasing the log level would produce large amounts of logs, so that's not my preferred option.
I used good old-fashioned log analysis to determine what happens when the process fails:
- increased the log level of the rogue process to INFO after 4 days
- monitored the application for the rogue process failing
- pinpointed the time of the failure in syslog
- analysed syslog at that time
I found the following error at that time; the first row is the last entry made by the rogue process (just before it fails), and the second row points to the underlying error:
Aug 10 08:30:13 rpi6 python[16293]: 2021-08-10T08:30:13.045 WARNING w1m::pid 16325, tid 16415, taking reading from sensors with map {'000005ccbe8a': ['t-top'], '000005cc8eba': ['t-mid'], '00000676e5c3': ['t
Aug 10 08:30:14 rpi6 python[16293]: Too many open files (bundled/zeromq/src/ipc_listener.cpp:327)
In this case the problem is with the pyzmq bindings or the zeromq library. I'll open a ticket with them.
Hope this helps someone in the future.
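If someone hits a similar "Too many open files" failure, a little targeted instrumentation can catch the descriptor leak before the limit is hit. A minimal sketch (my own addition, Linux-only since it reads /proc) that logs the process's open file-descriptor count once a minute:

import logging
import os
import threading
import time

logging.basicConfig(level=logging.INFO)

def monitor_fds(interval=60):
    # /proc/self/fd holds one symlink per descriptor open in this process
    while True:
        logging.info("open file descriptors: %d",
                     len(os.listdir("/proc/self/fd")))
        time.sleep(interval)

# daemon=True so the monitor thread never keeps the process alive
threading.Thread(target=monitor_fds, daemon=True).start()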
I'm running a Python script remotely from a task machine, and it creates a process that is supposed to run for 3 hours. However, it terminates prematurely at exactly 2 hours. I don't believe it is a problem with the code, because I log to a log file after the while loop ends, and the log file never shows the loop exiting successfully. Is there a specific setting on the machine that I need to look into that's interrupting my Python process?
Is this perhaps a Scheduled Task? If so, have you checked the task's properties?
On my Windows 7 machine under the "Settings" tab is a checkbox for "Stop the task if it runs longer than:" with a box where you can specify the duration.
One of the suggested durations on my machine is "2 hours."
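If you would rather check from code than from the GUI, here is a hedged sketch (schtasks.exe ships with Windows; "MyTask" is a placeholder for your actual task name) that dumps the task's verbose settings, which include any execution time limit:

import subprocess

# Query the task's full settings in list form; look through the output
# for the "Stop the task if it runs longer than" limit.
result = subprocess.run(
    ["schtasks", "/query", "/tn", "MyTask", "/v", "/fo", "LIST"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)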
I have a long-running script at work (Windows, unfortunately) that I programmed to print the current analysis results when I press Ctrl-C. However, I was curious whether, after pressing Ctrl-C, I could start the script running again where it left off.
This is actually 3 questions:
- Is it possible to do this without any programming changes? E.g. I accidentally hit Ctrl-C and want to retroactively restart it where it left off.
- Can I use a command like Ctrl-Z (only on Mac, I believe) on Windows, and program the script to print results when I issue it?
- What is the best programmatic way of automatically finishing the line I am on (in a massive .txt file of data) when I issue an interrupt command, storing that line number (in a file, maybe), and restarting the program on the next line at the next execution?
Thanks!
(FYI: I'm a novice Pythoner and my script currently takes about 10 minutes to process 1 million lines. Files I use in the future will often have 100+ million lines.)
The short answer to your first question is no. Ctrl-C raises a KeyboardInterrupt in the interpreter, which unwinds the stack, prints a traceback, and halts. You can't recover from an untrapped Ctrl-C for the same reason that you can't recover from any other untrapped exception. What you are asking for is a quick way to put Humpty Dumpty back together again.
You can restart a chess game from any point simply by laying out the pieces according to a picture you made before abandoning the game. But you can't easily do that with a program. The problem is that knowing the line number where the program stopped is not nearly enough information to recreate the state of the program at the time: the values of all the variables, the state of the stack, how much of the input it had read, and so forth. In other words, the picture is complicated, and laying out the pieces accurately is hard.
If your program is writing to the Windows console, you can suspend its output by pressing Ctrl-S and resume it by pressing Ctrl-Q. These control characters are holdovers from the days of Teletype machines, but modern terminal emulators still obey them. This is a quick way to do what you want without program changes. Unsophisticated, but maybe good enough to begin with.
And your program will probably run a lot faster if it writes its output to a file, for later examination in a text editor, rather than writing directly to the Windows console.
A full-on solution to your problem is something that I hesitate to recommend to a novice. The idea is to split calculation and display into two processes. The calculation process does its thing and feeds its results line by line to the display process. The display process listens to the calculation process and puts the results that it gets on the screen, but can also accept pause and resume commands. What happens while it is in the paused state is a design decision. You can decide either that the calculation process should block (easier option) or that it should buffer its results until the display process is ready to accept them again (harder option).
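As for your third question, here is a minimal checkpointing sketch, assuming each input line can be processed independently (process_line, data.txt and checkpoint.txt are placeholder names, not anything from your actual script):

import os

CHECKPOINT = "checkpoint.txt"

def process_line(line):
    pass  # placeholder: the real per-line analysis goes here

# Read the resume point left behind by a previous interrupted run, if any.
start = 0
if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT) as f:
        start = int(f.read())

lineno = start
try:
    with open("data.txt") as data:
        for lineno, line in enumerate(data):
            if lineno < start:
                continue  # already processed on a previous run
            process_line(line)
except KeyboardInterrupt:
    # Record where we were; the interrupted line is re-run on resume, so
    # process_line() should be safe to repeat for a given line.
    with open(CHECKPOINT, "w") as f:
        f.write(str(lineno))
    raise

On a successful, uninterrupted run you would also want to delete the checkpoint file at the end, so the next run starts from line zero.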
I am having an issue with a Python script running on a Raspberry Pi. The script initially runs perfectly fine, and then after a certain period of time (typically more than an hour) the computer either freezes or shuts down. I am not sure whether this is a software or a hardware issue. The only clue I have so far is the following error message, which appeared one time when the computer froze:
[9798.371860] Unable to handle kernel paging request at virtual address e50b405c
How should this message be interpreted? What would be a good way to keep debugging the code? Any help is appreciated, since I am fairly new to programming and have run out of ideas on how to troubleshoot this issue.
Here is also some background on what the Python code is meant to do (not sure if it makes a difference, though). In short, every other second it reads the temperature from a sensor, creates and saves a JSON file, sends this JSON object through cURL (urllib) to a web API, receives a new JSON file, changes switches based on the data in that file, sleeps for 2 seconds, and repeats.
Thanks!
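For reference, the loop described reduces to something like the sketch below (my own reconstruction, assuming Python 3's urllib.request; read_sensor(), set_switches() and the URL are placeholders). One detail worth checking in the real code: a response that is never closed, in a loop that runs for hours, is a classic slow resource leak on a small machine like the Pi.

import json
import time
import urllib.request

API_URL = "http://example.com/api"  # placeholder

def read_sensor():
    return 20.0  # stub: replace with the real temperature read

def set_switches(reply):
    pass  # stub: replace with the real switching logic

while True:
    payload = json.dumps({"temperature": read_sensor()}).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:  # closed even on error paths
        reply = json.load(resp)
    set_switches(reply)
    time.sleep(2)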
This is similar to a few questions on the internet, but this code seems to work for a while instead of returning an error instantly, which suggests to me that it may not just be a host-file error.
I am running code that spawns multiple MPI processes and then enters a loop: it sends some data to the spawned processes with bcast and scatter, then gathers data back from them. This runs the algorithm and saves the data. It then disconnects from the spawned comm and creates another set of spawns on the next loop iteration. This works for a few minutes, then after around 300 files it spits this out:
[T7810:10898] [[50329,0],0] ORTE_ERROR_LOG: Not found in file ../../../../../orte/mca/plm/base/plm_base_launch_support.c at line 758
--------------------------------------------------------------------------
mpirun was unable to start the specified application as it encountered an error.
More information may be available above.
I am testing this on a local machine (single node), whereas the final deployment will have multiple nodes that each spawn their own MPI processes within the node. I am trying to figure out whether this is an issue with testing on my local machine that will work fine on the HPC, or a more serious error.
How can I debug this? Is there a way to print out what MPI is trying to do as it runs, or to monitor MPI, such as a verbose mode?
Since mpi4py is so close to MPI (logically, if not in terms of lines of code), one way to debug this is to write the C version of your program and see if the problem persists. When you report this bug to Open MPI, they are going to want a small C test case anyway.
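To help produce that test case, here is a hedged reconstruction in mpi4py of the spawn/disconnect cycle described in the question (worker.py, the payloads, and the counts are placeholders, not the asker's actual code); once it reproduces the error, it maps almost line for line onto C:

import sys
from mpi4py import MPI

NWORKERS = 4

for i in range(1000):  # the failure reportedly appears after ~300 cycles
    comm = MPI.COMM_SELF.Spawn(sys.executable,
                               args=["worker.py"], maxprocs=NWORKERS)
    comm.bcast({"iteration": i}, root=MPI.ROOT)   # broadcast work description
    comm.scatter([[j] for j in range(NWORKERS)],  # one chunk per worker
                 root=MPI.ROOT)
    results = comm.gather(None, root=MPI.ROOT)    # collect worker results
    comm.Disconnect()                             # tear down before respawning

# worker.py (the spawned side) would look roughly like:
#   from mpi4py import MPI
#   comm = MPI.Comm.Get_parent()
#   work = comm.bcast(None, root=0)
#   chunk = comm.scatter(None, root=0)
#   comm.gather(chunk, root=0)   # send results back
#   comm.Disconnect()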