I have a question about the Python interpreter. How does it treat the same script running 100 times, for example with different sys.argv entries? Does it create a different memory space for each run, or does it do something different?
The system is Linux, CentOS 6.5. Are there any operational limits that can be observed and tuned?
You won't have any problem with what you're trying to do. You can call the same script in parallel many times with different input arguments (sys.argv entries). Each run starts a separate interpreter process, so a new memory space is allocated for every run.
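For illustration, a minimal sketch of launching the same script many times with different arguments (the script name worker.py is just a placeholder):

import subprocess

# Each Popen starts a separate Python interpreter process with its own memory space.
# "worker.py" is a placeholder; launching all 100 at once is only for illustration.
procs = [subprocess.Popen(["python3", "worker.py", str(i)]) for i in range(100)]

# Wait for every run to finish.
for p in procs:
    p.wait()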
How can I limit memory usage for a Python script via command line?
For context, I'm implementing a code judge, so I need to run every script that students submit. I was able to do this for Java with the following command:
java -Xmx<memoryLimit> Main
So far, no luck with Python. Any ideas?
PS: I'm using Python 3.8
Thank you.
You can use ulimit on Linux systems. (Within Python, there's also resource.setrlimit() to limit the current process.)
Something like this (sorry, my Bash is rusty) should be a decent enough wrapper:
#!/bin/bash
ulimit -m 10240   # resident-set limit in kilobytes (note: ignored by modern Linux kernels; ulimit -v caps virtual memory instead)
exec python3 "$@"
Then run e.g. that-wrapper.sh student-script.py.
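If you'd rather do it from Python instead of a shell wrapper, here is a minimal sketch that uses resource.setrlimit() to cap the child's address space before it runs the submitted script (the 256 MB value and the script name are only examples):

import resource
import subprocess

LIMIT_BYTES = 256 * 1024 * 1024  # example cap: 256 MB of address space

def set_memory_limit():
    # Runs in the child process just before exec, so only the student's
    # script is limited, not the judge itself.
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, LIMIT_BYTES))

subprocess.call(["python3", "student-script.py"], preexec_fn=set_memory_limit)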
(That said, are you sure you can trust your students not to submit something that uploads your secret SSH keys and/or trashes your file system? I'd suggest a stronger sandbox such as running everything in a Docker container.)
Not sure why you want/need that. In contrast to Java, Python is quite good at handling memory: it has reference counting plus a garbage collector and is fairly efficient in its memory use. In my 10+ years of Python programming, I have never had to limit memory in Python. However, if you really need it, check out this thread: Limit RAM usage to python program. Someone seems to have posted a solution.
You usually limit memory at the OS level, not in Python itself. You could also use Docker to achieve all of that.
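For the Docker route, the idea is roughly a one-line run with a memory cap (image tag, mount, and script name are just examples):
docker run --rm --memory=256m -v "$PWD":/work -w /work python:3.8 python3 student-script.py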
I have scheduled a Python web scraper to run every day at a specified time. This puts a load on Spyder's memory and, after a while, results in a system crash. Is there a way to solve this issue?
I had the same problem when using the Spyder IDE for a process I was running.
Things like deleting variables and using gc.collect() didn't work to free memory within a loop for my script on Ubuntu 20.04.
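For illustration, the kind of in-loop cleanup that did not help looked roughly like this (the scraping calls and names are placeholders):

import gc

for url in urls:          # placeholder list of pages to scrape
    data = scrape(url)    # placeholder for the actual scraping call
    process(data)         # placeholder for whatever is done with the result
    del data              # drop the reference explicitly
    gc.collect()          # force a collection pass; this did not free memory inside Spyder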
The way I got around memory crashes in long loops inside Spyder was to run the script from a terminal instead of Spyder, using python my_script.py. This worked for me, and my loops and long processes aren't crashing my memory anymore. Good luck!
I run a complex Python program that is computationally demanding.
While it is complex in terms of number of lines of code, the code itself is simple: it is not multi-threaded, not multi-process, and does not use any "external" library, with the exception of colorama, installed via pip.
The program does not require a big amount of memory.
When I run it and monitor it via htop, one (out of the eight) CPUs is used 100% by the script, and around 1.16 GB (out of 62.8 GB) of memory is used (this number remains more or less steady).
After a while (10 to 20 minutes) of running the script, my Dell desktop running Ubuntu 16.04 systematically freezes. I can move the mouse, but clicks do not work, the keyboard is unresponsive, and running programs (e.g. htop) freeze. I can only (hard) reboot. Note that the last frame displayed by htop does not show anything unexpected (e.g. no higher memory usage).
I never experience such freezes when not running the python program.
I do nothing special in parallel with running the script, aside from browsing with Firefox or dealing with mail in Thunderbird (i.e. nothing that would use CPU or RAM in a significant fashion).
I have added trace prints in my Python code: the freeze never happens at the same point.
I also print kernel logs in another terminal: nothing special is printed at the time of the freeze.
I do not use any IDE, and run the script directly from a terminal.
Searching for similar issues, it seems they are usually related to overuse of memory, which does not seem to be my case.
I have no idea how to investigate this issue.
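If it helps, a check I could add inside the script to confirm its memory stays flat would look roughly like this (a sketch; on Linux, ru_maxrss is reported in kilobytes):

import resource
import time

def log_peak_rss():
    # Peak resident set size of this process so far, in kilobytes on Linux.
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(time.strftime("%H:%M:%S"), "peak RSS:", peak_kb, "KB", flush=True)

# call log_peak_rss() periodically from the program's main loop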
I have written a Python (version 3) script that runs 24/7. The way I run the script on my Windows machine is the following: I right-click on the .py file, then click "Edit with IDLE" and then "Run". The script itself has no issues, but, because of the many lines printed in the Python shell (I use a logger), after a couple of days the shell gets very heavy. My newbie question is: is there a way to limit the number of rows temporarily kept in the Python shell to a specific number? Or perhaps somebody has a better suggestion for running this constantly running script that prints a lot of its steps to the shell? Please note that I'm not asking how to run a script 24/7; it's my understanding that the best way to do that is through a VPS. My problem is that the data shown in the Python shell gets bigger and bigger every day, so I only wonder how to limit the data temporarily displayed/kept in it. Thanks
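One option would be to point the logger at a rotating file instead of the shell, so the IDLE window never accumulates output; a minimal sketch (file name, sizes, and logger name are just examples):

import logging
from logging.handlers import RotatingFileHandler

# Write log records to run.log, capped at about 5 MB with 3 old files kept,
# instead of printing them into the IDLE shell window.
logger = logging.getLogger("myscript")
handler = RotatingFileHandler("run.log", maxBytes=5_000_000, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("script step done")   # example message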
I am testing C++ code compiled to an exe (0 errors, 0 warnings). The code is a console application. I run the application in the following ways:
a) from the Windows 7 command line: average time 497 sec
b) from a Python script using
subprocess.call()
with an average time of 1201 sec!
Results:
The application runs almost 3 times longer from the Python script than from the command line... Is this significant performance decrease normal?
Are you measuring from the point where subprocess.call() is executed, or from the point where you load the Python script? I would imagine that a large portion of that time comes from waiting for the Python interpreter to load, the subprocess module to load, any other modules you import, etc. If the Python script that calls the program ends up being large, I think this overhead will become insignificant. If it will be short, you may be better off creating a Windows batch (.bat) file to call the program (assuming those still exist in Win7... I haven't used Windows in a while).
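One way to separate the two measurements, as a rough sketch (the executable name is a placeholder), is to time only the subprocess.call() itself:

import subprocess
import time

# Time just the external program, excluding interpreter start-up and module imports.
start = time.perf_counter()
ret = subprocess.call(["my_app.exe"])   # placeholder for the actual console application
elapsed = time.perf_counter() - start
print("exit code:", ret, "elapsed:", round(elapsed, 1), "s")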