In an Android Kivy app I need to run some heavy computations that do not release the GIL. I'd like the app to remain responsive, and I also need to be able to cancel the operation. The obvious choice seems to be running it in a separate process and using shared memory for the cancel flag, but that does not work, since Android does not have /dev/shm. Is there a way to get hold of Android's shared memory from Python?
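For reference, a minimal sketch of the approach the question describes, with a hypothetical heavy_step() standing in for the GIL-holding work; on desktop CPython this pattern works, but on Android it fails because multiprocessing's synchronization primitives rely on /dev/shm under the hood:

    import multiprocessing

    def worker(cancel):
        # heavy_step() is a hypothetical placeholder for one chunk
        # of the GIL-holding computation
        while not cancel.value:
            heavy_step()

    if __name__ == "__main__":
        cancel = multiprocessing.Value("b", 0)  # shared byte used as a cancel flag
        p = multiprocessing.Process(target=worker, args=(cancel,))
        p.start()
        # ... later, e.g. from the UI thread:
        cancel.value = 1
        p.join()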
Related
This is asked in the context of the question What is “runtime”? (https://stackoverflow.com/questions/3900549/what-is-runtime/3900561)
I am trying to understand what a Python runtime would be made of. My guess is:
The python process that contains all runtime variables.
The GIL
The underlying interpreter code (CPython etc.).
Now, if this is right, can we say that multiprocessing in Python creates multiple runtimes, and that a Python process is something we can directly relate to a runtime? (I think this is the right option.)
Or can every Python thread, with its own stack but working under the same GIL and in the same memory space as the parent process, be said to have a separate runtime?
Or, no matter how many threads or processes are running, will it all come under a single runtime?
Simply put, what is the definition of runtime in the context of Python?
PS: I understand the difference between threads and processes. As for the GIL: I understand its impact, but I do not grok it.
You are talking about two different (yet related) concepts in computer science: multiprocessing and multithreading. Here is a compilation of questions and answers that might be useful:
Multiprocessing -- Wikipedia
Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them.
Multithreading -- Wikipedia
In computer architecture, multithreading is the ability of a central processing unit (CPU) (or a single core in a multi-core processor) to provide multiple threads of execution concurrently, supported by the operating system. This approach differs from multiprocessing. In a multithreaded application, the threads share the resources of a single or multiple cores, which include the computing units, the CPU caches, and the translation lookaside buffer (TLB).
What is the difference between a process and a thread? -- StackOverflow
Process
Each process provides the resources needed to execute a program. A process has a virtual address space, executable code, open handles to system objects, a security context, a unique process identifier, environment variables, a priority class, minimum and maximum working set sizes, and at least one thread of execution. Each process is started with a single thread, often called the primary thread, but can create additional threads from any of its threads.
Thread
A thread is an entity within a process that can be scheduled for execution. All threads of a process share its virtual address space and system resources. In addition, each thread maintains exception handlers, a scheduling priority, thread local storage, a unique thread identifier, and a set of structures the system will use to save the thread context until it is scheduled. The thread context includes the thread's set of machine registers, the kernel stack, a thread environment block, and a user stack in the address space of the thread's process. Threads can also have their own security context, which can be used for impersonating clients.
Meaning of “Runtime Environment” and of “Software framework”? -- StackOverflow
A runtime environment basically is a virtual machine that runs on top of a machine - provides machine abstraction. It is generally lower level than a library. A framework can contain a runtime environment, but is generally tied to a library.
Runtime System -- Wikipedia
In computer programming, a runtime system, also called runtime environment, primarily implements portions of an execution model. Most languages have some form of runtime system that provides an environment in which programs run. This environment may address a number of issues including the layout of application memory, how the program accesses variables, mechanisms for passing parameters between procedures, interfacing with the operating system, and otherwise. Typically the runtime system will have some responsibility for setting up and managing the stack and heap, and may include features such as garbage collection, threads or other dynamic features built into the language.
global interpreter lock -- Python Docs
The mechanism used by the CPython interpreter to assure that only one thread executes Python bytecode at a time. This simplifies the CPython implementation by making the object model (including critical built-in types such as dict) implicitly safe against concurrent access. Locking the entire interpreter makes it easier for the interpreter to be multi-threaded, at the expense of much of the parallelism afforded by multi-processor machines.
However, some extension modules, either standard or third-party, are designed so as to release the GIL when doing computationally-intensive tasks such as compression or hashing. Also, the GIL is always released when doing I/O.
Past efforts to create a “free-threaded” interpreter (one which locks shared data at a much finer granularity) have not been successful because performance suffered in the common single-processor case. It is believed that overcoming this performance issue would make the implementation much more complicated and therefore costlier to maintain.
What is the Python Global Interpreter Lock (GIL)? -- Real Python
A useful source for more info on the GIL.
Does python os.fork uses the same python interpreter? -- StackOverflow
Whenever you fork, the entire Python process is duplicated in memory (including the Python interpreter, your code and any libraries, current stack etc.) to create a second process - one reason why forking a process is much more expensive than creating a thread.
This creates a new copy of the Python interpreter.
One advantage of having two Python interpreters running is that you now have two GILs (global interpreter locks), and therefore can have true multi-processing on a multi-core system.
Threads in one process share the same GIL, meaning only one runs at a given moment, giving only the illusion of parallelism.
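A minimal illustration of that duplication (POSIX only):

    import os

    x = [1, 2, 3]
    pid = os.fork()          # duplicates the whole interpreter
    if pid == 0:
        x.append(4)          # mutates only the child's copy
        os._exit(0)
    else:
        os.waitpid(pid, 0)
        print(x)             # still [1, 2, 3] in the parent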
Memory Management -- Python Docs
Memory management in Python involves a private heap containing all Python objects and data structures. The management of this private heap is ensured internally by the Python memory manager. The Python memory manager has different components which deal with various dynamic storage management aspects, like sharing, segmentation, preallocation or caching.
When you spawn a thread via the threading library, you are effectively spawning jobs inside a single Python runtime. This runtime ensures the threads share memory, and it manages the running sequence of these threads via the global interpreter lock:
Understanding the Python GIL -- dabeaz
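As a small illustration of threads sharing one runtime's memory:

    # Two threads mutating the same object: one runtime, shared memory.
    # The GIL serializes bytecode execution, so CPU-bound threads gain
    # no parallelism from this.
    import threading

    counter = {"n": 0}
    lock = threading.Lock()

    def work():
        for _ in range(100_000):
            with lock:              # lock still needed: += is not atomic
                counter["n"] += 1

    threads = [threading.Thread(target=work) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter["n"])  # 400000 -- every thread saw the same dict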
When you spawn a process via the multiprocessing library, you are spawning a new process that contains a new Python interpreter (a new runtime) and runs the designated code. If you want to share memory between the processes, one option is multiprocessing.shared_memory:
multiprocessing.shared_memory -- Python Docs
This module provides a class, SharedMemory, for the allocation and management of shared memory to be accessed by one or more processes on a multicore or symmetric multiprocessor (SMP) machine. To assist with the life-cycle management of shared memory especially across distinct processes, a BaseManager subclass, SharedMemoryManager, is also provided in the multiprocessing.managers module.
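A minimal sketch of the SharedMemory API (Python 3.8+), for reference:

    # A one-byte buffer shared by name between two processes.
    from multiprocessing import Process, shared_memory

    def worker(name):
        shm = shared_memory.SharedMemory(name=name)
        shm.buf[0] = 1          # visible to the parent immediately
        shm.close()

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=1)
        shm.buf[0] = 0
        p = Process(target=worker, args=(shm.name,))
        p.start()
        p.join()
        print(shm.buf[0])       # -> 1
        shm.close()
        shm.unlink()            # free the segment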
Can we say that multiprocessing in Python creates multiple runtimes, and that a Python process is something we can directly relate to a runtime?
Yes. Different GIL, different memory space, different runtime.
Can every Python thread, with its own stack but the same GIL and memory space as the parent process, be said to have a separate runtime?
That depends on what you mean by "stack", but: same GIL, shared memory space, same runtime.
No matter how many threads and processes are running, will it all come under a single runtime?
That depends on whether you are multithreading or multiprocessing: threads all run inside one runtime, while each process brings its own.
Simply put, what is the definition of runtime in the context of Python?
The runtime environment is literally python.exe or /usr/bin/python: the Python executable that interprets your Python code by compiling it to bytecode and executing that bytecode on its virtual machine. When you multithread, you have only one python running; when you multiprocess, you have multiple pythons running.
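You can see this directly by printing process IDs:

    # Threads report the parent's PID (one interpreter); a multiprocessing
    # Process reports a new PID (a second interpreter).
    import os
    import threading
    import multiprocessing

    def report(label):
        print(label, os.getpid())

    if __name__ == "__main__":
        report("main")
        t = threading.Thread(target=report, args=("thread",))
        t.start()
        t.join()                                    # same PID as main
        p = multiprocessing.Process(target=report, args=("process",))
        p.start()
        p.join()                                    # a different PID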
I hope a core dev can come in and speak to this in greater detail. For now, the above is simply a compilation of sources to help you start seeing the bigger picture.
I am running a multithreaded Python (3.3) application which has been compiled using cx_Freeze. I need to monitor the CPU usage, memory usage, thread info, object info, and process status.
I know there is the built-in Python profiler (cProfile), and there are yappi and others, but they don't seem to serve my purpose, because I want to run the profiler within my application. This way I can view the profiler results and take action (e.g. stopping the application whenever CPU usage goes above a certain threshold).
My application is designed to run on Linux as a background process.
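One way to do this from inside the application (a suggestion; the question does not name a library) is psutil; a minimal sketch, assuming it is installed and bundled into the cx_Freeze build:

    import os
    import psutil

    proc = psutil.Process(os.getpid())
    cpu = proc.cpu_percent(interval=1.0)    # % over a 1-second window
    mem = proc.memory_info().rss            # resident set size in bytes
    threads = proc.threads()                # per-thread CPU times
    status = proc.status()                  # e.g. "running", "sleeping"
    if cpu > 90.0:                          # example threshold action
        print("CPU above threshold:", cpu)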
I'm developing a long-running multi-threaded Python application for Windows, and I want the process to know the CPU time that each of its threads has taken. I can get the overall times for the entire process with os.times() but I need to know the per-thread times.
I know that there are external tools such as the Sysinternals Process Explorer, but my program itself needs to have this information. If I were on Linux, I would look in the /proc filesystem, as described here. If I were writing C code, I'd use the GetThreadTimes call, as described here.
So how can I accomplish this on Windows using Python?
win32process.GetThreadTimes
You want the Python for Windows Extensions (pywin32) for doing hairy Windows things.
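A minimal sketch, assuming pywin32 is installed:

    # Per-thread CPU times via pywin32. GetCurrentThread() returns a
    # pseudo-handle for the calling thread.
    import win32api
    import win32process

    times = win32process.GetThreadTimes(win32api.GetCurrentThread())
    print(times["UserTime"], times["KernelTime"])   # 100-nanosecond units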
Or you can simply use yappi (https://code.google.com/p/yappi/). It transparently uses GetThreadTimes() if the CPU clock type is selected for profiling.
See here for an example: https://code.google.com/p/yappi/wiki/YThreadStats_v082
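A minimal usage sketch:

    # yappi with the CPU clock type (backed by GetThreadTimes() on Windows)
    # reports CPU time per thread.
    import threading
    import yappi

    def work():
        sum(i * i for i in range(10**6))

    yappi.set_clock_type("cpu")
    yappi.start()
    t = threading.Thread(target=work)
    t.start()
    t.join()
    yappi.stop()
    yappi.get_thread_stats().print_all()    # one row per thread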
While developing a Django app deployed on Apache with mod_wsgi, I found that with multithreading (Python threads; mod_wsgi processes=1 threads=8) Python won't use all available processors. With the multiprocessing approach (mod_wsgi processes=8 threads=1) all is fine and I can load my machine fully.
So the question: is this Python behavior normal? I doubt it, because using one process with a few threads is the default mod_wsgi approach.
The system is:
2x Intel Xeon 5XXX series (8 cores; 16 with hyperthreading) on FreeBSD 7.2 AMD64, with Python 2.6.4
Thanks, everyone, for the answers.
We all found that this behavior is normal because of the GIL. Here is a good explanation:
http://jessenoller.com/2009/02/01/python-threads-and-the-global-interpreter-lock/
or stackoverflow GIL discussion: What is a global interpreter lock (GIL)?.
Will Python use all processors in thread mode? No.
Python won't use all available processors; is this Python behavior normal? Yes, it's normal because of the GIL.
For a discussion see http://mail.python.org/pipermail/python-3000/2007-May/007414.html.
You may find that having a couple (or four) threads per core/process can still improve performance if there is some blocking; for example, a process waiting for a response from the database would otherwise block other connections.
Will Python use all processors in thread mode? No.
Is this normal? Yes, this is normal. Python makes no effort to use all your cores.
"1 process with a few threads is the default mod_wsgi approach." But that's not optimal or even desirable; it's just a default. Don't read anything into it.
If you want to use all your computer's resources, make the OS handle it. Use processes.
The performance distinction between multi-processing and multi-threading is hard to measure for the most part, and using processes or threads barely matters. It's usually simpler to use processes, since there's trivial OS support for this.
Bottom Line
Use multiple processes, that allows the OS (and Apache) to make as much use as possible of the system.
Threads share a limited set of I/O resources for the Process they're part of, and web page serving is I/O bound. Processes have independent I/O resources and will more easily max out your processor.
There is still hope. The GIL is only an implementation artifact of the CPython implementation that you download from python.org. Jython and IronPython are two other implementations of Python, and they have no GIL, so you may have better threading results with one of them.
Yes. Python is not really multi-threaded. Instead, there is a global lock, and each thread gets to execute a few operations in turn. This makes it much simpler to write MT applications in Python, since there can't be any problems with stale caches, etc.
So one Python process can only ever occupy a single CPU. To fully utilize a multi-core system, you must run several Python processes.
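For example, with the standard-library process pool:

    # Spreading CPU-bound work over several interpreter processes,
    # by default one worker per core.
    from concurrent.futures import ProcessPoolExecutor

    def cpu_bound(n):
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(cpu_bound, [10**6] * 8))
        print(len(results))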
I don't know if it is still the case, but there is a global lock in the Python interpreter which prevents a single interpreter from using all processor resources, even with multiple threads. IIRC, the global lock has to do with I/O.
It seems you are seeing the effect of this lock, so, personally, I would use multiple processes with a single thread each.
Even though Python and Ruby have one kernel thread per interpreter thread, they have a global interpreter lock (GIL) that is used to protect potentially shared data structures, and this inhibits multi-processor execution. Even though the portions of those languages that are written in C or C++ can be free-threaded, that's not possible with pure interpreted code unless you use multiple processes. What's the best way to achieve this? Using FastCGI? Creating a cluster or a farm of virtualized servers? Using their Java equivalents, JRuby and Jython?
I'm not totally sure which problem you want to solve, but if you deploy your Python/Django application via an Apache prefork MPM using mod_python, Apache will start several worker processes for handling different requests.
If one request needs so many resources that you want to use multiple cores, have a look at pyprocessing. But I don't think that would be wise.
The 'standard' way to do this with Rails is to run a "pack" of Mongrel instances (i.e. four copies of the Rails application) and then use Apache or nginx or some other piece of software to sit in front of them and act as a load balancer.
This is probably how it's done with other Ruby frameworks such as Merb, but I haven't used those personally.
The OS will take care of running each Mongrel on its own CPU.
If you install mod_rails, a.k.a. Phusion Passenger, it will start and stop multiple copies of the Rails process for you as well, so it will end up spreading the load across multiple CPUs/cores in a similar way.
Use an interface that runs each response in a separate interpreter, such as mod_wsgi for Python. This lets multi-threading be used without encountering the GIL.
EDIT: Apparently, mod_wsgi no longer supports multiple interpreters per process, because too many extension modules were never written to work correctly across multiple interpreters. It still supports running requests in separate processes, FastCGI-style, though, so that's apparently the current accepted solution.
In Python and Ruby, the only way to use multiple cores is to spawn new (heavyweight) processes.
The Java counterparts inherit the capabilities of the Java platform: you can simply use Java threads. That is, for example, one reason why Java application servers like GlassFish are sometimes (often) used for Ruby on Rails applications.
For Python, the PyProcessing project allows you to program with processes much as you would with threads. It is included in the standard library of the recently released 2.6 version as multiprocessing. The module has many features for establishing and controlling access to shared data structures (queues, pipes, etc.) and support for common idioms (e.g. managers and worker pools).
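A minimal sketch of the worker-pool idiom the module provides:

    # The worker-pool idiom from multiprocessing (the module the
    # PyProcessing project became in Python 2.6).
    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            print(pool.map(square, range(10)))   # [0, 1, 4, ..., 81]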