How to "own" a processor? - python

I have a somewhat different problem to solve with threads in Python.
I have a Python application that talks to very high-speed hardware through a polling interface. I would like the Python process to "own" one of the CPUs, that is, I don't want the Python application to share its CPU with any other process.
What I am currently seeing is that my application gets put "on hold" for tens of milliseconds at a time while it is servicing hardware that executes commands in tens of microseconds, thus the hardware goes idle while my Python application is put on hold.
I am running under Windows 10. Is there a system call of some kind that lets me own the CPU?
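Nothing in user space gives truly exclusive ownership of a core, but the usual starting point is CPU affinity plus process priority. A minimal sketch on Linux (on Windows 10 the equivalents are the SetProcessAffinityMask / SetPriorityClass Win32 calls, which the third-party psutil package wraps as Process.cpu_affinity() and Process.nice()):

```python
import os

# Sketch: pin the current process to a single CPU.
# os.sched_setaffinity is Linux-only; the first argument 0 means
# "the current process".
cpu = min(os.sched_getaffinity(0))   # pick one CPU we are allowed to use
os.sched_setaffinity(0, {cpu})       # pin this process to that CPU
assert os.sched_getaffinity(0) == {cpu}
```

Note that pinning only keeps your process on that CPU; it does not keep other processes off it. To approximate exclusive ownership you also need to raise the process priority (REALTIME_PRIORITY_CLASS on Windows), and even then the OS can still preempt you for kernel work.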

Related

What is runtime in context of Python? What does it consist of?

In the context of this question, What is "runtime"? (https://stackoverflow.com/questions/3900549/what-is-runtime/3900561),
I am trying to understand what a Python runtime would be made of. My guess is:
The python process that contains all runtime variables.
The GIL
The underlying interpreter code (CPython etc.).
Now if this is right, can we say that multiprocessing in Python creates multiple runtimes, and that a Python process is something we can directly relate to a runtime? (I think this is the right option)
Or can every Python thread, with its own stack but working on the same GIL and memory space as the parent process, be said to have a separate runtime?
Or does it not matter how many threads or processes are running, and it all comes under a single runtime?
Simply put, what is the definition of runtime in the context of Python?
PS: I understand the difference between threads and processes. GIL: I understand the impacts but I do not grok it.
You are talking about two different (yet similar) concepts in computer science: multiprocessing and multithreading. Here is a compilation of questions/answers that might be useful:
Multiprocessing -- Wikipedia
Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them.
Multithreading -- Wikipedia
In computer architecture, multithreading is the ability of a central processing unit (CPU) (or a single core in a multi-core processor) to provide multiple threads of execution concurrently, supported by the operating system. This approach differs from multiprocessing. In a multithreaded application, the threads share the resources of a single or multiple cores, which include the computing units, the CPU caches, and the translation lookaside buffer (TLB).
What is the difference between a process and a thread? -- StackOverflow
Process
Each process provides the resources needed to execute a program. A process has a virtual address space, executable code, open handles to system objects, a security context, a unique process identifier, environment variables, a priority class, minimum and maximum working set sizes, and at least one thread of execution. Each process is started with a single thread, often called the primary thread, but can create additional threads from any of its threads.
Thread
A thread is an entity within a process that can be scheduled for execution. All threads of a process share its virtual address space and system resources. In addition, each thread maintains exception handlers, a scheduling priority, thread local storage, a unique thread identifier, and a set of structures the system will use to save the thread context until it is scheduled. The thread context includes the thread's set of machine registers, the kernel stack, a thread environment block, and a user stack in the address space of the thread's process. Threads can also have their own security context, which can be used for impersonating clients.
Meaning of “Runtime Environment” and of “Software framework”? -- StackOverflow
A runtime environment basically is a virtual machine that runs on top of a machine - provides machine abstraction. It is generally lower level than a library. A framework can contain a runtime environment, but is generally tied to a library.
Runtime System -- Wikipedia
In computer programming, a runtime system, also called runtime environment, primarily implements portions of an execution model. Most languages have some form of runtime system that provides an environment in which programs run. This environment may address a number of issues including the layout of application memory, how the program accesses variables, mechanisms for passing parameters between procedures, interfacing with the operating system, and otherwise. Typically the runtime system will have some responsibility for setting up and managing the stack and heap, and may include features such as garbage collection, threads or other dynamic features built into the language.
global interpreter lock -- Python Docs
The mechanism used by the CPython interpreter to assure that only one thread executes Python bytecode at a time. This simplifies the CPython implementation by making the object model (including critical built-in types such as dict) implicitly safe against concurrent access. Locking the entire interpreter makes it easier for the interpreter to be multi-threaded, at the expense of much of the parallelism afforded by multi-processor machines.
However, some extension modules, either standard or third-party, are designed so as to release the GIL when doing computationally-intensive tasks such as compression or hashing. Also, the GIL is always released when doing I/O.
Past efforts to create a “free-threaded” interpreter (one which locks shared data at a much finer granularity) have not been successful because performance suffered in the common single-processor case. It is believed that overcoming this performance issue would make the implementation much more complicated and therefore costlier to maintain.
What is the Python Global Interpreter Lock (GIL)? -- Real Python
Useful source for more info on GIL.
Does python os.fork uses the same python interpreter? -- StackOverflow
Whenever you fork, the entire Python process is duplicated in memory (including the Python interpreter, your code and any libraries, current stack etc.) to create a second process - one reason why forking a process is much more expensive than creating a thread.
This creates a new copy of the python interpreter.
One advantage of having two python interpreters running is that you now have two GIL's (Global Interpreter Locks), and therefore can have true multi-processing on a multi-core system.
Threads in one process share the same GIL, meaning only one runs at a given moment, giving only the illusion of parallelism.
Memory Management -- Python Docs
Memory management in Python involves a private heap containing all Python objects and data structures. The management of this private heap is ensured internally by the Python memory manager. The Python memory manager has different components which deal with various dynamic storage management aspects, like sharing, segmentation, preallocation or caching.
When you spawn a thread via the threading library, you are effectively spawning jobs inside a single Python runtime. This runtime ensures the threads have a shared memory and manages the running sequence of these threads via the global interpreter lock:
Understanding the Python GIL -- dabeaz
When you spawn a process via the multiprocessing library, you are spawning a new process that contains a new Python interpreter (a new runtime) that runs the designated code. If you want to share memory you have to use multiprocessing.shared_memory:
multiprocessing.shared_memory -- Python Docs
This module provides a class, SharedMemory, for the allocation and management of shared memory to be accessed by one or more processes on a multicore or symmetric multiprocessor (SMP) machine. To assist with the life-cycle management of shared memory especially across distinct processes, a BaseManager subclass, SharedMemoryManager, is also provided in the multiprocessing.managers module.
Can we say that multiprocessing in python creates multiple runtimes and a python process is something we can directly relate to the runtime?
Yes. Different GIL, different memory space, different runtime.
Can every Python thread, with its own stack but the same GIL and memory space as the parent process, be said to have a separate runtime?
Depends on what you mean by "stack". Same GIL, shared memory space, same runtime.
Does it not matter how many threads or processes are running, and it all comes under a single runtime?
Depends on whether you are multithreading or multiprocessing.
Simply put, what is the definition of runtime in the context of Python?
The runtime environment is literally python.exe or /usr/bin/python. It's the Python executable that interprets your Python code by compiling it to bytecode and executing that bytecode in its virtual machine. When you multithread, you only have one Python running. When you multiprocess, you have multiple Pythons running.
I hope that a core dev can come in and speak to this in greater detail. For now, the above is simply a compilation of sources to help you start seeing the bigger picture.

Run Python ZMQ Workers simultaneously

I am pretty new to Python and to distributed systems.
I am using the ZeroMQ Ventilator-Worker-Sink configuration:
Ventilator - Worker - Sink
Everything is working fine at the moment; my problem is that I need a lot of workers, and every worker does the same work.
At the moment every worker lives in its own Python file and has its own output console.
Whenever the program changes, I have to change (or copy) the code in every file.
The next problem is that I have to start/run every file, so it is quite annoying to start 12 files.
What are the best solutions here? Threads, processes?
I should say that the goal is to run every worker on a different Raspberry Pi.
This appears to be more of a dev/ops problem. You have your worker code, which is presumably a single codebase, on multiple distributed machines or instances. You make a change to that codebase and you need the resulting code to be distributed to each instance, and then the process restarted.
To start, you should at minimum be using a source control system, like Git. With such a system you could at least go to each instance and pull the most recent commit and restart. Beyond that, you could set up a system like Ansible to go out and run those actions on each instance initiated from a single command.
There's a whole host of other tools, strategies and services that will help you do those things in a myriad of different ways. Using Docker to create a single worker container and then distribute and run that container on your various instances is probably one of the more popular ways to do what you're after, but it'll require a more fundamental change to your infrastructure.
Hope this helps.

Need to setup profiler for a multithreaded python application

I am running a multithreaded Python (3.3) application which has been compiled using cx_Freeze. I need to monitor CPU usage, memory usage, thread info, object info, and process status.
I know there is the built-in Python profiler (cProfile), and then there are yappi and others, but they don't seem to serve my purpose because I want to run the profiler within my application. This way I will be able to view the profiler results and take the necessary action (e.g. stopping the application whenever CPU usage goes above a certain threshold).
My application is designed to run on Linux as a background process.
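A starting point using only the standard library (the resource module is Unix-only, which matches the Linux deployment described; richer metrics such as per-process CPU percent typically come from the third-party psutil package):

```python
import gc
import resource
import threading

def snapshot():
    """Collect a few in-process metrics with the standard library only.

    A sketch: for per-core CPU percent, process status, and the like,
    psutil is the usual choice.
    """
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "cpu_user_s": usage.ru_utime,      # CPU time spent in user mode
        "cpu_system_s": usage.ru_stime,    # CPU time spent in kernel mode
        "max_rss_kb": usage.ru_maxrss,     # peak resident set size (KiB on Linux)
        "threads": [t.name for t in threading.enumerate()],
        "tracked_objects": len(gc.get_objects()),
    }

snap = snapshot()
```

Calling snapshot() periodically from a dedicated monitoring thread lets the application act on its own metrics, e.g. shut down when max_rss_kb or CPU time crosses a threshold.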

Python multithreading, How is it using multiple Cores?

I am running a multithreaded application (Python 2.7.3) on an Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz. I thought it would be using only one core, but using the "top" command I see that the Python processes are constantly changing core numbers. Enabling "SHOW THREADS" in top shows different threads working on different cores.
Can anyone please explain this? It is bothering me, as I know from theory that multithreading executes on a single core.
First off, multithreading means the inverse, namely that multiple cores are being utilized (via threading) at the same time. CPython is indeed crippled when it comes to this, though whenever you call into C code (this includes parts of the standard library, but also extension modules like Numpy) the lock which prevents concurrent execution of Python code may be unlocked. You can still have multiple threads, they just won't be interpreting Python at the same time (instead, they'll take turns quite frequently). You also speak of "Python processes" -- are you confusing terminology, or is this "multithreaded" Python application in fact multiprocessing? Of course multiple Python processes can run concurrently.
However, from your wording I suspect another source of confusion. Even a single thread can run on multiple cores... just not at the same time. It is up to the operating system which thread is running on which CPU, and the OS scheduler does not necessarily re-assign a thread to the same CPU where it used to run before it was suspended (it's beneficial, as David Schwartz notes in the comments, but not vital). That is, it's perfectly normal for a single thread/process to jump from CPU to CPU.
Threads are designed to take advantage of multiple cores when they are available. If you only have one core, they'll run on one core too. :-)
There's nothing to be concerned about, what you observe is "working as intended".
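The thread rows that top's SHOW THREADS mode displays can be seen from Python itself; each Python thread is a real kernel thread that the scheduler may place on any core. A small sketch (threading.get_native_id requires Python 3.8+):

```python
import os
import threading

# Collect the native (OS-level) thread ids; these are the ids that
# "top" with SHOW THREADS lists, one per kernel thread.
native_ids = []

def work():
    native_ids.append(threading.get_native_id())

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Four distinct kernel threads, all inside one process (one pid).
distinct = len(set(native_ids))
```

Because each of these is a genuine OS thread, the scheduler is free to run each one, at different times, on whichever core it likes; the GIL only limits how many of them interpret Python bytecode simultaneously.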

Is anyone using zeromq to coordinate multiple Python interpreters in the same process?

I love Python's global interpreter lock because it makes the underlying C code simple.
But it means that each Python interpreter main loop is restricted to one thread at a time.
This is bad because recently the number of cores per processor chip has been doubling frequently.
One of the supposed advantages to zeromq is that it makes multi-threaded programming "easy" or easier.
Is it possible to launch multiple Python interpreters in the same process and have them communicate only using in-process zeromq with no other shared state? Has anyone tried it? Does it work well? Please comment and/or provide links.
I don't know of any way to create multiple instances of the Python interpreter within a single process, but I do have experience with splitting multiple instances across multiple processes and communicating with zmq.
I've been using multiprocessing to implement an island-model architecture for global optimization, with zmq for managing communication between the islands. Each island is its own process with its own Python interpreter, created and managed by the master archipelago process.
Using multiprocessing allows you to launch as many independent Python interpreters as you wish, but they all reside in their own processes with a separate memory space. I believe the OS scheduler takes care of assigning processes to cores and sharing CPU time. The separate memory space is the hardest part, because it means you have to explicitly communicate. To communicate between processes, the objects/data you wish to send must be serializable, because zmq sends byte-strings.
The nice thing about zmq is that it's a piece of cake to scale across systems distributed over a network, and it's pretty lightweight. You can create just about any communication pattern you wish, using REP/REQ, PUB/SUB, or whatever.
But no, it's not as easy as just spinning up a few threads from the threading module.
Edit: Also, here's a Stack Overflow question similar to yours. Inside are some more relevant links which indicate that it may be possible to run multiple Python interpreters within a single process, but it doesn't look simple. Multiple independent embedded Python Interpreters on multiple operating system threads invoked from C/C++ program
