I have read through several tutorials about concurrency in Python, and I also know the difference between concurrency and parallelism, but I am still a little bit confused about the definition of concurrency.
Many people define concurrency as executing multiple tasks at the same time. I am wondering what tasks are in Python. Are they functions? Can I say that concurrency in Python is executing multiple functions at the same time?
Many people define concurrency as executing multiple tasks at the same time
The tasks here are not defined from the computer's point of view, but from ours. As long as we can confirm that things are not being served strictly in order (no one is blocking another), we can say they are happening concurrently.
Can I say concurrency in Python is executing multiple functions at the same time?
There are plenty of ways to support concurrency in Python. Executing multiple functions at the same time (via multiple threads or multiple processes) is absolutely one of them (strictly speaking, that is parallelism), but it is not the only one.
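For instance, a single-threaded event loop can run several coroutines concurrently with no threads or processes involved. A minimal sketch (the `fetch` coroutine and its delays are made up for illustration):

```python
import asyncio

async def fetch(name, delay):
    # Simulates an I/O-bound task; await yields control to the event loop,
    # so other coroutines can make progress in the meantime.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Both coroutines are "in progress" at the same time on a single thread:
    # this takes about 2 seconds, not about 3.
    results = await asyncio.gather(fetch("a", 1), fetch("b", 2))
    print(results)

asyncio.run(main())
```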
Related
So I have a quite time-intensive Python program, and I was wondering, since my CPU is multi-core, whether I can run the program on multiple threads at once. I always check Task Manager, and Python uses only one thread but pushes it to the max.
I tried searching, but I only found ways to run a function with different datasets on different threads, so I haven't tried anything yet. I hope you can help!
Multi-threading won't help you.
But Python's "multiprocessing" might - however, parallelization is not automatic, and you have to adapt your program, knowing what you are doing, in order to have any gains with it.
Python's multi-threading is capped so that only a single thread runs actual Python code at once - you have some gains if parts of your workload are spent on I/O, but not with a CPU-intensive task.
Multiprocessing is a module in Python's standard library which provides much the same interface as `threading` and will actually run your code in parallel, in multiple processes, each one with its own Python runtime. Its major drawback is that any data exchanged between processes has to be serialized and deserialized, and that adds some overhead.
In either case, you have to write your program so that certain functions (which can be entry points for big workloads) run in new threads or subprocesses. Since you posted no example code, there is no example we could create for you showing how your code could look - but look for tutorials on "Python multiprocessing"; those should help you out.
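As a starting point, here is a minimal sketch of that kind of adaptation, assuming a hypothetical CPU-bound function `heavy_work` standing in for one of your entry points:

```python
from multiprocessing import Pool

def heavy_work(n):
    # Stand-in for a CPU-intensive task; replace with your real workload.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # The __main__ guard matters: with the "spawn" start method (the default
    # on Windows and macOS), child processes re-import this module.
    inputs = [10_000_000] * 4
    with Pool() as pool:  # defaults to one worker per CPU core
        # Arguments and results are pickled between processes,
        # which is the serialization overhead mentioned above.
        results = pool.map(heavy_work, inputs)
    print(results)
```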
As far as we know, it is bad to start too many threads: it may significantly decrease performance and increase memory usage. However, I can't find anywhere whether the situation is the same if we call too many async functions.
As far as I know, asyncio is a kind of abstraction for parallel computing, and it may use or may not use actual threading.
In my project, multiple asynchronous tasks are run, and each such task (currently this is done using threading) may start other threads. It is a risky situation. I'm thinking of two ways to solve the issue of too many threads. The first is to limit the number of 'software' threads to the number of 'hardware' threads. The other is to use asyncio. Is the second option reasonable in such a case?
As far as I know, asyncio is a kind of abstraction for parallel computing and it may use or may not use actual threading.
Please do not confuse parallelism with asynchrony. In Python, you can achieve parallelism only by using multiprocessing.
In my project, multiple asynchronous tasks are run, and each such task may start other threads.
All asynchronous tasks run in one event loop and use only one thread.
I'm thinking of two ways to solve the issue of too many threads. The first is to limit the number of 'software' threads to the number of 'hardware' threads. The other is to use asyncio. Is the second option reasonable in such a case?
In this answer I have demonstrated situations where we can use async functions. It mainly depends on the operations you perform. If your application works with threading and does not need multiprocessing, it can be converted to asynchronous tasks.
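For instance, if each task mostly waits on I/O, a sketch of the asyncio version could look like this (`handle_item` and the delay are placeholders for your actual operations):

```python
import asyncio

async def handle_item(i):
    # Placeholder for an I/O-bound operation (network call, disk read, ...).
    await asyncio.sleep(0.1)
    return i * 2

async def main():
    # Hundreds of tasks like these are cheap: they are plain Python objects
    # scheduled on one event loop in one thread, not OS threads.
    tasks = [asyncio.create_task(handle_item(i)) for i in range(100)]
    results = await asyncio.gather(*tasks)
    print(len(results))

asyncio.run(main())
```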
I notice that when I run my heavily CPU-dependent Python programs, they only use a single core. Is it possible to assign multiple cores to a program when I run it?
You have to program explicitly for multiple cores. See the Symmetric Multiprocessing options on this page for the many parallel-processing solutions in Python. Parallel Python is a good choice if you can't be bothered to compare the options; look at the examples here.
Some problems can't take advantage of multiple cores though. Think about how you could possibly run up the stairs faster with the help of three friends. Not going to happen!
I wonder why nobody has mentioned CPython's GIL (Global Interpreter Lock) yet. It basically means that multiple threads inside one Python interpreter cannot use the power of multiple cores, because many operations are protected by a global lock in order to be thread-safe. This only applies to a small number of applications - the CPU-bound ones. For more info, just search for the term "GIL"; there are already many questions on it (like that one, for example).
This answer of course assumes that you are in fact using multiple threads, or else you won't be able to use multiple cores anyway (multiprocessing would be another possibility).
If any part of your problem can be run in parallel, you should look into the multiprocessing module.
Does the presence of the Python GIL imply that in Python multithreading, the same operation is not so different from repeating it in a single thread?
For example, if I need to upload two files, what is the advantage of doing it in two threads instead of uploading them one after another?
I tried a big math operation both ways, but they seem to take almost equal time to complete.
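Roughly, what I tried looks like this (a minimal sketch; `count` is just a stand-in for the actual math operation):

```python
import threading
import time

def count(n):
    # Stand-in for the CPU-bound math operation.
    while n > 0:
        n -= 1

N = 10_000_000

start = time.perf_counter()
count(N)
count(N)  # one after another
print("sequential:", time.perf_counter() - start)

start = time.perf_counter()
t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
t1.start()
t2.start()
t1.join()
t2.join()
print("threaded:", time.perf_counter() - start)
```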
This seems unclear to me. Can someone help me with this?
Thanks.
Python's threads get a slightly worse rap than they deserve. There are three (well, 2.5) cases where they actually get you benefits:
If non-Python code (e.g. a C library, the kernel, etc.) is running, other Python threads can continue executing. It's only pure Python code that can't run in two threads at once. So if you're doing disk or network I/O, threads can indeed buy you something, as most of the time is spent outside of Python itself (see the sketch after these cases).
The GIL is not actually part of Python; it's an implementation detail of CPython (the "reference" implementation that the core Python devs work on, and that you usually get if you just run "python" on your Linux box or something).
Jython, IronPython, and any other reimplementations of Python generally do not have a GIL, and multiple pure-Python threads can execute simultaneously.
The 0.5 case: even if you're entirely pure-Python and see little or no performance benefit from threading, some problems are much more convenient, in terms of developer time and difficulty, to solve with threads. This depends in part on the developer, too, of course.
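To illustrate the first case above, here is a minimal sketch of overlapping two downloads with threads. The URLs are placeholders; the point is that blocking socket reads happen outside pure Python code and release the GIL, so the waits overlap:

```python
import threading
import urllib.request

def download(url):
    # While the socket read blocks, the GIL is released, so the
    # other thread (and its download) can run in the meantime.
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    print(url, len(data), "bytes")

# Placeholder URLs; substitute the files you actually want to transfer.
urls = ["https://example.com/a", "https://example.com/b"]
threads = [threading.Thread(target=download, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()
```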
It really depends on the library you're using. The GIL is meant to prevent Python objects and the interpreter's internal data structures from being changed at the same time. If you're doing an upload, the library you use to do the actual upload might release the GIL while it's waiting for the actual HTTP request to complete (I would assume that is the case with the HTTP modules in the standard library, but I didn't check).
As a side note, if you really want to have things running in parallel, just use multiple processes. It will save you a lot of trouble and you'll end up with better code (more robust, more scalable, and most probably better structured).
It depends on the native code module that's executing. Native modules can release the GIL and then go off and do their own thing, allowing another thread to lock the GIL. The GIL is normally held while code, both Python and native, is operating on Python objects. If you want more detail, you'll probably need to go and read quite a bit about it. :)
See:
What is a global interpreter lock (GIL)? and Thread State and the Global Interpreter Lock
Multithreading is a concept where two or more tasks need to be completed simultaneously. For example, in a word processor there are a number of parallel tasks that have to work: listening to the keyboard, formatting the input text, sending the formatted text to the display unit. In this context, sequential processing is time-consuming, as one task has to wait for another to complete. So we put these tasks in threads and complete them simultaneously. The three threads are always up, waiting for input to arrive, then take that input and produce the output simultaneously.
So multi-threading works faster if we have multiple cores or processors. But in reality, on a single processor, threads run one after the other, yet we feel they are executing with greater speed. Actually, only one instruction executes at a time, but a processor can execute billions of instructions per second, so the computer creates the illusion that multiple tasks or threads are working in parallel. It is just an illusion.
Let's assume we have some task that can be divided into independent subtasks, and we want to process these subtasks in parallel on the same machine.
I read about multithreading and ran into this post, which describes the Global Interpreter Lock. Since I do not fully understand how processes are handled under the hood, I have to ask:
Putting aside the gains of threading: is multithreading (in my case, in Python) effectively the same as calling a script multiple times?
I hope this question does not lead too far, and that its answer is understandable for someone whose knowledge of what happens at the low levels of a computer is sparse. Thanks for any enlightenment in this matter.
Is multithreading (in my case, in Python) effectively the same as calling a script multiple times?
In a word, no.
Due to the GIL, in Python it is far easier to achieve true parallelism by using multiple processes than it is by using multiple threads. Calling the script multiple times (presumably with different arguments) is an example of using multiple processes. The multiprocessing module is another way to achieve parallelism by using multiple processes. Both are likely to give better performance than using threads.
If I were you, I'd probably consider multiprocessing as the first choice for distributing work across cores.
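As a concrete starting point, here is a minimal sketch using `concurrent.futures` (a thin wrapper around multiprocessing), with `subtask` standing in for one of your independent subtasks:

```python
from concurrent.futures import ProcessPoolExecutor

def subtask(x):
    # Stand-in for one independent, CPU-bound subtask.
    return x * x

if __name__ == "__main__":
    # One worker process per core by default; each worker has its own
    # interpreter, so the GIL does not serialize the work.
    with ProcessPoolExecutor() as executor:
        results = list(executor.map(subtask, range(8)))
    print(results)
```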
It is not the same thing: one is multithreading, while the other opens a separate process for each run.
Here is a short explanation, taken from here:
It is important to first define the differences between processes and threads. Threads are different than processes in that they share state, memory, and resources. This simple difference is both a strength and a weakness for threads. On one hand, threads are lightweight and easy to communicate with, but on the other hand, they bring up a whole host of problems including deadlocks, race conditions, and sheer complexity. Fortunately, due to both the GIL and the queuing module, threading in Python is much less complex to implement than in other languages.
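To illustrate that last point, here is a small sketch of the thread-plus-queue pattern the quote alludes to (the per-item work is a placeholder):

```python
import queue
import threading

q = queue.Queue()

def worker():
    while True:
        item = q.get()       # blocks until an item is available
        if item is None:     # sentinel value: time to shut down
            q.task_done()
            break
        print("processed", item)  # placeholder for real work on the item
        q.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

for i in range(5):
    q.put(i)
for _ in threads:
    q.put(None)              # one sentinel per worker thread
q.join()                     # wait until every queued item is marked done
for t in threads:
    t.join()
```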