I'm having some trouble conceptualizing what the big deal is with greenlets. I understand how the ability to switch between running functions in the same process could open the door to a world of possibilities; but I haven't come across any examples of how they solve problems standard Python techniques cannot (other than the nested-functions-in-generators problem--which, honestly..."meh").
Take this example from greenlet's main page, which is basically a more complex way of doing this:
def test0():
    print(12)
    print(56)
    print(34)
I know it's just a superfluous example, but that seems to be the long and the short of what greenlets can do. Unless you are that much of a control freak that you have to be the one who decides when, where, and how every line of code in your application is executed, how is test0 improved by using greenlets? Or take the GUI example (which is what interested me in greenlets in the first place): it shouldn't be hard to come up with a strategy that doesn't require the while loop in process_commands, no?
I've seen some of the cool things that can be done with greenlets, but only in conjunction with some other dark sorcery implemented in another package (e.g., Stackless, gevent). Even with those, the greenlets alone aren't sufficient; those packages have to subclass them.
My question:
What are some real-world examples of how one can use greenlets, by themselves, to enhance the functionality of Python? I suspect the answer lies in networking--which would probably be why I don't understand. But are there any others?
Note that your example has explicitly woven all the prints together into one function. In a real program, you don't just have two functions; you have some arbitrary number of functions, some of them even from third-party libraries you don't control, and rewriting all that code to interleave all the statements is not quite so simple.
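For contrast, here is (approximately) the greenlet version being referenced, in which the two functions stay separate and explicitly hand control to each other; the interleaved output (12, 56, 34) matches test0:

from greenlet import greenlet

def test1():
    print(12)
    gr2.switch()   # hand control to test2
    print(34)

def test2():
    print(56)
    gr1.switch()   # hand control back to test1
    print(78)      # never reached: nothing switches back here

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()       # prints 12, 56, 34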
GUIs are actually an excellent example: by letting the event loop (which is the way you handle commands in practice, btw) suspend itself when there are no events to read, your GUI can remain interactive on the same thread. If the event loop had to actually stop and wait for the user to press a key, your GUI would freeze, because nothing would be telling the OS to redraw the window.
Not that I'm a huge fan of gevent in particular; I'm placing my bets on the stdlib asyncio library. :) But it's all the same idea really: when you have some work to do that involves a lot of waiting, let other code run in the meantime.
Essentially any problem where you don't want to block the rest of the application while waiting for something to "come back at you" (e.g. sleep, socket). Or in other words, any problem where event-driven development would make things easier.
Networking as you mentioned.
GUI.
Simulations/games where you might have 1000s of actors that you want to act somewhat independently (a toy sketch follows this list).
Gluing synchronous with asynchronous libraries/frameworks.
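As a toy sketch of the actors case mentioned above, here is a round-robin scheduler using nothing but raw greenlets; the names and the tiny actor count are illustrative only:

from greenlet import greenlet

def actor(name):
    for tick in range(2):
        print("%s acting on tick %d" % (name, tick))
        hub.switch()                  # hand control back to the scheduler

def schedule(n):
    actors = [greenlet(actor) for _ in range(n)]
    for i, g in enumerate(actors):
        g.switch("actor-%d" % i)      # the first switch starts the greenlet
    while any(not g.dead for g in actors):
        for g in actors:
            if not g.dead:
                g.switch()            # resume each live actor in turn

hub = greenlet(schedule)
hub.switch(3)                         # run three actors cooperatively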
I have an existing program that has its own main loop, and does computations based on input it receives - let's say from the user, to make it simple. I want to now do the computations remotely instead of locally, and I decided to implement the RPCs in Twisted.
Ideally I just want to change one of my functions, say doComputation(), to make a call to twisted to perform the RPC, get the results, and return. The rest of the program should stay the same. How can I accomplish this, though? Twisted hijacks the main loop when I call reactor.run(). I also read that you don't really have threads in twisted, that all the tasks run in sequence, so it seems I can't just create a LoopingCall and run my main loop that way.
You have a couple of different options, depending on what sort of main loop your existing program has.
If it's a mainloop from a GUI library, Twisted may already have support for it. In that case, you can just go ahead and use it.
You could also write your own reactor. There isn't a lot of great documentation for this, but you can look at the way that qtreactor implements a reactor plugin externally to Twisted.
You can also write a minimal reactor using threadedselectreactor. The documentation for this is also sparse, but the wxpython reactor is implemented using it. Personally I wouldn't recommend this approach as it is difficult to test and may result in confusing race conditions, but it does have the advantage of letting you leverage almost all of Twisted's default networking code with only a thin layer of wrapping.
If you are really sure that you don't want your doComputation to be asynchronous, and you want your program to block while waiting for Twisted to answer, do the following:
start Twisted in another thread before your main loop starts up, with something like twistedThread = Thread(target=reactor.run); twistedThread.start()
instantiate an object to do your RPC communication (let's say, RPCDoer) in your own main loop's thread, so that you have a reference to it. Make sure to actually kick off its Twisted logic with reactor.callFromThread so you don't need to wrap all of its Twisted API calls.
Implement RPCDoer.doRPC to return a Deferred, using only Twisted API calls (i.e. don't call into your existing application code, so you don't need to worry about thread safety for your application objects; pass doRPC all the information that it needs as arguments).
You can now implement doComputation like this:
from twisted.internet.threads import blockingCallFromThread

def doComputation(self):
    rpcResult = blockingCallFromThread(reactor, self.myRPCDoer.doRPC)
    return self.computeSomethingFrom(rpcResult)
Remember to call reactor.callFromThread(reactor.stop); twistedThread.join() from your main-loop's shutdown procedure, otherwise you may see some confusing tracebacks or log messages on exit.
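Putting those steps together, a rough, minimal sketch; RPCDoer's body is a placeholder here, where a real one would issue the actual RPC and return its Deferred:

from threading import Thread

from twisted.internet import defer, reactor
from twisted.internet.threads import blockingCallFromThread

class RPCDoer(object):
    def doRPC(self):
        # Runs in the reactor thread; must return a Deferred.
        return defer.succeed(42)    # stand-in for the real network call

# Run the reactor in a background thread; signal handlers can only be
# installed from the main thread, hence installSignalHandlers=False.
twistedThread = Thread(target=reactor.run,
                       kwargs={"installSignalHandlers": False})
twistedThread.start()

myRPCDoer = RPCDoer()

def doComputation():
    # Blocks this (non-reactor) thread until the Deferred fires.
    rpcResult = blockingCallFromThread(reactor, myRPCDoer.doRPC)
    return rpcResult * 2            # stand-in for computeSomethingFrom

print(doComputation())

# Shutdown, as described above.
reactor.callFromThread(reactor.stop)
twistedThread.join()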
Finally, one option that you should really consider, especially in the long term: dump your existing main loop, and figure out a way to just use Twisted's. In my experience this is the right answer for 9 out of 10 askers of questions like this. I'm not saying that this is always the way to go - there are plenty of cases where you really need to keep your own main loop, or where it's just way too much effort to get rid of the existing loop. But, maintaining your own loop is work too. Keep in mind that the Twisted loop has been extensively tested by millions of users and used in a huge variety of environments. If your loop is also extremely mature, that may not be a big deal, but if you're writing a small, new program, the difference in reliability may be significant.
It seems like the correct and very simple answer here is a LoopingCall:
http://www.saltycrane.com/blog/2008/10/running-functions-periodically-using-twisteds-loopingcall/
from datetime import datetime
from twisted.internet.task import LoopingCall
from twisted.internet import reactor
def doComputation():
    print("Custom fn run at", datetime.now())
lc = LoopingCall(doComputation)
lc.start(0.1) # run your own loop 10 times a second
# put your other twisted here
reactor.run()
I'm just getting my head around Twisted, threading, Stackless, etc., and would appreciate some high-level advice.
Suppose I have remote clients 1 and 2, connected via a websocket running in a page on their browsers. Here is the ideal goal:
guess = {}
for cl in (1, 2):
    guess[cl] = show(cl, choice("Pick a number:", range(1, 11)))
checkpoint()
if guess[1] == guess[2]:
    show((1, 2), display("You picked the same number!"))
Ignoring the mechanics of show, choice and display, the point is that I want the show call to be asynchronous. Each client gets shown the choice. The code waits at checkpoint() for all the threads (or whatever) to rejoin.
I would be interested in hearing answers even if they involve hairy things like rewriting the source code. I'd also be interested in less hairy answers which involve compromising a bit on the syntax.
The simplest solution code-wise is to use a framework like Autobahn, which supports remote procedure calls (RPC). That means you can call some JavaScript in the browser and wait for the result.
If you want to call two clients, you will have to use threads.
You can also do it manually. The approach works along these lines:
You need to pass a callback to show().
show() needs to register the callback with some kind of string ID in a global dict
show() must send this ID to the client
When the client sends the answer, it must include the ID.
The Python handler can then remove the callback from the global dict and invoke it with the answer
The callback needs to collect the results.
When it has enough results (two in your case), it must send status updates to the client.
You can simplify the code using yield, but the theory behind it is a bit complex to understand: What does the "yield" keyword do in Python? and coroutines
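A hedged sketch of the callback-registry approach just described; send_to_client and the message format are hypothetical stand-ins for your actual websocket plumbing:

import uuid

pending = {}    # request ID -> callback waiting for that client's answer

def send_to_client(client, payload):
    print("-> client %s: %s" % (client, payload))   # stand-in for a websocket send

def show(client, question, callback):
    request_id = uuid.uuid4().hex
    pending[request_id] = callback        # register the callback under an ID
    send_to_client(client, {"id": request_id, "question": question})

def on_client_answer(message):
    # Call this from your websocket handler when an answer arrives.
    callback = pending.pop(message["id"])
    callback(message["answer"])

def make_collector(expected, on_done):
    # Collects answers; fires on_done once `expected` answers have arrived.
    results = []
    def collect(answer):
        results.append(answer)
        if len(results) == expected:
            on_done(results)
    return collect

done = make_collector(2, lambda guesses: print("both answered:", guesses))
for cl in (1, 2):
    show(cl, "Pick a number:", done)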
In Python, the most widely-used approach to async/event-based network programming that hides that model from the programmer is probably gevent.
Beware: this kind of trickery works by making tasks yield control implicitly, which encourages the same sorts of surprising bugs that tend to appear when OS threads are involved. Local reasoning about such problems is significantly harder than with explicit yielding, and the convenience of avoiding callbacks might not be worth the trouble introduced by the inherent pitfalls. Perhaps just as important to a library author like yourself: this approach is not pure Python, and would force dependencies and interpreter restrictions on the users of your library.
A lot of discussion about this topic sprouted up (especially between the gevent and twisted camps) while Guido was working on the asyncio library, which was called tulip at the time. He summarized the main issues here.
There are many questions related to Stackless Python. But none answering this question of mine, I think (correct me if I'm wrong - please!). There's some buzz about it all the time, so I'm curious to know: what would I use Stackless for? How is it better than CPython?

Yes, it has green threads (stackless) that allow you to quickly create many lightweight threads as long as no operations are blocking (something like Ruby's threads?). What is this great for? What other features does it have that I'd want to use over CPython?
It allows you to work with massive amounts of concurrency. Nobody sane would create one hundred thousand system threads, but you can do this using stackless.
This article tests doing just that, creating one hundred thousand tasklets in both Python and Google Go (a new programming language): http://dalkescientific.com/writings/diary/archive/2009/11/15/100000_tasklets.html
Surprisingly, even though Google Go is compiled to native code and touts its coroutine implementation, Python still wins.
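For a flavour of what that benchmark-style code looks like, a minimal sketch; note this requires the Stackless interpreter, since the stackless module is not part of CPython:

import stackless   # only available under Stackless Python

def tiny_task(n):
    stackless.schedule()          # cooperatively yield to the other tasklets
    # a real task would do its work here

for i in range(100000):
    stackless.tasklet(tiny_task)(i)   # creating 100,000 of these is cheap

stackless.run()                   # run the round-robin scheduler to completion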
Stackless would be good for implementing a map/reduce algorithm, where you can have a very large number of reducers depending on your input data.
Stackless Python's main benefit is the support for very lightweight coroutines. CPython doesn't support coroutines natively (although I expect someone to post a generator-based hack in the comments) so Stackless is a clear improvement on CPython when you have a problem that benefits from coroutines.
I think the main area where they excel is when you have many concurrent tasks running within your program. Examples might be game entities that run a looping script for their AI, or a web server that is servicing many clients with pages that are slow to create.
You still have many of the typical concurrency-correctness problems regarding shared data, however, but the deterministic task switching makes it easier to write safe code, since you know exactly where control will be transferred and therefore exactly when the shared state must be up to date.
Thirler already mentioned that Stackless was used in Eve Online. Keep in mind that:
(..) stackless adds a further twist to this by allowing tasks to be separated into smaller tasks, Tasklets, which can then be split off the main program to execute on their own. This can be used for fire-and-forget tasks, like sending off an email, or dispatching an event, or for IO operations, e.g. sending and receiving network packets. One tasklet waits for a packet from the network while others continue running the game loop.
It is in some ways like threads, but is non-preemptive and explicitly scheduled, so there are fewer issues with synchronization. Also, switching between tasklets is much faster than thread switching, and you can have a huge number of active tasklets whereas the number of threads is severely limited by the computer hardware.
(got this citation from here)
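The waiting pattern in that quote maps onto Stackless channels; a hedged sketch, again requiring the Stackless interpreter:

import stackless

packets = stackless.channel()

def network_listener():
    packet = packets.receive()    # blocks only this tasklet
    print("handling", packet)

def game_loop():
    for frame in range(3):
        print("frame", frame)
        stackless.schedule()      # let other tasklets run each frame
    packets.send("hello packet")  # pretend a packet arrived

stackless.tasklet(network_listener)()
stackless.tasklet(game_loop)()
stackless.run()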
At PyCon 2009 there was a very interesting talk describing why and how Stackless is used at CCP Games.

Also, there is some very good introductory material describing why Stackless is a good solution for your applications (it may be somewhat old, but I think it is worth reading).
EVE Online is largely programmed in Stackless Python. They have several dev blogs on their use of it. It seems it is very useful for high-performance computing.
While I've not used Stackless itself, I have used Greenlet for implementing highly-concurrent network applications. Some of the use cases Linden Lab has put it towards are: high-performance smart proxies, a fast system for distributing commands over huge numbers of machines, and an application that does a ton of database writes and reads (at a ratio of about 1:2, which is very write-heavy, so it's spending most of its time waiting for the database to return), and a web-crawler-type-thing for internal web data. Basically any app that's expecting to have to do a lot of network I/O will benefit from being able to create a bajillion lightweight threads. 10,000 connected clients doesn't seem like a huge deal to me.
Stackless or Greenlet aren't really a complete solution, though. They are very low-level and you're going to have to do a lot of monkeywork to build an application with them that uses them to their fullest. I know this because I maintain a library that provides a networking and scheduling layer on top of Greenlet, specifically because writing apps is so much easier with it. There are a bunch of these now; I maintain Eventlet, but also there is Concurrence, Chiral, and probably a few more that I don't know about.
If the sort of app you want to write sounds like what I wrote about, consider one of these libraries. The choice of Stackless vs Greenlet is somewhat less important than deciding what library best suits the needs of what you want to do.
The basic usefulness of green threads, the way I see it, is to implement a system in which you have a large number of objects that do high-latency operations. A concrete example would be communicating with other machines:

def run():
    # do stuff
    request_information()  # this call might block
    # proceed doing more stuff
Threads let you write the above code naturally, but if the number of objects is large enough, threads just cannot perform adequately. Green threads, however, work even in really large numbers. The request_information() above could switch out to some scheduler where other work is waiting, and return later. You get all the benefits of being able to call "blocking" functions as if they return immediately, without using threads.
This is obviously very useful for any kind of distributed computing if you want to write code in a straightforward way.
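For a concrete version of this with gevent (mentioned elsewhere in this thread), here is a sketch; request_information is simulated with a cooperative sleep standing in for a real socket call:

import gevent

def request_information(host):
    gevent.sleep(1)               # stands in for a blocking socket call
    return "data from %s" % host

# 1000 "blocking" calls, but the greenlets all wait concurrently,
# so this takes about one second of wall time, not one thousand.
jobs = [gevent.spawn(request_information, "host-%d" % i) for i in range(1000)]
gevent.joinall(jobs)
print(jobs[0].value)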
It is also interesting on multiple cores, to mitigate waiting for locks:
def run():
    # do some calculations
    green_lock(the_foo)
    # do some more calculations
The green_lock function would basically attempt to acquire the lock and just switch out to a main scheduler if it fails due to other cores using the object.
Again, green threads are being used to mitigate blocking, allowing code to be written naturally and still perform well.
What are the modules used to write multi-threaded applications in Python? I'm aware of the basic concurrency mechanisms provided by the language and also of Stackless Python, but what are their respective strengths and weaknesses?
In order of increasing complexity:
Use the threading module
Pros:
It's really easy to run any function (any callable, in fact) in its own thread.
Sharing data is, if not easy (locks are never easy :), at least simple.
Cons:
As mentioned by Juergen, Python threads cannot actually concurrently access state in the interpreter (there's one big lock, the infamous Global Interpreter Lock). What that means in practice is that threads are useful for I/O-bound tasks (networking, writing to disk, and so on), but not at all useful for doing concurrent computation.
Use the multiprocessing module
In the simple use case this looks exactly like using threading, except each task is run in its own process, not its own thread. (Almost literally: if you take Eli's example and replace threading with multiprocessing, Thread with Process, and Queue (the module) with multiprocessing.Queue, it should run just fine; a concrete sketch follows the cons list below.)
Pros:
Actual concurrency for all tasks (no Global Interpreter Lock).
Scales to multiple processors, can even scale to multiple machines.
Cons:
Processes are slower to create and communicate between than threads.
Data sharing between processes is trickier than with threads.
Memory is not implicitly shared. You either have to explicitly share it, or you have to pickle variables and send them back and forth. This is safer, but harder. (If it matters, the Python developers seem to be increasingly pushing people in this direction.)
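As a concrete version of the swap described above (the data and structure here are made up for illustration):

from multiprocessing import Process, Queue   # note: multiprocessing.Queue

def consumer(q):
    for _ in range(3):
        print(sum(q.get()))

def producer(q):
    for line in ["1 2 3", "4 5 6", "7 8 9"]:
        q.put([int(x) for x in line.split()])

if __name__ == "__main__":       # required for multiprocessing on some platforms
    q = Queue()                  # must be passed in; memory is not shared
    p = Process(target=producer, args=(q,))
    c = Process(target=consumer, args=(q,))
    p.start(); c.start()
    p.join(); c.join()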
Use an event model, such as Twisted
Pros:
You get extremely fine control over priority, over what executes when.
Cons:
Even with a good library, asynchronous programming is usually harder than threaded programming, hard both in terms of understanding what's supposed to happen and in terms of debugging what actually is happening.
In all cases I'm assuming you already understand many of the issues involved with multitasking, specifically the tricky issue of how to share data between tasks. If for some reason you don't know when and how to use locks and conditions you have to start with those. Multitasking code is full of subtleties and gotchas, and it's really best to have a good understanding of concepts before you start.
You've already gotten a fair variety of answers, from "fake threads" all the way to external frameworks, but I've seen nobody mention Queue.Queue -- the "secret sauce" of CPython threading.
To expand: as long as you don't need to overlap pure-Python CPU-heavy processing (in which case you need multiprocessing -- but it comes with its own Queue implementation, too, so you can with some needed cautions apply the general advice I'm giving;-), Python's built-in threading will do... but it will do it much better if you use it advisedly, e.g., as follows.
"Forget" shared memory, supposedly the main plus of threading vs multiprocessing -- it doesn't work well, it doesn't scale well, never has, never will. Use shared memory only for data structures that are set up once before you spawn sub-threads and never changed afterwards -- for everything else, make a single thread responsible for that resource, and communicate with that thread via Queue.
Devote a specialized thread to every resource you'd normally think to protect by locks: a mutable data structure or cohesive group thereof, a connection to an external process (a DB, an XMLRPC server, etc), an external file, etc, etc. Get a small thread pool going for general purpose tasks that don't have or need a dedicated resource of that kind -- don't spawn threads as and when needed, or the thread-switching overhead will overwhelm you.
Communication between two threads is always via Queue.Queue -- a form of message passing, the only sane foundation for multiprocessing (besides transactional memory, which is promising but for which I know of no production-worthy implementations except in Haskell).
Each dedicated thread managing a single resource (or small cohesive set of resources) listens for requests on a specific Queue.Queue instance. Threads in a pool wait on a single shared Queue.Queue (Queue is solidly threadsafe and won't fail you in this).
Threads that just need to queue up a request on some queue (shared or dedicated) do so without waiting for results, and move on. Threads that eventually DO need a result or confirmation for a request queue a pair (request, receivingqueue) with an instance of Queue.Queue they just made, and eventually, when the response or confirmation is indispensable in order to proceed, they get (waiting) from their receivingqueue. Be sure you're ready to get error-responses as well as real responses or confirmations (Twisted's deferreds are great at organizing this kind of structured response, BTW!).
You can also use Queue to "park" instances of resources which can be used by any one thread but never shared among multiple threads at one time (DB connections with some DBAPI components, cursors with others, etc.) -- this lets you relax the dedicated-thread requirement in favor of more pooling (a pool thread that gets from the shared queue a request needing a queueable resource will get that resource from the appropriate queue, waiting if necessary, etc.).
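A minimal sketch of the dedicated-thread-plus-Queue pattern described above, using Python 3's queue module (Queue.Queue in the Python 2 of this answer); the resource here is just a dict, standing in for a DB connection or similar:

import queue
import threading

requests = queue.Queue()

def resource_owner():
    data = {}                     # the only thread that ever touches this
    while True:
        request, reply_queue = requests.get()
        if request is None:       # sentinel: shut down
            break
        op, key, value = request
        if op == "set":
            data[key] = value
            reply_queue.put(None)
        elif op == "get":
            reply_queue.put(data.get(key))

owner = threading.Thread(target=resource_owner)
owner.start()

def call(request):
    # Queue a (request, receivingqueue) pair, then wait on our private queue.
    reply_queue = queue.Queue()
    requests.put((request, reply_queue))
    return reply_queue.get()

call(("set", "answer", 42))
print(call(("get", "answer", None)))    # -> 42
requests.put((None, None))              # stop the owner thread
owner.join()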
Twisted is actually a good way to organize this minuet (or square dance as the case may be), not just thanks to deferreds but because of its sound, solid, highly scalable base architecture: you may arrange things to use threads or subprocesses only when truly warranted, while doing most things normally considered thread-worthy in a single event-driven thread.
But, I realize Twisted is not for everybody -- the "dedicate or pool resources, use Queue up the wazoo, never do anything needing a Lock or, Guido forbid, any synchronization procedure even more advanced, such as a semaphore or condition" approach can still be used even if you just can't wrap your head around async event-driven methodologies, and will still deliver more reliability and performance than any other widely applicable threading approach I've ever stumbled upon.
It depends on what you're trying to do, but I'm partial to just using the threading module in the standard library because it makes it really easy to take any function and just run it in a separate thread.
from threading import Thread
def f():
...
def g(arg1, arg2, arg3=None):
    ...
Thread(target=f).start()
Thread(target=g, args=[5, 6], kwargs={"arg3": 12}).start()
And so on. I often have a producer/consumer setup using a synchronized queue provided by the queue module:
from queue import Queue
from threading import Thread

q = Queue()

def consumer():
    while True:
        print(sum(q.get()))

def producer(data_source):
    for line in data_source:
        q.put(map(int, line.split()))

Thread(target=producer, args=[SOME_INPUT_FILE_OR_SOMETHING]).start()
for i in range(10):
    # daemon threads, since these consumers loop forever
    Thread(target=consumer, daemon=True).start()
Kamaelia is a Python framework for building applications with lots of communicating processes.

Kamaelia - Concurrency made useful, fun (kamaelia.org)
In Kamaelia you build systems from simple components that talk to each other. This speeds development, massively aids maintenance and also means you build naturally concurrent software. It's intended to be accessible by any developer, including novices. It also makes it fun :)
What sort of systems? Network servers, clients, desktop applications, pygame based games, transcode systems and pipelines, digital TV systems, spam eradicators, teaching tools, and a fair amount more :)
Here's a video from Pycon 2009. It starts by comparing Kamaelia to Twisted and Parallel Python and then gives a hands on demonstration of Kamaelia.
Easy Concurrency with Kamaelia - Part 1 (59:08)
Easy Concurrency with Kamaelia - Part 2 (18:15)
Regarding Kamaelia, the answer above doesn't really cover the benefit here. Kamaelia's approach provides a unified interface, which is pragmatic not perfect, for dealing with threads, generators & processes in a single system for concurrency.
Fundamentally it provides a metaphor of a running thing which has inboxes, and outboxes. You send messages to outboxes, and when wired together, messages flow from outboxes to inboxes. This metaphor/API remains the same whether you're using generators, threads or processes, or speaking to other systems.
The "not perfect" part is due to syntactic sugar not being added as yet for inboxes and outboxes (though this is under discussion) - there is a focus on safety/usability in the system.
Taking the producer consumer example using bare threading above, this becomes this in Kamaelia:
Pipeline(Producer(), Consumer())
In this example it doesn't matter if these are threaded components or otherwise; the only difference between them, from a usage perspective, is the base class for the component. Generator components communicate using lists, threaded components using Queue.Queues, and process-based ones using os.pipe.
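A rough sketch of what Producer and Consumer might look like as generator-based components; the bodies here are hypothetical, so consult the Kamaelia docs for the real idioms:

import Axon
from Kamaelia.Chassis.Pipeline import Pipeline

class Producer(Axon.Component.component):
    def main(self):
        for i in range(5):
            self.send(i, "outbox")        # message leaves via our outbox...
            yield 1                       # ...and we cooperatively yield

class Consumer(Axon.Component.component):
    def main(self):
        while True:                       # runs until the pipeline is shut down
            if self.dataReady("inbox"):   # ...arriving on our inbox
                print(self.recv("inbox"))
            yield 1

Pipeline(Producer(), Consumer()).run()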
The reason behind this approach, though, is to make it harder to write hard-to-debug bugs. In threading - or any shared-memory concurrency - the number one problem you face is accidentally broken shared-data updates. By using message passing you eliminate one class of bugs.
If you use bare threading and locks everywhere, you're generally working on the assumption that when you write code you won't make any mistakes. Whilst we all aspire to that, it's very rare that it actually happens. By wrapping up the locking behaviour in one place, you simplify where things can go wrong. (Context managers help, but don't help with accidental updates outside the context manager.)
Obviously not every piece of code can be written in a message-passing style, which is why Kamaelia also has a simple software transactional memory (STM), which is a really neat idea with a nasty name - it's more like version control for variables - i.e. check out some variables, update them, and commit back. If you get a clash, rinse and repeat.
Relevant links:
Europython 09 tutorial
Monthly releases
Mailing list
Examples
Example Apps
Reusable components (generator & thread)
Anyway, I hope that's a useful answer. FWIW, the core reason behind Kamaelia's setup is to make concurrency safer and easier to use in Python systems, without the tail wagging the dog (i.e., the big bucket of components).
I can understand why the other Kamaelia answer was modded down, since even to me it looks more like an ad than an answer. As the author of Kamaelia it's nice to see enthusiasm, though I hope this contains a bit more relevant content :-)

And that's my way of saying: please take the caveat that this answer is by definition biased. But for me, Kamaelia's aim is to try and wrap what is IMO best practice. I'd suggest trying a few systems out and seeing which works for you. (Also, if this is inappropriate for Stack Overflow, sorry - I'm new to this forum :-)
I would use the Microthreads (Tasklets) of Stackless Python, if I had to use threads at all.
A whole online game (massively multiplayer) is built around Stackless and its multithreading principle, since the original is just too slow for the massively multiplayer property of the game.

Threads in CPython are widely discouraged. One reason is the GIL, a global interpreter lock, that serializes threading for many parts of the execution. My experience is that it is really difficult to create fast applications this way. My example code was all slower with threading, on one core (though the many waits for input should have made some performance boosts possible).

With CPython, rather use separate processes if possible.
If you really want to get your hands dirty, you can try using generators to fake coroutines. It probably isn't the most efficient in terms of work involved, but coroutines do offer you very fine control of co-operative multitasking, rather than the pre-emptive multitasking you'll find elsewhere.
One advantage you'll find is that by and large, you will not need locks or mutexes when using co-operative multitasking, but the more important advantage for me was the nearly-zero switching speed between "threads". Of course, Stackless Python is said to be very good for that as well; and then there's Erlang, if it doesn't have to be Python.
Probably the biggest disadvantage of co-operative multitasking is the general lack of a workaround for blocking I/O. And with the faked coroutines, you'll also encounter the issue that you can't switch "threads" from anything but the top level of the stack within a thread.
After you've made an even slightly complex application with fake coroutines, you'll really begin to appreciate the work that goes into process scheduling at the OS level.
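To make the faked-coroutine idea concrete, here is a minimal sketch of a round-robin scheduler over plain generators; note that, as mentioned above, each yield has to sit at the top level of the generator's own stack:

def worker(name, steps):
    for step in range(steps):
        print("%s at step %d" % (name, step))
        yield                # each yield is an explicit switch point

def run(tasks):
    # Naive round-robin scheduler: advance each generator until it finishes.
    while tasks:
        for task in list(tasks):
            try:
                next(task)
            except StopIteration:
                tasks.remove(task)

run([worker("a", 3), worker("b", 2)])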