I'll start with an example of code that "should never" fail:
counter1 = 0
counter2 = 0

def increment():
    global counter1, counter2
    counter1 += 1
    counter2 += 1

while True:
    try:
        increment()
    except:
        pass
    assert counter1 == counter2
The counters represent an internal structure that should keep its integrity no matter what. At a quick glance, the assertion can never be False and the structure will stay intact.
However, one small signal that arrives in the middle of the function (for example SIGINT, which surfaces as KeyboardInterrupt in Python) is enough to break the internal structure. In a real-world scenario it may cause memory corruption, deadlocks, and all other types of mayhem.
Is there any way to make this function signal safe? If any arbitrary code can run anywhere and cause it to malfunction, how can we program library code that is safe and secure?
Even if we attempt to guard it using try...finally, we might still receive a signal during the finally block and prevent it from running.
While the example is in Python, I believe this question applies to all programming languages as a whole.
EDIT:
Do keep in mind the counters are just an example. In my actual use case it's different Locks and threading.Events. There's no real way (that I know of) to make the operation atomic.
One solution to your example is to update the data atomically, if possible:
counters = [0, 0]

def increment():
    global counters
    # "complex" preparation. not atomic at all
    next0 = counters[0] + 1
    next1 = counters[1] + 1
    temp = [next0, next1]
    # actual atomic update
    counters = temp

try:
    while True:
        increment()
finally:
    assert counters[0] == counters[1]
No matter where the execution of the code stops, counters will always be in a consistent state, even without any cleanup.
Of course, this is a special case which can't always be applied. A more general concept would be transactions, where multiple actions are bundled together and either all of them get executed or none.
Errors during rollback/abort of a transaction can be still problematic, though.
There is a transaction package, but I did not look into that.
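Another option, not a transaction but specific to the asker's KeyboardInterrupt example, is to defer signal delivery around the critical section. This is only a sketch (main thread only, and the helper name defer_sigint is my own), but it shows the idea:

import signal
from contextlib import contextmanager

@contextmanager
def defer_sigint():
    # Temporarily replace the SIGINT handler so the critical section
    # cannot be interrupted; re-raise afterwards if a signal arrived.
    pending = []
    previous = signal.signal(signal.SIGINT, lambda signum, frame: pending.append(signum))
    try:
        yield
    finally:
        signal.signal(signal.SIGINT, previous)
        if pending:
            raise KeyboardInterrupt

counter1 = 0
counter2 = 0

def increment():
    global counter1, counter2
    with defer_sigint():    # both counters change, or neither does
        counter1 += 1
        counter2 += 1

This doesn't help with signals that kill the process outright, and signal.signal may only be called from the main thread, so it is not a general answer to the library-code question.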
The best way is to use the techniques described here: https://julien.danjou.info/atomic-lock-free-counters-in-python/
Unfortunately, the other answers have some complex concurrency issues.
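As I understand the article, the core trick is that next() on an itertools.count object is implemented in C, so under CPython's GIL a single increment cannot be interrupted halfway. A rough sketch of that idea (CPython only, increment-only; the class name is mine):

import itertools

class AtomicCounter:
    """Increment-only counter; each next() call is a single atomic step under the GIL."""

    def __init__(self):
        self._count = itertools.count()

    def increment(self):
        next(self._count)

Reading the current value back out takes extra care (the article covers that part); the point here is only that the increment itself cannot be torn by a signal or a thread switch.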
I am interacting with an external device, and I have to issue certain commands in order. Sometimes I have to jump back and redo steps. Pseudocode (the actual code has more steps and jumps):
enter_update_mode()   # step 1
success = start_update()
if not success:
    retry from step 1
leave_update_mode()
How do I handle this the cleanest way? What I did for now is to define an enum, and write a state machine. This works, but is pretty ugly:
from enum import Enum

class Step(Enum):
    ENTER_UPDATE_MODE = 1
    START_UPDATE = 2
    LEAVE_UPDATE_MODE = 3
    EXIT = 4

def main():
    next_step = Step.ENTER_UPDATE_MODE
    while True:
        if next_step == Step.ENTER_UPDATE_MODE:
            enter_update_mode()
            next_step = Step.START_UPDATE
        elif next_step == Step.START_UPDATE:
            success = start_update()
            if success:
                next_step = Step.LEAVE_UPDATE_MODE
            else:
                next_step = Step.ENTER_UPDATE_MODE
        ....
I can imagine an alternative would be to just call the functions nested. As long as this is only a few levels deep, it should not be a problem:
def enter_update_mode():
    # do stuff ...
    # call next step:
    perform_update()

def perform_update():
    # ...
    # call next step:
    if success:
        leave_update_mode()
    else:
        enter_update_mode()
I have looked into the python-statemachine module, but it seems to be there to model state machines. You can define states and query which state it is in, and you can attach behavior to states. But that is not what I'm looking for. I am looking for a way to write the behavior code in a very straightforward, imperative style, like you would use for pseudocode or instructions to a human.
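For illustration, the kind of straight-line code I would like to end up with is roughly this (reusing the step functions from above, with the retry limits left out):

def main():
    while True:
        enter_update_mode()   # step 1
        if start_update():    # on failure, loop back to step 1
            break
    leave_update_mode()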
There is also a module to add goto to Python, but I think it is a joke and would not like to use it in production :-).
Notes:
This code is synchronous, meaning it runs in a terminal app or a separate thread. Running concurrently with other code would be an added complication. If a solution allows that (e.g. by using yield), that would be a bonus, but it is not necessary.
I left out a lot of retry logic. A step may be only retried a certain number of times.
Related discussion of explicit state machine vs. imperative style: https://softwareengineering.stackexchange.com/q/147182/62069
I'm reading a book on Python 3 (Introducing Python by Bill Lubanovic), and came across something I wasn't sure was a Python preference or just a "simplification" made because the book was trying to describe something else.
It's on how to write to a file using chunks instead of in one shot.
poem = '''There was a young lady named Bright,
Whose speed was far faster than light;
She started one day
In a relative way,
And returned on the previous night.'''

fout = open('relativity', 'wt')
size = len(poem)
offset = 0
chunk = 100
while True:
    if offset > size:
        break
    fout.write(poem[offset:offset+chunk])
    offset += chunk
fout.close()
I was about to ask why it has while True instead of while (offset > size), but decided to try it for myself, and saw that while (offset > size) doesn't actually do anything in my Python console.
Is that just a bug in the console, or does Python really require you to move the condition inside the while loop like that? With all of the changes to make it as minimal as possible, this seems very verbose.
(I'm coming from a background in Java, C#, and JavaScript where the condition as the definition for the loop is standard.)
EDIT
Thanks to xnx's comment, I realized that I had my logic incorrect in what I would have the condition be.
This brings me back to a clearer question that I originally wanted to focus on:
Does Python prefer to do while True and have the condition use a break inside the loop, or was that just an oversight on the author's part as he tried to explain a different concept?
I was about to ask why it has while True instead of while (offset <= size), but decided to try it for myself,
This is actually how I would have written it. It should be logically equivalent.
and saw that while (offset > size) doesn't actually do anything in my Python console.
You needed to use (offset <= size), not (offset > size). The current logic stops as soon as the offset is greater than size, so reverse the condition if you want to put it in the while statement.
does Python really require you to move the condition inside the while loop like that?
No, Python allows you to write the condition in the while loop directly. Both options are fine, and it really is more a matter of personal preference in how you write your logic. I prefer the simpler form, as you were suggesting, over the original author's version.
This should work fine:
while offset <= size:
    fout.write(poem[offset:offset+chunk])
    offset += chunk
For details, see the documentation for while, which specifically states that any expression can be used before the :.
Edit:
Does Python prefer to do while True and have the condition use a break inside the loop, or was that just an oversight on the author's part as he tried to explain a different concept?
Python does not prefer while True:. Either version is fine, and it's completely a matter of preference for the coder. I personally prefer keeping the expression in the while statement, as I find the code more clear, concise, and maintainable using while offset <= size:.
It is legal Python code to put the conditional in the loop. Personally I think:
while offset <= size:
is clearer than:
while True:
    if offset > size:
        break
I prefer the first form because there's one less branch to follow but the logic is not any more complex. All other things being equal, lower levels of indentation are easier to read.
If there were multiple different conditions that would break out of the loop then it might be preferable to go for the while True syntax.
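For example, something like this (a made-up snippet, not from the book) has two independent stop conditions, and while True keeps them both visible inside the loop body:

import io
import time

stream = io.BytesIO(b"x" * 5000)        # stand-in for a real data source
deadline = time.monotonic() + 0.5

while True:
    chunk = stream.read(1024)
    if not chunk:                       # stop condition 1: ran out of input
        break
    if time.monotonic() > deadline:     # stop condition 2: ran out of time
        break
    print("processing", len(chunk), "bytes")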
As for the observed behavior with the incorrect loop logic, consider this snippet:
size = len(poem)
offset = 0
while offset > size:
    # loop code
The while loop will never be entered as offset > size starts off false.
while True:
    if offset > size:
        break
    func(x)
is exactly equivalent to
while offset <= size:
    func(x)
They both run until offset > size. It is simply a different way of phrasing it -- both are acceptable, and I'm not aware of any performance differences. They would only run differently if you had the break condition at the bottom of the while loop (i.e. after func(x))
edit:
According to the Python wiki, in Python 2.* "it slows things down a lot" to put the condition inside the while loop: "this is due to first testing the True condition for the while, then again testing" the break condition. I don't know what measure they use for "a lot", but it seems minuscule enough.
When reading from a file, you usually do
output = []
while True:
    chunk = f.read(chunksize)
    if len(chunk) == 0:
        break
    output.append(chunk)
It seems to me like the author is more used to doing reading than writing, and in this case the reading idiom of using while True has leaked through to the writing code.
As most of the folks answering the question can attest to, using simply while offset <= size is probably more Pythonic and simpler, though even simpler might be just to do
f.write(poem)
since the underlying I/O library can handle the chunked writes for you.
Does Python prefer to do while True and have the condition use a break inside the loop, or was that just an oversight on the author's part as he tried to explain a different concept?
No it doesn't, this is a quirk or error of the author's own.
There are situations where typical Python style (and Guido van Rossum) actively advise using while True, but this isn't one of them. That said, they don't advise against it either. I imagine there are cases where a test would be easier to read and understand as "say when to break" than as "say when to keep going". Even though they're just logical negations of each other, one or the other might express things a little more simply:
while not god_unwilling() and not creek_risen():
while not (god_unwilling() or creek_risen()):
vs.
while True:
    if god_unwilling() or creek_risen():
        break
I still sort of prefer the first, but YMMV. Even better, introduce functions that correspond to the English idiom: god_willing() and creek_dont_rise().
The "necessary" use is when you want to execute the break test at the end or middle of the loop (that is to say, when you want to execute part or all of the loop unconditionally the first time through). Where other languages have a greater variety of more complex loop constructs, and other examples play games with a variable to decide whether to break or not, Guido says "just use while True". Here's an example from the FAQ, for a loop that starts with an assignment:
C code:
while (line = readline(f)) {
    // do something with line
}
Python code:
while True:
    line = f.readline()
    if not line:
        break
    # do something with line
The FAQ remarks (and this relates to typical Python style):
An interesting phenomenon is that most experienced Python programmers recognize the while True idiom and don’t seem to be missing the assignment in expression construct much; it’s only newcomers who express a strong desire to add this to the language.
It also points out that for this particular example, you can in fact avoid the whole problem anyway with for line in f.
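That is, for reading you would normally just write something like this (the file name here is made up):

with open("relativity") as f:
    for line in f:
        print(line.rstrip())   # do something with line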
In my opinion, while True is better in big programs with a lot of code, because you can't always see at a glance whether some function or other piece of code might change the variable the loop depends on. while True means "keep running this loop no matter what happens" until the program breaks out of it or is closed. So maybe the author of the book wants to get you used to while True, as it is a little less risky than the alternatives.
It is best, then, to use while True and make the check on the variable that should stop the loop explicit inside it.
Your program just paused on a pdb.set_trace().
Is there a way to monkey patch the function that is currently running, and "resume" execution?
Is this possible through call frame manipulation?
Some context:
Oftentimes, I will have a complex function that processes large quantities of data, without having a priori knowledge of what kind of data I'll find:
def process_a_lot(data_stream):
    # process a lot of stuff
    # ...
    data_unit = data_stream.next()
    if not can_process(data_unit):
        import pdb; pdb.set_trace()
    # continue processing
This convenient construction launches an interactive debugger when it encounters unknown data, so I can inspect it at will and change the process_a_lot code to handle it properly.
The problem here is that, when data_stream is big, you don't really want to chew through all the data again (let's assume next is slow, so you can't save what you already have and skip on the next run)
Of course, you can replace other functions at will once in the debugger. You can also replace the function itself, but it won't change the current execution context.
Edit:
Since some people are getting side-tracked:
I know there are a lot of ways of structuring your code such that your processing function is separate from process_a_lot. I'm not really asking about ways to structure the code as much as how to recover (in runtime) from the situation when the code is not prepared to handle the replacement.
First a (prototype) solution, then some important caveats.
# process.py
import sys
import pdb
import handlers

def process_unit(data_unit):
    global handlers
    while True:
        try:
            data_type = type(data_unit)
            handler = handlers.handler[data_type]
            handler(data_unit)
            return
        except KeyError:
            print "UNUSUAL DATA: {0!r}".format(data_unit)
            print "\n--- INVOKING DEBUGGER ---\n"
            pdb.set_trace()
            print
            print "--- RETURNING FROM DEBUGGER ---\n"
            del sys.modules['handlers']
            import handlers
            print "retrying"

process_unit("this")
process_unit(100)
process_unit(1.04)
process_unit(200)
process_unit(1.05)
process_unit(300)
process_unit(4+3j)
sys.exit(0)
And:
# handlers.py
def handle_default(x):
    print "handle_default: {0!r}".format(x)

handler = {
    int: handle_default,
    str: handle_default
}
In Python 2.7, this gives you a dictionary linking expected/known types to functions that handle each type. If no handler is available for a type, the user is dropped into the debugger, giving them a chance to amend the handlers.py file with appropriate handlers. In the above example, there is no handler for float or complex values. When they come, the user will have to add appropriate handlers. For example, one might add:
def handle_float(x):
    print "FIXED FLOAT {0!r}".format(x)

handler[float] = handle_float
And then:
def handle_complex(x):
    print "FIXED COMPLEX {0!r}".format(x)

handler[complex] = handle_complex
Here's what that run would look like:
$ python process.py
handle_default: 'this'
handle_default: 100
UNUSUAL DATA: 1.04
--- INVOKING DEBUGGER ---
> /Users/jeunice/pytest/testing/sfix/process.py(18)process_unit()
-> print
(Pdb) continue
--- RETURNING FROM DEBUGGER ---
retrying
FIXED FLOAT 1.04
handle_default: 200
FIXED FLOAT 1.05
handle_default: 300
UNUSUAL DATA: (4+3j)
--- INVOKING DEBUGGER ---
> /Users/jeunice/pytest/testing/sfix/process.py(18)process_unit()
-> print
(Pdb) continue
--- RETURNING FROM DEBUGGER ---
retrying
FIXED COMPLEX (4+3j)
Okay, so that basically works. You can improve and tweak that into a more production-ready form, making it compatible across Python 2 and 3, et cetera.
Please think long and hard before you do it that way.
This "modify the code in real-time" approach is an incredibly fragile pattern and error-prone approach. It encourages you to make real-time hot fixes in the nick of time. Those fixes will probably not have good or sufficient testing. Almost by definition, you have just this moment discovered you're dealing with a new type T. You don't yet know much about T, why it occurred, what its edge cases and failure modes might be, etc. And if your "fix" code or hot patches don't work, what then? Sure, you can put in some more exception handling, catch more classes of exceptions, and possibly continue.
Web frameworks like Flask have debug modes that work basically this way. But those are debug modes, and generally not suited for production. Moreover, what if you type the wrong command in the debugger? Accidentally type "quit" rather than "continue" and the whole program ends, and with it, your desire to keep the processing alive. If this is for use in debugging (exploring new kinds of data streams, maybe), have at.
If this is for production use, consider instead a strategy that sets aside unhandled-types for asynchronous, out-of-band examination and correction, rather than one that puts the developer / operator in the middle of a real-time processing flow.
No.
You can't monkey-patch a currently running Python function and keep pressing on as though nothing had happened. At least not in any general or practical way.
In theory, it is possible--but only under limited circumstances, with much effort and wizardly skill. It cannot be done with any generality.
To make the attempt, you'd have to:
1. Find the relevant function source and edit it (straightforward).
2. Compile the changed function source to bytecode (straightforward).
3. Insert the new bytecode in place of the old (doable).
4. Alter the function housekeeping data to point at the logically "same point" in the program where it exited to pdb (iffy, under some conditions).
5. "Continue" from the debugger, falling back into the debugged code (iffy).
There are some circumstances where you might achieve 4 and 5, if you knew a lot about the function housekeeping and analogous debugger housekeeping variables. But consider:
The bytecode offset at which your pdb breakpoint is called (f_lasti in the frame object) might change. You'd probably have to narrow your goal to "alter only code further down in the function's source code than the breakpoint occurred" to keep things reasonably simple--else, you'd have to be able to compute where the breakpoint is in the newly-compiled bytecode. That might be feasible, but again under restrictions (such as "will only call pdb.set_trace() once", or similar "leave breadcrumbs for post-breakpoint analysis" stipulations).
You're going to have to be sharp at patching up function, frame, and code objects. Pay special attention to func_code in the function (__code__ if you're also supporting Python 3); f_lasti, f_lineno, and f_code in the frame; and co_code, co_lnotab, and co_stacksize in the code.
For the love of God, hopefully you do not intend to change the function's parameters, name, or other macro defining characteristics. That would at least treble the amount of housekeeping required.
More troubling, adding new local variables (a pretty common thing you'd want to do to alter program behavior) is very, very iffy. It would affect f_locals, co_nlocals, and co_stacksize--and quite possibly, completely rearrange the order and way bytecode accesses values. You might be able to minimize this by adding assignment statements like x = None to all your original locals. But depending on how the bytecodes change, it's possible you'll even have to hot-patch the Python stack, which cannot be done from Python per se. So C/Cython extensions could be required there.
Here's a very simple example showing that bytecode ordering and arguments can change significantly even for small alterations of very simple functions:
def a(x):               LOAD_FAST      0 (x)
    y = x + 1           LOAD_CONST     1 (1)
    return y            BINARY_ADD
                        STORE_FAST     1 (y)
                        LOAD_FAST      1 (y)
                        RETURN_VALUE
------------------      ------------------
def a2(x):              LOAD_CONST     1 (2)
    inc = 2             STORE_FAST     1 (inc)
    y = x + inc         LOAD_FAST      0 (x)
    return y            LOAD_FAST      1 (inc)
                        BINARY_ADD
                        STORE_FAST     2 (y)
                        LOAD_FAST      2 (y)
                        RETURN_VALUE
Be equally sharp at patching some of the pdb values that track where it's debugging, because when you type "continue," those are what dictates where control flow goes next.
Limit your patchable functions to those that have rather static state. They must, for example, never have objects that might be garbage-collected before the breakpoint is resumed, but accessed after it (e.g. in your new code). E.g.:
some = SomeObject()
# blah blah including last touch of `some`
# ...
pdb.set_trace()
# Look, Ma! I'm monkey-patching!
if some.some_property:
    # oops, `some` was GC'd - DIE DIE DIE
    ...
While "ensuring the execution environment for the patched function is same as it ever was" is potentially problematic for many values, it's guaranteed to crash and burn if any of them exit their normal dynamic scope and are garbage-collected before patching alters their dynamic scope/lifetime.
Accept that you will only ever want to run this on CPython, since PyPy, Jython, and other Python implementations don't even have standard Python bytecodes and do their function, code, and frame housekeeping differently.
I would love to say this super-dynamic patching is possible. And I'm sure you can, with a lot of housekeeping object twiddling, construct simple cases where it does work. But real code has objects that go out of scope. Real patches might want new variables allocated. Etc. Real world conditions vastly multiply the effort required to make the patching work--and in some cases, make that patching strictly impossible.
And at the end of the day, what have you achieved? A very brittle, fragile, unsafe way to extend your processing of a data stream. There is a reason most monkey-patching is done at function boundaries, and even then, reserved for a few very-high-value use cases. Production data streaming is better served adopting a strategy that sets aside unrecognized values for out-of-band examination and accommodation.
If I understand correctly:
you don't want to repeat all the work that has already been done
you need a way to replace the #continue processing as usual with the new code once you have figured out how to handle the new data
@user2357112 was on the right track: expected_types should be a dictionary of
data_type: (detect_function, handler_function)
and detect_type needs to go through that to find a match. If no match is found, pdb pops up, you can then figure out what's going on, write a new detect_function and handler_function, add them to expected_types, and continue from pdb.
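A tiny sketch of that shape (the detect functions and handlers here are invented placeholders, not from the question):

import pdb

def handle_int(x):
    print("int:", x)

def handle_text(x):
    print("text:", x)

expected_types = {
    "integer": (lambda x: isinstance(x, int), handle_int),
    "text":    (lambda x: isinstance(x, str), handle_text),
}

def detect_type(data_unit):
    for _name, (detect, handle) in expected_types.items():
        if detect(data_unit):
            return handle(data_unit)
    # unknown data: drop into the debugger, add a new entry, then continue
    pdb.set_trace()

detect_type(3)
detect_type("hello")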
What I wanted to know is if there's a way to monkey patch the function that is currently running (process_a_lot), and "resume" execution.
So you want to somehow, from within pdb, write a new process_a_lot function, and then transfer control to it at the location of the pdb call?
Or, do you want to rewrite the function outside pdb, and then somehow reload that function from the .py file and transfer control into the middle of the function at the location of the pdb call?
The only possibility I can think of is: from within pdb, import your newly written function, then replace the current process_a_lot byte-code with the byte-code from the new function (I think it's func.co_code or something). Make sure you change nothing in the new function (not even the pdb lines) before the pdb lines, and it might work.
But even if it does, I would imagine it is a very brittle solution.
New to programming with threads and using locks.
So I've got 2 threads - one reading values from a COM port, the other one controlling a stepper motor.
in one thread
if stepperb_value != 0:  # if stepperb non-zero
    if stepperb_value > 0:  # if positive value
        self.step_fine(10,11,12,13,step_delay)  # step forward
    else:
        self.step_fine(13,12,11,10,step_delay)  # step backwards
    if abs(stepperb_value) != 100:
        time.sleep(10*step_delay*((100/abs(stepperb_value))-1))
(I need to prevent a change to stepperb causing a divide by zero error on last line)
In the other thread that's reading the values from the COM port:
if 'stepperb' in dataraw:
    outputall_pos = dataraw.find('stepperb')
    sensor_value = dataraw[(1+outputall_pos+len('stepperb')):].split()
    print "stepperb", sensor_value[0]
    if isNumeric(sensor_value[0]):
        stepperb_value = int(max(-100,min(100,int(sensor_value[0]))))
Where do I need locks (and what sort of locks)? The first thread is time-sensitive, so that one needs to have priority.
regards
Simon
If you're using only CPython, you have another possibility (for now).
As CPython has the Global Interpreter Lock, assignments are atomic. So you can pull stepperb_value into a local variable in your motor control thread before using it and use the local variable instead of the global one.
Note that you'd be using an implementation detail here. Your code possibly won't run safely on other python implementations such as Jython or IronPython. Even on CPython, the behaviour might change in the future, but I don't see the GIL go away before Python 5 or something.
You're going to have to lock around any series of accesses that expects stepperb_value not to change in the series. In your case, that would be the entire motor control block you posted. Similarly, you should lock around the entire assignment to stepperb_value in the COM port thread, to ensure that the write is atomic, and does not happen while the first thread holds the lock (and expects the value not to change).
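A minimal sketch of that scheme, using the variable name from the question (the function boundaries are my own):

import threading

stepperb_lock = threading.Lock()
stepperb_value = 0

# COM-port thread: the assignment happens entirely under the lock
def set_stepperb(new_value):
    global stepperb_value
    with stepperb_lock:
        stepperb_value = new_value

# motor thread: hold the lock across the whole series of reads, so the
# value cannot change (or become zero) between the test and the division
def motor_control_step():
    with stepperb_lock:
        value = stepperb_value
        if value != 0:
            print("stepping with", value)  # step_fine(...) and the sleep go here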
Well, your critical variable is stepperb_value so that's what should be "guarded":
I suggest you use a Queue, which takes care of all the synchronization issues (producer/consumer pattern). Events and conditions can also be a suitable solution:
http://www.laurentluce.com/posts/python-threads-synchronization-locks-rlocks-semaphores-conditions-events-and-queues/
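A rough sketch of the Queue version (made-up values standing in for the parsed COM-port data):

import queue
import threading
import time

commands = queue.Queue()

def motor_worker():
    # motor thread: block until a new stepper value arrives, then act on it
    while True:
        value = commands.get()
        print("stepperb_value received:", value)   # step the motor here

threading.Thread(target=motor_worker, daemon=True).start()

# COM-port side: publish each parsed value instead of writing a shared global
for value in (50, -30, 0, 100):
    commands.put(value)
    time.sleep(0.1)

time.sleep(0.5)   # give the worker a moment to drain the queue before exiting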
I've made a script to get inventory data from the Steam API and I'm a bit unsatisfied with the speed. So I read a bit about multiprocessing in python and simply cannot wrap my head around it. The program works as such: it gets the SteamID from a list, gets the inventory and then appends the SteamID and the inventory in a dictionary with the ID as the key and inventory contents as the value.
I've also understood that there are some issues involved with using a counter when multiprocessing, which is a small problem as I'd like to be able to resume the program from the last fetched inventory rather than from the beginning again.
Anyway, what I'm asking for is really a concrete example of how to do multiprocessing when opening the URL that contains the inventory data so that the program can fetch more than one inventory at a time rather than just one.
onto the code:
with open("index_to_name.json", "r", encoding=("utf-8")) as fp:
index_to_name=json.load(fp)
with open("index_to_quality.json", "r", encoding=("utf-8")) as fp:
index_to_quality=json.load(fp)
with open("index_to_name_no_the.json", "r", encoding=("utf-8")) as fp:
index_to_name_no_the=json.load(fp)
with open("steamprofiler.json", "r", encoding=("utf-8")) as fp:
steamprofiler=json.load(fp)
with open("itemdb.json", "r", encoding=("utf-8")) as fp:
players=json.load(fp)
error=list()
playerinventories=dict()
c=127480
while c<len(steamprofiler):
inventory=dict()
items=list()
try:
url=urllib.request.urlopen("http://api.steampowered.com/IEconItems_440/GetPlayerItems/v0001/?key=DD5180808208B830FCA60D0BDFD27E27&steamid="+steamprofiler[c]+"&format=json")
inv=json.loads(url.read().decode("utf-8"))
url.close()
except (urllib.error.HTTPError, urllib.error.URLError, socket.error, UnicodeDecodeError) as e:
c+=1
print("HTTP-error, continuing")
error.append(c)
continue
try:
for r in inv["result"]["items"]:
inventory[r["id"]]=r["quality"], r["defindex"]
except KeyError:
c+=1
error.append(c)
continue
for key in inventory:
try:
if index_to_quality[str(inventory[key][0])]=="":
items.append(
index_to_quality[str(inventory[key][0])]
+""+
index_to_name[str(inventory[key][1])]
)
else:
items.append(
index_to_quality[str(inventory[key][0])]
+" "+
index_to_name_no_the[str(inventory[key][1])]
)
except KeyError:
print("keyerror, uppdate def_to_index")
c+=1
error.append(c)
continue
playerinventories[int(steamprofiler[c])]=items
c+=1
if c % 10==0:
print(c, "inventories downloaded")
I hope my problem was clear; otherwise just say so, obviously. I would optimally avoid using 3rd-party libraries, but if it's not possible, it's not possible. Thanks in advance.
So you're assuming the fetching of the URL might be the thing slowing your program down? You'd do well to check that assumption first, but if it's indeed the case, using the multiprocessing module is huge overkill: for I/O-bound bottlenecks, threading is quite a bit simpler and might even be a bit faster (it takes a lot more time to spawn another Python interpreter than to spawn a thread).
Looking at your code, you might get away with sticking most of the content of your while loop in a function with c as a parameter, and starting a thread from there using another function, something like:
def process_item(c):
    # The work goes here
    # Replace all those 'continue' statements with 'return'

for c in range(127480, len(steamprofiler)):
    thread = threading.Thread(name="inventory {0}".format(c), target=process_item, args=[c])
    thread.start()
A real problem might be that there's no limit to the amount of threads being spawned, which might break the program. Also the guys at Steam might not be amused at getting hammered by your script, and they might decide to un-friend you.
A better approach would be to fill a collections.deque object with your list of c's and then start a limited set of threads to do the work:
def process_item(c):
    # The work goes here
    # Replace all those 'continue' statements with 'return'

def process():
    while True:
        process_item(work.popleft())

work = collections.deque(range(127480, len(steamprofiler)))
threads = [threading.Thread(name="worker {0}".format(n), target=process)
           for n in range(6)]
for worker in threads:
    worker.start()
Note that I'm counting on work.popleft() to throw an IndexError when we're out of work, which will kill the thread. That's a bit sneaky, so consider using a try...except instead.
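Spelled out, that version of process() would look something like this:

def process():
    while True:
        try:
            c = work.popleft()
        except IndexError:   # deque is empty: no more work, let the thread finish
            return
        process_item(c)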
Two more things:
Consider using the excellent Requests library instead of urllib (which, API-wise, is by far the worst module in the entire Python standard library that I've worked with).
For Requests, there's an add-on called grequests which allows you to do fully asynchronous HTTP-requests. That would have made for even simpler code.
I hope this helps, but please keep in mind this is all untested code.
The outermost while loop seems to be what should be distributed over a few processes (or tasks).
When you break the loop into tasks, note that you are sharing the playerinventories and error objects between processes. You will need multiprocessing.Manager for the sharing issue.
I recommend you start modifying your code from this snippet.
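To give a rough idea of what that Manager-based sharing looks like (the worker below is a placeholder, not the original loop body):

import multiprocessing

def fetch_inventory(c, playerinventories, error):
    # placeholder for one iteration of the original loop: fetch, parse, store
    try:
        playerinventories[c] = ["item-a", "item-b"]
    except Exception:
        error.append(c)

if __name__ == "__main__":
    manager = multiprocessing.Manager()
    playerinventories = manager.dict()   # shared across processes
    error = manager.list()
    with multiprocessing.Pool(4) as pool:
        pool.starmap(fetch_inventory,
                     [(c, playerinventories, error) for c in range(127480, 127500)])
    print(len(playerinventories), "inventories collected")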