Optimization for Python code

I have a small function (see below) that returns a list of names mapped from a list of integers (e.g. [1, 2, 3, 4]) that can be up to a thousand elements long.
This function can potentially get called tens of thousands of times in a row, and I want to know if I can do anything to make it run faster.
graph_hash is a large hash that maps keys to sets of length 1000 or less. I am iterating over a set, mapping the values to names, and returning a list. u.get_name_from_id() queries an sqlite database.
Any thoughts on optimizing any part of this function?
def get_neighbors(pid):
    names = []
    for p in graph_hash[pid]:
        names.append(u.get_name_from_id(p))
    return names

Caching and multithreading are only going to get you so far; you should create a new method that retrieves multiple names from the database in bulk with a single query (sqlite3's executemany is meant for repeating the same INSERT/UPDATE and does not return result rows, so a SELECT ... WHERE id IN (...) is the usual way to fetch in bulk).
Something like names = u.get_names_from_ids(graph_hash[pid]).
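A minimal sketch of such a bulk method, assuming a hypothetical people(id, name) table and a sqlite3 connection conn (the table, column and connection names are assumptions, not from the question):

def get_names_from_ids(conn, ids):
    ids = list(ids)
    if not ids:
        return []
    # SQLite limits the number of bound parameters (999 by default),
    # so chunk the ids if the sets can be larger than that
    placeholders = ",".join("?" for _ in ids)
    rows = dict(conn.execute(
        "SELECT id, name FROM people WHERE id IN (%s)" % placeholders, ids))
    # return names in the same order as the input ids
    return [rows.get(i) for i in ids]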

You're hitting the database sequentially here:
for p in graph_hash[pid]:
    names.append(u.get_name_from_id(p))
I would recommend doing it concurrently using threads. Something like this should get you started:
import threading
import Queue  # queue in Python 3

def load_stuff(queue, p):
    queue.put(u.get_name_from_id(p))

def get_neighbors(pid):
    names = Queue.Queue()
    # we'll keep track of the threads with this list
    threads = []
    for p in graph_hash[pid]:
        thread = threading.Thread(target=load_stuff, args=(names, p))
        threads.append(thread)
        # start the thread
        thread.start()
    # wait for them to finish before you return your Queue
    for thread in threads:
        thread.join()
    return names
You can turn the Queue back into a list with [item for item in names.queue] if needed.
The idea is that the database calls are blocking until they're done, but you can make multiple SELECT statements on a database without locking. So, you should use threads or some other concurrency method to avoid waiting unnecessarily.
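If you are on Python 3 (or have the futures backport installed), a thread pool does the same thing with less bookkeeping; a short sketch reusing u and graph_hash from the question:

from concurrent.futures import ThreadPoolExecutor

def get_neighbors(pid):
    # map() runs the lookups in worker threads and preserves input order;
    # max_workers=20 is an arbitrary choice to tune
    with ThreadPoolExecutor(max_workers=20) as pool:
        return list(pool.map(u.get_name_from_id, graph_hash[pid]))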

I would recommend using a deque instead of a list if you are doing thousands of appends, so names should be names = deque() (from collections import deque).
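For reference, the deque variant of the original function would look like this (callers that need a plain list can wrap the result in list()):

from collections import deque

def get_neighbors(pid):
    names = deque()
    for p in graph_hash[pid]:
        names.append(u.get_name_from_id(p))
    return names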

A list comprehension is a start (similar to @cricket_007's generator suggestion), but you are still limited by the per-item function calls:
def get_neighbors(pid):
    return [u.get_name_from_id(p) for p in graph_hash[pid]]
As @salparadise suggested, consider memoization to speed up get_name_from_id().
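A minimal memoization sketch with functools.lru_cache (Python 3.2+), assuming an id always maps to the same name; cached_name is a hypothetical wrapper, not part of the original u module:

import functools

@functools.lru_cache(maxsize=None)
def cached_name(p):
    return u.get_name_from_id(p)

def get_neighbors(pid):
    return [cached_name(p) for p in graph_hash[pid]]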

Related

Traverse two lists asynchronously?

I have two lists, lzma2_list and rar_list. Both hold a random number of object names that vary daily. There is a directory, "O:", where these objects are, and there are 2 methods that should handle this data:
bkp.zipto_rar(path,object_name)
bkp.zipto_lzma(path,object_name)
How could I get all items from the lists processed asynchronously, without waiting for one to finish?
speed up compression using list asynchronously and threads
I tried using the answers to this question, but in my case the methods receive 2 parameters: one fixed, referring to the directory, and another that changes constantly, referring to the items in the list.
As your functions take parameters, you can use functools.partial to bind the arguments and get callables that take no arguments.
Then you can use an asyncio event loop's run_in_executor to process each item in background threads if the functions are IO-bound, or multiprocessing.Pool to process items in background processes if they are CPU-bound.
You can even combine the two approaches and use many threads in each background process, but it's hard to write a useful example without knowing the specifics of your functions and lists. Gathering the results afterwards may also be non-trivial.
import asyncio
import functools

lzma2_list = []
rar_list = []

# create the loop once and reuse it for every task
loop = asyncio.new_event_loop()

def process_lzma2_list():
    path = 'CONST'
    for item in lzma2_list:
        func = functools.partial(bkp.zipto_lzma, path, item)
        loop.run_in_executor(None, func)

def process_rar_list():
    path = 'CONST'
    for item in rar_list:
        func = functools.partial(bkp.zipto_rar, path, item)
        loop.run_in_executor(None, func)

if __name__ == '__main__':
    # it's ok to run these 2 functions sequentially as they just create tasks,
    # the actual processing is done in background threads
    process_lzma2_list()
    process_rar_list()
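If you also need the return values, a plain concurrent.futures thread pool may be simpler than going through an event loop; a sketch under the same assumptions (bkp, lzma2_list and rar_list come from the question, max_workers is an arbitrary choice):

from concurrent.futures import ThreadPoolExecutor

def process_all(path='CONST'):
    with ThreadPoolExecutor(max_workers=8) as pool:
        # submit both lists up front so neither waits on the other
        rar_futures = [pool.submit(bkp.zipto_rar, path, item) for item in rar_list]
        lzma_futures = [pool.submit(bkp.zipto_lzma, path, item) for item in lzma2_list]
        # result() blocks until each task is done, so this gathers everything
        return [f.result() for f in rar_futures + lzma_futures]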

Python multi function multithreading with threading.Thread? (variable number of threads)

I'm trying to start a variable number of threads to compute the results of functions for one of my automated trading modules. I have about 14 functions, all of which are computationally expensive. I've been calculating each function sequentially, but it takes around 3 minutes to complete, and since my platform is high frequency I need to cut that computation time down to 1 minute or less.
I've read up on multiprocessing and multithreading, but I can't find a solution that fits my need.
What I'm trying to do is define "n" number of threads to use, then divide my list of functions into "n" groups, then compute each group of functions in a separate thread. Essentially:
functionList = [func1,func2,func3,func4]
outputList = [func1out,func2out,func3out,func4out]
argsList = [func1args,func2args,func3args,func4args]
# number of threads
n = 3
functionSplit = np.array_split(np.array(functionList),n)
outputSplit = np.array_split(np.array(outputList),n)
argSplit = np.array_split(np.array(argsList),n)
Now I'd like to start "n" separate threads, each processing the functions according to the split lists. Then I'd like to name the output of each function according to the outputList and create a master dict of the outputs from each function. I will then loop through the output dict and create a dataframe with column ID numbers according to the information in each column (I already have this part worked out, I just need the multithreading).
Is there any way to do something like this? I've been looking into creating a subclass of the threading.Thread class and passing the functions, output names, and arguments into the run() method, but I don't know how to name and output the results of the functions from each thread! Nor do I know how to call functions in a list according to their corresponding arguments!
The reason that I'm doing this is to discover the optimum thread number balance between computational efficiency and time. Like I said, this will be integrated into a high frequency trading platform I'm developing where time is my major constraint!
Any ideas?
You can use the multiprocessing library like below:
import multiprocessing

def callfns(fnList, argList, outList, d):
    for i in range(len(fnList)):
        # call each function with its own arguments and store the result
        # under its output name
        d[outList[i]] = fnList[i](*argList[i])

...

manager = multiprocessing.Manager()
d = manager.dict()
processes = []
for i in range(len(functionSplit)):
    process = multiprocessing.Process(target=callfns,
                                      args=(functionSplit[i], argSplit[i], outputSplit[i], d))
    processes.append(process)

for j in processes:
    j.start()

for j in processes:
    j.join()

# use d here
You can use a server process to share the dictionary between these processes. To interact with the server process you need a Manager; then you can create a dictionary in the server process with manager.dict(). Once all processes have joined back to the main process, you can use the dictionary d.
I hope this helps you solve your problem.
You should use multiprocessing instead of threading for CPU-bound tasks.
Manually creating and managing processes can be difficult and requires more effort. Do check out concurrent.futures and try ProcessPoolExecutor for maintaining a pool of processes. You can submit tasks to it and retrieve results.
The Pool.map method from the multiprocessing module can take a function and an iterable and then process them in chunks in parallel to compute faster: the iterable is broken into separate chunks, those chunks are passed to the function in separate processes, and the results are then put back together.
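A sketch with concurrent.futures.ProcessPoolExecutor, assuming functionList, argsList and outputList from the question, with each entry in argsList being a tuple of that function's arguments:

from concurrent.futures import ProcessPoolExecutor

def run_all(functionList, argsList, outputList, n=3):
    # the functions must be picklable (defined at module level) to cross
    # the process boundary
    results = {}
    with ProcessPoolExecutor(max_workers=n) as pool:
        futures = {name: pool.submit(func, *args)
                   for func, args, name in zip(functionList, argsList, outputList)}
        for name, future in futures.items():
            results[name] = future.result()  # blocks until that function is done
    return results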

Is modifying a pre-allocated Python list thread-safe?

I have a function that dispatches calls to several Redis shards and stores the result in a pre-allocated Python list.
Basically, the code goes as follows:
def myfunc(calls):
    results = [None] * len(calls)
    for index, (connection, call) in enumerate(calls.iteritems()):
        results[index] = call(connection)
    return results
Obviously, as of now this calls the various Redis shards in sequence. I intend to use a thread pool and make those calls happen in parallel, as they can take quite some time.
My question is: given that the results list is preallocated and that each call has a dedicated slot, do I need a lock to store the results from each thread, or is there any guarantee that it will work without one?
I will obviously profile the result in the end but I wouldn't want to lock if I don't really need to.
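For reference, the thread-pool version of that dispatch might look like the sketch below (on Python 2, where this code's iteritems() lives, concurrent.futures is available as the futures backport; max_workers is an arbitrary choice). Whether the unlocked writes to results are safe is exactly what is being asked; here every worker only ever touches its own index:

from concurrent.futures import ThreadPoolExecutor

def myfunc(calls, max_workers=10):
    results = [None] * len(calls)

    def worker(index, connection, call):
        # each worker writes only to its own pre-allocated slot
        results[index] = call(connection)

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for index, (connection, call) in enumerate(calls.iteritems()):
            pool.submit(worker, index, connection, call)
    return results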

Multi threading read and write file using python

So, I have a 9000-line csv file. I have read it and stored it in a dictionary, list, with a string key m. What I want to do is loop over every item in list[m] and process it with a function processItem(item). processItem returns a string in csv-like format. My aim is to write the result of processItem for every item in the list. Is there any idea how to do this in a multithreaded way?
I think I should divide the list into N sub-lists and then process these sub-lists in a multithreaded way. Every thread would return the string processed from its sub-list, and then I would merge them and finally write the result to a file. How do I implement that?
This is a perfect example for using the multiprocessing module and the Pool() function (be aware that the threading module cannot be used for a speed-up here: CPU-bound Python code is serialized by the GIL).
You have to apply a function on each element of your list, so this can be easily parallelized.
from multiprocessing import Pool

with Pool() as p:
    processed = p.map(processItem, lst)
If you are using Python 2, Pool() cannot be used as a context manager, but you can use it like this:
p = Pool()
processed = p.map(processItem, lst)
Your function processItem() will be called for each element of lst, and the results will form a new list, processed (order is preserved).
Pool() spawns as many worker processes as your CPU has cores, and each worker executes a new task as soon as its previous one is finished, until every element has been processed.
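Putting it together for the read-process-write flow in the question; a sketch assuming the rows to process come straight from the csv and the file names input.csv / output.csv (processItem is the question's own function and is not defined here):

import csv
from multiprocessing import Pool

def main():
    # read the 9000-line csv into a list of rows
    with open('input.csv') as f:
        rows = list(csv.reader(f))

    # processItem must live at module level so it can be pickled for the workers
    with Pool() as p:
        processed = p.map(processItem, rows)

    # write the csv-like result strings, preserving the input order
    with open('output.csv', 'w') as out:
        out.write('\n'.join(processed))

if __name__ == '__main__':
    main()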

Can I have two multithreaded functions running at the same time?

I'm very new to multi-threading. I have 2 functions in my python script. One function, enqueue_tasks, iterates through a large list of small items and performs a task on each item which involves appending the item to a list (let's call it master_list). This I have already multi-threaded using futures.
executor = concurrent.futures.ThreadPoolExecutor(15) # Arbitrarily 15
futures = [executor.submit(enqueue_tasks, group) for group in grouper(key_list, 50)]
concurrent.futures.wait(futures)
I have another function process_master that iterates through the master_list above and checks the status of each item in the list, then does some operation.
Can I use the same method above to use multi-threading for process_master? Furthermore, can I have it running at the same time as enqueue_tasks? What are the implications of this? process_master is dependent on the list from enqueue_tasks, so will running them at the same time be a problem? Is there a way I can delay the second function a bit? (using time.sleep perhaps)?
No, this isn't safe. If enqueue_tasks and process_master are running at the same time, you could potentially be adding items to master_list inside enqueue_tasks at the same time process_master is iterating over it. Changing the size of an iterable while you iterate over it causes undefined behavior in Python, and should always be avoided. You should use a threading.Lock to protect the code that adds items to master_list, as well as the code that iterates over master_list, to ensure they never run at the same time.
Better yet, use a Queue.Queue (queue.Queue in Python 3.x) instead of a list, which is a thread-safe data structure. Add items to the Queue in enqueue_tasks, and get items from the Queue in process_master. That way process_master can safely run at the same time as enqueue_tasks.
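A minimal sketch of that Queue-based handoff, reusing grouper and key_list from the question (both are assumed to exist) and using a sentinel to tell the consumer when the producers are done:

import queue
import threading
from concurrent import futures

master_queue = queue.Queue()
_DONE = object()  # sentinel: producers are finished

def enqueue_tasks(group):
    for item in group:
        # ... per-item work ...
        master_queue.put(item)   # instead of master_list.append(item)

def process_master():
    while True:
        item = master_queue.get()
        if item is _DONE:
            break
        # check the item's status and do some operation here

# the consumer runs in its own thread while the producers run in the pool
consumer = threading.Thread(target=process_master)
consumer.start()

executor = futures.ThreadPoolExecutor(15)
futures.wait([executor.submit(enqueue_tasks, group) for group in grouper(key_list, 50)])
master_queue.put(_DONE)  # all producers finished
consumer.join()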
