Gensim word2vec / doc2vec multi-threading parallel queries - python

I would like to call model.wv.most_similar_cosmul on the same copy of the model object, using multiple cores, on batches of input pairs.
The multiprocessing module requires multiple copies of the model, which would take too much RAM because my model is 30+ GB in RAM.
I have tried evaluating my query pairs: the first round took ~12 hours, and there may be more rounds coming. That's why I am looking for a threading solution. I understand Python has the Global Interpreter Lock issue.
Any suggestions?

Forking off processes using multiprocessing after your text-vector model is in memory and unchanging might work to let many processes share the same object-in-memory.
In particular, you'd want to be sure that the automatic generation of unit-normed vectors (into a syn0norm or doctag_syn0norm) has already happened. It'll be automatically triggered the first time it's needed by a most_similar() call, or you can force it with the init_sims() method on the relevant object. If you'll only be doing most-similar queries between unit-normed vectors, never needing the original raw vectors, use init_sims(replace=True) to clobber the raw mixed-magnitude syn0 vectors in-place and thus save a lot of addressable memory.
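As a minimal sketch of that fork-after-load pattern (assuming a pre-4.0 gensim where init_sims() is still available, a Unix 'fork' start method, and a placeholder model path and query words):
import multiprocessing

from gensim.models import Word2Vec

# Load once in the parent; forked children share this memory copy-on-write.
model = Word2Vec.load("my_model.model")   # placeholder path
model.wv.init_sims(replace=True)          # keep only the unit-normed vectors to save RAM

def query(word):
    # Runs in a child process, but reads the parent's in-memory vectors.
    return model.wv.most_similar(word, topn=10)

if __name__ == "__main__":
    words = ["apple", "banana", "cherry"]  # placeholder queries
    with multiprocessing.get_context("fork").Pool(processes=4) as pool:
        results = pool.map(query, words)
    print(results)
Note that this relies on the fork start method (the default on Linux); with spawn, each worker would re-import the module and reload the model.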
Gensim also has options to use memory-mapped files as the sources of model giant arrays, and when multiple processes use the same read-only memory-mapped file, the OS will be smart enough to only map that file into physical memory once, providing both processes pointers to the shared array.
For more discussion of the tricky parts of using this technique in a similar-but-not-identical use case, see my answer at:
How to speed up Gensim Word2vec model load time?

Gensim 4.x simplified a lot of what @gojomo described above, as he also explained in his other answer here. Based on those answers, here's an example of how you can multiprocess most_similar in a memory-efficient way, including logging of progress with tqdm. Swap in your own model/dataset to see how this works at scale.
import multiprocessing
from functools import partial
from typing import Dict, List, Tuple

import tqdm
from gensim.models.word2vec import Word2Vec
from gensim.models.keyedvectors import KeyedVectors
from gensim.test.utils import common_texts


def get_most_similar(
    word: str, keyed_vectors: KeyedVectors, topn: int
) -> List[Tuple[str, float]]:
    try:
        return keyed_vectors.most_similar(word, topn=topn)
    except KeyError:
        return []


def get_most_similar_batch(
    word_batch: List[str], word_vectors_path: str, topn: int
) -> Dict[str, List[Tuple[str, float]]]:
    # Load the KeyedVectors with mmap, so memory isn't duplicated
    keyed_vectors = KeyedVectors.load(word_vectors_path, mmap="r")
    return {word: get_most_similar(word, keyed_vectors, topn) for word in word_batch}


def create_batches_from_iterable(iterable, batch_size=1000):
    return [iterable[i : i + batch_size] for i in range(0, len(iterable), batch_size)]


if __name__ == "__main__":
    model = Word2Vec(
        sentences=common_texts, vector_size=100, window=5, min_count=1, workers=4
    )

    # Save wv, so it can be reloaded with mmap later
    word_vectors_path = "word2vec.wordvectors"
    model.wv.save(word_vectors_path)

    # Dummy set of words to find most similar words for
    words_to_match = list(model.wv.key_to_index.keys())

    # Multiprocess
    batches = create_batches_from_iterable(words_to_match, batch_size=2)
    partial_func = partial(
        get_most_similar_batch,
        word_vectors_path=word_vectors_path,
        topn=5,
    )

    words_most_similar = dict()
    num_workers = multiprocessing.cpu_count()
    with multiprocessing.Pool(num_workers) as pool:
        max_ = len(batches)
        with tqdm.tqdm(total=max_) as pbar:
            # imap required for tqdm to function properly
            for result in pool.imap(partial_func, batches):
                words_most_similar.update(result)
                pbar.update()

Related

Python genetic optimisation multiprocessing with a global constant variable, how to speed up?

I am writing a genetic optimization algorithm based on the deap package in python 2.7 (goal is to migrate to python 3 soon). As it is a pretty heavy process, some parts of the optimisation are processed using the multiprocessing package. Here is a summary outline of my program:
1. Configurations are read in and saved in a config object
2. Some additional pre-computations are made and saved in the config object as well
3. The optimisation starts (the population is initialized randomly; mutation and crossover are applied to find a better solution) and some parts of it (the evaluation function) are executed with multiprocessing
4. The results are saved
For the evaluation function, we need to have access to some parts of the config object (which after phase 2 stays a constant). Therefore we make it accessible to the different cores using a global (constant) variable:
from deap import base
import multiprocessing

toolbox = base.Toolbox()

def evaluate(ind):
    # compute evaluation using config object
    return (obj1, obj2)

toolbox.register('evaluate', evaluate)

def init_pool_global_vars(self, _config):
    global config
    config = _config

...
# setting up multiprocessing
pool = multiprocessing.Pool(processes=72, initializer=self.init_pool_global_vars,
                            initargs=[config])
toolbox.register('map', pool.map_async)
...
while tic < max_time:
    # creating new individuals
    # computing in optimisation the objective function on the different individuals
    jobs = toolbox.map(toolbox.evaluate, ind)
    fits = jobs.get()
    # keeping best individuals
# keeping best individuals
We basically run iterations (a big for loop) until a maximum time is reached. I have noticed that if I make the config object bigger (i.e. add big attributes to it, like a big numpy array), the code runs much slower even though it is otherwise unchanged (fewer iterations for the same timespan). So I thought I would make a specific config_multiprocessing object that contains only the attributes needed in the multiprocessing part and pass that as a global variable, but when I run it on 3 cores it is slower than with the big config object, and on 72 cores it is only slightly faster.
What should I do in order to make sure my loops don't suffer in speed from the config object or from any other data manipulations I make before launching the multiprocessing loops?
I'm running this in a Linux Docker image on a Linux VM in the cloud.
The joblib package is designed to handle cases where you have large numpy arrays to distribute to workers with shared memory. This is especially useful if you are treating the data in shared memory as "read-only" like what you describe in your scenario. You can also create writable shared memory as described in the docs.
Your code might look something like:
import os
import numpy as np
from joblib import Parallel, delayed
from joblib import dump, load

folder = './joblib_memmap'
try:
    os.mkdir(folder)
except FileExistsError:
    pass

def evaluate(ind, data):
    # compute evaluation using shared memory data
    return (obj1, obj2)

# just used to initialize memory mapped data
def init_memmap_data(original_data):
    data_filename_memmap = os.path.join(folder, 'data_memmap')
    dump(original_data, data_filename_memmap)
    shared_data = load(data_filename_memmap, mmap_mode='r')
    return shared_data

...
# however you set up indices needs to be changed here
indexes = range(10)
# however you load your numpy data needs to be done here
shared_data = init_memmap_data(numpy_array_to_share)
# change n_jobs as appropriate
results = Parallel(n_jobs=2)(delayed(evaluate)(ind, shared_data) for ind in indexes)
# get index of the maximum as the "best" individual
best_fit_individual = indexes[np.argmax(results)]
Additionally, joblib supports a threading backend that may be faster than the process-based one. It is easy to test both with joblib.
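As a rough sketch of trying both backends, reusing the evaluate, shared_data, and indexes names from the snippet above:
from joblib import Parallel, delayed

# Process-based backend (the default "loky"): workers sidestep the GIL but pay some IPC cost
results_processes = Parallel(n_jobs=2)(
    delayed(evaluate)(ind, shared_data) for ind in indexes)

# Thread-based backend: no extra processes or pickling, but Python-level code is GIL-bound
results_threads = Parallel(n_jobs=2, backend="threading")(
    delayed(evaluate)(ind, shared_data) for ind in indexes)
Which one wins depends on how much of evaluate runs in GIL-releasing C code (numpy, etc.) versus pure Python.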

How do you feed a tf.data.Dataset dynamically in eager execution mode where initializable_iterator isn't available?

What is the new approach (under eager execution) to feeding data through a dataset pipeline in a dynamic fashion, when we need to feed it sample by sample?
I have a tf.data.Dataset which performs some preprocessing steps and reads data from a generator, drawing from a large dataset during training.
Let's say that dataset is represented as:
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
ds = ds.map(tf.square).shuffle(2).batch(2)
iterator = tf.data.make_one_shot_iterator(ds)
After training I want to produce various visualizations which require that I feed one sample at a time through the network for inference. I've now got this dataset preprocessing pipeline that I need to feed my raw sample through to be sized and shaped appropriately for the network input.
This seems like a use case for the initializable iterator:
placeholder = tf.placeholder(tf.float32, shape=None)
ds = tf.data.Dataset.from_tensor_slices(placeholder)
ds = ds.map(tf.square).shuffle(2).batch(2)
iterator = tf.data.make_initializable_iterator(ds)
# now re-initialize for each sample
Keep in mind that the map operation in this example represents a long sequence of preprocessing operations that can't be duplicated for each new data sample being fed in.
This doesn't work with eager execution, though: you can't use a placeholder. The documentation examples all seem to assume a static input such as in the first example here.
The only way I can think of doing this is with a queue and tf.data.Dataset.from_generator(...), which reads from the queue that I push to before predicting on the data. But this feels hacky and appears prone to deadlocks that I've yet to solve.
TF 1.14.0
I just realized that the answer to this question is trivial:
Just create a new dataset!
In non-eager mode the code below would have degraded in performance, because each dataset operation would have been added to the graph and never released; that is exactly why non-eager mode has the initializable iterator to resolve the issue.
However, in eager execution mode tensorflow operations like this are ephemeral: iterators aren't added to a global graph, they just get created and die when no longer referenced. Win one for TF 2.0!
The code below (copy/paste runnable) demonstrates:
import tensorflow as tf
import numpy as np
import time

tf.enable_eager_execution()

inp = np.ones(shape=5000, dtype=np.float32)

t = time.time()
while True:
    ds = tf.data.Dataset.from_tensors(inp).batch(1)
    val = next(iter(ds))
    assert np.all(np.squeeze(val, axis=0) == inp)
    print('Processing time {:.2f}'.format(time.time() - t))
    t = time.time()
The motivation for the question came on the heels of this issue in 1.14 where creating multiple dataset operations in graph mode under Keras constitutes a memory leak.
https://github.com/tensorflow/tensorflow/issues/30448

Loading library on each map() execution

The spaCy library has some problems when being shared across executors (a problem with pickling). One of the workarounds is to import it independently in each map execution, but the load takes a while.
I'm new to Spark, so I don't understand the exact mechanism behind map. What will happen in the example case below?
I'm afraid of the worst-case scenario, where individual lines of text are processed independently and each one re-imports spacy. A fresh import can take a good 10+ s and we have 1,000,000+ lines of text.
from pyspark import SparkContext

class SpacyMagic(object):
    _spacys = {}

    @classmethod
    def get(cls, lang):
        if lang not in cls._spacys:
            import spacy
            cls._spacys[lang] = spacy.load(lang)
        return cls._spacys[lang]

def run_spacy(sent):
    nlp = SpacyMagic.get('en')
    return [wrd.text for wrd in nlp(sent)]

sc = SparkContext(appName="LineTokenizer")
data = sc.textFile(s3in)
res = data.map(run_spacy)
print res.take(100)
sc.stop()

How does one train multiple models in a single script in TensorFlow when there are GPUs present?

Say I have access to a number of GPUs in a single machine (for the sake of argument assume 8 GPUs, each with 8 GB of memory, in one single machine with some amount of RAM and disk). I wanted to run, in one single script and on one single machine, a program that evaluates multiple models (say 50 or 200) in TensorFlow, each with a different hyper-parameter setting (say, step size, decay rate, batch size, epochs/iterations, etc.). At the end of training, assume we just record its accuracy and get rid of the model (if you want, assume the model is being checkpointed every so often, so it's fine to just throw away the model and start training from scratch; you may also assume some other data is recorded, like the specific hyper-params and the train/validation errors, as we train).
Currently I have a (pseudo-)script that looks as follow:
def train_multiple_modles_in_one_script_with_gpu(arg):
    '''
    trains multiple NN models in one session using GPUs correctly.
    arg = some obj/struct with the params for training each of the models.
    '''
    #### try multiple models
    for mdl_id in range(100):
        #### define/create graph
        graph = tf.Graph()
        with graph.as_default():
            ### get mdl
            x = tf.placeholder(float_type, get_x_shape(arg), name='x-input')
            y_ = tf.placeholder(float_type, get_y_shape(arg))
            y = get_mdl(arg, x)
            ### get loss and accuracy
            loss, accuracy = get_accuracy_loss(arg, x, y, y_)
            ### get optimizer variables
            opt = get_optimizer(arg)
            train_step = opt.minimize(loss, global_step=global_step)
        #### run session
        with tf.Session(graph=graph) as sess:
            # train
            for i in range(nb_iterations):
                batch_xs, batch_ys = get_batch_feed(X_train, Y_train, batch_size)
                sess.run(fetches=train_step, feed_dict={x: batch_xs, y_: batch_ys})
                # check_point mdl
                if i % report_error_freq == 0:
                    sess.run(step.assign(i))
                    #
                    train_error = sess.run(fetches=loss, feed_dict={x: X_train, y_: Y_train})
                    test_error = sess.run(fetches=loss, feed_dict={x: X_test, y_: Y_test})
                    print('step %d, train error: %s test_error %s' % (i, train_error, test_error))
Essentially it tries lots of models in one single run, but it builds each model in a separate graph and runs each one in a separate session.
I guess my main worry is that it's unclear to me how tensorflow allocates resources for the GPUs under the hood. For example, does it load (part of) the data set only when a session is run? When I create a graph and a model, is it brought into the GPU immediately, or when is it inserted into the GPU? Do I need to clear/free the GPU each time it tries a new model? I don't actually care too much whether the models run in parallel on multiple GPUs (which would be a nice addition); I want it to first run everything serially without crashing. Is there anything special I need to do for this to work?
Currently I am getting an error that starts as follow:
I tensorflow/core/common_runtime/bfc_allocator.cc:702] Stats:
Limit: 340000768
InUse: 336114944
MaxInUse: 339954944
NumAllocs: 78
MaxAllocSize: 335665152
W tensorflow/core/common_runtime/bfc_allocator.cc:274] ***************************************************xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
W tensorflow/core/common_runtime/bfc_allocator.cc:275] Ran out of memory trying to allocate 160.22MiB. See logs for memory state.
W tensorflow/core/framework/op_kernel.cc:975] Resource exhausted: OOM when allocating tensor with shape[60000,700]
and further down the line it says:
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[60000,700]
[[Node: standardNN/NNLayer1/Z1/add = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](standardNN/NNLayer1/Z1/MatMul, b1/read)]]
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla P100-SXM2-16GB, pci bus id: 0000:06:00.0)
However, further down the output file (where it prints), it seems to print the errors/messages that should show as training proceeds just fine. Does this mean that it didn't run out of resources? Or was it actually able to use the GPU? If it was able to use the CPU instead of the GPU, then why is this error only happening when the GPUs are about to be used?
The weird thing is that the data set is really not that big (all 60K points total only 24.5 MB), and when I run a single model locally on my own computer the process seems to use less than 5 GB. The GPUs have at least 8 GB and the computer with them has plenty of RAM and disk (at least 16 GB). Thus, the errors that tensorflow is throwing at me are quite puzzling. What is it trying to do and why are they occurring? Any ideas?
After reading the answer that suggests to use the multiprocessing library I came up with the following script:
def train_mdl(args):
train(mdl,args)
if __name__ == '__main__':
for mdl_id in range(100):
# train one model with some specific hyperparms (assume they are chosen randomly inside the funciton bellow or read from a config file or they could just be passed or something)
p = Process(target=train_mdl, args=(args,))
p.start()
p.join()
print('Done training all models!')
Honestly, I am not sure why his answer suggests using Pool, or why there are weird tuple parentheses, but this is what would make sense to me. Would the resources for tensorflow be re-allocated every time a new process is created in the above loop?
I think that running all models in one single script can be bad practice in the long term (see my suggestion below for a better alternative). However, if you would like to do it, here is a solution: you can encapsulate your TF session into a process with the multiprocessing module; this will make sure TF releases the session memory once the process is done. Here is a code snippet:
from multiprocessing import Pool
import contextlib

def my_model((param1, param2, param3)):  # Note the extra (), required by the pool syntax (Python 2 tuple-parameter unpacking)
    < your code >

num_pool_workers = 1  # can be bigger than 1, to enable parallel execution
with contextlib.closing(Pool(num_pool_workers)) as po:  # This ensures that the processes get closed once they are done
    pool_results = po.map_async(my_model,
                                ((param1, param2, param3)
                                 for param1, param2, param3 in params_list))
    results_list = pool_results.get()
Note from OP: The random number generator seed does not reset automatically with the multi-processing library if you choose to use it. Details here: Using python multiprocessing with different random seed for each process
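A minimal sketch of working around that, assuming numpy's global RNG (seeding from the PID plus a task id is just one illustrative option):
import os
import numpy as np
from multiprocessing import Pool

def worker(task_id):
    # Forked children inherit the parent's RNG state, so re-seed per process/task.
    np.random.seed(os.getpid() + task_id)
    return np.random.rand()

if __name__ == '__main__':
    with Pool(4) as pool:
        print(pool.map(worker, range(8)))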
About TF resource allocation: Usually TF allocates much more resources than it needs. Many times you can restrict each process to use a fraction of the total GPU memory, and discover through trial and error the fraction your script requires.
You can do it with the following snippet
gpu_memory_fraction = 0.3 # Choose this number through trial and error
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_memory_fraction,)
session_config = tf.ConfigProto(gpu_options=gpu_options)
sess = tf.Session(config=session_config, graph=graph)
Note that sometimes TF increases the memory usage in order to accelerate the execution. Therefore, reducing the memory usage might make your model run slower.
Answers to the new questions in your edit/comments:
Yes, Tensorflow will be re-allocated every time a new process is created, and cleared once a process ends.
The for-loop in your edit should also do the job. I suggest using Pool instead, because it will enable you to run several models concurrently on a single GPU. See my notes about setting gpu_memory_fraction and "choosing the maximal number of processes". Also note that: (1) The Pool map runs the loop for you, so you don't need an outer for-loop once you use it. (2) In your example, you should have something like mdl=get_model(args) before calling train().
Weird tuple parentheses: Pool only accepts a single argument; therefore we use a tuple to pass multiple arguments. See multiprocessing.pool.map and function with two arguments for more details. As suggested in one answer, you can make it more readable with
def train_mdl(params):
    (x, y) = params
    < your code >
As @Seven suggested, you can use the CUDA_VISIBLE_DEVICES environment variable to choose which GPU to use for your process. You can do it from within your python script using the following at the beginning of the process function (train_mdl).
import os # the import can be on the top of the python script
os.environ["CUDA_VISIBLE_DEVICES"] = "{}".format(gpu_id)
A better practice for executing your experiments would be to isolate your training/evaluation code from the hyper-parameter / model-search code.
E.g. have a script named train.py, which accepts a specific combination of hyper parameters and references to your data as arguments, and executes training for a single model.
Then, to iterate through all the possible combinations of parameters, you can use a simple task (job) queue and submit each combination of hyper-parameters as a separate job. The task queue will feed your jobs one at a time to your machine. Usually, you can also set the queue to execute a number of processes concurrently (see details below).
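As a rough, hypothetical skeleton of such a train.py (the flag names here are only examples):
import argparse

def main():
    # Parse hyper-parameters from the command line, then train a single model with them
    parser = argparse.ArgumentParser()
    parser.add_argument('--lrn_rate', type=float, default=0.01)
    parser.add_argument('--batch_size', type=int, default=32)
    parser.add_argument('--data_dir', type=str, required=True)
    args = parser.parse_args()
    # ... build the model from args, train it, and write metrics/checkpoints to disk ...

if __name__ == '__main__':
    main()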
Specifically, I use task spooler, which is super easy to install and handy (it doesn't require admin privileges; details below).
Basic usage is (see notes below about task spooler usage):
ts <your-command>
In practice, I have a separate python script that manages my experiments, set all the arguments per specific experiment and send the jobs to the ts queue.
Here are some relevant snippets of python code from my experiments manager:
run_bash executes a bash command
import subprocess

def run_bash(cmd):
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, executable='/bin/bash')
    out = p.stdout.read().strip()
    return out  # This is the stdout from the shell command
The next snippet sets the number of concurrent processes to be run (see note below about choosing the maximal number of processes):
max_job_num_per_gpu = 2
run_bash('ts -S %d'%max_job_num_per_gpu)
The next snippet iterates through a list of all combinations of hyper params / model params. Each element of the list is a dictionary, where the keys are the command line arguments for the train.py script
for combination_dict in combinations_list:
    job_cmd = 'python train.py ' + ' '.join(
        ['--{}={}'.format(flag, value) for flag, value in combination_dict.iteritems()])
    submit_cmd = "ts bash -c '%s'" % job_cmd
    run_bash(submit_cmd)
A note about choosing the maximal number of processes:
If you are short on GPUs, you can use the gpu_memory_fraction you found to set the number of processes as max_job_num_per_gpu = int(1 / gpu_memory_fraction)
Notes about task spooler (ts):
You could set the number of concurrent processes to run ("slots") with:
ts -S <number-of-slots>
Installing ts doesn't require admin privileges. You can download and compile it from source with a simple make, add it to your path, and you're done.
You can set up multiple queues (I use it for multiple GPUs), with
TS_SOCKET=<path_to_queue_name> ts <your-command>
e.g.
TS_SOCKET=/tmp/socket-ts.gpu_queue_1 ts <your-command>
TS_SOCKET=/tmp/socket-ts.gpu_queue_2 ts <your-command>
See here for further usage examples.
A note about automatically setting the path names and file names:
Once you separate your main code from the experiment manager, you will need an efficient way to generate file names and directory names, given the hyper-params. I usually keep my important hyper params in a dictionary and use the following function to generate a single chained string from the dictionary key-value pairs.
Here are the functions I use for doing it:
import re

def build_string_from_dict(d, sep='%'):
    """
    Builds a string from a dictionary.
    Mainly used for formatting hyper-params to file names.
    Key-value pairs are sorted by the key name.

    Args:
        d: dictionary
    Returns: string
    :param d: input dictionary
    :param sep: key-value separator
    """
    return sep.join(['{}={}'.format(k, _value2str(d[k])) for k in sorted(d.keys())])

def _value2str(val):
    if isinstance(val, float):
        # %g means: "Floating point format.
        # Uses lowercase exponential format if exponent is less than -4 or not less than precision,
        # decimal format otherwise."
        val = '%g' % val
    else:
        val = '{}'.format(val)
    val = re.sub('\.', '_', val)
    return val
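For illustration, a small usage example of the helpers above (the hyper-param names are made up; the expected output is shown as a comment):
hyper_params = {'lrn_rate': 0.01, 'batch_size': 8}
print(build_string_from_dict(hyper_params))
# batch_size=8%lrn_rate=0_01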
As I understand it, tensorflow first constructs a symbolic graph and infers the derivatives based on the chain rule. Then it allocates memory for all (necessary) tensors, including some inputs and outputs of layers, for efficiency. When running a session, data will be loaded into the graph, but in general memory use will not change any more.
The error you met may, I guess, be caused by constructing several models on one GPU.
Isolating your training/evaluation code from the hyper parameters is a good choice, as @user2476373 proposed. But I use bash scripts directly, not task spooler (maybe it's more convenient), e.g.
CUDA_VISIBLE_DEVICES=0 python train.py --lrn_rate 0.01 --weight_decay_rate 0.001 --momentum 0.9 --batch_size 8 --max_iter 60000 --snapshot 5000
CUDA_VISIBLE_DEVICES=0 python eval.py
Or you can write a 'for' loop in the bash script, not necessarily in the python script. Note that I used CUDA_VISIBLE_DEVICES=0 at the beginning of each command (the index could be 7 if you have 8 GPUs in one machine), because in my experience tensorflow uses all GPUs in a machine if I don't specify which GPU an operation should use with code like this:
with tf.device('/gpu:0'):
If you want to try a multi-GPU implementation, there are some examples.
Hope this helps.
An easy solution: Give each model a unique session and graph.
It works for this platform: TensorFlow 1.12.0, Keras 2.1.6-tf, Python 3.6.7, Jupyter Notebook.
Key code:
with session.as_default():
    with session.graph.as_default():
        # do something about an ANN model
Full code:
import tensorflow as tf
from tensorflow import keras
import gc

def limit_memory():
    """ Release unused memory resources. Force garbage collection """
    keras.backend.clear_session()
    keras.backend.get_session().close()
    tf.reset_default_graph()
    gc.collect()
    #cfg = tf.ConfigProto()
    #cfg.gpu_options.allow_growth = True
    #keras.backend.set_session(tf.Session(config=cfg))
    keras.backend.set_session(tf.Session())
    gc.collect()

def create_and_train_ANN_model(hyper_parameter):
    print('create and train my ANN model')
    info = { 'result about this ANN model' }
    return info

for i in range(10):
    limit_memory()
    session = tf.Session()
    keras.backend.set_session(session)
    with session.as_default():
        with session.graph.as_default():
            hyper_parameter = { 'A set of hyper-parameters' }
            info = create_and_train_ANN_model(hyper_parameter)
    limit_memory()
Inspired by this link: Keras (Tensorflow backend) Error - Tensor input_1:0, specified in either feed_devices or fetch_devices was not found in the Graph
I have the same issue. My solution is to run the following from another script, as many times and with as many hyperparameter configurations as you want.
cmd = "python3 ./model_train.py hyperparameters"
os.system(cmd)
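A minimal sketch of such a driver loop (the script name and flags are placeholders):
import os

# Placeholder hyperparameter configurations, passed as command-line arguments
configs = [
    "--lr 0.01 --batch_size 32",
    "--lr 0.001 --batch_size 64",
]
for hyperparameters in configs:
    cmd = "python3 ./model_train.py " + hyperparameters
    # Each run is a fresh process, so all TF/GPU memory is released when it exits
    os.system(cmd)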
You probably don't want to do this.
If you run thousands and thousands of models on your data, and pick the one that evaluates best, you are not doing machine learning; instead you are memorizing your data set, and there is no guarantee that the model you pick will perform at all outside that data set.
In other words, that approach is similar to having a single model with thousands of degrees of freedom. Having a model with such a high order of complexity is problematic, since it will be able to fit your data better than is actually warranted; such a model is annoyingly able to memorize any noise (outliers, measurement errors, and such) in your training data, which causes the model to perform poorly when the noise is even slightly different.
(Apologies for posting this as an answer, the site wouldn't let me add a comment.)

multiple cpu usage when accessing data attached to traited classes

I have an application that uses a number of classes inheriting from HasTraits. Some of these classes manage access to data and others provide functions for analyzing that data. This works wonderfully for a GUI -- I can check that the data and analysis code is doing what it should. However, I've noticed that when I use these classes for GUI-less computations, all the CPUs on the system end up getting used.
Here is a small example that shows the cpu usage:
from traits.api import HasTraits, List, Int, Enum, Instance
import numpy as np
import psutil
from itertools import combinations

"""
Small example of high CPU usage by traited classes
"""

class DataStorage(HasTraits):
    nsamples = Int(2000)
    samples = List

    def _samples_default(self):
        return np.random.randn(self.nsamples, 2000).tolist()

    def sample_samples(self, indices):
        """ return a 2D array of data at indices """
        return np.array(
            [self.samples[i] for i in indices])

class DataAccessor(HasTraits):
    """ Class that grabs data and computes something """
    measure = Enum("correlation", "covariance")
    data_source = Instance(DataStorage, ())

    def compute_measure(self, indices):
        """ example of some computation """
        samples = self.data_source.sample_samples(indices)
        percentage = psutil.cpu_percent(interval=0, percpu=True)
        if self.measure == "correlation":
            result = np.corrcoef(samples)
        elif self.measure == "covariance":
            result = np.cov(samples)
        return percentage

# Run a simulation to see cpu usage
analyzer = DataAccessor()
usage = []
n_iterations = 0
max_iterations = 500
for combo in combinations(np.arange(2000), 500):
    # evaluate the measurement on a subset of the data
    usage.append(analyzer.compute_measure(combo))
    n_iterations += 1
    if n_iterations > max_iterations:
        break

print n_iterations
use_percents = np.array(usage).T
When I run this on an 8-cpu machine running CentOS, top reports the python process at roughly 600%.
>>> use_percents.mean(1)
shows
array([ 67.05548902, 67.06906188, 66.89041916, 67.28942116,
66.69421158, 67.61437126, 99.8007984 , 67.31996008])
Question:
My computation is embarrassingly parallel, so it would be great to have the other CPUs available to split up the job. Does anyone know what's happening here? A plain python version of this uses 100% of a single CPU.
Is there a way to keep everything local to a single CPU without rewriting all my classes to drop Traits?
Traits is not causing the CPU usage. It's easy to rewrite this bit of code without Traits, and you will see that you get the same pattern of CPU usage (at least, I do).
Instead, what you are probably seeing is the CPU usage of the BLAS library that your build of numpy is linked against. numpy.corrcoef() calls numpy.cov(), and much of the computation of numpy.cov() is taken up by a numpy.dot() call, which does a matrix-matrix multiplication using BLAS. If it is an optimized BLAS library, it will usually use non-Python threads internally to split these computations among your CPUs. You will have to consult the documentation of your optimized BLAS library to find out how to change this.
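For example, with OpenBLAS or MKL you can usually cap the BLAS thread count via environment variables; this is only a sketch, and the exact variable depends on which BLAS your numpy build is linked against (it generally has to be set before numpy is first imported):
import os

# Cap BLAS threading; which variable takes effect depends on your BLAS build
os.environ["OMP_NUM_THREADS"] = "1"         # OpenMP-based BLAS builds
os.environ["OPENBLAS_NUM_THREADS"] = "1"    # OpenBLAS
os.environ["MKL_NUM_THREADS"] = "1"         # Intel MKL

import numpy as np  # import only after the environment is set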
