I am trying to use Optuna to search hyperparameter spaces.
In one particular scenario I train a model on a machine with a few GPUs.
The model and batch size allow me to run 1 training per GPU.
So, ideally I would like to let optuna spread all trials across the available GPUs
so that there is always 1 trial running on each GPU.
The docs say I should just start one process per GPU in a separate terminal, like:
CUDA_VISIBLE_DEVICES=0 optuna study optimize foo.py objective --study foo --storage sqlite:///example.db
I want to avoid that because the whole hyperparameter search continues over multiple rounds after that. I don't want to keep manually starting a process per GPU, checking when all are finished, and then starting the next round.
I saw study.optimize has a n_jobs argument.
At first glance this seems to be perfect.
E.g. I could do this:
import optuna

def objective(trial):
    # the actual model would be trained here
    # the trainer here would need to know which GPU
    # it should be using
    best_val_loss = trainer(**trial.params)
    return best_val_loss

study = optuna.create_study()
study.optimize(objective, n_trials=100, n_jobs=8)
This starts multiple threads, each of which starts a training run.
However, the trainer within objective somehow needs to know which GPU it should be using.
Is there a trick to accomplish that?
After a few mental breakdowns I figured out that I can do what I want using a multiprocessing.Queue. To get the queue into the objective function I need to define the objective as a lambda function or as a callable class (I guess functools.partial also works; a sketch of that variant follows the code below). E.g.:
from contextlib import contextmanager
import multiprocessing

import optuna

N_GPUS = 2

class GpuQueue:

    def __init__(self):
        self.queue = multiprocessing.Manager().Queue()
        all_idxs = list(range(N_GPUS)) if N_GPUS > 0 else [None]
        for idx in all_idxs:
            self.queue.put(idx)

    @contextmanager
    def one_gpu_per_process(self):
        current_idx = self.queue.get()
        yield current_idx
        self.queue.put(current_idx)

class Objective:

    def __init__(self, gpu_queue: GpuQueue):
        self.gpu_queue = gpu_queue

    def __call__(self, trial: optuna.Trial):
        with self.gpu_queue.one_gpu_per_process() as gpu_i:
            best_val_loss = trainer(**trial.params, gpu=gpu_i)
        return best_val_loss

if __name__ == '__main__':
    study = optuna.create_study()
    study.optimize(Objective(GpuQueue()), n_trials=100, n_jobs=8)
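Since functools.partial was only mentioned in passing, here is a minimal sketch of that variant (untested; it reuses the same GpuQueue and trainer placeholders as above):

from functools import partial

import optuna

def objective(trial, gpu_queue):
    # claim one GPU index for the lifetime of this trial
    with gpu_queue.one_gpu_per_process() as gpu_i:
        return trainer(**trial.params, gpu=gpu_i)

if __name__ == '__main__':
    study = optuna.create_study()
    study.optimize(partial(objective, gpu_queue=GpuQueue()),
                   n_trials=100, n_jobs=8)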
If you want a documented solution for passing arguments to objective functions used by multiple jobs, then the Optuna docs present two solutions:
callable classes (it can be combined with multiprocessing),
lambda function wrapper (caution: simpler, but does not work with multiprocessing).
If you are prepared to take a few shortcuts, you can skip some boilerplate by reading global values (constants such as the number of GPUs used) directly inside the __call__() method from the enclosing Python module, rather than passing them as arguments to __init__().
The callable-classes solution was tested to work (in optuna==2.0.0) with the two multiprocessing backends (loky/multiprocessing) and with remote database backends (mariadb/postgresql).
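For reference, a minimal sketch of the lambda-wrapper pattern (the toy objective and the extra n_gpus argument are placeholders, not taken verbatim from the Optuna docs):

import optuna

def objective(trial, n_gpus):
    # toy objective; n_gpus stands in for any extra argument you want to pass
    x = trial.suggest_float("x", -10, 10)
    return (x - n_gpus) ** 2

if __name__ == "__main__":
    n_gpus = 2
    study = optuna.create_study()
    study.optimize(lambda trial: objective(trial, n_gpus), n_trials=100)

As the caution above notes, this wrapper only works with the thread-based n_jobs parallelism, not with multiprocessing, because lambdas cannot be pickled.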
To overcome the problem I introduced a global variable that tracks which GPUs are currently in use, which can then be read in the objective function. The code looks like this:
import random
import time

import optuna
import torch

EPOCHS = n  # number of training epochs
USED_DEVICES = []

def objective(trial):
    # stagger start-up, because all n_jobs threads start at the same time
    time.sleep(random.uniform(0, 2))
    gpu_list = list(range(torch.cuda.device_count()))
    unused_gpus = [x for x in gpu_list if x not in USED_DEVICES]
    idx = random.choice(unused_gpus)
    USED_DEVICES.append(idx)
    unused_gpus.remove(idx)
    DEVICE = f"cuda:{idx}"

    model = define_model(trial).to(DEVICE)
    # ... YOUR CODE ...

    for epoch in range(EPOCHS):
        # ... YOUR CODE ...
        if trial.should_prune():
            USED_DEVICES.remove(idx)
            raise optuna.exceptions.TrialPruned()

    # remove idx from the list so the GPU can be reused by the next trial
    USED_DEVICES.remove(idx)
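For completeness, a possible way to launch this objective is sketched below (not part of the original answer; the direction and trial count are placeholders):

import optuna
import torch

if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()
    study = optuna.create_study(direction="minimize")
    # one thread per GPU so each trial can claim its own device
    study.optimize(objective, n_trials=100, n_jobs=n_gpus)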
Related
TL;DR: when using PyTorch with Optuna, with multiprocessing handled via a Queue(), one GPU (out of 4) can hang. Probably not a deadlock. Any ideas?
Normal version:
I am using PyTorch in combination with Optuna (a hyperparameter optimization framework; basically it starts different trials for one model with different parameters, see: https://optuna.readthedocs.io/en/stable/) for my model training on a setup with 4 GPUs. I have been looking for a way to distribute the workload more efficiently across the GPUs, hence I explored the multiprocessing library.
The core of the multiprocessing code looks like the following:
class GpuQueue:

    def __init__(self):
        self.queue = multiprocessing.Manager().Queue()
        all_idxs = list(range(N_GPUS)) if N_GPUS > 0 else [None]
        for idx in all_idxs:
            self.queue.put(idx)

    @contextmanager
    def one_gpu_per_process(self):
        current_idx = self.queue.get()
        yield current_idx
        self.queue.put(current_idx)

class Objective:

    def __init__(self, gpu_queue: GpuQueue, params, signals):
        self.gpu_queue = gpu_queue
        # create dataset
        # ...

    def __call__(self, trial: optuna.Trial):
        with self.gpu_queue.one_gpu_per_process() as gpu_i:
            val = trainer(trial, gpu=gpu_i, ...)
        return val
And in main, the Optuna study and the optimization are initiated with:
study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler(seed=17))  # storage="sqlite:///trials.db"
study.optimize(Objective(GpuQueue(), ...), n_jobs=4)
The same implementation can be found in this Stack Overflow post (used as inspiration): Is there a way to pass arguments to multiple jobs in optuna?
With this code every trial gets its own GPU, so GPU usage and distribution are better than with other methods. However, it often happens that a GPU gets stuck, just 'shuts itself off', and does not finish its trial, so the code never finishes running and that GPU is never freed.
Say, for example, that I am running 100 trials; then trials 1, 2, 3, 4 get assigned GPUs 0, 1, 2, 3 (not always in that order), and whenever a GPU is freed, say GPU 2, it takes on trial 5, and so on. The issue is that the trial a GPU is assigned to can 'quit' in the process and never finish, so that GPU never takes on another trial and the run with many trials never completes.
I suspected a deadlock, but apparently Queue() is thread-safe (see: Is Python multiprocessing.Queue thread safe?).
Any clue on what can cause the hang and what I can look for?
I've encountered a mysterious bug while trying to implement Hogwild with torch.multiprocessing. In particular, one version of the code runs fine, but when I add in a seemingly unrelated bit of code before the multiprocessing step, this somehow causes an error during the multiprocessing step: RuntimeError: Unable to handle autograd's threading in combination with fork-based multiprocessing. See https://github.com/pytorch/pytorch/wiki/Autograd-and-Fork
I reproduced the error in a minimal code sample, pasted below. If I comment out the two lines of code m0 = Model(); train(m0) which carry out a non-parallel training run on a separate model instance, then everything runs fine. I can't figure out how these lines could be causing a problem.
I'm running PyTorch 1.5.1 and Python 3.7.6 on a Linux machine, training on CPU only.
import torch
import torch.multiprocessing as mp
from torch import nn

def train(model):
    opt = torch.optim.Adam(model.parameters(), lr=1e-5)
    for _ in range(10000):
        opt.zero_grad()
        # We train the model to output the value 4 (arbitrarily)
        loss = (model(0) - 4)**2
        loss.backward()
        opt.step()

# Toy model with one parameter tensor of size 3.
# Output is always the sum of the elements in the tensor,
# independent of the input
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.x = nn.Parameter(torch.ones(3))

    def forward(self, x):
        return torch.sum(self.x)

############################################
# Create a separate Model instance and run
# a non-parallel training run.
# For some reason, this code causes the
# subsequent parallel run to fail.
m0 = Model()
train(m0)
print('Done with preliminary run')
############################################

num_processes = 2
model = Model()
model.share_memory()
processes = []
for rank in range(num_processes):
    p = mp.Process(target=train, args=(model,))
    p.start()
    processes.append(p)
for p in processes:
    p.join()
print(model.x)
If you modify your code to create new processes like this:
processes = []
ctx = mp.get_context('spawn')
for rank in range(num_processes):
    p = ctx.Process(target=train, args=(model,))
it seems to run fine (rest of code same as yours, tested on pytorch 1.5.0 / python 3.6 / NVIDIA T4 GPU).
I'm not completely sure what is carried over from the non-parallel run to the parallel run; I tried creating a completely new model for the two runs (with its own class), and/or deleting anything from the original, and/or making sure to delete any tensors and free up memory, and none of that made any difference.
What did make a difference was making sure that .backward() never got called outside of mp.Process() before it was called by a function within mp.Process(). I think what may be carried over is an autograd thread; if the thread exists before multiprocessing with the default fork method it fails, if the thread is created after fork it seems to work okay, and if using spawn it also works okay.
Btw: That's a really interesting question - thank you especially for digesting it to a minimal example!
You missed this:
if __name__ == '__main__':
which is very important for multiprocessing!
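Putting the two answers together, a minimal sketch of the fixed main block (assuming the Model and train definitions from the question stay at module level; untested here):

import torch.multiprocessing as mp

# Model and train defined as in the question ...

if __name__ == '__main__':
    # preliminary, non-parallel run
    m0 = Model()
    train(m0)
    print('Done with preliminary run')

    num_processes = 2
    model = Model()
    model.share_memory()

    # spawn avoids inheriting the parent's autograd thread, unlike fork
    ctx = mp.get_context('spawn')
    processes = []
    for rank in range(num_processes):
        p = ctx.Process(target=train, args=(model,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
    print(model.x)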
I am trying to build a Kubeflow pipeline where I run two components (each with a GPU constraint) in parallel. It seemed like a non-issue, but every time I tried it, one component got stuck at "pending" until the other component was done.
[Screenshot of an example run]
The two components I am testing are simple while loops with a GPU constraint:
while_op1 = while_loop_op(image_name='tensorflow/tensorflow:1.15.2-py3')
while_op1.name = 'while-1-gpu'
while_op1.set_security_context(V1SecurityContext(privileged=True))
while_op1.apply(gcp.use_gcp_secret('user-gcp-sa'))
while_op1.add_pvolumes({pv_base_path: _volume_op.volume})
while_op1.add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-p100')
while_op1.set_gpu_limit(1)
while_op1.after(init_op)
Where while_loop_op:
import kfp.components as comp

def while_loop_op(image_name):
    def while_loop():
        import time
        max_count = 300
        count = 0
        while True:
            if count >= max_count:
                print('Done.')
                break
            time.sleep(10)
            count += 10
            print("{} seconds have passed...".format(count))

    op = comp.func_to_container_op(while_loop, base_image=image_name)
    return op()
The issue might be related to your use of volumes. Have you tried the better-supported data-passing mechanisms?
For example, take this pipeline: https://github.com/kubeflow/pipelines/blob/091316b8bf3790e14e2418843ff67a3072cfadc0/components/XGBoost/_samples/sample_pipeline.py
Apply the GPU-related customizations to the trainer:
some_task.add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-p100')
some_task.set_gpu_limit(1)
Put the trainer and predictor inside a for _ in range(10): loop so that you have 10 parallel copies (see the sketch after these steps).
Check whether the trainers run in parallel.
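A rough sketch of that experiment, applied for illustration to the while_loop_op component from the question rather than the XGBoost sample's trainer (the pipeline name is a placeholder):

import kfp.dsl as dsl

@dsl.pipeline(name='parallel-gpu-test')
def parallel_gpu_pipeline():
    # ten copies of the GPU-constrained step, created in a loop, no volumes attached
    for i in range(10):
        task = while_loop_op(image_name='tensorflow/tensorflow:1.15.2-py3')
        task.add_node_selector_constraint('cloud.google.com/gke-accelerator',
                                          'nvidia-tesla-p100')
        task.set_gpu_limit(1)

If these ten tasks run concurrently, the "pending" issue is most likely tied to the volume setup rather than to the GPU constraint.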
P.S. It's better to create issues in the official repo: https://github.com/kubeflow/pipelines/issues
I'm trying to write some code to parallelize a bunch of tasks. Basically, the script is organized as follows.
import random
import multiprocessing as mp

import torch
from torch import nn, optim

def obj_train(x):
    return x.train()

class ServerModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.S = nn.Parameter(torch.rand(x, y), requires_grad=True)

class ClientModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.S = nn.Parameter(torch.rand(x, y), requires_grad=True)
        self.U = nn.Parameter(torch.rand(x, y), requires_grad=True)

class Server:
    def __init__(self, model):
        self.model = model
        ...

    def train(self, clients):
        # push the current S into every client model
        for i, c in enumerate(clients):
            sd = c.model.state_dict()
            sd['S'] = self.model.S
            c.model.load_state_dict(sd)
        self.c_list = random.sample(clients, 200)
        pool = mp.Pool(mp.cpu_count() - 1)
        results = pool.map(obj_train, self.c_list)
        pool.close()
        pool.join()
        print("Training complete")

class Client:
    def __init__(self, client_id, model, train_set):
        self.id = client_id
        self.model = model
        self.train_set = train_set

    def train(self):
        # lr value is a placeholder; SGD requires one
        self.optimizer = optim.SGD([self.model.S, self.model.U], lr=0.01)
        for i in self.train_set:
            loss = self.model(i)
            loss.backward()
            self.optimizer.step()
        print("Trained client %d" % self.id)
        return self.model.S

if __name__ == '__main__':
    ...
    server = Server(server_model)
    clients = [Client(u, ClientModel(), train_set[u]) for u in range(n_clients)]
    server.train(clients)
OK, the problem is with multiprocessing. I tried a lot of approaches, but all of them give me the same problem. The server should manage the training of 200 clients, but after a certain number of trainings (it depends on the approach, but approximately 50-100) the script gets completely stuck and the CPU cores stop working.
Do you have any ideas? Other approaches I tried include, for example, mp.Pool and ProcessPoolExecutor.
Thank you for your help.
Could it be that you hit the maximum number of processes/threads your machine is able to handle?
It is common, for example, when moving a web crawler from development to production that the machine does not allow more processes.
I would take a look at
/etc/sysctl.d
and, if necessary, increase the number of processes the machine is allowed to handle.
Another reason might be that you hit a RAM limit or something similar. Take another quick look at the command
htop
followed by
free -m
and see what they tell you. It might be a hardware problem. On the software side, it might be that the library you are using (https://docs.python.org/2/library/multiprocessing.html) limits the number of workers; if so, you can set it higher through the library's parameters.
Last but not least, try to find the problem incrementally. I would test it with 2 processes and increase slowly to see when the application starts having issues. At that point it should be much clearer what the issue is. Good luck!
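One way to run that incremental test is to make the pool size an explicit knob and raise it run by run until the hang appears (obj_train and the client list are the ones from the question):

import multiprocessing as mp

def train_with_pool_size(clients, n_procs):
    # cap the number of worker processes explicitly instead of using cpu_count() - 1
    with mp.Pool(processes=n_procs) as pool:
        return pool.map(obj_train, clients)

# try n_procs = 2, 4, 8, ... and note where the script first gets stuck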
My MXNet script is likely limited by the I/O of loading data onto the GPU, and I am trying to speed this up by prefetching. The trouble is that I can't figure out how to prefetch with a custom data iterator.
My first hypothesis/hope was that it would be enough to set the values of self.preprocess_threads and self.prefetch_buffer, as I had seen here for iterators such as mxnet.io.ImageRecordUInt8Iter. However, when I did this I saw no performance change relative to the script before I had set these variables, so clearly setting these did not work.
Then I noticed the existence of a class mx.io.PrefetchingIter, in addition to the base class mx.io.DataIter for which I had implemented a child class. I found this documentation, but I have not been able to find any examples, and I am a little confused about what needs to happen where and when. For example, I see that in addition to next() it has an iter_next() method, which simply says "move to the next batch". What does this mean exactly? What does it mean to "move" to the next batch without producing it? I found the source code for this class, and based on a brief reading, it seems to take multiple iterators and create one thread per iterator. This likely would not work for my current design, as I really want multiple threads prefetching from the same iterator.
Here is what I am trying to do via a custom data iterator:
I maintain a global multiprocessing.Queue onto which data is pushed as it becomes available.
I produce that data by running (via multiprocessing) a command-line script that executes a C++ binary, which produces a NumPy file.
I open the NumPy file, load its contents into memory, process them, and put the processed bits on the global multiprocessing.Queue.
My custom iterator pulls off this queue and also kicks off more jobs to produce more data when the queue is empty.
Here is my code:
# CMD, JSON_FILE, data_queue, counter_queue, args_batch_size, GPU_COUNT, etc.
# are defined elsewhere and omitted here.
import os
import queue
import subprocess

import numpy as np
import mxnet as mx

def launchJobForDate(date_str):
    ### this is a function that gets called via multiprocessing
    ### to produce new data by calling a c++ binary
    ### whenever the data queue is empty and we need to produce more data
    try:
        f = "testdata/data%s.npy" % date_str
        if not os.path.isfile(f):
            cmd = CMD % (date_str, JSON_FILE, date_str, date_str, date_str)
            while True:
                try:
                    output = subprocess.check_output(cmd, shell=True)
                    break
                except:
                    pass
        while True:
            try:
                d = np.load(f)
                break
            except:
                pass
        data_queue.put((d, date_str))
    except Exception as ex:
        print("launchJobForDate: ERROR ", ex)

class ProduceDataIter(mx.io.DataIter):

    @staticmethod
    def processData(d, time_steps, num_inputs):
        try:
            # ...processes data...
            return [z for z in zip(bigX, bigY, bigEvalY, dates)]
        except Exception as ex:
            print("processData: ERROR ", ex)

    def __init__(self, num_mgrs, end_date_str):
        ## iter stuff
        self.preprocess_threads = 4
        self.prefetch_buffer = 1
        ## set up internal data to preserve state
        ## and make a list of dates for which to run the binary

    @property
    def provide_data(self):
        return [mx.io.DataDesc(name='seq_var',
                               shape=(args_batch_size * GPU_COUNT,
                                      self.time_steps,
                                      self.num_inputs),
                               layout='NTC')]

    @property
    def provide_label(self):
        return [mx.io.DataDesc(name='bd_return',
                               shape=(args_batch_size * GPU_COUNT)),
                mx.io.DataDesc(name='bd_return',
                               shape=(args_batch_size * GPU_COUNT, num_y_cols)),
                mx.io.DataDesc(name='date',
                               shape=(args_batch_size * GPU_COUNT))]

    def __next__(self):
        try:
            z = self.z.pop(0)
            data = z[0:1]
            label = z[1:]
            return mx.io.DataBatch(data, label)
        except Exception as ex:
            ### if self.z (a list) has no elements to pop we need
            ### to get more data off the queue, process it, and put it
            ### on self.z so it's ready for calls to __next__()
            while True:
                try:
                    d = data_queue.get_nowait()
                    processedData = ProduceDataIter.processData(d,
                                                                self.time_steps,
                                                                self.num_inputs)
                    self.z.extend(processedData)
                    counter_queue.put(counter_queue.get() - 1)
                    z = self.z.pop(0)
                    data = z[0:1]
                    label = z[1:]
                    return mx.io.DataBatch(data, label)
                except queue.Empty:
                    ### ...this is where new jobs to produce new data and put them
                    ### on the queue would happen if nothing is left on the queue
                    pass
I have then tried making one of these iterators as well as a prefetch iterator like so:
mgr = ProcessMgr(2, end_date_str)
mgrOuter = mx.io.PrefetchingIter([mgr])
The problem is that mgrOuter immediately throws a StopIteration as soon as __next__() is called the first time, without ever invoking mgr.__next__() as I thought it would.
Finally, I also noticed that Gluon has a DataLoader object which seems like it might handle prefetching; however, it also seems to assume that the underlying data comes from a Dataset with a finite and unchanging layout (based on the fact that it is implemented in terms of __getitem__, which takes an index). So I have not pursued this option, as it seems unpromising given the dynamic, queue-like nature of the data I am generating as training input.
My questions are:
How do I need to modify my code above so that there will be prefetching for my custom iterator?
Where might I find an example or more detailed documentation of how mx.io.PrefetchingIter works?
Are there other strategies I should be aware of for getting more performance out of my GPUs via a custom iterator? Right now they are only operating at around 50% capacity, and upping (or lowering) the batch size doesn't change this. What other knobs might I be able to turn to increase GPU use efficiency?
Thanks for any feedback and advice.
As you already mentioned, the Gluon DataLoader provides prefetching. In your custom DataIterator you are using NumPy arrays as input, so you could do the following:
f = "testdata/data%s.npy"%date_str
data = np.load(f)
train = gluon.data.ArrayDataset(mx.nd.array(data))
train_iter = gluon.data.DataLoader(train, shuffle=True, num_workers=4, batch_size=batch_size, last_batch='rollover')
Since you are creating your data dynamically, you could try resetting the DataLoader in every epoch and loading a new NumPy array.
If GPU utilization is still low, then try to increase the batch_size and the num_workers. Another factor could be the size of your dataset: resetting the DataLoader has a fixed cost, so with a larger dataset each epoch takes longer, that cost is amortized, and overall throughput improves.
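A rough sketch of that per-epoch reset idea (the file names, list of dates, and batch size are placeholders):

import numpy as np
import mxnet as mx
from mxnet import gluon

for epoch, date_str in enumerate(date_strs):
    # load whichever freshly generated file is available for this epoch
    data = np.load("testdata/data%s.npy" % date_str)
    train = gluon.data.ArrayDataset(mx.nd.array(data))
    train_iter = gluon.data.DataLoader(train, shuffle=True, num_workers=4,
                                       batch_size=batch_size, last_batch='rollover')
    for batch in train_iter:
        pass  # training step goes here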