Avoid tensorflow print on standard error - python

Does anyone know if there is a way to prevent TensorFlow from polluting standard error with its GPU memory allocation log?
I noted that when the following command is executed:
with tf.Session() as sess:
tensorflow prints on standard error a log about memory and gpu resources allocation. Something like:
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 48
Graphics Device pciBusID 0000:02:00.0
Free memory: 11.75GiB
...
For important reasons, I want to avoid this printing.

This was recently fixed, and should be available if you upgrade to TensorFlow 0.12 or later.
To disable all logging output from TensorFlow, set the following environment variable before launching Python:
$ export TF_CPP_MIN_LOG_LEVEL=3
$ python ...
You can also adjust the verbosity by changing the value of TF_CPP_MIN_LOG_LEVEL:
0 = all messages are logged (default behavior)
1 = INFO messages are not printed
2 = INFO and WARNING messages are not printed
3 = INFO, WARNING, and ERROR messages are not printed

You can set an environment variable before launching Python as described in the first answer, or you can add the following lines to your Python code:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
Change 3 to one of the values (0, 1, 2, 3) according to the messages you want to avoid.
If using TensorFlow 2.0+, make sure to put those lines before import tensorflow for them to take effect.

It defaults to 0, so all logs are shown. Set TF_CPP_MIN_LOG_LEVEL to 1 to filter out INFO logs, 2 to additionally filter out WARNING, and 3 to additionally filter out ERROR.

In TensorFlow 2.0, this does not work for all logging messages (I still get tf.function retracing warnings, for instance).
To make sure everything is turned off, do this:
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
import logging
import tensorflow as tf
logger = tf.get_logger()
logger.setLevel(logging.ERROR) # or logging.INFO, logging.WARNING, etc.
Note 1: If you leave out the os.environ["TF_CPP_MIN_LOG_LEVEL"] line, some info messages are still printed (like the "This TensorFlow binary is optimized with..." message that appears on startup).
Note 2: in TF 1.0 you can also manually set the verbosity with tf.logging.set_verbosity(tf.logging.ERROR) but TF 2.0 does not have tf.logging.
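If you want something that works on both major versions, the 1.x logging module is still reachable under tf.compat.v1 in recent releases; a rough sketch, not verified against every release:
import tensorflow as tf

# TF 2.x (and TF 1.13+) keep the old logging module under tf.compat.v1;
# on older TF 1.x builds fall back to tf.logging directly.
try:
    tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
except AttributeError:
    tf.logging.set_verbosity(tf.logging.ERROR)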

Tensorflow Keras logging steps in for loop

I'm running someone else's program so I can't be 100% open about the code.
I'm using load_model from tensorflow.keras.models to load an .h5 model, then calling model.predict(<data>) in a for loop.
I'm also using tqdm to display a progress bar. The problem is that as soon as I reach the loop, my console starts printing output on every new line:
I tried to block this output:
import absl.logging
import logging
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
absl.logging.set_verbosity(absl.logging.ERROR)
logging.getLogger("tensorflow").setLevel(logging.ERROR)
logging.getLogger("tensorflow").addHandler(logging.NullHandler(logging.ERROR))
But nothing seems to work.
Solved it: according to the tensorflow.keras.Model documentation, the predict method has verbose and steps arguments: https://www.tensorflow.org/api_docs/python/tf/keras/Model#predict
However, the same documentation suggests using the __call__ method when predicting in a loop: model(<data>, training=False), which does not print any step output.
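A minimal sketch of both options, assuming model was loaded with tensorflow.keras.models.load_model and batches is some iterable of input arrays (both names are placeholders):
from tqdm import tqdm

for batch in tqdm(batches):
    # Option 1: silence the per-call progress output of predict()
    preds = model.predict(batch, verbose=0)

    # Option 2 (what the docs suggest for small inputs in a loop):
    # calling the model directly prints no step logs at all
    preds = model(batch, training=False)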

automatically choose a device keras tensorflow [duplicate]

I have access through ssh to a cluster of n GPUs. Tensorflow automatically gave them names gpu:0,...,gpu:(n-1).
Others have access too and sometimes they take random gpus.
I did not place any tf.device() explicitly because that is cumbersome, and even if I selected GPU number j, someone might already be on GPU number j, which would be problematic.
I would like to go through the GPUs' usage, find the first one that is unused, and use only that one.
I guess someone could parse the output of nvidia-smi with bash, get a variable i, and feed that variable i to the TensorFlow script as the number of the GPU to use.
I have never seen an example of this. I imagine it is a pretty common problem. What would be the simplest way to do that? Is a pure TensorFlow solution available?
I'm not aware of a pure-TensorFlow solution. The problem is that the existing place for TensorFlow configuration is the Session config. However, the GPU memory pool is shared by all TensorFlow sessions within a process, so the Session config would be the wrong place for this, and there is no mechanism for process-global config (but there should be, to also be able to configure the process-global Eigen threadpool). So you need to do it at the process level by using the CUDA_VISIBLE_DEVICES environment variable.
Something like this:
import subprocess, re

# Nvidia-smi GPU memory parsing.
# Tested on nvidia-smi 370.23

def run_command(cmd):
    """Run command, return output as string."""
    output = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True).communicate()[0]
    return output.decode("ascii")

def list_available_gpus():
    """Returns list of available GPU ids."""
    output = run_command("nvidia-smi -L")
    # lines of the form GPU 0: TITAN X
    gpu_regex = re.compile(r"GPU (?P<gpu_id>\d+):")
    result = []
    for line in output.strip().split("\n"):
        m = gpu_regex.match(line)
        assert m, "Couldn't parse " + line
        result.append(int(m.group("gpu_id")))
    return result

def gpu_memory_map():
    """Returns map of GPU id to memory allocated on that GPU."""
    output = run_command("nvidia-smi")
    gpu_output = output[output.find("GPU Memory"):]
    # lines of the form
    # |    0      8734    C   python    11705MiB |
    memory_regex = re.compile(r"[|]\s+?(?P<gpu_id>\d+)\D+?(?P<pid>\d+).+[ ](?P<gpu_memory>\d+)MiB")
    result = {gpu_id: 0 for gpu_id in list_available_gpus()}
    for row in gpu_output.split("\n"):
        m = memory_regex.search(row)
        if not m:
            continue
        gpu_id = int(m.group("gpu_id"))
        gpu_memory = int(m.group("gpu_memory"))
        result[gpu_id] += gpu_memory
    return result

def pick_gpu_lowest_memory():
    """Returns GPU with the least allocated memory."""
    memory_gpu_map = [(memory, gpu_id) for (gpu_id, memory) in gpu_memory_map().items()]
    best_memory, best_gpu = sorted(memory_gpu_map)[0]
    return best_gpu
You can then put it in utils.py and set the GPU in your TensorFlow script before the first tensorflow import, i.e.:
import utils
import os
os.environ["CUDA_VISIBLE_DEVICES"] = str(utils.pick_gpu_lowest_memory())
import tensorflow
An implementation along the lines of Yaroslav Bulatov's solution is available on https://github.com/bamos/setGPU.

TensorFlow 1.5.0-rc0: error using `tf.app.flags`

The following flags were defined in a misc_fun.py file to include machine and directories info:
import tensorflow as tf
flags = tf.app.flags
FLAGS = flags.FLAGS
# definitions
flags.DEFINE_string(
    'DEFAULT_IN',
    '~/PycharmProjects/myNN/Data/',
    """Default input folder.""")
...
It worked fine in TensorFlow 1.0 - 1.4 (with PyCharm). After updating to TensorFlow 1.5.0-rc0, the following error occurred:
Usage:
from misc_fun import FLAGS
FLAGS.DEFAULT_IN = FLAGS.DEFAULT_DOWNLOAD # change default input folder
Error:
UnparsedFlagAccessError: Trying to access flag --DEFAULT_DOWNLOAD before flags were parsed.
However print(FLAGS) worked fine, which gives:
misc_fun:
  --DEFAULT_DOWNLOAD: default download folder for large datasets.
    (default: '/home/username/Downloads/Data/')
  --DEFAULT_IN: default input folder.
    (default: '~/PycharmProjects/myNN/Data/')
...
I tried FLAGS = flags.FLAGS(sys.argv), which resulted in the following error:
UnrecognizedFlagError: Unknown command line flag 'f'
Although there is a workaround using the class object, I wonder what the problem could be here.
I fixed this by adding the following line:
tf.app.flags.DEFINE_string('f', '', 'kernel')
This solution differs from the others in that it is simple and easy to try: you just add this line to your code, and it doesn't change your system. Please let me know if it helps solve other people's problems.
The reference for this solution is from a Chinese website: https://blog.csdn.net/qq_39956625/article/details/80500291
With 1.5.0-rc0 the TensorFlow maintainers switched tf.app.flags to the flags module from Abseil. Unfortunately, it is not 100% API-compatible with the previous implementation. I worked around your problem with something like
remaining_args = FLAGS([sys.argv[0]] + [flag for flag in sys.argv if flag.startswith("--")])
assert(remaining_args == [sys.argv[0]])
before accessing the FLAGS object the first time.
Alternatively, you can use FLAGS(sys.argv, known_only=True) to parse all related flags (the ones defined using tf.app.flags.DEFINE_xxx). This releases any other args that are not known, which is useful if you have command-line arguments that are not related to TF.
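A minimal sketch of the known_only approach, reusing the flag definition from the question (flag name and path are just the ones above):
import sys
import tensorflow as tf

flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_string('DEFAULT_IN', '~/PycharmProjects/myNN/Data/', 'Default input folder.')

# Parse only the flags defined above; unknown CLI arguments are returned untouched.
remaining_args = FLAGS(sys.argv, known_only=True)

print(FLAGS.DEFAULT_IN)  # safe to access now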

How do I check if PyTorch is using the GPU?

How do I check if PyTorch is using the GPU? The nvidia-smi command can detect GPU activity, but I want to check it directly from inside a Python script.
These functions should help:
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.device_count()
1
>>> torch.cuda.current_device()
0
>>> torch.cuda.device(0)
<torch.cuda.device at 0x7efce0b03be0>
>>> torch.cuda.get_device_name(0)
'GeForce GTX 950M'
This tells us:
CUDA is available and can be used by one device.
Device 0 refers to the GPU GeForce GTX 950M, and it is currently chosen by PyTorch.
Since it hasn't been proposed here, I'm adding a method using torch.device, which is quite handy, also when initializing tensors on the correct device.
import torch

# setting device on GPU if available, else CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
print()

# Additional info when using cuda
if device.type == 'cuda':
    print(torch.cuda.get_device_name(0))
    print('Memory Usage:')
    print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3, 1), 'GB')
    print('Cached:   ', round(torch.cuda.memory_reserved(0)/1024**3, 1), 'GB')
Edit: torch.cuda.memory_cached has been renamed to torch.cuda.memory_reserved. So use memory_cached for older versions.
Output:
Using device: cuda
Tesla K80
Memory Usage:
Allocated: 0.3 GB
Cached: 0.6 GB
As mentioned above, using device makes it possible to:
To move tensors to the respective device:
torch.rand(10).to(device)
To create a tensor directly on the device:
torch.rand(10, device=device)
This makes switching between CPU and GPU comfortable without changing the actual code.
Edit:
As there have been some questions and confusion about cached and allocated memory, I'm adding some additional information about it:
torch.cuda.max_memory_cached(device=None): returns the maximum GPU memory managed by the caching allocator, in bytes, for a given device.
torch.cuda.memory_allocated(device=None): returns the current GPU memory usage by tensors, in bytes, for a given device.
You can either directly hand over a device as specified further above in the post, or you can leave it None and it will use the current_device().
Additional note: Old graphics cards with CUDA compute capability 3.0 or lower may be visible but cannot be used by PyTorch! Thanks to hekimgil for pointing this out! - "Found GPU0 GeForce GT 750M which is of cuda capability 3.0. PyTorch no longer supports this GPU because it is too old. The minimum cuda capability that we support is 3.5."
After you start running the training loop, if you want to manually watch from the terminal whether your program is utilizing the GPU resources, and to what extent, you can simply use watch, as in:
$ watch -n 2 nvidia-smi
This will continuously update the usage stats every 2 seconds until you press Ctrl+C.
If you need more control over the GPU stats, you can use a more sophisticated version of nvidia-smi with --query-gpu=.... Below is a simple illustration of this:
$ watch -n 3 nvidia-smi --query-gpu=index,gpu_name,memory.total,memory.used,memory.free,temperature.gpu,pstate,utilization.gpu,utilization.memory --format=csv
which outputs the requested stats in CSV form, one row per GPU.
Note: There should not be any space between the comma-separated query names in --query-gpu=.... Otherwise those values will be ignored and no stats will be returned.
Also, you can check whether your installation of PyTorch detects your CUDA installation correctly by doing:
In [13]: import torch
In [14]: torch.cuda.is_available()
Out[14]: True
A True status means that PyTorch is configured correctly and can use the GPU, although you still have to move/place the tensors onto it with the necessary statements in your code.
If you want to do this inside Python code, then look into this module:
https://github.com/jonsafari/nvidia-ml-py or in pypi here: https://pypi.python.org/pypi/nvidia-ml-py/
From a practical standpoint, just one minor digression:
import torch
dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
This dev now knows whether it is cuda or cpu.
And there is a difference in how you deal with models and with tensors when moving to cuda. It is a bit strange at first.
import torch
import torch.nn as nn

dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

t1 = torch.randn(1, 2)
t2 = torch.randn(1, 2).to(dev)
print(t1)  # tensor([[-0.2678,  1.9252]])
print(t2)  # tensor([[ 0.5117, -3.6247]], device='cuda:0')

t1.to(dev)         # .to() on a tensor is NOT in-place ...
print(t1)          # tensor([[-0.2678,  1.9252]])
print(t1.is_cuda)  # False

t1 = t1.to(dev)    # ... so you must rebind the result
print(t1)          # tensor([[-0.2678,  1.9252]], device='cuda:0')
print(t1.is_cuda)  # True

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(1, 2)

    def forward(self, x):
        x = self.l1(x)
        return x

model = M()    # not on cuda
model.to(dev)  # is on cuda (all parameters); Module.to() works in-place
print(next(model.parameters()).is_cuda)  # True
All of this is tricky, but understanding it once helps you work quickly with less debugging.
From the official site's get started page, you can check if the GPU is available for PyTorch like so:
import torch
torch.cuda.is_available()
Reference: PyTorch | Get Started
Query: Does PyTorch see any GPUs?
Command: torch.cuda.is_available()

Query: Are tensors stored on the GPU by default?
Command: torch.rand(10).device

Query: Set the default tensor type to CUDA
Command: torch.set_default_tensor_type(torch.cuda.FloatTensor)

Query: Is this tensor a GPU tensor?
Command: my_tensor.is_cuda

Query: Is this model stored on the GPU?
Command: all(p.is_cuda for p in my_model.parameters())
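If you want all of those checks in one place, here is a small diagnostic sketch; my_model and my_tensor stand in for your own objects:
import torch

def report_gpu_usage(my_model, my_tensor):
    print('CUDA available:', torch.cuda.is_available())
    print('Default device:', torch.rand(10).device)
    print('Tensor on GPU: ', my_tensor.is_cuda)
    print('Model on GPU:  ', all(p.is_cuda for p in my_model.parameters()))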
To check if there is a GPU available:
torch.cuda.is_available()
If the above function returns False,
you either have no GPU,
or the Nvidia drivers have not been installed so the OS does not see the GPU,
or the GPU is being hidden by the environment variable CUDA_VISIBLE_DEVICES. When the value of CUDA_VISIBLE_DEVICES is -1, all of your devices are hidden. You can check that value in code with: os.environ['CUDA_VISIBLE_DEVICES']
If the above function returns True, that does not necessarily mean that you are using the GPU. In PyTorch you can allocate tensors to devices when you create them. By default, tensors are allocated on the CPU. To check where your tensor is allocated, do:
# assuming that 'a' is a tensor created somewhere else
a.device # returns the device where the tensor is allocated
Note that you cannot operate on tensors allocated on different devices. To see how to allocate a tensor to the GPU, see here: https://pytorch.org/docs/stable/notes/cuda.html
Almost all answers here reference torch.cuda.is_available(). However, that's only one side of the coin. It tells you whether the GPU (actually CUDA) is available, not whether it's actually being used. In a typical setup, you would set your device with something like this:
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
but in larger environments (e.g. research) it is also common to give the user more options, so that based on input they can disable CUDA, specify CUDA IDs, and so on. In that case, whether the GPU is used is not based only on whether it is available. After the device has been set to a torch device, you can check its type property to verify whether it's CUDA or not.
if device.type == 'cuda':
    # do something
Simply run the following from a command prompt or a Linux environment:
python -c 'import torch; print(torch.cuda.is_available())'
The above should print True
python -c 'import torch; print(torch.rand(2,3).cuda())'
This one should print the following:
tensor([[0.7997, 0.6170, 0.7042],
        [0.4174, 0.1494, 0.0516]], device='cuda:0')
If you are here because your PyTorch always gives False for torch.cuda.is_available(), that's probably because you installed a PyTorch version without GPU support (e.g. you coded on a laptop and are then testing on a server).
The solution is to uninstall PyTorch and install it again with the right command from the PyTorch downloads page. Also refer to this PyTorch issue.
It is possible for
torch.cuda.is_available()
to return True but to get the following error when running
>>> torch.rand(10).to(device)
as suggested by MBT:
RuntimeError: CUDA error: no kernel image is available for execution on the device
This link explains that
... torch.cuda.is_available only checks whether your driver is compatible with the version of cuda we used in the binary. So it means that CUDA 10.1 is compatible with your driver. But when you do computation with CUDA, it couldn't find the code for your arch.
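In that situation a stricter check is to actually run a tiny CUDA op and catch the failure; a small sketch (the helper name is made up):
import torch

def cuda_really_works():
    """is_available() only checks the driver/runtime match, so run a small
    kernel to confirm the binary was built for this GPU architecture."""
    if not torch.cuda.is_available():
        return False
    try:
        (torch.zeros(1, device='cuda') + 1).item()
        return True
    except RuntimeError:
        return False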
If you are using Linux, I suggest installing nvtop (https://github.com/Syllo/nvtop).
It gives an htop-like, continuously updated view of GPU utilization and memory usage.
For a MacBook M1 system:
import torch
print(torch.backends.mps.is_available(), torch.backends.mps.is_built())
And both should be True.
Create a tensor on the GPU as follows:
$ python
>>> import torch
>>> print(torch.rand(3,3).cuda())
Do not quit; open another terminal and check whether the python process is using the GPU with:
$ nvidia-smi
Using the code below
import torch
torch.cuda.is_available()
will only tell you whether the GPU is present and detected by PyTorch.
But if, in "Task Manager -> Performance", the GPU utilization stays at a very few percent, it means you are actually running on the CPU.
To solve this, check and change:
Graphics settings --> turn on hardware-accelerated GPU scheduling, then restart.
Open the NVIDIA control panel --> Desktop --> Display GPU in the notification area
[Note: If you have newly installed Windows, you also have to agree to the terms and conditions in the NVIDIA control panel]
This should work!
# Step 1: import the torch library
import torch

# Step 2: create a tensor
tensor = torch.tensor([5, 6])

# Step 3: find the device type
# Output 1: below we get the size (tensor.shape), the dimension (tensor.ndim),
# and the device on which the tensor is processed
tensor, tensor.device, tensor.ndim, tensor.shape
# (tensor([5, 6]), device(type='cpu'), 1, torch.Size([2]))

# or
# Output 2: below we get only the device type
tensor.device
# device(type='cpu')
# (my system uses a CPU: "11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz")

# Is the tensor processed on the GPU?
print(tensor, torch.cuda.is_available())
# the output will be:
# tensor([5, 6]) False
# the output is False, hence it is not on the GPU
# happy coding :)

How does one train multiple models in a single script in TensorFlow when there are GPUs present?

Say I have access to a number of GPUs in a single machine (for the sake of argument assume 8 GPUs, each with a max of 8GB of memory, in one single machine with some amount of RAM and disk). I want to run, in one single script and on one single machine, a program that evaluates multiple models (say 50 or 200) in TensorFlow, each with a different hyper-parameter setting (say, step size, decay rate, batch size, epochs/iterations, etc.). At the end of training, assume we just record its accuracy and get rid of the model (if you want, assume the model is being checkpointed every so often, so it's fine to just throw the model away and start training from scratch; you may also assume some other data is recorded, such as the specific hyper-parameters and the train and validation errors as we train).
Currently I have a (pseudo-)script that looks as follow:
def train_multiple_models_in_one_script_with_gpu(arg):
    '''
    Trains multiple NN models in one session using GPUs correctly.

    arg = some obj/struct with the params for training each of the models.
    '''
    #### try multiple models
    for mdl_id in range(100):
        #### define/create graph
        graph = tf.Graph()
        with graph.as_default():
            ### get mdl
            x = tf.placeholder(float_type, get_x_shape(arg), name='x-input')
            y_ = tf.placeholder(float_type, get_y_shape(arg))
            y = get_mdl(arg, x)
            ### get loss and accuracy
            loss, accuracy = get_accuracy_loss(arg, x, y, y_)
            ### get optimizer variables
            opt = get_optimizer(arg)
            train_step = opt.minimize(loss, global_step=global_step)
        #### run session
        with tf.Session(graph=graph) as sess:
            # train
            for i in range(nb_iterations):
                batch_xs, batch_ys = get_batch_feed(X_train, Y_train, batch_size)
                sess.run(fetches=train_step, feed_dict={x: batch_xs, y_: batch_ys})
                # check_point mdl
                if i % report_error_freq == 0:
                    sess.run(step.assign(i))
                    #
                    train_error = sess.run(fetches=loss, feed_dict={x: X_train, y_: Y_train})
                    test_error = sess.run(fetches=loss, feed_dict={x: X_test, y_: Y_test})
                    print('step %d, train error: %s test_error %s' % (i, train_error, test_error))
Essentially it tries lots of models in one single run, but it builds each model in a separate graph and runs each one in a separate session.
I guess my main worry is that it's unclear to me how TensorFlow allocates GPU resources under the hood. For example, does it load (part of) the data set only when a session is run? When I create a graph and a model, is it brought onto the GPU immediately, or when is it placed on the GPU? Do I need to clear/free the GPU each time a new model is tried? I don't actually care too much whether the models run in parallel on multiple GPUs (that could be a nice addition), but I first want everything to run serially without crashing. Is there anything special I need to do for this to work?
Currently I am getting an error that starts as follow:
I tensorflow/core/common_runtime/bfc_allocator.cc:702] Stats:
Limit: 340000768
InUse: 336114944
MaxInUse: 339954944
NumAllocs: 78
MaxAllocSize: 335665152
W tensorflow/core/common_runtime/bfc_allocator.cc:274] ***************************************************xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
W tensorflow/core/common_runtime/bfc_allocator.cc:275] Ran out of memory trying to allocate 160.22MiB. See logs for memory state.
W tensorflow/core/framework/op_kernel.cc:975] Resource exhausted: OOM when allocating tensor with shape[60000,700]
and further down the line it says:
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[60000,700]
[[Node: standardNN/NNLayer1/Z1/add = Add[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](standardNN/NNLayer1/Z1/MatMul, b1/read)]]
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla P100-SXM2-16GB, pci bus id: 0000:06:00.0)
However, further down the output file (where it prints) it seems to print fine the errors/messages that should appear as training proceeds. Does this mean that it didn't run out of resources? Or was it actually able to use the GPU? If it was able to use the CPU instead of the GPU, then why does this error only happen when the GPU is about to be used?
The weird thing is that the data set is really not that big (all 60K points add up to about 24.5M), and when I run a single model locally on my own computer the process seems to use less than 5GB. The GPUs have at least 8GB, and the computer with them has plenty of RAM and disk (at least 16GB). Thus, the errors that TensorFlow is throwing at me are quite puzzling. What is it trying to do and why are they occurring? Any ideas?
After reading the answer that suggests to use the multiprocessing library I came up with the following script:
from multiprocessing import Process

def train_mdl(args):
    train(mdl, args)

if __name__ == '__main__':
    for mdl_id in range(100):
        # train one model with some specific hyperparams (assume they are chosen
        # randomly inside the function below, read from a config file, or just passed in)
        p = Process(target=train_mdl, args=(args,))
        p.start()
        p.join()
    print('Done training all models!')
Honestly, I am not sure why his answer suggests using Pool, or why there are those weird tuple brackets, but this is what would make sense to me. Would the resources for TensorFlow be re-allocated every time a new process is created in the above loop?
I think that running all models in one single script can be bad practice in the long term (see my suggestion below for a better alternative). However, if you would like to do it, here is a solution: you can encapsulate your TF session into a process with the multiprocessing module; this will make sure TF releases the session memory once the process is done. Here is a code snippet:
from multiprocessing import Pool
import contextlib

def my_model((param1, param2, param3)):  # Note the extra (), required by the pool syntax (Python 2 tuple parameters)
    < your code >

num_pool_workers = 1  # can be bigger than 1, to enable parallel execution
with contextlib.closing(Pool(num_pool_workers)) as po:  # This ensures that the processes get closed once they are done
    pool_results = po.map_async(my_model,
                                ((param1, param2, param3)
                                 for param1, param2, param3 in params_list))
    results_list = pool_results.get()
Note from OP: The random number generator seed does not reset automatically with the multi-processing library if you choose to use it. Details here: Using python multiprocessing with different random seed for each process
About TF resource allocation: Usually TF allocates much more resources than it needs. Many times you can restrict each process to use a fraction of the total GPU memory, and discover through trial and error the fraction your script requires.
You can do it with the following snippet
gpu_memory_fraction = 0.3 # Choose this number through trial and error
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_memory_fraction,)
session_config = tf.ConfigProto(gpu_options=gpu_options)
sess = tf.Session(config=session_config, graph=graph)
Note that sometimes TF increases the memory usage in order to accelerate the execution. Therefore, reducing the memory usage might make your model run slower.
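For reference, if you are on TF 2.x (where tf.GPUOptions and tf.ConfigProto are gone from the top level), the rough equivalent is the tf.config API; exact names vary a bit between 2.x releases, so treat this as a sketch:
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    # Either grow memory on demand ...
    tf.config.experimental.set_memory_growth(gpus[0], True)
    # ... or instead cap the process at a fixed amount (in MB), e.g.:
    # tf.config.set_logical_device_configuration(
    #     gpus[0], [tf.config.LogicalDeviceConfiguration(memory_limit=2048)])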
Answers to the new questions in your edit/comments:
Yes, TensorFlow's resources will be re-allocated every time a new process is created, and cleared once a process ends.
The for-loop in your edit should also do the job. I suggest using Pool instead, because it will enable you to run several models concurrently on a single GPU. See my notes about setting gpu_memory_fraction and "choosing the maximal number of processes". Also note that: (1) The Pool map runs the loop for you, so you don't need an outer for-loop once you use it. (2) In your example, you should have something like mdl=get_model(args) before calling train().
Weird tuple parenthesis: Pool only accepts a single argument, therefore we use a tuple to pass multiple arguments. See multiprocessing.pool.map and function with two arguments for more details. As suggested in one answer, you can make it more readable with
def train_mdl(params):
    (x, y) = params
    < your code >
As #Seven suggested, you can use the CUDA_VISIBLE_DEVICES environment variable to choose which GPU to use for your process. You can do it from within your python script by putting the following at the beginning of the process function (train_mdl):
import os # the import can be on the top of the python script
os.environ["CUDA_VISIBLE_DEVICES"] = "{}".format(gpu_id)
A better practice for executing your experiments would be to isolate your training/evaluation code from the hyper-parameter / model-search code.
E.g. have a script named train.py, which accepts a specific combination of hyper parameters and references to your data as arguments, and executes training for a single model.
Then, to iterate through all the possible combinations of parameters, you can use a simple task (job) queue and submit every combination of hyper-parameters as a separate job. The task queue will feed your jobs one at a time to your machine. Usually, you can also set the queue to execute a number of processes concurrently (see details below).
Specifically, I use task spooler, which is super easy to install and handy (it doesn't require admin privileges; details below).
Basic usage is (see notes below about task spooler usage):
ts <your-command>
In practice, I have a separate python script that manages my experiments, sets all the arguments for each specific experiment and sends the jobs to the ts queue.
Here are some relevant snippets of python code from my experiments manager:
run_bash executes a bash command
import subprocess

def run_bash(cmd):
    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, executable='/bin/bash')
    out = p.stdout.read().strip()
    return out  # This is the stdout from the shell command
The next snippet sets the number of concurrent processes to be run (see note below about choosing the maximal number of processes):
max_job_num_per_gpu = 2
run_bash('ts -S %d'%max_job_num_per_gpu)
The next snippet iterates through a list of all combinations of hyper-params / model params. Each element of the list is a dictionary, whose keys are the command-line arguments for the train.py script:
for combination_dict in combinations_list:
    job_cmd = 'python train.py ' + ' '.join(
        ['--{}={}'.format(flag, value) for flag, value in combination_dict.iteritems()])
    submit_cmd = "ts bash -c '%s'" % job_cmd
    run_bash(submit_cmd)
A note about choosing the maximal number of processes:
If you are short on GPUs, you can use the gpu_memory_fraction you found to set the number of processes as max_job_num_per_gpu = int(1 / gpu_memory_fraction).
Notes about task spooler (ts):
You could set the number of concurrent processes to run ("slots") with:
ts -S <number-of-slots>
Installing ts doesn't require admin privileges. You can download and compile it from source with a simple make, add it to your path, and you're done.
You can set up multiple queues (I use it for multiple GPUs), with
TS_SOCKET=<path_to_queue_name> ts <your-command>
e.g.
TS_SOCKET=/tmp/socket-ts.gpu_queue_1 ts <your-command>
TS_SOCKET=/tmp/socket-ts.gpu_queue_2 ts <your-command>
See here for further usage example
A note about automatically setting the path names and file names:
Once you separate your main code from the experiment manager, you will need an efficient way to generate file names and directory names given the hyper-params. I usually keep my important hyper-params in a dictionary and use the following function to generate a single chained string from the dictionary's key-value pairs.
Here are the functions I use for doing it:
import re

def build_string_from_dict(d, sep='%'):
    """
    Builds a string from a dictionary.
    Mainly used for formatting hyper-params to file names.
    Key-value pairs are sorted by the key name.

    :param d: input dictionary
    :param sep: key-value separator
    Returns: string
    """
    return sep.join(['{}={}'.format(k, _value2str(d[k])) for k in sorted(d.keys())])

def _value2str(val):
    if isinstance(val, float):
        # %g means: "Floating point format.
        # Uses lowercase exponential format if exponent is less than -4 or not less than precision,
        # decimal format otherwise."
        val = '%g' % val
    else:
        val = '{}'.format(val)
    val = re.sub(r'\.', '_', val)
    return val
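For example, a quick illustration with a made-up hyper-parameter dict:
hparams = {'lr': 0.001, 'batch_size': 32, 'optimizer': 'adam'}
print(build_string_from_dict(hparams))
# batch_size=32%lr=0_001%optimizer=adam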
As I understand it, TensorFlow first constructs a symbolic graph and infers the derivatives based on the chain rule. Then it allocates memory for all (necessary) tensors, including the inputs and outputs of some layers, for efficiency. When running a session, data is loaded into the graph, but in general memory use does not change any more.
The error you met, I guess, may be caused by constructing several models on one GPU.
Isolating your training/evaluation code from the hyper-parameters is a good choice, as #user2476373 proposed. But I use bash scripts directly, not task spooler (maybe it's more convenient), e.g.
CUDA_VISIBLE_DEVICES=0 python train.py --lrn_rate 0.01 --weight_decay_rate 0.001 --momentum 0.9 --batch_size 8 --max_iter 60000 --snapshot 5000
CUDA_VISIBLE_DEVICES=0 python eval.py
Or you can write a 'for' loop in the bash script, not necessarily in the python script. Note that I used CUDA_VISIBLE_DEVICES=0 at the beginning of the script (the index could be 7 if you have 8 GPUs in one machine). In my experience, TensorFlow uses all the GPUs in one machine if I don't specify which GPU an operation should use with code like this:
with tf.device('/gpu:0'):
If you want to try a multi-GPU implementation, there are examples available.
Hope this helps.
An easy solution: Give each model a unique session and graph.
It works for this platform: TensorFlow 1.12.0, Keras 2.1.6-tf, Python 3.6.7, Jupyter Notebook.
Key code:
with session.as_default():
    with session.graph.as_default():
        # do something about an ANN model
Full code:
import tensorflow as tf
from tensorflow import keras
import gc

def limit_memory():
    """ Release unused memory resources. Force garbage collection. """
    keras.backend.clear_session()
    keras.backend.get_session().close()
    tf.reset_default_graph()
    gc.collect()
    #cfg = tf.ConfigProto()
    #cfg.gpu_options.allow_growth = True
    #keras.backend.set_session(tf.Session(config=cfg))
    keras.backend.set_session(tf.Session())
    gc.collect()

def create_and_train_ANN_model(hyper_parameter):
    print('create and train my ANN model')
    info = { 'result about this ANN model' }
    return info

for i in range(10):
    limit_memory()
    session = tf.Session()
    keras.backend.set_session(session)
    with session.as_default():
        with session.graph.as_default():
            hyper_parameter = { 'A set of hyper-parameters' }
            info = create_and_train_ANN_model(hyper_parameter)
    limit_memory()
Inspired by this link: Keras (Tensorflow backend) Error - Tensor input_1:0, specified in either feed_devices or fetch_devices was not found in the Graph
I have the same issue. My solution is to run the following from another script, as many times and with as many hyperparameter configurations as you want.
cmd = "python3 ./model_train.py hyperparameters"
os.system(cmd)
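A slightly fuller sketch of the same idea, assuming model_train.py takes its hyper-parameters as command-line flags (the script name and flag names are placeholders):
import subprocess

configs = [
    ['--lr', '0.01', '--batch_size', '32'],
    ['--lr', '0.001', '--batch_size', '64'],
]

for cfg in configs:
    # Each configuration runs in a fresh process, so all GPU memory
    # is released when that process exits.
    subprocess.run(['python3', './model_train.py'] + cfg, check=True)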
You probably don't want to do this.
If you run thousands and thousands of models on your data, and pick the one that evaluates best, you are not doing machine learning; instead you are memorizing your data set, and there is no guarantee that the model you pick will perform at all outside that data set.
In other words, that approach is similar to having a single model with thousands of degrees of freedom. Having a model with such a high order of complexity is problematic, since it will be able to fit your data better than is actually warranted; such a model is annoyingly able to memorize any noise (outliers, measurement errors, and such) in your training data, which causes it to perform poorly when the noise is even slightly different.
(Apologies for posting this as an answer, the site wouldn't let me add a comment.)
