init_graph
tflearn.config.init_graph (seed=None, log_device=False, num_cores=0, gpu_memory_fraction=0, soft_placement=True)
Initialize a graph with specific parameters.
Arguments
- seed: int. Set the graph random seed.
- log_device: bool. Log device placement or not.
- num_cores: int. Number of CPU cores to be used. Default: all.
- gpu_memory_fraction: A value between 0 and 1 that indicates what fraction of the available GPU memory to pre-allocate for each process. 1 means pre-allocate all of the GPU memory, 0.5 means the process allocates ~50% of the available GPU memory. Default: use all of the GPU's available memory.
- soft_placement: bool. Whether soft placement is allowed. If True, an op will be placed on CPU if:
  1. there is no GPU implementation for the op, or
  2. no GPU devices are known or registered, or
  3. it needs to co-locate with reftype input(s) which are from CPU.
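As a minimal sketch of how these parameters fit together (assuming TFLearn with a TensorFlow 1.x backend is installed), init_graph is typically called once, before any model is built; the specific values below are illustrative, not recommendations:

```python
import tflearn

# Illustrative configuration: fix the random seed for reproducibility,
# restrict TensorFlow to 4 CPU cores, and pre-allocate half of the
# available GPU memory per process (all values here are hypothetical).
tflearn.init_graph(seed=42, num_cores=4, gpu_memory_fraction=0.5)
```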
is_training
tflearn.config.is_training (is_training=False, session=None)
Set the graph training mode.
This is meant to be used to control ops that behave differently at training and testing time, such as dropout or batch normalization.
Examples
>>> # Retrieve the variable responsible for managing training mode
>>> training_mode = tflearn.get_training_mode()
>>> # Define a conditional op
>>> my_conditional_op = tf.cond(training_mode, if_yes_op, if_no_op)
>>> # Set training mode to True
>>> tflearn.is_training(True)
>>> session.run(my_conditional_op)
if_yes_op
>>> # Set training mode to False
>>> tflearn.is_training(False)
>>> session.run(my_conditional_op)
if_no_op
Returns
A bool: True if training, False otherwise.
get_training_mode
tflearn.config.get_training_mode ()
Returns the variable in use to set the training mode.
Returns
A Variable: the training mode holder.