bootstrap.Engine

class bootstrap.engines.engine.Engine[source]

Contains training and evaluation procedures

eval()[source]

Launch evaluation procedures

eval_epoch(model, dataset, epoch, mode='eval', logs_json=True)[source]

Launch evaluation procedures for one epoch

List of the hooks (mode='eval' by default):

  • mode_on_start_epoch: before the evaluation procedure for an epoch
  • mode_on_start_batch: before the evaluation procedure for a batch
  • mode_on_forward: after the forward of the model
  • mode_on_print: after the print to the terminal
  • mode_on_end_batch: end of the evaluation procedure for a batch
  • mode_on_end_epoch: before saving the logs in logs.json
  • mode_on_flush: end of the evaluation procedure for an epoch
Returns: mean of all the scalar outputs of the model for this epoch, indexed by output name
Return type: dict
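
Example usage (a minimal sketch; with the default mode='eval', the hook names above are prefixed with eval_, and engine is assumed to be an Engine instance):

def on_end_epoch():
    print('evaluation epoch finished')

# 'eval_on_end_epoch' corresponds to mode_on_end_epoch with mode='eval'
engine.register_hook('eval_on_end_epoch', on_end_epoch)
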
generate_view()[source]

Generate a view.html via an asynchronous call to self.view.generate()

hook(name)[source]

Run all the callback functions that have been registered for a hook.

Parameters: name – the name of the hook
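
Conceptually, calling a hook runs every callback registered under that name, in registration order. A minimal sketch of the mechanism (illustrative, not the exact implementation):

# Illustrative dispatch: run each callback registered under `name`
def hook(self, name):
    if name in self.hooks:
        for func in self.hooks[name]:
            func()
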
is_best(out, saving_criteria)[source]

Check whether the last model is the best for a given saving criterion

Parameters:
  • out (dict) – mean of all the scalar outputs of the model, indexed by output name
  • saving_criteria (str) – criterion of the form 'name:order', where order is 'min' or 'max' (e.g. 'loss:min')
Returns: True if the last model is the best for this criterion
Return type: bool

Example usage:

out = {
    'loss': 0.2,
    'acctop1': 87.02
}

engine.is_best(out, 'loss:min')
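
The criterion string combines an output name and an order ('min' or 'max'). An illustrative sketch of how such a criterion can be interpreted (best_so_far is a hypothetical value tracked by the engine between epochs):

name, order = 'loss:min'.split(':')
best_so_far = 0.25  # hypothetical best value from previous epochs
current = out[name]  # 0.2
# the new model is best when the value improves in the requested direction
best = current < best_so_far if order == 'min' else current > best_so_far
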
load(dir_logs, name, model, optimizer)[source]

Load a checkpoint

Parameters:
  • dir_logs – directory of the checkpoint
  • name – name of the checkpoint
  • model – model associated to the checkpoint
  • optimizer – optimizer associated to the checkpoint
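
Example usage (the directory and checkpoint name are hypothetical):

# load the checkpoint stored in logs/myexp under the name 'best_loss'
engine.load('logs/myexp', 'best_loss', model, optimizer)
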
load_state_dict(state)[source]

Load an engine state previously returned by state_dict()

register_hook(name, func)[source]

Register a callback function to be triggered when the hook is called.

Parameters:
  • name – the name of the hook
  • func – the callback function (called with no arguments)

Example usage:

def func():
    print('hooked!')

engine.register_hook('train_on_start_batch', func)
resume()[source]

Resume a checkpoint using the bootstrap.lib.options.Options
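
Example usage (conceptually, this reads the checkpoint location from the options and calls load; the exact option keys in the comment below are an assumption):

# conceptually equivalent to something like (option keys are assumptions):
#   engine.load(Options()['exp']['dir'], Options()['exp']['resume'], model, optimizer)
engine.resume()
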

save(dir_logs, name, model, optimizer)[source]

Save a checkpoint

Parameters:
  • dir_logs – directory of the checkpoint
  • name – name of the checkpoint
  • model – model associated to the checkpoint
  • optimizer – optimizer associated to the checkpoint
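
Example usage (the directory is hypothetical; 'last' is an arbitrary checkpoint name):

# save the current model and optimizer states under the name 'last'
engine.save('logs/myexp', 'last', model, optimizer)
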
state_dict()[source]

Return the current state of the engine as a dictionary (the counterpart of load_state_dict)
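
Example usage (a minimal sketch of capturing and restoring the engine state):

state = engine.state_dict()    # capture the engine state
engine.load_state_dict(state)  # restore it later
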
train()[source]

Launch training procedures

List of the hooks:

  • train_on_start: before the full training procedure
train_epoch(model, dataset, optimizer, epoch, mode='train')[source]

Launch training procedures for one epoch

List of the hooks:

  • train_on_start_epoch: before the training procedure for an epoch
  • train_on_start_batch: before the training procedure for a batch
  • train_on_forward: after the forward of the model
  • train_on_backward: after the backward of the loss
  • train_on_update: after the optimization step
  • train_on_print: after the print to the terminal
  • train_on_end_batch: end of the training procedure for a batch
  • train_on_end_epoch: before saving the logs in logs.json
  • train_on_flush: end of the training procedure for an epoch
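
For instance, the train_on_backward hook can post-process gradients between the backward pass and the optimization step. A minimal sketch (gradient clipping and max_norm=1.0 are illustrative choices; model and engine are assumed to be in scope):

import torch

def clip_gradients():
    # clip gradients in place after the backward pass, before the update
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

engine.register_hook('train_on_backward', clip_gradients)
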
class bootstrap.engines.logger.LoggerEngine[source]

LoggerEngine is similar to Engine. The only difference is a more powerful is_best method: it can look up the Logger dictionary, which contains the lists of all logged variables indexed by name.

Example usage:

out = {
    'loss': 0.2,
    'acctop1': 87.02
}
engine.is_best(out, 'loss:min')

# Logger().values['eval_epoch.recall_at_1'] contains a list
# of all the recall at 1 values for each evaluation epoch
engine.is_best(out, 'eval_epoch.recall_at_1:max')
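
A sketch of the kind of lookup this enables (illustrative, not the exact implementation; assumes the tracked value is to be maximized):

from bootstrap.lib.logger import Logger

values = Logger().values['eval_epoch.recall_at_1']
# the last model is best if its value is the highest logged so far
best = values[-1] >= max(values)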