wrapper package

torch_scope.basic_wrapper module

class torch_scope.basic_wrapper[source]

Light toolkit wrapper for experiments based on pytorch.

This class contains only static methods and supports:

  1. Checkpoint loading;
  2. Auto device selection.
static auto_device(metrics='memory', logger=None, use_logger=True)[source]

Automatically choose a GPU (returns the index of the GPU with the least used memory).

Parameters:
  • metrics (str, optional, (default=’memory’).) – Metric for GPU selection; supported values are memory (used memory) and utils (utilization).
  • logger (Logger, optional, (default = None).) – The logger used to print (otherwise print would be used).
  • use_logger (bool, optional, (default = True).) – Whether to add the information to the log.
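As a rough illustration of the memory metric, the selection rule reduces to picking the key with the smallest value in a memory map; `pick_gpu` and the sample map below are hypothetical names, not part of the API.

```python
# Illustrative sketch of the "memory" metric: given a map of
# {gpu_index: used_memory_in_mb}, pick the index with the smallest
# used memory. pick_gpu and the sample map are hypothetical names.
def pick_gpu(memory_map):
    if not memory_map:
        return -1  # no GPU visible; a caller would fall back to CPU
    return min(memory_map, key=memory_map.get)

print(pick_gpu({0: 5120, 1: 310, 2: 2048}))  # → 1
```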
static get_bytes(size, suffix='', logger=None)[source]

Convert a memory size to bytes.

Parameters:
  • size (str, required.) – The numeric part of the memory size.
  • suffix (str, optional, (default=’’).) – The unit of the memory size.
  • logger (Logger, optional, (default = None).) – The logger used to print (otherwise print would be used).
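The conversion itself can be sketched in a few lines; `to_bytes` and its 1024-based multipliers are an illustrative guess at the behavior, not the library's implementation.

```python
# Hypothetical sketch of the size-to-bytes conversion: the numeric part
# and the unit suffix are combined with 1024-based multipliers.
def to_bytes(size, suffix=''):
    units = {'': 1, 'kb': 1024, 'mb': 1024 ** 2,
             'gb': 1024 ** 3, 'tb': 1024 ** 4}
    return int(float(size) * units[suffix.strip().lower()])

print(to_bytes('2', 'GB'))  # → 2147483648
```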
static nvidia_memory_map(logger=None, use_logger=True, gpu_index=None)[source]

Get the current GPU memory usage. Based on https://discuss.pytorch.org/t/access-gpu-memory-usage-in-pytorch/3192/4

Parameters:
  • use_logger (bool, optional, (default = True).) – Whether to add the information in the log.
  • logger (Logger, optional, (default = None).) – The logger used to print (otherwise print would be used).
  • gpu_index (int, optional, (default = None).) – The index of the GPU for logging.
Returns:

Memory_map – Keys are device ids as integers; values are memory usage strings in MB.

Return type:

Dict[int, str]
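A hypothetical sketch of how such a map could be built by parsing nvidia-smi query output; `parse_memory_map` and the sample output string are invented for illustration.

```python
# Invented example: parse the CSV form of
# `nvidia-smi --query-gpu=index,memory.used` into the documented
# Dict[int, str] shape. The sample output below is made up.
sample = "0, 5120 MiB\n1, 310 MiB"

def parse_memory_map(smi_output):
    memory_map = {}
    for line in smi_output.strip().splitlines():
        index, used = line.split(',')
        memory_map[int(index)] = used.strip()
    return memory_map

print(parse_memory_map(sample))  # → {0: '5120 MiB', 1: '310 MiB'}
```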

static restore_best_checkpoint(folder_path)[source]

Restore the best checkpoint.

Parameters:folder_path (str, required.) – Path to the folder containing checkpoints.
Returns:checkpoint – A dict containing ‘model’ and ‘optimizer’ (if saved).
Return type:dict.
static restore_checkpoint(file_path)[source]

Restore checkpoint.

Parameters:file_path (str, required.) – Path to the checkpoint file.
Returns:checkpoint – A dict containing ‘model’ and ‘optimizer’ (if saved).
Return type:dict.
static restore_configue(path, name='config.json')[source]

Restore the config dict.

Parameters:
  • path (str, required.) – The path toward the folder.
  • name (str, optional, (default = “config.json”).) – Name of the configuration file.
static restore_latest_checkpoint(folder_path)[source]

Restore the latest checkpoint.

Parameters:folder_path (str, required.) – Path to the folder containing checkpoints.
Returns:checkpoint – A dict containing ‘model’ and ‘optimizer’ (if saved).
Return type:dict.
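Since checkpoints are written as checkpoint_#counter.th (see wrapper.save_checkpoint below), picking the latest one amounts to comparing counters. `latest_checkpoint` here is an illustrative helper, not the library's code.

```python
import re

# Checkpoints are written as checkpoint_<counter>.th, so "latest"
# means the largest counter. latest_checkpoint is an illustrative
# helper, not the library's implementation.
def latest_checkpoint(filenames):
    numbered = []
    for name in filenames:
        match = re.search(r'checkpoint_(\d+)\.th$', name)
        if match:
            numbered.append((int(match.group(1)), name))
    return max(numbered)[1] if numbered else None

print(latest_checkpoint(['checkpoint_2.th', 'checkpoint_10.th', 'best.th']))
# → checkpoint_10.th
```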

torch_scope.wrapper module

class torch_scope.wrapper(path: str, name: str = None, seed: int = None, enable_git_track: bool = False, sheet_track_name: str = None, credential_path: str = None, checkpoints_to_keep: int = 1)[source]

Toolkit wrapper for experiments based on pytorch.

This class has four features:

  1. Tracking environments, dependencies, implementations and checkpoints;
  2. Logger wrapper with two handlers;
  3. Tensorboard wrapper;
  4. Auto device selection.
Parameters:
  • path (str, required.) – Output path for logger, checkpoint, … If set to None, we would not create any file-writers.
  • name (str, optional, (default=path).) – Name for the experiment.
  • seed (int, optional.) – The random seed (would be random generated if not provided).
  • enable_git_track (bool, optional) – If True, track the implementation with git (would automatically commit tracked files).
  • sheet_track_name (str, optional, (default=None).) – The name of the google sheet for tracking metric.
  • credential_path (str, optional, (default=None).) – The path towards the credential file for tracking with google sheet.
  • checkpoints_to_keep (int, optional, (default=1).) – Number of checkpoints to keep.
add_description(description)[source]

Add description for the experiment to the spreadsheet.

Parameters:description (str, required.) – Description for the experiment.
add_loss_vs_batch(kv_dict: dict, batch_index: int, use_logger: bool = True, use_writer: bool = True, use_sheet_tracker: bool = True)[source]

Add loss to the loss_tracking section in the tensorboard.

Parameters:
  • kv_dict (dict, required.) – Dictionary contains the key-value pair of losses (or metrics).
  • batch_index (int, required.) – Index of the added loss.
  • use_logger (bool, optional, (default = True).) – Whether to print the information in the log.
  • use_writer (bool, optional, (default = True).) – Whether to add the information to the tensorboard writer.
  • use_sheet_tracker (bool, optional, (default = True).) – Whether to use the sheet writer (when available).
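The fan-out behind add_loss_vs_batch can be pictured as formatting each key-value pair once and handing it to every enabled sink; `dispatch_losses` and its sink callables are illustrative stand-ins for the logger, tensorboard writer, and sheet tracker.

```python
# Illustrative stand-in for the fan-out: each key-value pair is
# formatted once and handed to every enabled sink (logger, tensorboard
# writer, sheet tracker). dispatch_losses is a hypothetical name.
def dispatch_losses(kv_dict, batch_index, sinks):
    lines = []
    for key, value in kv_dict.items():
        line = f'{key} at batch {batch_index}: {value}'
        lines.append(line)
        for sink in sinks:
            sink(line)
    return lines

print(dispatch_losses({'loss': 0.5}, 3, []))  # → ['loss at batch 3: 0.5']
```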
add_model_parameter_histograms(model: torch.nn.Module, batch_index: int)[source]

Add parameter histogram in the tensorboard.

Parameters:
  • model (torch.nn.Module, required.) – The model to be tracked.
  • batch_index (int, required.) – Index of the model parameters updates.
add_model_parameter_stats(model: torch.nn.Module, batch_index: int, save: bool = False)[source]

Add parameter stats to the parameter_* sections in the tensorboard.

Parameters:
  • model (torch.nn.Module, required.) – The model to be tracked.
  • batch_index (int, required.) – Index of the model parameters stats.
  • save (bool, optional, (default = False).) – Whether to save the model parameters (for the method add_model_update_stats).
add_model_update_stats(model: torch.nn.Module, batch_index: int)[source]

Add parameter update stats to the parameter_gradient_update sections in the tensorboard.

Parameters:
  • model (torch.nn.Module, required.) – The model to be tracked.
  • batch_index (int, required.) – Index of the model parameters updates.
auto_device(metrics='memory', use_logger=True)[source]

Automatically choose a GPU (returns the index of the GPU with the least used memory).

Parameters:
  • metrics (str, optional, (default=’memory’).) – Metric for GPU selection; supported values are memory (used memory) and utils (utilization).
  • use_logger (bool, optional, (default = True).) – Whether to add the information to the log.
close()[source]

Close the tensorboard writer and the logger.

confirm_an_empty_path(path)[source]

Check whether a folder is empty or does not exist.

Parameters:path (str, required.) – Path to the target folder.
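One plausible reading of the check, sketched with the standard library; `is_empty_path` is a hypothetical name and this is not the library's code.

```python
import os

# One plausible reading of the check: a path is "empty" if it does not
# exist yet, or exists as a folder with no entries. is_empty_path is a
# hypothetical name, not the library's implementation.
def is_empty_path(path):
    return not os.path.exists(path) or (
        os.path.isdir(path) and not os.listdir(path))

print(is_empty_path('/no/such/_torch_scope_demo'))  # → True
```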
critical(*args, **kargs)[source]

Add critical to logger

debug(*args, **kargs)[source]

Add debug to logger

error(*args, **kargs)[source]

Add error to logger

get_bytes(size, suffix='')[source]

Convert a memory size to bytes.

Parameters:
  • size (str, required.) – The numeric part of the memory size.
  • suffix (str, optional, (default=’’).) – The unit of the memory size.
get_logger()[source]

Return the logger.

get_writer()[source]

Return the tensorboard writer.

info(*args, **kargs)[source]

Add info to logger

nvidia_memory_map(use_logger=True, gpu_index=None)[source]

Get the current GPU memory usage. Based on https://discuss.pytorch.org/t/access-gpu-memory-usage-in-pytorch/3192/4

Parameters:
  • use_logger (bool, optional, (default = True).) – Whether to add the information in the log.
  • gpu_index (int, optional, (default = None).) – The index of the GPU for logging.
Returns:

Memory_map – Keys are device ids as integers; values are memory usage strings in MB.

Return type:

Dict[int, str]

restore_best_checkpoint(folder_path=None)[source]

Restore the best checkpoint.

Parameters:folder_path (str, optional, (default = None).) – Path to the folder containing checkpoints.
Returns:checkpoint – A dict containing ‘model’ and ‘optimizer’ (if saved).
Return type:dict.
restore_configue(name='config.json')[source]

Restore the config dict.

Parameters:name (str, optional, (default = “config.json”).) – Name of the configuration file.
restore_latest_checkpoint(folder_path=None)[source]

Restore the latest checkpoint.

Parameters:folder_path (str, optional, (default = None).) – Path to the folder containing checkpoints.
Returns:checkpoint – A dict containing ‘model’ and ‘optimizer’ (if saved).
Return type:dict.
save_checkpoint(model: torch.nn.Module, optimizer: torch.optim.Optimizer = None, is_best: bool = False, s_dict: dict = None)[source]

Save checkpoint under the path.

Parameters:
  • model (torch.nn.Module, required.) – The model to be saved.
  • optimizer (torch.optim.Optimizer, optional.) – The optimizer to be saved (if provided).
  • is_best (bool, optional, (default=False)) – If False, the checkpoint is only saved as checkpoint_#counter.th; otherwise, it is also saved as best.th.
  • s_dict (dict, optional, (default=None)) – Other necessary information for checkpoint tracking.
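The checkpoint dict described in the restore_* methods can be pictured as below; `build_checkpoint` is an illustrative helper showing which keys end up in the dict, not the library's implementation.

```python
# Illustrative shape of the checkpoint dict the restore_* methods
# return: 'model' is always present, 'optimizer' only if one was saved.
# build_checkpoint is a hypothetical helper, not the library's code.
def build_checkpoint(model_state, optimizer_state=None, s_dict=None):
    checkpoint = dict(s_dict or {})
    checkpoint['model'] = model_state
    if optimizer_state is not None:
        checkpoint['optimizer'] = optimizer_state
    return checkpoint

ckpt = build_checkpoint({'w': [1.0]}, optimizer_state={'lr': 0.01})
print(sorted(ckpt))  # → ['model', 'optimizer']
```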
save_configue(config, name='config.json')[source]

Save the config dict under the output path (as config.json by default).

Parameters:
  • config (dict, required.) – Config file (supporting dict, Namespace, …)
  • name (str, optional, (default = “config.json”).) – Name of the configuration file.
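Since the config parameter accepts either a dict or a Namespace, normalizing it into a JSON payload can be sketched as follows; `to_config_dict` is a hypothetical helper, not the library's code.

```python
import json
from argparse import Namespace

# Hypothetical sketch of normalizing either a dict or a Namespace into
# the JSON payload written to config.json; to_config_dict is invented.
def to_config_dict(config):
    return vars(config) if isinstance(config, Namespace) else dict(config)

payload = json.dumps(to_config_dict(Namespace(lr=0.01, epochs=5)),
                     sort_keys=True)
print(payload)  # → {"epochs": 5, "lr": 0.01}
```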
set_level(level='debug')[source]

Set the level of logging.

Parameters:level (str, required.) – One of debug, info, warning, error, critical.
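The level strings map naturally onto the standard logging levels; the `LEVELS` table below is an illustrative sketch of that mapping, not torch_scope's code.

```python
import logging

# Illustrative mapping from the accepted level strings to the standard
# logging levels; this table is a sketch, not torch_scope's code.
LEVELS = {'debug': logging.DEBUG, 'info': logging.INFO,
          'warning': logging.WARNING, 'error': logging.ERROR,
          'critical': logging.CRITICAL}

logger = logging.getLogger('torch_scope_demo')
logger.setLevel(LEVELS['info'])
print(logger.isEnabledFor(logging.DEBUG))    # → False
print(logger.isEnabledFor(logging.WARNING))  # → True
```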
warning(*args, **kargs)[source]

Add warning to logger