wrapper package

torch_scope.basic_wrapper module

class torch_scope.basic_wrapper

    Light toolkit wrapper for experiments based on pytorch.
    This class features all-static methods and supports:
    - checkpoint loading;
    - auto device selection.
static auto_device(metrics='memory', logger=None, use_logger=True)

    Automatically choose a GPU (returns the index of the GPU with the least used memory).

    Parameters:
    - metrics (str, optional, default='memory') – Metric for GPU selection; supports 'memory' (used memory) and 'utils' (utilization).
    - logger (Logger, optional, default=None) – The logger used to print (otherwise print would be used).
    - use_logger (bool, optional, default=True) – Whether to add the information to the log.
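The selection rule under the default 'memory' metric can be sketched as follows. This is a minimal illustration based on the description above, not the library's actual implementation; the CPU fallback value is an assumption:

```python
# Minimal sketch of auto_device's default rule: given a map from GPU index
# to used memory (in MB), pick the index with the least used memory.
# (Illustrative only; not torch_scope's actual code.)

def pick_gpu(memory_map):
    """memory_map: dict mapping gpu index -> used memory in MB."""
    if not memory_map:
        return -1  # assumed fallback when no GPU is visible
    return min(memory_map, key=memory_map.get)

print(pick_gpu({0: 10240, 1: 512, 2: 4096}))  # 1
```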
static get_bytes(size, suffix='', logger=None)

    Convert a memory size in another unit to bytes.

    Parameters:
    - size (str, required) – The numeric part of the memory size.
    - suffix (str, optional, default='') – The unit of the memory size.
    - logger (Logger, optional, default=None) – The logger used to print (otherwise print would be used).
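The numeric part and unit suffix combine in the usual way; a sketch of the conversion, where the accepted suffixes and the 1024 base are assumptions rather than documented behavior:

```python
# Sketch of a size-to-bytes conversion like the one get_bytes describes.
# The suffix table and binary (1024) base are assumptions.

UNITS = {'': 1, 'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3, 'T': 1024 ** 4}

def to_bytes(size, suffix=''):
    # size is the numeric part as a string, e.g. '1.5'; suffix is the unit.
    return int(float(size) * UNITS[suffix.upper()])

print(to_bytes('2', 'K'))  # 2048
```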
static nvidia_memory_map(logger=None, use_logger=True, gpu_index=None)

    Get the current GPU memory usage. Based on https://discuss.pytorch.org/t/access-gpu-memory-usage-in-pytorch/3192/4

    Parameters:
    - logger (Logger, optional, default=None) – The logger used to print (otherwise print would be used).
    - use_logger (bool, optional, default=True) – Whether to add the information to the log.
    - gpu_index (int, optional, default=None) – The index of the GPU for logging.

    Returns: Memory_map – Keys are device ids (integers); values are memory usage in MB.
    Return type: Dict[int, str]
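The linked forum post queries `nvidia-smi` for per-device memory; a sketch of turning such CSV output into the `Dict[int, str]` shape described above. The sample string is illustrative, not real device output:

```python
# Sketch: parse "index, used-memory" CSV lines (as produced by
# nvidia-smi --query-gpu=index,memory.used --format=csv,noheader)
# into a Dict[int, str] memory map. Illustrative only.

def parse_memory_map(smi_output):
    memory_map = {}
    for line in smi_output.strip().splitlines():
        index, used = line.split(',')
        memory_map[int(index)] = used.strip()
    return memory_map

sample = "0, 1024 MiB\n1, 256 MiB"
print(parse_memory_map(sample))  # {0: '1024 MiB', 1: '256 MiB'}
```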
static restore_best_checkpoint(folder_path)

    Restore the best checkpoint.

    Parameters: folder_path (str, required) – Path to the folder containing checkpoints.
    Returns: checkpoint – A dict containing 'model' and 'optimizer' (if saved).
    Return type: dict

static restore_checkpoint(file_path)

    Restore a checkpoint.

    Parameters: file_path (str, required) – Path to the checkpoint file.
    Returns: checkpoint – A dict containing 'model' and 'optimizer' (if saved).
    Return type: dict
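`restore_checkpoint` loads a single file, while the folder-based variants must first pick one file from a directory of checkpoints. A sketch of that selection, using the `checkpoint_#counter.th` naming scheme documented under `save_checkpoint`; the selection logic itself is an assumption:

```python
# Sketch: pick the checkpoint file with the highest counter from a folder
# of checkpoint_<counter>.th files. Naming follows save_checkpoint's docs;
# the rest is illustrative, not torch_scope's actual code.

import os
import re
import tempfile

def latest_checkpoint(folder_path):
    best, best_counter = None, -1
    for name in os.listdir(folder_path):
        m = re.fullmatch(r'checkpoint_(\d+)\.th', name)
        if m and int(m.group(1)) > best_counter:
            best, best_counter = name, int(m.group(1))
    return best

with tempfile.TemporaryDirectory() as d:
    for i in (0, 1, 10):
        open(os.path.join(d, 'checkpoint_%d.th' % i), 'w').close()
    print(latest_checkpoint(d))  # checkpoint_10.th
```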
torch_scope.wrapper module

class torch_scope.wrapper(path: str, name: str = None, seed: int = None, enable_git_track: bool = False, sheet_track_name: str = None, credential_path: str = None, checkpoints_to_keep: int = 1)

    Toolkit wrapper for experiments based on pytorch.
    This class has four features:
    - tracking environments, dependencies, implementations and checkpoints;
    - logger wrapper with two handlers;
    - tensorboard wrapper;
    - auto device selection.

    Parameters:
    - path (str, required) – Output path for logger, checkpoints, … If set to None, no file-writers would be created.
    - name (str, optional, default=path) – Name of the experiment.
    - seed (int, optional) – The random seed (randomly generated if not provided).
    - enable_git_track (bool, optional) – If True, track the implementation with git (tracked files would be committed automatically).
    - sheet_track_name (str, optional, default=None) – The name of the Google sheet used for metric tracking.
    - credential_path (str, optional, default=None) – Path to the credential file for tracking with Google sheets.
    - checkpoints_to_keep (int, optional, default=1) – Number of recent checkpoints to keep.
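The effect of `checkpoints_to_keep` can be sketched as a rotation over saved checkpoint names; this rotation rule is an assumption based on the parameter description, not torch_scope's actual code:

```python
# Sketch: after each save, keep only the most recent N checkpoint files.
# Assumed behavior inferred from the checkpoints_to_keep description.

def rotate(saved, checkpoints_to_keep=1):
    """saved: list of checkpoint names, oldest first. Returns names kept."""
    return saved[-checkpoints_to_keep:]

print(rotate(['checkpoint_0.th', 'checkpoint_1.th', 'checkpoint_2.th'], 2))
```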
add_description(description)

    Add a description of the experiment to the spreadsheet.

    Parameters: description (str, required) – Description of the experiment.
add_loss_vs_batch(kv_dict: dict, batch_index: int, use_logger: bool = True, use_writer: bool = True, use_sheet_tracker: bool = True)

    Add losses to the loss_tracking section of the tensorboard.

    Parameters:
    - kv_dict (dict, required) – Dictionary containing key-value pairs of losses (or metrics).
    - batch_index (int, required) – Index of the added loss.
    - use_logger (bool, optional, default=True) – Whether to print the information in the log.
    - use_writer (bool, optional, default=True) – Whether to add the values to the tensorboard writer.
    - use_sheet_tracker (bool, optional, default=True) – Whether to use the sheet writer (when available).
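When `use_logger` is enabled, a `kv_dict` of losses is typically flattened into one log line per key; a sketch of such formatting (the exact format torch_scope emits is an assumption):

```python
# Sketch: turn a kv_dict of losses/metrics into log lines tagged with the
# batch index. Format is illustrative, not torch_scope's actual output.

def format_losses(kv_dict, batch_index):
    return ['%s : %s at batch %d' % (k, v, batch_index)
            for k, v in sorted(kv_dict.items())]

for line in format_losses({'loss': 0.25, 'acc': 0.9}, 100):
    print(line)
```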
add_model_parameter_histograms(model: torch.nn.Module, batch_index: int)

    Add parameter histograms to the tensorboard.

    Parameters:
    - model (torch.nn.Module, required) – The model to be tracked.
    - batch_index (int, required) – Index of the model parameter update.
add_model_parameter_stats(model: torch.nn.Module, batch_index: int, save: bool = False)

    Add parameter stats to the parameter_* sections of the tensorboard.

    Parameters:
    - model (torch.nn.Module, required) – The model to be tracked.
    - batch_index (int, required) – Index of the model parameter stats.
    - save (bool, optional, default=False) – Whether to save the model parameters (for the method add_model_update_stats).
add_model_update_stats(model: torch.nn.Module, batch_index: int)

    Add parameter update stats to the parameter_gradient_update sections of the tensorboard.

    Parameters:
    - model (torch.nn.Module, required) – The model to be tracked.
    - batch_index (int, required) – Index of the model parameter update.
auto_device(metrics='memory', use_logger=True)

    Automatically choose a GPU (returns the index of the GPU with the least used memory).

    Parameters:
    - metrics (str, optional, default='memory') – Metric for GPU selection; supports 'memory' (used memory) and 'utils' (utilization).
    - use_logger (bool, optional, default=True) – Whether to add the information to the log.
confirm_an_empty_path(path)

    Check whether a folder is empty (i.e., does not exist).

    Parameters: path (str, required) – Path to the target folder.
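A sketch of the emptiness check described above, under the assumption that a path counts as "empty" when it does not exist or contains no entries:

```python
# Sketch of an emptiness check like confirm_an_empty_path's: a path is
# "empty" if it does not exist or has no entries. Exact semantics assumed.

import os
import tempfile

def is_empty_path(path):
    return not os.path.exists(path) or not os.listdir(path)

with tempfile.TemporaryDirectory() as d:
    print(is_empty_path(d))                     # True: exists but empty
    print(is_empty_path(os.path.join(d, 'x')))  # True: does not exist
```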
get_bytes(size, suffix='')

    Convert a memory size in another unit to bytes.

    Parameters:
    - size (str, required) – The numeric part of the memory size.
    - suffix (str, optional, default='') – The unit of the memory size.
nvidia_memory_map(use_logger=True, gpu_index=None)

    Get the current GPU memory usage. Based on https://discuss.pytorch.org/t/access-gpu-memory-usage-in-pytorch/3192/4

    Parameters:
    - use_logger (bool, optional, default=True) – Whether to add the information to the log.
    - gpu_index (int, optional, default=None) – The index of the GPU for logging.

    Returns: Memory_map – Keys are device ids (integers); values are memory usage in MB.
    Return type: Dict[int, str]
restore_best_checkpoint(folder_path=None)

    Restore the best checkpoint.

    Parameters: folder_path (str, optional, default=None) – Path to the folder containing checkpoints.
    Returns: checkpoint – A dict containing 'model' and 'optimizer' (if saved).
    Return type: dict
restore_configue(name='config.json')

    Restore the config dict.

    Parameters: name (str, optional, default='config.json') – Name of the configuration file.
restore_latest_checkpoint(folder_path=None)

    Restore the latest checkpoint.

    Parameters: folder_path (str, optional, default=None) – Path to the folder containing checkpoints.
    Returns: checkpoint – A dict containing 'model' and 'optimizer' (if saved).
    Return type: dict
save_checkpoint(model: torch.nn.Module, optimizer: torch.optim.Optimizer = None, is_best: bool = False, s_dict: dict = None)

    Save a checkpoint under the output path.

    Parameters:
    - model (torch.nn.Module, required) – The model to be saved.
    - optimizer (torch.optim.Optimizer, optional) – The optimizer to be saved (if provided).
    - is_best (bool, optional, default=False) – If False, the checkpoint would only be saved as checkpoint_#counter.th; otherwise, it would also be saved as best.th.
    - s_dict (dict, optional, default=None) – Other necessary information for checkpoint tracking.
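The documented naming scheme implies every save produces a counter-named file, with `is_best` adding `best.th`; a sketch of which file names a save would produce (the file names come from the docs above, the helper itself is illustrative):

```python
# Sketch: file names produced by a save, per the documented scheme.
# checkpoint_<counter>.th is always written; best.th is added when is_best.
# The save_names helper is hypothetical, for illustration only.

def save_names(counter, is_best):
    names = ['checkpoint_%d.th' % counter]
    if is_best:
        names.append('best.th')
    return names

print(save_names(3, True))   # ['checkpoint_3.th', 'best.th']
print(save_names(4, False))  # ['checkpoint_4.th']
```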
save_configue(config, name='config.json')

    Save the config dict to config.json under the output path.

    Parameters:
    - config (dict, required) – Config object (supporting dict, Namespace, …).
    - name (str, optional, default='config.json') – Name of the configuration file.
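Since `config` may be a dict or an argparse Namespace, a normalization step like the following is needed before writing JSON; this sketch is an assumption about how such inputs could be handled, not torch_scope's actual code:

```python
# Sketch: normalize a dict or argparse.Namespace into the dict that would
# be serialized to config.json. Illustrative only.

import argparse
import json

def to_config_dict(config):
    if isinstance(config, argparse.Namespace):
        config = vars(config)
    return dict(config)

ns = argparse.Namespace(lr=0.01, epochs=10)
print(json.dumps(to_config_dict(ns), sort_keys=True))
```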