autogllight.nas

class autogllight.nas.algorithm.Autogt(num_epochs=100, device='auto', disable_progress=False, args=None)[source]

AutoGT trainer.

Parameters:
  • num_epochs (int) – Number of epochs planned for training.

  • device (str or torch.device) – The device of the whole process

search(space: BaseSpace, data, estimator: BaseEstimator)[source]

The search process of NAS.

Parameters:
  • space (autogllight.nas.space.BaseSpace) – The search space. Constructed following nni.

  • dataset (any) – Dataset to perform search on.

  • estimator (autogllight.nas.estimator.BaseEstimator) – The estimator to compute loss & metrics.

Returns:

model – The searched model (return_model=True) or the selection (return_model=False).

Return type:

autogllight.nas.space.BaseSpace or dict

class autogllight.nas.algorithm.Darts(num_epochs=5, workers=4, gradient_clip=5.0, model_lr=0.001, model_wd=0.0005, arch_lr=0.0003, arch_wd=0.001, device='auto', disable_progress=False)[source]

DARTS trainer.

Parameters:
  • num_epochs (int) – Number of epochs planned for training.

  • workers (int) – Workers for data loading.

  • gradient_clip (float) – Gradient clipping. Set to 0 to disable. Default: 5.

  • model_lr (float) – Learning rate to optimize the model.

  • model_wd (float) – Weight decay to optimize the model.

  • arch_lr (float) – Learning rate to optimize the architecture.

  • arch_wd (float) – Weight decay to optimize the architecture.

  • device (str or torch.device) – The device of the whole process

search(space: BaseSpace, dataset, estimator)[source]

The search process of NAS.

Parameters:
  • space (autogllight.nas.space.BaseSpace) – The search space. Constructed following nni.

  • dataset (any) – Dataset to perform search on.

  • estimator (autogllight.nas.estimator.BaseEstimator) – The estimator to compute loss & metrics.

Returns:

model – The searched model (return_model=True) or the selection (return_model=False).

Return type:

autogllight.nas.space.BaseSpace or dict
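
For orientation, a minimal end-to-end sketch of how a trainer such as Darts is typically wired together with a space and an estimator. The Planetoid dataset, the choice of space, and the no-argument instantiate() call are illustrative assumptions; the exact preprocessing expected by a given space/estimator pair may differ.

Example:

>>> # xdoctest: +SKIP("illustrative sketch, assumed setup")
>>> from torch_geometric.datasets import Planetoid
>>> from autogllight.nas.algorithm import Darts
>>> from autogllight.nas.estimator import OneShotEstimator
>>> from autogllight.nas.space import GraphNasNodeClassificationSpace
>>> dataset = Planetoid(root="./data", name="Cora")
>>> space = GraphNasNodeClassificationSpace(
...     input_dim=dataset.num_node_features, output_dim=dataset.num_classes)
>>> space.instantiate()                        # build the operation graph
>>> estimator = OneShotEstimator(loss_f="nll_loss")
>>> algo = Darts(num_epochs=5, device="auto")
>>> result = algo.search(space, dataset, estimator)  # BaseSpace or dict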

class autogllight.nas.algorithm.Enas(num_epochs=5, n_warmup=100, log_frequency=None, grad_clip=5.0, entropy_weight=0.0001, skip_weight=0.8, baseline_decay=0.999, ctrl_lr=0.00035, ctrl_steps_aggregate=20, ctrl_kwargs=None, model_lr=0.005, model_wd=0.0005, disable_progress=False, device='auto')[source]

ENAS trainer.

Parameters:
  • num_epochs (int) – Number of epochs planned for training.

  • n_warmup (int) – Number of epochs for training super network.

  • log_frequency (int) – Step count per logging.

  • grad_clip (float) – Gradient clipping. Set to 0 to disable. Default: 5.

  • entropy_weight (float) – Weight of sample entropy loss.

  • skip_weight (float) – Weight of skip penalty loss.

  • baseline_decay (float) – Decay factor of baseline. New baseline will be equal to baseline_decay * baseline_old + reward * (1 - baseline_decay).

  • ctrl_lr (float) – Learning rate for RL controller.

  • ctrl_steps_aggregate (int) – Number of steps that will be aggregated into one mini-batch for RL controller.

  • ctrl_kwargs (dict) – Optional kwargs that will be passed to ReinforceController.

  • model_lr (float) – Learning rate for super network.

  • model_wd (float) – Weight decay for super network.

  • disable_progress (bool) – Whether to show the progress bar.

  • device (str or torch.device) – The device of the whole process, e.g. “cuda”, torch.device(“cpu”)
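
The baseline_decay update above is an exponential moving average of the controller's rewards; a short illustration of the documented formula in plain Python (not the trainer's internal code):

Example:

>>> baseline_decay, baseline = 0.999, 0.0
>>> for reward in [0.70, 0.72, 0.75]:  # e.g. validation accuracies of sampled architectures
...     baseline = baseline_decay * baseline + reward * (1 - baseline_decay)
...     advantage = reward - baseline  # scales the REINFORCE update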

search(space: BaseSpace, dset, estimator)[source]

The search process of NAS.

Parameters:
  • space (autogllight.nas.space.BaseSpace) – The search space. Constructed following nni.

  • dataset (any) – Dataset to perform search on.

  • estimator (autogllight.nas.estimator.BaseEstimator) – The estimator to compute loss & metrics.

Returns:

model – The searched model (return_model=True) or the selection (return_model=False).

Return type:

autogllight.nas.space.BaseSpace or dict

class autogllight.nas.algorithm.GRNA(n_warmup=1000, grad_clip=5.0, disable_progress=False, optimize_mode='maximize', population_size=100, sample_size=25, cycles=20000, mutation_prob=0.05, device='cuda')[source]

GRNA trainer.

Parameters:
  • n_warmup (int) – Number of epochs for training super network.

  • model_lr (float) – Learning rate for super network.

  • model_wd (float) – Weight decay for super network.

  • Other parameters – See Evolution for the remaining evolutionary-search parameters (population_size, sample_size, cycles, mutation_prob).

class autogllight.nas.algorithm.Gasso(num_epochs=250, warmup_epochs=10, model_lr=0.01, model_wd=0.0001, arch_lr=0.03, stru_lr=0.04, lamb=0.6, device='auto', disable_progress=False)[source]

GASSO trainer.

Parameters:
  • num_epochs (int) – Number of epochs planned for training.

  • warmup_epochs (int) – Number of epochs planned for warming up.

  • workers (int) – Workers for data loading.

  • model_lr (float) – Learning rate to optimize the model.

  • model_wd (float) – Weight decay to optimize the model.

  • arch_lr (float) – Learning rate to optimize the architecture.

  • stru_lr (float) – Learning rate to optimize the structure.

  • lamb (float) – The coefficient that controls the influence of hidden-feature smoothness.

  • device (str or torch.device) – The device of the whole process

prepare(dset)[source]

Train Pro-GNN.

search(space: BaseSpace, dataset, estimator)[source]

The search process of NAS.

Parameters:
  • space (autogllight.nas.space.BaseSpace) – The search space. Constructed following nni.

  • dataset (any) – Dataset to perform search on.

  • estimator (autogllight.nas.estimator.BaseEstimator) – The estimator to compute loss & metrics.

Returns:

model – The searched model (return_model=True) or the selection (return_model=False).

Return type:

autogllight.nas.space.BaseSpace or dict

class autogllight.nas.algorithm.Gauss(num_epochs=100, device='auto', disable_progress=False, args=None)[source]

GAUSS trainer.

Parameters:
  • num_epochs (int) – Number of epochs planned for training.

  • device (str or torch.device) – The device of the whole process

search(space: BaseSpace, data, estimator: BaseEstimator)[source]

The search process of NAS.

Parameters:
  • space (autogllight.nas.space.BaseSpace) – The search space. Constructed following nni.

  • dataset (any) – Dataset to perform search on.

  • estimator (autogllight.nas.estimator.BaseEstimator) – The estimator to compute loss & metrics.

Returns:

model – The searched model (return_model=True) or the selection (return_model=False).

Return type:

autogllight.nas.space.BaseSpace or dict

class autogllight.nas.algorithm.Graces(num_epochs=250, device='auto', disable_progress=False, args=None)[source]

GRACES trainer.

Parameters:
  • num_epochs (int) – Number of epochs planned for training.

  • device (str or torch.device) – The device of the whole process

prepare(data)[source]

Parameters:

data (list) – List of data objects: [dataset, train_dataset, val_dataset, test_dataset, train_loader, val_loader, test_loader].
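
A hedged sketch of how such a list might be assembled with PyG loaders for a graph-classification dataset; the split slicing, batch size, and the graces variable name are illustrative assumptions only:

Example:

>>> # xdoctest: +SKIP("illustrative sketch, assumed splits")
>>> from torch_geometric.loader import DataLoader
>>> train_dataset, val_dataset, test_dataset = dataset[:800], dataset[800:900], dataset[900:]
>>> train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
>>> val_loader = DataLoader(val_dataset, batch_size=32)
>>> test_loader = DataLoader(test_dataset, batch_size=32)
>>> graces.prepare([dataset, train_dataset, val_dataset, test_dataset,
...                 train_loader, val_loader, test_loader])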

search(space: BaseSpace, dataset, estimator: BaseEstimator)[source]

The search process of NAS.

Parameters:
  • space (autogllight.nas.space.BaseSpace) – The search space. Constructed following nni.

  • dataset (any) – Dataset to perform search on.

  • estimator (autogllight.nas.estimator.BaseEstimator) – The estimator to compute loss & metrics.

Returns:

model – The searched model (return_model=True) or the selection (return_model=False).

Return type:

autogllight.nas.space.BaseSpace or dict

class autogllight.nas.algorithm.GraphNasRL(device='auto', num_epochs=10, log_frequency=None, grad_clip=5.0, entropy_weight=0.0001, skip_weight=0, baseline_decay=0.95, ctrl_lr=0.00035, ctrl_steps_aggregate=100, ctrl_kwargs=None, n_warmup=100, model_lr=0.005, model_wd=0.0005, topk=5, disable_progress=False, hardware_metric_limit=None, weight_share=True)[source]

RL in GraphNas.

Parameters:
  • device (torch.device) – torch.device("cpu") or torch.device("cuda").

  • num_epochs (int) – Number of epochs planned for training.

  • log_frequency (int) – Step count per logging.

  • grad_clip (float) – Gradient clipping. Set to 0 to disable. Default: 5.

  • entropy_weight (float) – Weight of sample entropy loss.

  • skip_weight (float) – Weight of skip penalty loss.

  • baseline_decay (float) – Decay factor of baseline. New baseline will be equal to baseline_decay * baseline_old + reward * (1 - baseline_decay).

  • ctrl_lr (float) – Learning rate for RL controller.

  • ctrl_steps_aggregate (int) – Number of steps that will be aggregated into one mini-batch for RL controller.

  • ctrl_steps (int) – Number of mini-batches for each epoch of RL controller learning.

  • ctrl_kwargs (dict) – Optional kwargs that will be passed to ReinforceController.

  • n_warmup (int) – Number of epochs for training super network.

  • model_lr (float) – Learning rate for super network.

  • model_wd (float) – Weight decay for super network.

  • topk (int) – Number of architectures kept in training process.

  • disable_progress (bool) – Whether to show the progress bar.

search(space: BaseSpace, dset, estimator)[source]

The search process of NAS.

Parameters:
  • space (autogllight.nas.space.BaseSpace) – The search space. Constructed following nni.

  • dataset (any) – Dataset to perform search on.

  • estimator (autogllight.nas.estimator.BaseEstimator) – The estimator to compute loss & metrics.

Returns:

model – The searched model (return_model=True) or the selection (return_model=False).

Return type:

autogllight.nas.space.BaseSpace or dict

class autogllight.nas.algorithm.RL(num_epochs=5, device='auto', log_frequency=None, grad_clip=5.0, entropy_weight=0.0001, skip_weight=0.8, baseline_decay=0.999, ctrl_lr=0.00035, ctrl_steps_aggregate=20, ctrl_kwargs=None, n_warmup=100, model_lr=0.005, model_wd=0.0005, disable_progress=False, weight_share=True)[source]

RL in GraphNas.

Parameters:
  • num_epochs (int) – Number of epochs planned for training.

  • device (torch.device) – torch.device("cpu") or torch.device("cuda").

  • log_frequency (int) – Step count per logging.

  • grad_clip (float) – Gradient clipping. Set to 0 to disable. Default: 5.

  • entropy_weight (float) – Weight of sample entropy loss.

  • skip_weight (float) – Weight of skip penalty loss.

  • baseline_decay (float) – Decay factor of baseline. New baseline will be equal to baseline_decay * baseline_old + reward * (1 - baseline_decay).

  • ctrl_lr (float) – Learning rate for RL controller.

  • ctrl_steps_aggregate (int) – Number of steps that will be aggregated into one mini-batch for RL controller.

  • ctrl_steps (int) – Number of mini-batches for each epoch of RL controller learning.

  • ctrl_kwargs (dict) – Optional kwargs that will be passed to ReinforceController.

  • n_warmup (int) – Number of epochs for training super network.

  • model_lr (float) – Learning rate for super network.

  • model_wd (float) – Weight decay for super network.

  • disable_progress (bool) – Whether to show the progress bar.

search(space: BaseSpace, dset, estimator)[source]

The search process of NAS.

Parameters:
  • space (autogllight.nas.space.BaseSpace) – The search space. Constructed following nni.

  • dataset (any) – Dataset to perform search on.

  • estimator (autogllight.nas.estimator.BaseEstimator) – The estimator to compute loss & metrics.

Returns:

model – The searched model (return_model=True) or the selection (return_model=False).

Return type:

autogllight.nas.space.BaseSpace or dict

class autogllight.nas.algorithm.RandomSearch(device='auto', num_epochs=400, disable_progress=False, hardware_metric_limit=None, select_metric='acc')[source]

Uniformly random architecture search

Parameters:
  • device (str or torch.device) – The device of the whole process, e.g. “cuda”, torch.device(“cpu”)

  • num_epochs (int) – Number of epochs planned for training.

  • disable_progress (bool) – Whether to show the progress bar.

search(space: BaseSpace, dset, estimator)[source]

The search process of NAS.

Parameters:
  • space (autogllight.nas.space.BaseSpace) – The search space. Constructed following nni.

  • dataset (any) – Dataset to perform search on.

  • estimator (autogllight.nas.estimator.BaseEstimator) – The estimator to compute loss & metrics.

Returns:

model – The searched model (return_model=True) or the selection (return_model=False).

Return type:

autogllight.nas.space.BaseSpace or dict

class autogllight.nas.algorithm.Spos(n_warmup=1000, grad_clip=5.0, disable_progress=False, optimize_mode='maximize', population_size=100, sample_size=25, cycles=20000, mutation_prob=0.05, device='cuda')[source]

SPOS trainer.

Parameters:
  • n_warmup (int) – Number of epochs for training super network.

  • model_lr (float) – Learning rate for super network.

  • model_wd (float) – Weight decay for super network.

  • Other parameters – See Evolution for the remaining evolutionary-search parameters (population_size, sample_size, cycles, mutation_prob); a hedged sketch of that loop follows.
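
The evolutionary parameters above typically drive an aging-evolution loop. The sketch below is library-agnostic: random_arch, mutate, and score are hypothetical helpers, not autogllight APIs.

Example:

>>> # xdoctest: +SKIP("library-agnostic sketch, hypothetical helpers")
>>> import collections, random
>>> def evolve(random_arch, mutate, score, population_size=100,
...            sample_size=25, cycles=20000, mutation_prob=0.05):
...     population, history = collections.deque(), []
...     while len(population) < population_size:      # warm up with random architectures
...         arch = random_arch()
...         population.append((arch, score(arch)))
...     history.extend(population)
...     for _ in range(cycles):
...         sample = random.sample(list(population), sample_size)
...         parent = max(sample, key=lambda p: p[1])  # best of the random sample
...         child = mutate(parent[0], mutation_prob)  # perturb each choice with prob mutation_prob
...         population.append((child, score(child)))
...         history.append(population[-1])
...         population.popleft()                      # age out the oldest individual
...     return max(history, key=lambda p: p[1])       # best (architecture, score) seen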

search(space: BaseSpace, dset, estimator)[source]

The search process of NAS.

Parameters:
  • space (autogllight.nas.space.BaseSpace) – The search space. Constructed following nni.

  • dataset (any) – Dataset to perform search on.

  • estimator (autogllight.nas.estimator.BaseEstimator) – The estimator to compute loss & metrics.

Returns:

model – The searched model (return_model=True) or the selection (return_model=False).

Return type:

autogllight.nas.space.BaseSpace or dict

class autogllight.nas.space.AutoAttendNodeClassificationSpace(hidden_dim: int | None = 64, layer_number: int | None = 2, dropout: float | None = 0.9, input_dim: int | None = None, output_dim: int | None = None, ops_type=0, gnn_ops: Sequence[str | Any] | None = None, act_op='tanh', head=8, agg_ops=['add', 'attn'])[source]

AutoAttend search space; please refer to http://proceedings.mlr.press/v139/guan21a.html for details. The current implementation is "nc" (no context weight sharing); other types of partial weight sharing proposed in the paper will be added in the future.

Parameters:
  • ops_type (int) – 0 or 1, selecting which of the two predefined operation sets to search over.

  • gnn_ops (list of str) – Operation names for searching, which describe the composition of the operation pool.

  • act_op (str) – The activation function to use.

  • agg_ops (list of str) – Aggregation operation names for searching. Only ['add', 'attn'] are available, as mentioned in the paper.

build_graph()[source]

Instantiate the operation graph in the space

forward(data)[source]

Define the forward pass of space model

class autogllight.nas.space.AutogtSpace(args)[source]
build_graph()[source]

Instantiate the operation graph in the space

forward(data, params)[source]

Define the forward pass of space model

parse_model(selection)[source]

Get the fixed model from the selection. Usage: the fixed model can be obtained by boxmodel._model. Warning: this method will randomize the learnable parameters in the model, as the model is re-instantiated.

class autogllight.nas.space.BaseSpace[source]

Base space class of the NAS module. It defines a space containing all candidate models. Please use mutables to define your whole space. Refer to https://nni.readthedocs.io/en/stable/NAS/WriteSearchSpace.html for detailed information.

Parameters:

init (bool) – Whether to initialize the whole space. Default: False

abstract build_graph()[source]

Instantiate the operation graph in the space

abstract forward(*args, **kwargs)[source]

Define the forward pass of space model

instantiate(**kwargs)[source]

Instantiate modules in the space

parse_model(selection)[source]

Get the fixed model from the selection. Usage: the fixed model can be obtained by boxmodel._model. Warning: this method will randomize the learnable parameters in the model, as the model is re-instantiated.

setInputChoice(order, n_candidates=None, choose_from=None, n_chosen=None, reduction='sum', return_mask=False, key=None)[source]

Give a unique key if not given

setLayerChoice(order, op_candidates, reduction='sum', return_mask=False, key=None)[source]

Give a unique key if not given
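
To make the abstract interface concrete, a hedged sketch of a tiny custom space. The operation candidates, the dimensions, and the assumption that data is a PyG Data object with data.x are illustrative only; setLayerChoice is used as documented above, and instantiate()/build_graph() are expected to be invoked before searching.

Example:

>>> # xdoctest: +SKIP("illustrative sketch, assumed PyG data")
>>> import torch.nn as nn
>>> from autogllight.nas.space import BaseSpace
>>> class TinySpace(BaseSpace):
...     def __init__(self, input_dim, output_dim, hidden_dim=64):
...         super().__init__()
...         self.input_dim, self.output_dim, self.hidden_dim = input_dim, output_dim, hidden_dim
...     def build_graph(self):
...         # one searchable layer that chooses between two candidate ops
...         self.op1 = self.setLayerChoice(
...             0, [nn.Linear(self.input_dim, self.hidden_dim),
...                 nn.Sequential(nn.Linear(self.input_dim, self.hidden_dim), nn.ReLU())],
...             key="layer0")
...         self.out = nn.Linear(self.hidden_dim, self.output_dim)
...     def forward(self, data):
...         return self.out(self.op1(data.x))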

class autogllight.nas.space.DartsInputChoice(input_choice)[source]
forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

named_parameters()[source]

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

Parameters:
  • prefix (str) – prefix to prepend to all parameter names.

  • recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

  • remove_duplicate (bool, optional) – whether to remove the duplicated parameters in the result. Defaults to True.

Yields:

(str, Parameter) – Tuple containing the name and parameter

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())

parameters()[source]

Returns an iterator over module parameters.

This is typically passed to an optimizer.

Parameters:

recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields:

Parameter – module parameter

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)

class autogllight.nas.space.DartsLayerChoice(layer_choice)[source]
forward(*args, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

named_parameters()[source]

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

Parameters:
  • prefix (str) – prefix to prepend to all parameter names.

  • recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

  • remove_duplicate (bool, optional) – whether to remove the duplicated parameters in the result. Defaults to True.

Yields:

(str, Parameter) – Tuple containing the name and parameter

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())

parameters()[source]

Returns an iterator over module parameters.

This is typically passed to an optimizer.

Parameters:

recurse (bool) – if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.

Yields:

Parameter – module parameter

Example:

>>> # xdoctest: +SKIP("undefined vars")
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)

class autogllight.nas.space.GassoSpace(hidden_dim: int | None = 64, layer_number: int | None = 2, dropout: float | None = 0.8, input_dim: int | None = None, output_dim: int | None = None, ops: Tuple = ['gat', 'gcn', 'gin', 'sage', 'linear'])[source]
build_graph()[source]

Instantiate the operation graph in the space

forward(data, adjs)[source]

Define the forward pass of space model

parse_model(selection)[source]

Get the fixed model from the selection. Usage: the fixed model can be obtained by boxmodel._model. Warning: this method will randomize the learnable parameters in the model, as the model is re-instantiated.

class autogllight.nas.space.GaussSpace(input_dim, output_dim, add_pre, args)[source]
build_graph()[source]

Instantiate the operation graph in the space

forward(data, arch)[source]

Define the forward pass of space model

parse_model(selection)[source]

Get the fixed model from the selection. Usage: the fixed model can be obtained by boxmodel._model. Warning: this method will randomize the learnable parameters in the model, as the model is re-instantiated.

class autogllight.nas.space.GracesSpace(input_dim, output_dim, num_nodes, mol, virtual, criterion, args)[source]
build_graph()[source]

Instantiate the operation graph in the space

forward(data)[source]

Define the forward pass of space model

parse_model(selection)[source]

Get the fixed model from the selection. Usage: the fixed model can be obtained by boxmodel._model. Warning: this method will randomize the learnable parameters in the model, as the model is re-instantiated.

class autogllight.nas.space.GraphNasMacroNodeClassificationSpace(hidden_dim: int | None = 64, layer_number: int | None = 2, dropout: float | None = 0.6, input_dim: int | None = None, output_dim: int | None = None, gnns: Tuple = ['gat', 'gcn', 'cos', 'const', 'gat_sym', 'linear', 'generalized_linear'], aggs=['sum', 'mean', 'max', 'mlp'], acts=['sigmoid', 'tanh', 'relu', 'linear', 'softplus', 'leaky_relu', 'relu6', 'elu'], heads=[1, 2, 4, 6, 8, 16], outs=[4, 8, 16, 32, 64, 128, 256], search_act_con=False)[source]
build_graph()[source]

Instantiate the operation graph in the space

parse_model(selection)[source]

Get the fixed model from the selection. Usage: the fixed model can be obtained by boxmodel._model. Warning: this method will randomize the learnable parameters in the model, as the model is re-instantiated.

class autogllight.nas.space.GraphNasNodeClassificationSpace(hidden_dim: int | None = 64, layer_number: int | None = 2, dropout: float | None = 0.9, input_dim: int | None = None, output_dim: int | None = None, gnn_ops: Sequence[str | Any] = ['gat_8', 'gat_6', 'gat_4', 'gat_2', 'gat_1', 'gcn', 'cheb', 'sage', 'arma', 'sg', 'linear', 'zero'], act_ops: Sequence[str | Any] = ['sigmoid', 'tanh', 'relu', 'linear', 'elu'], con_ops: Sequence[str | Any] = ['add', 'product', 'concat'])[source]
build_graph()[source]

Instantiate the operation graph in the space

forward(data)[source]

Define the forward pass of space model

class autogllight.nas.space.PathSamplingInputChoice(input_choice)[source]

Mixed input. Takes a list of tensors as input, selects some of them, and returns the sum.

sampled

Sampled module indices.

Type:

int or list of int

mask

A multi-hot bool 1D-tensor representing the sampled mask.

Type:

tensor
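
A plain-tensor illustration of the documented behavior (select by the multi-hot mask and sum); this is conceptual, not the module's own code:

Example:

>>> import torch
>>> inputs = [torch.randn(4), torch.randn(4), torch.randn(4)]
>>> mask = torch.tensor([True, False, True])                  # multi-hot sampled mask
>>> output = sum(t for t, keep in zip(inputs, mask) if keep)  # sum of the selected inputs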

forward(input_tensors)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class autogllight.nas.space.PathSamplingLayerChoice(layer_choice)[source]

Mixed module in which the forward pass is computed by exactly one or multiple (sampled) modules. If multiple modules are selected, their results are summed and returned.

sampled

Sampled module indices.

Type:

int or list of int

mask

A multi-hot bool 1D-tensor representing the sampled mask.

Type:

tensor

forward(*args, **kwargs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class autogllight.nas.space.SinglePathNodeClassificationSpace(hidden_dim: int | None = 64, layer_number: int | None = 2, dropout: float | None = 0.2, input_dim: int | None = None, output_dim: int | None = None, ops: Tuple = ['gcn', 'gat_8'])[source]
build_graph()[source]

Instantiate the operation graph in the space

forward(data)[source]

Define the forward pass of space model

autogllight.nas.space.apply_fixed_architecture(model, fixed_arc, verbose=True)[source]

Load architecture from fixed_arc and apply to model.

Parameters:
  • model (torch.nn.Module) – Model with mutables.

  • fixed_arc (str or dict) – Path to the JSON that stores the architecture, or dict that stores the exported architecture.

  • verbose (bool) – Print log messages if set to True

Returns:

Mutator that is responsible for fixing the graph.

Return type:

FixedArchitecture
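
A hedged usage sketch; space is assumed to be an instantiated BaseSpace with mutables, and selection the dict returned by an algorithm's search() with return_model=False (or a path to an equivalent JSON file):

Example:

>>> # xdoctest: +SKIP("illustrative sketch, assumed space/selection")
>>> from autogllight.nas.space import apply_fixed_architecture
>>> fixed = apply_fixed_architecture(space, selection, verbose=False)
>>> # `space` is now pinned to the chosen operations; `fixed` is the
>>> # FixedArchitecture mutator that keeps the graph fixed to that selection.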

autogllight.nas.space.replace_input_choice(root_module, init_fn, modules=None)[source]

Replace input choice modules with modules that are initialized with init_fn.

Parameters:
  • root_module (nn.Module) – Root module to traverse.

  • init_fn (Callable) – Initializing function.

  • modules (dict, optional) – Update the replaced modules into the dict and check duplicate if provided.

Returns:

A list of tuples of input choice keys (names) and replaced modules.

Return type:

List[Tuple[str, nn.Module]]

autogllight.nas.space.replace_layer_choice(root_module, init_fn, modules=None)[source]

Replace layer choice modules with modules that are initialized with init_fn.

Parameters:
  • root_module (nn.Module) – Root module to traverse.

  • init_fn (Callable) – Initializing function.

  • modules (dict, optional) – Update the replaced modules into the dict and check duplicate if provided.

Returns:

A list of tuples of layer choice keys (names) and replaced modules.

Return type:

List[Tuple[str, nn.Module]]
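
A hedged sketch of the pattern a one-shot trainer typically follows with these helpers; space is assumed to be an instantiated BaseSpace, and DartsLayerChoice/DartsInputChoice are the wrapper classes documented above:

Example:

>>> # xdoctest: +SKIP("illustrative sketch, assumed space")
>>> from autogllight.nas.space import (replace_layer_choice, replace_input_choice,
...                                    DartsLayerChoice, DartsInputChoice)
>>> layer_modules = replace_layer_choice(space, DartsLayerChoice)
>>> input_modules = replace_input_choice(space, DartsInputChoice)
>>> # Both calls return lists of (key, wrapped module) pairs; a trainer can then
>>> # optimize the wrappers' architecture parameters separately from the model weights.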

class autogllight.nas.estimator.OneShotEstimator(loss_f='nll_loss', evaluation=[<autogllight.utils.evaluation.Acc object>])[source]

One shot estimator.

Use model directly to get estimations.

Parameters:
  • loss_f (str) – The name of the loss function in PyTorch.

  • evaluation (list of Evaluation) – The evaluation metrics in module/train/evaluation

infer(model: BaseSpace, dataset, mask='train', *args, **kwargs)[source]

Calculate the loss and metrics of the given model on the given dataset using the specified mask.

Parameters:
  • model (autogllight.nas.space.BaseSpace) – The model in the space.

  • dataset (any) – The dataset to perform inference on.

  • mask (str) – The dataset mask (split) to evaluate on.

Returns:

  • metrics (list of float) – the metrics on given datasets.

  • loss (torch.Tensor) – the loss on given datasets. Note that loss should be differentiable.
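
A hedged usage sketch; space and dataset are assumed to be an instantiated space and a dataset carrying the usual train/val/test masks:

Example:

>>> # xdoctest: +SKIP("illustrative sketch, assumed space/dataset")
>>> from autogllight.nas.estimator import OneShotEstimator
>>> estimator = OneShotEstimator(loss_f="nll_loss")
>>> metrics, loss = estimator.infer(space, dataset, mask="val")
>>> # `metrics` follows the order of the `evaluation` list (Acc by default);
>>> # `loss` stays differentiable, so one-shot algorithms can backpropagate through it.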

class autogllight.nas.estimator.OneShotOGBEstimator(loss_f='nll_loss', evaluation=[<autogllight.utils.evaluation.Acc object>])[source]

One shot estimator on OGB data.

Use model directly to get estimations.

Parameters:
  • loss_f (str) – The name of the loss function in PyTorch.

  • evaluation (list of Evaluation) – The evaluation metrics in module/train/evaluation

infer(model: BaseSpace, dataloader, *args, **kwargs)[source]

Calculate the loss and metrics of the given model on the given dataset using the specified mask.

Parameters:
  • model (autogllight.nas.space.BaseSpace) – The model in the space.

  • dataset (any) – The dataset to perform inference on.

  • mask (str) – The dataset mask (split) to evaluate on.

Returns:

  • metrics (list of float) – the metrics on given datasets.

  • loss (torch.Tensor) – the loss on given datasets. Note that loss should be differentiable.

class autogllight.nas.estimator.TrainScratchEstimator(trainer, evaluation=[<autogllight.utils.evaluation.Acc object>])[source]

Train scratch estimator.

Train the model to get estimations.

Parameters:
  • trainer (str) – The trainer to train the model

  • evaluation (list of Evaluation) – The evaluation metrics in module/train/evaluation

infer(model: BaseSpace, dataset, mask='train', *args, **kwargs)[source]

Calculate the loss and metrics of the given model on the given dataset using the specified mask.

Parameters:
  • model (autogllight.nas.space.BaseSpace) – The model in the space.

  • dataset (any) – The dataset to perform inference on.

  • mask (str) – The dataset mask (split) to evaluate on.

Returns:

  • metrics (list of float) – the metrics on given datasets.

  • loss (torch.Tensor) – the loss on given datasets. Note that loss should be differentiable.