idrlnet.architecture package¶
Submodules¶
idrlnet.architecture.grid module¶
The module is experimental. It may be removed or totally refactored in the future.
- class idrlnet.architecture.grid.Interface(points1, points2, nr, outputs, i1, j1, i2, j2, overlap=0.2)[source]¶
Bases:
object
- class idrlnet.architecture.grid.NetEval(n_inputs: int, n_outputs: int, columns, rows, **kwargs)[source]¶
Bases:
Module
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class idrlnet.architecture.grid.NetGridNode(inputs: Union[Tuple, List[str]], outputs: Union[Tuple, List[str]], x_segments: Optional[List[float]] = None, y_segments: Optional[List[float]] = None, z_segments: Optional[List[float]] = None, t_segments: Optional[List[float]] = None, columns: Optional[List[float]] = None, rows: Optional[List[float]] = None, *args, **kwargs)[source]¶
Bases:
NetNode
- idrlnet.architecture.grid.get_net_reg_grid(inputs: Union[Tuple, List[str]], outputs: Union[Tuple, List[str]], name: str, x_segments: Optional[List[float]] = None, y_segments: Optional[List[float]] = None, z_segments: Optional[List[float]] = None, t_segments: Optional[List[float]] = None, **kwargs)[source]¶
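The grid classes above partition the domain along each axis into segments with a small overlap between neighbors (note the overlap=0.2 default of Interface). As a minimal, framework-free sketch of that idea, the hypothetical helper below turns a list of segment boundaries such as x_segments=[0.0, 0.5, 1.0] into overlapping sub-intervals; it is illustrative only, not idrlnet's actual decomposition code.

```python
from typing import List, Tuple

def overlapping_intervals(segments: List[float], overlap: float = 0.2) -> List[Tuple[float, float]]:
    """Turn sorted segment boundaries into sub-intervals, each padded
    on both sides by `overlap` times its own width (illustrative sketch)."""
    intervals = []
    for lo, hi in zip(segments[:-1], segments[1:]):
        pad = overlap * (hi - lo)
        intervals.append((lo - pad, hi + pad))
    return intervals

# Two sub-intervals that overlap around the interior boundary at 0.5.
print(overlapping_intervals([0.0, 0.5, 1.0], overlap=0.2))
```

Neighboring sub-networks can then both be trained on the overlap region, which is what the Interface class couples together.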
idrlnet.architecture.layer module¶
This module provides building blocks for constructing MLPs.
- class idrlnet.architecture.layer.Activation(value)[source]¶
Bases:
Enum
Enumerates supported activation functions.
- leaky_relu = 'leaky_relu'¶
- poly = 'poly'¶
- relu = 'relu'¶
- selu = 'selu'¶
- sigmoid = 'sigmoid'¶
- silu = 'silu'¶
- sin = 'sin'¶
- swish = 'swish'¶
- tanh = 'tanh'¶
- class idrlnet.architecture.layer.Initializer(value)[source]¶
Bases:
Enum
Enumerates supported weight initializers.
- Xavier_uniform = 'Xavier_uniform'¶
- constant = 'constant'¶
- default = 'default'¶
- kaiming_uniform = 'kaiming_uniform'¶
- idrlnet.architecture.layer.get_activation_layer(activation: Activation = Activation.swish, *args, **kwargs)[source]¶
- idrlnet.architecture.layer.get_linear_layer(input_dim: int, output_dim: int, weight_norm=False, initializer: Initializer = Initializer.Xavier_uniform, *args, **kwargs)[source]¶
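get_activation_layer dispatches on the Activation enum to produce the corresponding nonlinearity. The standalone sketch below mimics that dispatch pattern in plain Python for a few of the enum members; the enum and mapping here are a self-contained imitation, not idrlnet's actual implementation.

```python
import math
from enum import Enum

# A self-contained mimic of idrlnet's Activation enum (subset of members).
class Activation(Enum):
    relu = "relu"
    tanh = "tanh"
    sigmoid = "sigmoid"
    swish = "swish"

def get_activation_fn(activation: Activation):
    """Map an Activation member to a scalar callable (illustrative sketch)."""
    table = {
        Activation.relu: lambda x: max(0.0, x),
        Activation.tanh: math.tanh,
        Activation.sigmoid: lambda x: 1.0 / (1.0 + math.exp(-x)),
        # swish(x) = x * sigmoid(x)
        Activation.swish: lambda x: x / (1.0 + math.exp(-x)),
    }
    return table[activation]

act = get_activation_fn(Activation.relu)
print(act(-2.0))  # negative inputs are clipped to 0.0
```

Keying the table on enum members rather than raw strings makes an unsupported activation fail loudly with a KeyError instead of silently falling through.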
idrlnet.architecture.mlp module¶
This module provides some MLP architectures.
- class idrlnet.architecture.mlp.Arch(value)[source]¶
Bases:
Enum
Enumerates pre-defined neural networks.
- bounded_single_var = 'bounded_single_var'¶
- mlp = 'mlp'¶
- mlp_xl = 'mlp_xl'¶
- single_var = 'single_var'¶
- siren = 'siren'¶
- toy = 'toy'¶
- class idrlnet.architecture.mlp.BoundedSingleVar(lower_bound, upper_bound)[source]¶
Bases:
Module
Wraps a single parameter to represent an unknown coefficient in an inverse problem, constrained between a lower and an upper bound.
- Parameters
lower_bound (float) – The lower bound for the parameter.
upper_bound (float) – The upper bound for the parameter.
- forward(x) Tensor [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
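BoundedSingleVar keeps the learnable coefficient inside [lower_bound, upper_bound]. A common way to enforce such a bound is to pass an unconstrained parameter through a sigmoid and rescale the result; the plain-Python sketch below shows that construction under this assumption (the actual idrlnet implementation may differ).

```python
import math

def bounded_value(theta: float, lower: float, upper: float) -> float:
    """Map an unconstrained parameter theta into (lower, upper) via a
    sigmoid squash - one common way to realize a bounded coefficient
    such as BoundedSingleVar (illustrative sketch)."""
    s = 1.0 / (1.0 + math.exp(-theta))  # s is always in (0, 1)
    return lower + (upper - lower) * s

# Whatever value theta takes, the result stays inside the bounds.
for theta in (-100.0, 0.0, 100.0):
    assert 0.5 <= bounded_value(theta, lower=0.5, upper=2.0) <= 2.0
print(bounded_value(0.0, 0.5, 2.0))  # midpoint of the bounds: 1.25
```

Because theta itself is unconstrained, a gradient-based optimizer can update it freely while the exposed coefficient never leaves the admissible range.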
- class idrlnet.architecture.mlp.MLP(n_seq: List[int], activation: Union[Activation, List[Activation]] = Activation.swish, initialization: Initializer = Initializer.kaiming_uniform, weight_norm: bool = True, name: str = 'mlp', *args, **kwargs)[source]¶
Bases:
Module
A subclass of torch.nn.Module that customizes a multilayer perceptron network.
- Parameters
n_seq (List[int]) – Defines the neuron numbers in each layer. The first and last entries must match the numbers of inputs and outputs, respectively.
activation (Union[Activation,List[Activation]]) – By default, the activation is Activation.swish.
initialization (Initializer) – The weight initializer. By default, the initializer is Initializer.kaiming_uniform.
weight_norm (bool) – Whether weight normalization is used.
name (str) – Symbols that will appear in the name of each layer. Do not confuse this with the netnode name.
args –
kwargs –
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
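The n_seq argument lists the neuron count of every layer, so consecutive entries give each linear layer's input and output dimensions. A framework-free sketch of that correspondence, using a hypothetical helper name:

```python
from typing import List, Tuple

def layer_shapes(n_seq: List[int]) -> List[Tuple[int, int]]:
    """Pair consecutive entries of n_seq into (input_dim, output_dim)
    shapes, mirroring how an MLP's linear layers are sized
    (illustrative sketch, not idrlnet code)."""
    return list(zip(n_seq[:-1], n_seq[1:]))

print(layer_shapes([2, 64, 64, 1]))  # [(2, 64), (64, 64), (64, 1)]
# The first entry (2) must match the number of network inputs,
# and the last entry (1) the number of outputs.
```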
- class idrlnet.architecture.mlp.SimpleExpr(expr, name='expr')[source]¶
Bases:
Module
This class is for testing. One can override SimpleExpr.forward to represent complex formulas.
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class idrlnet.architecture.mlp.SingleVar(initialization: float = 1.0)[source]¶
Bases:
Module
Wraps a single parameter to represent an unknown coefficient in an inverse problem.
- Parameters
initialization (float) – Initialization value for the parameter. The default is 1.0.
- forward(x) Tensor [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- class idrlnet.architecture.mlp.Siren(n_seq: List[int], first_omega: float = 30.0, omega: float = 30.0, name: str = 'siren', *args, **kwargs)[source]¶
Bases:
Module
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
- idrlnet.architecture.mlp.get_net_node(inputs: Union[Tuple[str, ...], List[str]], outputs: Union[Tuple[str, ...], List[str]], arch: Optional[Arch] = None, name=None, *args, **kwargs) NetNode [source]¶
Get a net node wrapping a network with one of the pre-defined configurations.
- Parameters
inputs (Union[Tuple[str, ...], List[str]]) – Input symbols for the generated node.
outputs (Union[Tuple[str, ...], List[str]]) – Output symbols for the generated node.
arch (Arch) – One of Arch.mlp, Arch.mlp_xl (more layers and more neurons), Arch.single_var, or Arch.bounded_single_var.
name (str) – The name of the generated node.
args –
kwargs –
- Returns
A netnode wrapping the configured network.
- idrlnet.architecture.mlp.get_shared_net_node(shared_node: NetNode, inputs: Union[Tuple[str, ...], List[str]], outputs: Union[Tuple[str, ...], List[str]], name=None, *args, **kwargs) NetNode [source]¶
Construct a netnode whose network is shared with an existing netnode. One can specify different inputs and outputs, just like an independent netnode. However, the net parameters may then have multiple references, so the step operations during optimization should be applied only once.
- Parameters
shared_node (NetNode) – An existing netnode, the network of which will be shared.
inputs (Union[Tuple[str, ...], List[str]]) – Input symbols for the generated node.
outputs (Union[Tuple[str, ...], List[str]]) – Output symbols for the generated node.
name (str) – The name of the generated node.
args –
kwargs –
- Returns
A netnode that shares the network of shared_node.
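A shared netnode reuses the network of an existing netnode, so the same parameters are referenced from several places. The pure-Python sketch below (hypothetical Node and step names, no idrlnet dependency) illustrates why an optimization step must then be applied only once per iteration:

```python
from typing import List

# Illustrative sketch (not idrlnet code): two node-like wrappers that
# share one parameter list, not copies of it.
class Node:
    def __init__(self, params: List[float]):
        self.params = params  # stores a reference to the shared list

shared = [1.0, 2.0]
node_a = Node(shared)
node_b = Node(shared)  # same underlying parameters as node_a

def step(node: Node, lr: float = 0.5, grad: float = 1.0) -> None:
    """Apply one in-place gradient step to the node's parameters."""
    for i in range(len(node.params)):
        node.params[i] -= lr * grad

step(node_a)
# node_b sees the update too, because the parameters are shared:
print(node_b.params)  # [0.5, 1.5]
# Calling step(node_b) as well would apply the same update twice.
```

This is the pitfall the note above warns about: with multiple references to one network, stepping through each referencing node would multiply the update.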