idrlnet package

idrlnet.use_cpu()[source]

Use CPU.

idrlnet.use_gpu(device=0)[source]

Use the GPU specified by device.

Parameters

device (torch.device or int) – selected device.
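
A minimal usage sketch (use_gpu assumes a CUDA device is available):

    import idrlnet

    idrlnet.use_cpu()          # run all computation on the CPU
    idrlnet.use_gpu(device=0)  # run on the first CUDA device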

Subpackages

Submodules

idrlnet.callbacks module

Basic Callback classes

class idrlnet.callbacks.GradientReceiver[source]

Bases: Receiver

Register the receiver to monitor gradient norms in TensorBoard.

receive_notify(solver: Solver, message)[source]
class idrlnet.callbacks.HandleResultReceiver(result_dir)[source]

Bases: Receiver

The receiver will be automatically registered to save results on training domains.

receive_notify(solver: Solver, message: Dict)[source]
class idrlnet.callbacks.SummaryReceiver(*args, **kwargs)[source]

Bases: SummaryWriter, Receiver

The receiver will be automatically registered to manage TensorBoard logging.

receive_notify(solver: Solver, message: Dict)[source]
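
These receivers can also be attached by hand through the register_receiver interface inherited from receivers.Notifier. A sketch, where the solver instance and the result directory are hypothetical:

    from idrlnet.callbacks import GradientReceiver, HandleResultReceiver

    # solver: an already constructed idrlnet.solver.Solver instance
    solver.register_receiver(GradientReceiver())
    solver.register_receiver(HandleResultReceiver(result_dir='./results'))
    solver.solve()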

idrlnet.data module

Define DataNode

class idrlnet.data.DataNode(inputs: Union[Tuple[str, ...], List[str]], outputs: Union[Tuple[str, ...], List[str]], sample_fn: Callable, loss_fn: str = 'square', lambda_outputs: Optional[Union[Tuple[str, ...], List[str]]] = None, name=None, sigma=1.0, var_sigma=False, *args, **kwargs)[source]

Bases: Node

A class that inherits node.Node. With a sampling method implemented, instances generate sample points.

Parameters
  • inputs (Union[Tuple[str, ...], List[str]]) – Input keys in the returned sample.

  • outputs (Union[Tuple[str, ...], List[str]]) – Output keys in the returned sample.

  • sample_fn (Callable) – Callable instances for sampling. Implementation of SampleDomain is suggested for this arg.

  • loss_fn (str) – Reduces the difference between given data and the output of the node to a scalar. square and L1 are currently implemented. Defaults to 'square'.

  • lambda_outputs (Union[Tuple[str,...], List[str]]) – Weights for each output in the returned sample; defaults to None.

  • name (str) – The name of the node.

  • sigma (float) – The weight for the whole node. Defaults to 1.0.

  • var_sigma (bool) – Whether the automatic loss-balancing technique is used. Defaults to False.

  • args

  • kwargs

counter = 0
property lambda_outputs
property loss_fn
sample() Variables[source]

Sample a group of points, represented by Variables.

Returns

A group of points.

Return type

Variables

property sample_fn
property sigma

A weight for the domain.

class idrlnet.data.SampleDomain[source]

Bases: object

The template for Callable sampling functions.

abstract sampling(*args, **kwargs)[source]

The method returns sampling points.

idrlnet.data.datanode(_fun: Optional[Callable] = None, name=None, loss_fn='square', sigma=1.0, var_sigma=False, **kwargs)[source]

As an alternative, decorate a Callable class to turn it into a DataNode (see the sketch at the end of this module).

idrlnet.data.get_data_node(fun: Callable, name=None, loss_fn='square', sigma=1.0, var_sigma=False, *args, **kwargs) DataNode[source]

Construct a DataNode from a sampling function.

Parameters
  • fun (Callable) – Each call of the Callable object should return a sampling dict.

  • name (str) – Name of the generated DataNode; defaults to None.

  • loss_fn (str) – Specify a loss function for the data node.

  • args

  • kwargs

Returns

An instance of DataNode

Return type

DataNode

idrlnet.data.get_data_nodes(funs: List[Callable], *args, **kwargs) Tuple[DataNode][source]
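
A minimal sketch of defining a data node by decorating a SampleDomain subclass. Following the package's examples, sampling is assumed to return a pair of dicts (sampled input points, target constraints); the domain, sample size, and field names here are arbitrary:

    import numpy as np
    from idrlnet.data import datanode, SampleDomain

    @datanode(name='interior', loss_fn='square')
    class Interior(SampleDomain):
        def sampling(self, *args, **kwargs):
            # 1000 random points in (-1, 1); require the residual 'u' to vanish
            x = np.random.uniform(-1.0, 1.0, size=(1000, 1))
            return {'x': x}, {'u': 0}

get_data_node offers the equivalent functional route: pass a plain sampling function and receive a DataNode back.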

idrlnet.graph module

Define the computational graph

class idrlnet.graph.Vertex(pre=None, next=None, node=None, ntype='c')[source]

Bases: Node

counter = 0
class idrlnet.graph.VertexTaskPipeline(nodes: [List[Union[idrlnet.pde.PdeNode, idrlnet.net.NetNode]]], invar: Variables, req_names: List[str])[source]

Bases: object

MAX_STACK_ALLOWED = 100000
display(filename: Optional[str] = None)[source]
property evaluation_order_list
forward_pipeline(invar: Variables, req_names: Optional[List[str]] = None) Variables[source]
operation_order(invar: Variables)[source]
to_json()[source]
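
A hypothetical sketch of driving a pipeline directly (normally Solver builds and runs it); pde_node, net_node, invar, and the required name 'u' are assumptions:

    from idrlnet.graph import VertexTaskPipeline

    # pde_node / net_node: previously constructed PdeNode and NetNode instances;
    # invar: a Variables dict holding the sampled inputs, e.g. 'x'.
    pipeline = VertexTaskPipeline(nodes=[pde_node, net_node],
                                  invar=invar, req_names=['u'])
    outvar = pipeline.forward_pipeline(invar)
    pipeline.display('pipeline.png')  # optionally render the computational graph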

idrlnet.header module

Initialize public objects

class idrlnet.header.TestFun(fun)[source]

Bases: object

registered = []
static run()[source]
idrlnet.header.testmemo(fun)[source]

idrlnet.net module

Define NetNode

class idrlnet.net.NetNode(inputs: Union[Tuple, List[str]], outputs: Union[Tuple, List[str]], net: Module, fixed: bool = False, require_no_grad: bool = False, is_reference=False, name=None, *args, **kwargs)[source]

Bases: Node

counter = 0
property fixed
property is_reference
load_state_dict(state_dict: Dict[str, Tensor], strict: bool = True)[source]
property net
property require_no_grad
state_dict(destination=None, prefix: str = '', keep_vars: bool = False)[source]
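
A minimal sketch wrapping a torch module as a NetNode; the MLP architecture below is an arbitrary choice:

    import torch
    from idrlnet.net import NetNode

    mlp = torch.nn.Sequential(
        torch.nn.Linear(2, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1),
    )
    net_u = NetNode(inputs=('x', 'y'), outputs=('u',), net=mlp, name='net_u')

Nodes constructed with fixed=True or is_reference=True contribute no trainable parameters (see Solver.trainable_parameters).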

idrlnet.node module

Define Basic Node

class idrlnet.node.Node[source]

Bases: object

property derivatives: List[str]
property evaluate: Callable
property inputs: List[str]
property name: str
classmethod new_node(name: Optional[str] = None, tf_eq: Optional[Callable] = None, free_symbols: Optional[List[str]] = None, *args, **kwargs) Node[source]
property outputs: List[str]

idrlnet.optim module

Define Optimizers and LR schedulers

class idrlnet.optim.Optimizable[source]

Bases: object

An abstract class for organizing optimization-related configuration and operations. The interface is implemented by solver.Solver.

OPTIMIZER_MAP = {'ASGD': <class 'torch.optim.asgd.ASGD'>, 'Adadelta': <class 'torch.optim.adadelta.Adadelta'>, 'Adagrad': <class 'torch.optim.adagrad.Adagrad'>, 'Adam': <class 'torch.optim.adam.Adam'>, 'AdamW': <class 'torch.optim.adamw.AdamW'>, 'Adamax': <class 'torch.optim.adamax.Adamax'>, 'LBFGS': <class 'torch.optim.lbfgs.LBFGS'>, 'RMSprop': <class 'torch.optim.rmsprop.RMSprop'>, 'Rprop': <class 'torch.optim.rprop.Rprop'>, 'SGD': <class 'torch.optim.sgd.SGD'>, 'SparseAdam': <class 'torch.optim.sparse_adam.SparseAdam'>}
SCHEDULE_MAP = {'CosineAnnealingLR': <class 'torch.optim.lr_scheduler.CosineAnnealingLR'>, 'CosineAnnealingWarmRestarts': <class 'torch.optim.lr_scheduler.CosineAnnealingWarmRestarts'>, 'CyclicLR': <class 'torch.optim.lr_scheduler.CyclicLR'>, 'ExponentialLR': <class 'torch.optim.lr_scheduler.ExponentialLR'>, 'LambdaLR': <class 'torch.optim.lr_scheduler.LambdaLR'>, 'MultiStepLR': <class 'torch.optim.lr_scheduler.MultiStepLR'>, 'MultiplicativeLR': <class 'torch.optim.lr_scheduler.MultiplicativeLR'>, 'OneCycleLR': <class 'torch.optim.lr_scheduler.OneCycleLR'>, 'StepLR': <class 'torch.optim.lr_scheduler.StepLR'>}
abstract configure_optimizers()[source]
property optimizers
parse_configure(**kwargs)[source]
parse_lr_schedule(**kwargs)[source]
parse_optimizer(**kwargs)[source]
property schedulers
idrlnet.optim.get_available_class(module, class_name) Dict[str, type][source]

Search for subclasses of the given class in the specified module.

Parameters
  • module (module) – The module to search.

  • class_name (type) – The parent class.

Returns

A dict mapping from subclass.name to subclass

Return type

Dict[str, type]
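
As a sketch, OPTIMIZER_MAP above can be reproduced with this helper, assuming module is the module object and class_name the base class:

    import torch
    from idrlnet.optim import get_available_class

    optimizer_map = get_available_class(torch.optim, torch.optim.Optimizer)
    # optimizer_map['Adam'] is torch.optim.Adam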

idrlnet.pde module

Define PdeNode

class idrlnet.pde.ExpressionNode(expression, name, **kwargs)[source]

Bases: PdeNode

class idrlnet.pde.PdeNode(suffix: str = '', **kwargs)[source]

Bases: Node

property equations: Dict
make_nodes() None[source]
property sub_nodes: List
property suffix: str
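
A hypothetical sketch of wrapping a sympy expression as a residual node; the equation and all names are assumptions:

    import sympy as sp
    from idrlnet.pde import ExpressionNode

    x = sp.Symbol('x')
    u = sp.Function('u')(x)
    # residual of the ODE u'(x) - cos(x) = 0; the node's output is named 'ode_u'
    ode_u = ExpressionNode(expression=u.diff(x) - sp.cos(x), name='ode_u')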

idrlnet.receivers module

Notifier and receiver infrastructure for callbacks

class idrlnet.receivers.Notifier[source]

Bases: object

notify(obj: object, message: Dict)[source]
property receivers
register_receiver(receiver: Receiver)[source]
class idrlnet.receivers.Receiver[source]

Bases: object

abstract receive_notify(obj: object, message: Dict)[source]
class idrlnet.receivers.Signal(value)[source]

Bases: Enum

An enumeration of signals emitted at different stages of solving.

AFTER_COMPUTE_LOSS = 'compute_loss'
BEFORE_BACKWARD = 'signal_before_backward'
BEFORE_COMPUTE_LOSS = 'before_compute_loss'
REGISTER = 'signal_register'
SOLVE_END = 'signal_solve_end'
SOLVE_START = 'signal_solve_start'
TRAIN_PIPE_END = 'signal_train_pipe_end'
TRAIN_PIPE_START = 'signal_train_pipe_start'
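
A sketch of a custom receiver, assuming the message dict passed through Notifier.notify is keyed by Signal members; the solver instance is hypothetical:

    from idrlnet.receivers import Receiver, Signal

    class StageLogger(Receiver):
        def receive_notify(self, obj: object, message: dict):
            if Signal.SOLVE_START in message:
                print('solving started')
            if Signal.TRAIN_PIPE_END in message:
                print('finished one training pipe')

    # Solver inherits register_receiver from Notifier
    solver.register_receiver(StageLogger())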

idrlnet.shortcut module

Shortcuts for the package API

idrlnet.solver module

Solver

class idrlnet.solver.Solver(sample_domains: Tuple[Union[DataNode, SampleDomain], ...], netnodes: List[NetNode], pdes: Optional[List] = None, network_dir: str = './network_dir', summary_dir: Optional[str] = None, max_iter: int = 1000, save_freq: int = 100, print_freq: int = 10, loading: bool = True, init_network_dirs: Optional[List[str]] = None, opt_config: Optional[Dict] = None, schedule_config: Optional[Dict] = None, result_dir='train_domain/results', **kwargs)[source]

Bases: Notifier, Optimizable

Instances of the Solver class integrate configurations and handle the computation during the solving of PINNs. One problem usually needs one Solver instance.

Parameters
  • sample_domains (Tuple[DataNode, ...]) – A tuple of geometry domains used to sample points for training PINNs.

  • netnodes (List[NetNode]) – A list of neural networks. Trainable computation nodes.

  • pdes (Optional[List[PdeNode]]) – A list of partial differential equations. Similar to net nodes, they evaluate inputs and output results, but they are not trainable.

  • network_dir (str) – The directory used to automatically load and store checkpoint files.

  • summary_dir (Optional[str]) – The directory used to store TensorBoard information. If not specified, it defaults to network_dir.

  • max_iter (int) – The maximum number of iterations the solver will run.

  • save_freq (int) – Frequency of saving checkpoints.

  • print_freq (int) – Frequency of printing the loss.

  • loading (bool) – Defaults to True. If True, the solver tries to load checkpoints and continue the previous training stage.

  • init_network_dirs (List[str]) – A list of directories for loading pre-trained networks.

  • opt_config (Dict) –

    Configure one optimizer for all trainable parameters. It is a wrapper of torch.optim.Optimizer. One can specify any subclass of torch.optim.Optimizer by expanding the args like:

    • opt_config=dict(optimizer='Adam', lr=0.001) (the default).

    • opt_config=dict(optimizer='SGD', lr=0.01, momentum=0.9)

    • opt_config=dict(optimizer='SparseAdam', lr=0.001, betas=(0.9, 0.999), eps=1e-08)

    Note that the optimizer name is case-sensitive.

  • schedule_config (Dict) –

    Configure one learning-rate scheduler for the optimizer. It is a wrapper of torch.optim.lr_scheduler._LRScheduler. One can specify any subclass of the class like:

    • schedule_config=dict(scheduler='ExponentialLR', gamma=math.pow(0.95, 0.001))

    • schedule_config=dict(scheduler='StepLR', step_size=30, gamma=0.1)

    Note that the scheduler name is case-sensitive.

  • result_dir (str) – The directory for saving final training-domain data. Defaults to 'train_domain/results'.

  • kwargs

append_sample_domain(datanode)[source]
compute_loss(in_var: Dict[str, Variables], pred_out_sample: Dict[str, Variables], true_out: Dict[str, Variables], lambda_out: Dict[str, Variables]) Tensor[source]

Compute the total loss in one epoch.

configure_optimizers()[source]

Call the configuration interfaces of Optimizable.

forward_through_all_graph(invar_dict: Dict[str, Variables], req_outvar_dict_index: Dict[str, List[str]]) Dict[str, Variables][source]
generate_computation_pipeline()[source]

Generate the computation pipeline for all domains. Changing self.sample_domains will trigger this method.

generate_in_out_dict(samples: Dict[str, Variables]) Tuple[Dict[str, Variables], Dict[str, Variables], Dict[str, Variables]][source]
get_domain_parameter(domain_name: str, parameter: str)[source]
get_sample_domain(name: str) DataNode[source]
infer_step(domain_attr: Dict[str, List[str]]) Dict[str, Variables][source]

Specify a domain and required fields for inference.

Parameters

domain_attr (Dict[str, List[str]]) – A map from a domain name to the list of required outputs on the domain.

Returns

A dict of the required variables.

Return type

Dict[str, Variables]

init_load()[source]
load()[source]

Load parameters of netnodes and the global step from model.ckpt.

property network_dir
property sample_domains
sample_variables_from_domains() Dict[str, Variables][source]
save()[source]

Save parameters of netnodes and the global step to model.ckpt.

set_domain_parameter(domain_name: str, parameter_dict: dict)[source]
set_param_ranges(param_ranges: Dict)[source]
solve()[source]

After the solver instance is initialized, this method can be called to solve the entire problem.

property summary_receiver: SummaryReceiver
train_pipe()[source]

Sample once, compute the loss once, and back-propagate once.

Returns

None

property trainable_parameters: List[Parameter]

Return trainable parameters in netnodes. Parameters in netnodes with is_reference=True or fixed=True will not be returned.

Returns

A list of trainable parameters.

Return type

List[torch.nn.parameter.Parameter]
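
Putting the pieces together, a hedged end-to-end sketch that fits u(x) = sin(pi*x) from sampled data alone (no PDE nodes); the sampler, network shape, and names are all assumptions:

    import numpy as np
    import torch
    from idrlnet.data import get_data_node
    from idrlnet.net import NetNode
    from idrlnet.solver import Solver

    def fit_u(*args, **kwargs):
        # sampled inputs and the targets the network output 'u' should match
        x = np.random.uniform(-1.0, 1.0, size=(1000, 1))
        return {'x': x}, {'u': np.sin(np.pi * x)}

    data_node = get_data_node(fit_u, name='fit_u')
    mlp = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                              torch.nn.Linear(64, 1))
    net_u = NetNode(inputs=('x',), outputs=('u',), net=mlp, name='net_u')

    solver = Solver(sample_domains=(data_node,),
                    netnodes=[net_u],
                    max_iter=2000,
                    opt_config=dict(optimizer='Adam', lr=1e-3))
    solver.solve()
    prediction = solver.infer_step({'fit_u': ['u']})  # predicted 'u' on the domain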

idrlnet.torch_util module

Conversion utilities between sympy expressions and torch functions. todo: replace sampling method in GEOMETRY

class idrlnet.torch_util.integral(*args)

Bases: AppliedUndef

default_assumptions = {}
name = 'integral'
idrlnet.torch_util.torch_lambdify(r, f, *args, **kwargs)[source]

idrlnet.variable module

Define Variables, the intermediate data format for the package.

class idrlnet.variable.Loss(value)[source]

Bases: Enum

Enumerate loss functions

Identity = 'Identity'
L1 = 'L1'
square = 'square'
class idrlnet.variable.Variables[source]

Bases: dict

static cat(*var_list) Variables[source]

Concatenate the Variables in var_list.

differentiate_(independent_var: Variables, required_derivatives: List[str])[source]

Derivatives will be computed toward the required_derivatives.

differentiate_one_step_(independent_var: Variables, required_derivatives: List[str])[source]

One order of derivatives will be computed towards the required_derivatives.

classmethod from_tensor(tensor: Tensor, variable_names: List[str])[source]

Construct Variables from torch.Tensor

merge_tensor() Tensor[source]

Merge the tensors in the Variables into a single torch.Tensor.

save(path, formats=None)[source]

Export the variables to various formats.

subset(subset_keys: List[str]) Variables[source]

Construct a new Variables instance referencing the subset given by subset_keys.

to_csv(filename: str) None[source]

Export the variables to a CSV file.

to_dataframe() DataFrame[source]

Merge into a pandas.DataFrame.

to_ndarray() Variables[str, np.ndarray][source]

Return a new numpy-based Variables instance.

to_ndarray_() Variables[str, np.ndarray][source]

Convert to a numpy-based Variables in place.

to_torch_tensor_() Variables[str, torch.Tensor][source]

Convert the variables to torch.Tensor in place.

to_vtu(filename: str, coordinates=None) None[source]

Export the variables to a VTU file.

static var_differentiate_one_step(dependent_var: Variables, independent_var: Variables, required_derivatives: List[str])[source]

Perform one step of differentiation toward the required_derivatives.

weighted_loss(name: str, loss_function: Union[Loss, str]) Variables[source]

Regard the variables as residuals and reduce them to a weighted loss.

idrlnet.variable.export_var(domain_var: Dict[str, Variables], path='./inference_domain/results', formats=None)[source]

Export a dict of variables to csv, vtu, or npz files.
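
A sketch of the Variables container in use, assuming from_tensor splits a two-column tensor into the two named variables; shapes and filenames are arbitrary:

    import torch
    from idrlnet.variable import Variables

    v = Variables.from_tensor(torch.rand(100, 2), ['x', 'y'])
    v_np = v.to_ndarray()     # a new numpy-based Variables
    df = v.to_dataframe()     # merged into a pandas.DataFrame
    v.to_csv('points.csv')    # export to CSV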