Batch Training

The training procedure can be carried out without using the GUI.

The romAI library provides a function called romAIdirector_batch that can be called from regular Compose OML scripts to train romAI models. This function is particularly handy when you want to evaluate different configurations and parameters of the same romAI model: as explained below, a single script run can trigger the training of multiple romAI models.

Syntax

[loss,r2coeff] = romAIdirector_batch(train_dataset_file, romAI_destination_folder, romAI_name, romAI_structure, romAI_math_model, romAI_training_params);

Inputs

train_dataset_file
A string or a cell of strings specifying the full paths of the training datasets.
Type: string | cell
romAI_destination_folder
A string or a cell of strings specifying the folders where the romAI models are saved. If a folder does not exist, it is created.
Type: string | cell
romAI_name
A string or a cell of strings specifying the names of the romAI models. If a single name is given but the batch training generates multiple models (as explained later), a suffix '_<i>' is appended to the name, where <i> increases with the number of models being trained.
Type: string | cell
romAI_structure
An array struct with fields:
'_inputs'
A cell containing the input labels.
Type: struct | cell
'_outputs'
A cell containing the output labels.
Type: struct | cell
'_states'
A cell containing the state variable labels. Set it to an empty cell to not use state variables.
Type: struct | cell
'_physical_constraints'
A cell defining the derivatives of the state variables. There is an element-by-element correspondence with the romAI_structure._states cell: the ith element of the romAI_structure._physical_constraints cell represents the derivative of the ith element of the romAI_structure._states cell. Leave the ith element as an empty string to not assign a derivative. Set romAI_structure._physical_constraints to an empty cell to not define any derivatives.
Type: struct | cell
romAI_math_model
An array struct with fields:
'_type'
'linear' or 'nonlinear'.
Type: struct
If '_type' is 'nonlinear', the following fields are also required:
'_activation_fun'
'relu' or 'tanh'.
'_architecture'
A struct with fields:
'_hidden_layers'
A scalar that specifies the number of hidden layers.
Type: scalar | struct
'_neurons_x_layer'
A matrix that defines the number of neurons for each hidden layer.
Type: scalar | struct
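As a sketch, a nonlinear mathematical model with two hidden layers of eight neurons each could be configured as follows (the field names follow the list above; the values are purely illustrative):

```oml
% illustrative nonlinear model definition
romAI_math_model._type = 'nonlinear';
romAI_math_model._activation_fun = 'tanh';
romAI_math_model._architecture._hidden_layers = 2;
romAI_math_model._architecture._neurons_x_layer = [8 8];
```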
romAI_training_params
Optional
An array struct with fields:
'_output_normalization'
Type: Boolean | struct
Default: True
'_early_stopping'
Type: Boolean | struct
Default: False
'_test_split_ratio'
Type: scalar | struct
Default: 0.2
'_crossval_split_ratio'
Type: scalar | struct
Default: 0.25
'_epochs'
Type: scalar | struct
Default: 10
'_reg_coeff'
Type: scalar | struct
Default: 1e-6
'_learning_rate'
Type: scalar | struct
Default: 1e-3
'_monitor_metric'
'val_loss' or 'val_acc'
Default: 'val_loss'
Not needed if '_early_stopping' is False.
'_min_improvement'
Type: scalar | struct
Default: 1e-3
Not needed if '_early_stopping' is False.
'_patience'
Type: scalar | struct
Default: 20
Not needed if '_early_stopping' is False.
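For instance, a training-parameter struct that enables early stopping might look as follows (a sketch with illustrative values; fields that are omitted fall back to the defaults listed above):

```oml
% illustrative training parameters with early stopping enabled
romAI_training_params._early_stopping = true;
romAI_training_params._monitor_metric = 'val_loss';
romAI_training_params._min_improvement = 1e-3;
romAI_training_params._patience = 20;
romAI_training_params._epochs = 200;  % upper bound; training may stop earlier
```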

Outputs

loss
An array struct with fields:
'epochs'
An array (1, 2, 3, …, number of epochs).
Type: struct
'test_loss'
Type: scalar | struct
'train_loss'
Has the subfield 'overall' plus one subfield per romAI model output and state variable label.
Type: scalar | struct
'val_loss'
Has the subfield 'overall' plus one subfield per romAI model output and state variable label.
Type: scalar | struct
r2coeff
An array struct with fields:
'state_index'
Has one subfield per romAI model state variable label.
Type: scalar | struct
'output'
Has one subfield per romAI model output label.
Type: scalar | struct
Note:

Both loss and r2coeff represent the same metrics that can be evaluated from the GUI.
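As a sketch of how the returned metrics could be inspected, assuming a model with an output labeled 's1' and assuming the per-epoch losses are stored as arrays aligned with loss.epochs:

```oml
% sketch: inspecting the returned metrics after training
final_val_loss = loss.val_loss.overall(end);  % overall validation loss at the last epoch
s1_train_loss = loss.train_loss.s1;           % per-output training loss
s1_r2 = r2coeff.output.s1;                    % R^2 coefficient for output 's1'
```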

Examples

The function works on the broadcasting principle: if the input arguments do not have coherent sizes, they are broadcast to the same size by repetition. This lets you evaluate several romAI models with very few lines of code.

A practical example illustrates the broadcasting principle. Assume you want to train two romAI models that differ only in their training parameters. Here is the script:

train_dataset_file = 'c:/Users/username/Desktop/mkc_train_dataset.csv';
romAI_destination_folder = 'c:/Users/username/Desktop/batch_romAImodel';
romAI_name = 'lin_mkc2dof';
romAI_structure._inputs = {'F1','F2'};
romAI_structure._outputs = {'s1', 's2'};
romAI_structure._states = {'s1','s2','v1','v2'};
romAI_structure._physical_constraints = {'v1','v2','',''};
romAI_math_model._type = 'linear';

%1st set of training parameters
romAI_training_params(1)._output_normalization = true;
romAI_training_params(1)._early_stopping = false;
romAI_training_params(1)._test_split_ratio = 0.2;
romAI_training_params(1)._crossval_split_ratio = 0.25;
romAI_training_params(1)._reg_coeff = 0;
romAI_training_params(1)._learning_rate = 1e-2;
romAI_training_params(1)._epochs = 50;

%2nd set of training parameters
romAI_training_params(2)._output_normalization = true;
romAI_training_params(2)._early_stopping = false;
romAI_training_params(2)._test_split_ratio = 0.2;
romAI_training_params(2)._crossval_split_ratio = 0.25;
romAI_training_params(2)._reg_coeff = 0;
romAI_training_params(2)._learning_rate = 1e-3;
romAI_training_params(2)._epochs = 100;

%training
[loss, r2coeff] = romAIdirector_batch(train_dataset_file, romAI_destination_folder, romAI_name, ...
                                      romAI_structure, romAI_math_model, romAI_training_params);

Thanks to the broadcasting principle, this is equivalent to the following script, but with fewer lines of code.

train_dataset_file{1} = 'c:/Users/username/Desktop/mkc_train_dataset.csv';
train_dataset_file{2} = 'c:/Users/username/Desktop/mkc_train_dataset.csv';
romAI_destination_folder{1} = 'c:/Users/username/Desktop/batch_romAImodel';
romAI_destination_folder{2} = 'c:/Users/username/Desktop/batch_romAImodel';
romAI_name{1} = 'lin_mkc2dof';
romAI_name{2} = 'lin_mkc2dof_1';
romAI_structure(1)._inputs = {'F1','F2'};
romAI_structure(2)._inputs = {'F1','F2'};
romAI_structure(1)._outputs = {'s1', 's2'};
romAI_structure(2)._outputs = {'s1', 's2'};
romAI_structure(1)._states = {'s1','s2','v1','v2'};
romAI_structure(2)._states = {'s1','s2','v1','v2'};
romAI_structure(1)._physical_constraints = {'v1','v2','',''};
romAI_structure(2)._physical_constraints = {'v1','v2','',''};
romAI_math_model(1)._type = 'linear';
romAI_math_model(2)._type = 'linear';

%1st set of training parameters
romAI_training_params(1)._output_normalization = true;
romAI_training_params(1)._early_stopping = false;
romAI_training_params(1)._test_split_ratio = 0.2;
romAI_training_params(1)._crossval_split_ratio = 0.25;
romAI_training_params(1)._reg_coeff = 0;
romAI_training_params(1)._learning_rate = 1e-2;
romAI_training_params(1)._epochs = 50;

%2nd set of training parameters
romAI_training_params(2)._output_normalization = true;
romAI_training_params(2)._early_stopping = false;
romAI_training_params(2)._test_split_ratio = 0.2;
romAI_training_params(2)._crossval_split_ratio = 0.25;
romAI_training_params(2)._reg_coeff = 0;
romAI_training_params(2)._learning_rate = 1e-3;
romAI_training_params(2)._epochs = 100;

%training
[loss, r2coeff] = romAIdirector_batch(train_dataset_file, romAI_destination_folder, romAI_name, ...
                                      romAI_structure, romAI_math_model, romAI_training_params);
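Broadcasting applies to every argument, not just the training parameters. For example, the same pattern could compare a linear and a nonlinear model under a single set of training parameters; the sketch below reuses the dataset, folder, name, structure, and training parameters defined in the example above, and the nonlinear architecture values are illustrative:

```oml
% sketch: comparing a linear and a nonlinear model via broadcasting
romAI_math_model(1)._type = 'linear';
romAI_math_model(2)._type = 'nonlinear';
romAI_math_model(2)._activation_fun = 'relu';
romAI_math_model(2)._architecture._hidden_layers = 1;
romAI_math_model(2)._architecture._neurons_x_layer = 10;

% all other arguments are scalar (or single-element) and are
% broadcast by repetition to both models
[loss, r2coeff] = romAIdirector_batch(train_dataset_file, romAI_destination_folder, romAI_name, ...
                                      romAI_structure, romAI_math_model, romAI_training_params);
```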