alt.hst.api.evaluation_config Module#

class EvaluationConfiguration(approach: Approach)#

Bases: object
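
A minimal construction sketch. The import path follows the module name above, and the Approach instance named approach is assumed to come from elsewhere in the alt.hst.api:

    from alt.hst.api.evaluation_config import EvaluationConfiguration

    # `approach` is assumed to be an Approach obtained elsewhere in the API.
    config = EvaluationConfiguration(approach)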

getModelTasks() Dict[str, List[EvaluationTask]]#

Get the currently selected tasks for all models.

Returns:

A dictionary where keys are model varnames and values are lists of EvaluationTask.

Return type:

Dict[str, List[EvaluationTask]]
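
A hedged usage sketch, reusing the config instance from the construction example above:

    # Report the currently selected tasks for each model varname.
    for model, tasks in config.getModelTasks().items():
        print(model, [task.name for task in tasks])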

getOutputExtractions() List[str] | None#

Get the currently selected outputs.

Returns:

A list of output variable names that are set for extraction, or None if all outputs are selected.

Return type:

Optional[List[str]]
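
A short sketch of handling the None-means-all convention:

    outputs = config.getOutputExtractions()
    if outputs is None:
        print("All outputs are selected for extraction.")
    else:
        print("Selected outputs:", outputs)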

getRuns() List[int] | None#

Get the currently selected runs.

Returns:

A list of run indices to be evaluated, or None if all runs are selected.

Return type:

Optional[List[int]]
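
The same convention applies to runs; a brief sketch:

    runs = config.getRuns()
    # None means every run is evaluated; otherwise a list of run indices.
    count = "all" if runs is None else len(runs)
    print(f"Evaluating {count} run(s)")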

setAllModelTasks(tasks: List[EvaluationTask]) None#

Turn the given tasks on for all models.

Parameters:

tasks (List[EvaluationTask]) – The list of tasks to set for all models.

setAllModelTasksAllOn() None#

Turn all tasks on for all models.
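
A sketch combining both calls; the particular task selection is illustrative only:

    from alt.hst.api.evaluation_config import EvaluationTask

    # Restrict every model to writing and executing ...
    config.setAllModelTasks([EvaluationTask.WRITE, EvaluationTask.EXECUTE])

    # ... or turn every task back on for every model.
    config.setAllModelTasksAllOn()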

setModelTasks(model: str, tasks: List[EvaluationTask]) None#

Set specific tasks for a specific model.

Parameters:
  • model (str) – The varname of the model for which to set the tasks.

  • tasks (List[EvaluationTask]) – The list of tasks to set for the model.

setModelTasksOn(model: str) None#

Turn all tasks on for a specific model.

Parameters:

model (str) – The varname of the model for which to turn all tasks on.
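
A per-model sketch; the varname "baseline" is a placeholder, not a name defined by this API:

    from alt.hst.api.evaluation_config import EvaluationTask

    # Only extract results for the model with varname "baseline" ...
    config.setModelTasks("baseline", [EvaluationTask.EXTRACT])

    # ... then turn all of that model's tasks back on.
    config.setModelTasksOn("baseline")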

setOutputExtractions(outputs: List[str]) None#

Set specific outputs to be extracted.

Parameters:

outputs (List[str]) – A list of output variable names to be extracted.

setOutputExtractionsAllOn() None#

Extract all outputs.
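
A sketch with hypothetical output variable names:

    # Extract only these two outputs; the names are placeholders.
    config.setOutputExtractions(["pressure", "temperature"])

    # Revert to extracting every output.
    config.setOutputExtractionsAllOn()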

setRuns(runs: List[int]) None#

Set the runs to be evaluated.

Parameters:

runs (List[int]) – A list of run indices to be evaluated.

setRunsAllOn() None#

Turn all runs on.
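
A brief sketch, assuming run indices are zero-based (not confirmed by this reference):

    # Evaluate only the first three runs ...
    config.setRuns([0, 1, 2])

    # ... or turn all runs back on.
    config.setRunsAllOn()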

enum EvaluationTask(value)#

Bases: Enum

Valid values are as follows:

WRITE = EvaluationTask('write')#
EXECUTE = EvaluationTask('execute')#
EXTRACT = EvaluationTask('extract')#
PURGE = EvaluationTask('purge')#
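
A closing sketch showing the enum members in use; the string values are taken from the listing above:

    from alt.hst.api.evaluation_config import EvaluationTask

    # Enumerate the available tasks and their underlying values.
    for task in EvaluationTask:
        print(task.name, task.value)

    # Every task except PURGE, e.g. for setAllModelTasks.
    non_purge = [task for task in EvaluationTask if task is not EvaluationTask.PURGE]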