ExecPythonScript
This block executes a user-defined command or script in Python.
Library
Activate/CustomBlocks
Description
This block runs a Python script entered through the block GUI. The script is executed interactively in the base Python environment from the GUI; it is not executed during the simulation.
The main use is to let users initialize the environment, for example by running a script that linearizes a plant and constructs a controller before running a simulation to validate that controller.
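The initialization use case above can be sketched as follows. This is a hypothetical script of the kind one might enter in the block: it numerically linearizes a simple pendulum plant and designs a state-feedback controller by pole placement. The plant dynamics, parameter values, and function names are illustrative assumptions, not taken from any demo model.

```python
# Hypothetical initialization script for the ExecPythonScript block:
# linearize a pendulum plant by finite differences, then design a
# state-feedback controller by pole placement. All names/values assumed.
import math

G, L = 9.81, 0.5  # gravity and pendulum length (illustrative values)

def f(theta, omega, u):
    """Nonlinear pendulum dynamics: returns (d_theta, d_omega)."""
    return omega, (G / L) * math.sin(theta) + u

def linearize(eps=1e-6):
    """Finite-difference linearization around the upright equilibrium."""
    _, a21 = f(eps, 0.0, 0.0)   # d(omega_dot)/d(theta)
    a21 /= eps
    _, b2 = f(0.0, 0.0, eps)    # d(omega_dot)/d(u)
    b2 /= eps
    A = [[0.0, 1.0], [a21, 0.0]]
    B = [0.0, b2]
    return A, B

def place_poles(A, B, wn=4.0, zeta=0.7):
    """Gains K = [k1, k2] giving closed-loop poly s^2 + 2*zeta*wn*s + wn^2."""
    a21, b2 = A[1][0], B[1]
    k1 = (wn ** 2 + a21) / b2
    k2 = (2.0 * zeta * wn) / b2
    return k1, k2

A, B = linearize()
K = place_poles(A, B)
print("Linearized a21 =", A[1][0], "gains K =", K)
```

After such a script has been executed through the block, the computed gains would typically be left in the base environment (or written to a file) for the simulation to pick up.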
Parameters
| Name | Label | Description | Data Type | Valid Values |
| --- | --- | --- | --- | --- |
| script | Python script | The Python script to be executed using the Execute button | String | |
Example
Reinforcement learning
The demo model 'reinforcement learning execPython.scm' (Demo browser/Activate/Custom blocks/ExecScript) considers a cart-pendulum system whose controller is designed using reinforcement learning.
The reinforcement learning process used to design the controller for this inverted pendulum on a cart is based on the cart-pole system implemented by Rich Sutton et al. The code is inspired by the 'gym' environment CartPole and the corresponding example in the stable-baselines3 package. The Python code installs the required Python packages if needed.
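The "install required packages if needed" pattern mentioned above can be sketched with the standard library alone; the package list passed at the end is illustrative, not the demo's actual list.

```python
# Minimal sketch of installing missing packages on demand.
# The demo script would pass names such as "stable_baselines3" (assumed).
import importlib.util
import subprocess
import sys

def ensure(packages):
    """Install each package with pip only when it cannot be imported."""
    for name in packages:
        if importlib.util.find_spec(name) is None:
            subprocess.check_call([sys.executable, "-m", "pip", "install", name])

ensure(["json"])  # stdlib module, already importable, so nothing is installed
```

A real script would call `ensure` with the distribution names it needs (note that import names and pip names can differ, e.g. `stable_baselines3` vs `stable-baselines3`).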
The learning code is run by executing the Python script inside the ExecPythonScript block (double-click the block and press the Execute button). The execution of the script takes a few minutes; at the end, the result is saved in the model's temporary folder. At this point the model can be simulated. The controller obtained from the learning process is loaded by the PyCustomBlock and used during simulation to implement the feedback controller.
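The save-then-load handoff between the two blocks can be sketched as follows. The file name, the temporary-folder location, and the table of feedback gains are stand-ins for the demo's actual artifacts.

```python
# Sketch of the handoff: the training script (ExecPythonScript side) writes
# the controller to a temporary folder; the simulation-time code
# (PyCustomBlock side) reads it back. Names and gains are illustrative.
import os
import pickle
import tempfile

policy = {"gains": [-1.0, -1.5, 18.0, 3.0]}  # assumed feedback gains

# Training side: persist the result of the learning run.
path = os.path.join(tempfile.gettempdir(), "cartpole_policy.pkl")
with open(path, "wb") as fh:
    pickle.dump(policy, fh)

# Simulation side: reload the controller and apply it as u = -K @ state.
with open(path, "rb") as fh:
    loaded = pickle.load(fh)

def control(state):
    """State-feedback law using the loaded gains."""
    return -sum(k * x for k, x in zip(loaded["gains"], state))

print(control([0.1, 0.0, 0.05, 0.0]))
```

The actual demo may serialize a trained neural-network policy rather than a gain vector, but the write-to-temp-folder/read-back mechanism is the same idea.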
This implementation of reinforcement learning requires that the model of the plant (here, the cart-pendulum) be expressed in Python. The plant model is therefore created twice: once in Twin Activate inside the cart pendulum Super Block, and once in Python. This duplication carries a risk of discrepancy between the two models.
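A Python-side plant of the kind described above might look like the following minimal re-implementation of the classic Sutton/Barto cart-pole dynamics (the same equations gym's CartPole uses). The parameter values are the common defaults, assumed here rather than taken from the demo model.

```python
# Minimal cart-pole dynamics in Python (classic Sutton/Barto formulation).
# Parameters are the usual defaults, assumed rather than read from the demo.
import math

M_CART, M_POLE, L_HALF, G, DT = 1.0, 0.1, 0.5, 9.8, 0.02

def step(state, force):
    """One Euler step; state = (x, x_dot, theta, theta_dot)."""
    x, x_dot, theta, theta_dot = state
    total = M_CART + M_POLE
    temp = (force + M_POLE * L_HALF * theta_dot ** 2 * math.sin(theta)) / total
    theta_acc = (G * math.sin(theta) - math.cos(theta) * temp) / (
        L_HALF * (4.0 / 3.0 - M_POLE * math.cos(theta) ** 2 / total)
    )
    x_acc = temp - M_POLE * L_HALF * theta_acc * math.cos(theta) / total
    return (
        x + DT * x_dot,
        x_dot + DT * x_acc,
        theta + DT * theta_dot,
        theta_dot + DT * theta_acc,
    )

print(step((0.0, 0.0, 0.05, 0.0), 0.0))
```

Any drift between these equations and the Super Block diagram is exactly the discrepancy risk the paragraph above warns about.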
An alternative to the above approach is to let the Twin Activate model produce the code used by reinforcement learning. This can be done by using the code generation capabilities of Twin Activate to generate C code for the corresponding Super Block, and interfacing that code with Python. In this approach the plant model is created only once (simpler, with no risk of discrepancy), and performance also improves because the reinforcement learning program runs its simulations in C code instead of Python code.
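Interfacing generated C code with Python is commonly done with the standard-library `ctypes` module. The sketch below shows the wrapping pattern; since the Super Block's generated shared library is not available here, the C math library stands in for it, and the function name `plant_step` mentioned in the comments is purely hypothetical.

```python
# Sketch of calling C code from Python with ctypes. In the demo one would
# load the shared library built from the Super Block's generated C code,
# e.g. something like ctypes.CDLL("./plant.so") exposing a hypothetical
# double plant_step(double). Here libm stands in so the pattern is runnable.
import ctypes
import ctypes.util

libm_path = ctypes.util.find_library("m")  # may be None; CDLL(None) then
lib = ctypes.CDLL(libm_path)               # loads the running process itself

# Declare the C signature before calling, exactly as one would for the
# generated plant function.
lib.cos.restype = ctypes.c_double
lib.cos.argtypes = [ctypes.c_double]

print(lib.cos(0.0))
```

With the plant wrapped this way, the reinforcement-learning loop calls into compiled C for every simulation step, which is where the performance gain comes from.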
The Twin Activate model 'Reinforcement learning Python Pcode.scm' implements this approach. The OML script in the ExecOmlScript block must be executed first to generate the C code for the cart pendulum Super Block; this code is compiled and linked with Python. Then the Python script in the Reinforcement learning block must be executed, which runs the reinforcement learning algorithm that creates the controller. Finally, running the simulation of the model shows the result of applying the controller.