List of Flux 2022.3 new features
New features dealing with Environment
New features | Description |
---|---|
New management of the parametric distribution in Flux | The previous CDE (Computing Distributed Engine) tools have been removed. Starting from Flux 2022.3, the user can distribute a Flux computation locally and configure it in the Supervisor Options (see Parametric distribution in Flux); a conceptual sketch of such a local distribution is shown after this table. Note: For more information about distribution and parallel computing in Flux (i.e., using PBS), please consult the How to Use Solver Parallelization and Parametric Distribution with Flux? document. |
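The distribution itself is configured through the Supervisor Options in the Flux GUI. The snippet below is not the Flux API; it is only a minimal Python sketch of the underlying idea of parametric distribution, where a hypothetical `solve_case` function (standing in for one Flux solving run) is evaluated for several parameter values by a pool of local worker processes.

```python
from multiprocessing import Pool

def solve_case(airgap_mm):
    """Hypothetical stand-in for one Flux solving run at a given parameter value."""
    # In Flux, each parameter value would be solved by a secondary Flux instance;
    # here we just return a dummy result so the sketch stays self-contained.
    return airgap_mm, 1.0 / airgap_mm

if __name__ == "__main__":
    parameter_values = [0.5, 0.8, 1.0, 1.2, 1.5]   # e.g. an airgap length sweep, in mm
    with Pool(processes=3) as pool:                 # 3 concurrent "instances", illustrative only
        results = pool.map(solve_case, parameter_values)
    for value, result in results:
        print(f"airgap = {value} mm -> result = {result:.3f}")
```

In Flux, the number of such concurrent secondary instances and the cores allotted to each of them are the quantities set in the Supervisor Options.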
New features dealing with Physics
New features | Description |
---|---|
New material property B(Stress) | A new magneto-mechanical property B(Stress) is now available in Flux 2D for Magnetostatic and Transient Magnetic applications. When a material containing this new B(Stress) property is assigned to a Laminated magnetic non-conducting region, its B(H) property is modified to account for the magneto-mechanical effect of stress. This new feature is useful to represent the degradation of the magnetic properties of a material due to mechanical constraints resulting from the fabrication process (e.g., punching of electrical steel sheets); a purely illustrative sketch is given after this table. |
Demagnetization improvements | Flux 2022.3 brings several improvements related to magnets and the evaluation of their demagnetization in Transient Magnetic projects. |
Free-Shape Optimization | |
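The exact stress dependence implemented in Flux is not described here. The following sketch is purely illustrative: it applies an arbitrary, hypothetical degradation factor to a simple saturating B(H) law, only to visualize how a B(Stress) property can lower the magnetization curve as the mechanical stress level increases.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def b_of_h(h, mu_r=2000.0, b_sat=1.8, stress_mpa=0.0, k=0.005):
    """Hypothetical stress-degraded B(H) curve (illustration only, not Flux's model).

    A simple saturating law B = Bsat * tanh(mu0 * mur * H / Bsat) is scaled by an
    arbitrary degradation factor 1 / (1 + k * stress) to mimic the loss of
    permeability near punched edges of electrical steel sheets.
    """
    degradation = 1.0 / (1.0 + k * stress_mpa)
    return b_sat * degradation * np.tanh(MU0 * mu_r * h / b_sat)

h = np.linspace(0.0, 5000.0, 6)        # field strength samples (A/m)
for stress in (0.0, 50.0, 100.0):      # stress levels (MPa)
    print(f"stress = {stress:5.1f} MPa :", np.round(b_of_h(h, stress_mpa=stress), 3))
```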
New features dealing with Solving
New features | Description |
---|---|
Zip for PBS | Through the solving menu, it is now possible to generate a zip file from the project currently opened in Flux and use it directly in a job submission for PBS, in order to take advantage of parallel computing on a cluster (an illustrative submission sketch follows this table). |
Flux parametric distribution on multiple cores | Using parametric distribution, one can now define, through the Supervisor Options, the number of cores that the secondary Flux instances will use to solve the different parameter values. |
PETSc | The parallel iterative solver based on the PETSc library, previously available in Beta mode, is now available to all users and becomes the default solver when “Iterative solver” is selected in standard mode. The PETSc library offers various solvers and preconditioners (see https://petsc.org/); a minimal usage sketch is given after this table. Recommended for solving large 3D projects (more than ~100,000 nodes): Flux automatically selects the most appropriate solving method depending on the problem to be solved, to ensure good convergence and performance through parallel computing. |
Solver overview | When defining a scenario or using the Check physics function, a project analysis is now performed to provide recommendations in terms of number of cores and solver configuration. |
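The zip file itself is generated from the solving menu in the Flux GUI. The sketch below is not a Flux command; it only illustrates, with a hypothetical project folder and job script name, what the corresponding cluster-side submission could look like when driven from Python.

```python
import shutil
import subprocess

# Archive the Flux project folder (hypothetical path) into my_project.zip.
archive = shutil.make_archive("my_project", "zip", root_dir="MY_PROJECT.FLU")

# Submit a PBS job script (hypothetical file name) that unzips the archive
# and launches the Flux solver on the cluster nodes.
subprocess.run(["qsub", "run_flux_job.pbs"], check=True)
print(f"Submitted PBS job using archive {archive}")
```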
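To give a feel for the kind of solver and preconditioner options exposed by PETSc, here is a minimal petsc4py sketch solving a small symmetric system (a 1D Laplacian, not a Flux model) with a conjugate-gradient solver and a Jacobi preconditioner; inside Flux these choices are made automatically.

```python
from petsc4py import PETSc

n = 1000
# Assemble a small symmetric positive-definite tridiagonal system (1D Laplacian).
A = PETSc.Mat().createAIJ([n, n], nnz=3)
for i in range(n):
    A.setValue(i, i, 2.0)
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
A.assemble()

b = A.createVecLeft()
b.set(1.0)
x = A.createVecRight()

# Conjugate-gradient Krylov solver with a Jacobi preconditioner.
ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setType("cg")
ksp.getPC().setType("jacobi")
ksp.setTolerances(rtol=1e-8)
ksp.solve(b, x)
print("iterations:", ksp.getIterationNumber())
```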
New features dealing with Postprocessing
New features | Description |
---|---|
Spatial Plot Script | This Compose script, which can be found in the ~flux\Flux\DocExamples\Tools directory, computes the harmonic content of the electromagnetic forces in the airgap. Forces are obtained from the magnetic flux density components computed along a path in the middle of the airgap. The script reads the text file that can be exported from Flux and performs in Compose the 2D FFT that gives the harmonic content of the forces; an illustrative equivalent of this processing chain is sketched after this table. |
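The Compose script itself is shipped with Flux; the following Python sketch only illustrates the same processing chain, assuming a hypothetical export layout in which each text file holds one flux density component sampled over time steps (rows) and angular positions along the airgap path (columns). It derives Maxwell-stress force densities and takes their 2D FFT to obtain the harmonic content in time and space.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

# Hypothetical layout of the exported text files: one value per (time step, angular position).
# Real exports from Flux may use a different layout and need adapted parsing.
b_normal = np.loadtxt("Bn_airgap.txt")    # radial flux density, shape (n_time, n_angle)
b_tangent = np.loadtxt("Bt_airgap.txt")   # tangential flux density, same shape

# Maxwell stress tensor in the airgap: radial and tangential force densities (N/m^2).
f_radial = (b_normal**2 - b_tangent**2) / (2.0 * MU0)
f_tangent = (b_normal * b_tangent) / MU0

# 2D FFT over (time, angle) gives the force harmonics versus temporal and spatial orders.
harmonics = np.fft.fft2(f_radial) / f_radial.size
amplitude = np.abs(np.fft.fftshift(harmonics))
print("dominant harmonic amplitude:", amplitude.max())
```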
New features dealing with Flux e-Machine Toolbox (FEMT)
New features | Description |
---|---|
FluxMotor Inputs | When a motor is exported from FluxMotor to FEMT, the input parameters coming from FluxMotor (maximum Irms, Vrms, speed) are now automatically configured as input parameters of FEMT, with their initial default values. |
Distribution Inputs | In FEMT 2022.3, to take into account the new management of the distribution via the Flux Supervisor, three input parameters are displayed. The last two parameters are read-only, and their displayed values depend on the configuration of the Supervisor Options. |
New "How to" document
New features | Description |
---|---|
How to Use parametric distribution and parallelization with Flux? | In order to speed up computation times, Flux offers its users the ability to parallelize the solvers and to distribute parametric computations for heavy projects. First, this document uses 2D, 3D and Skew examples to give recommendations in terms of number of cores and linear solver, together with an estimation of the memory needed for an efficient parallelization of the solvers. Then, it shows how to set up a parametric distribution on a single machine and on a cluster. In conclusion, this document gives the user all the information needed on these topics. |