!******************************************************************

           M U S U B I    T E S T S U I T E

!******************************************************************


Description: 
---------------------------------------------------------

This folder contains several tests for the Musubi code.
Please follow the rules below when creating new tests
or changing existing ones.


Test case setup:
---------------------------------------------------------

Take the reference folder as a model for how to build your own test.
Each test case has to consist of
- a README file describing
  - what happens in the test case
  - how to run it
  - how to evaluate it
- Python scripts
  - 1_run_test.py   runs the case (with various parameter sets)
  - 2_evaluate.py   evaluates the results
- sample configuration files
- optional: reference result values to compare against
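As an illustration, a minimal 1_run_test.py could be structured as sketched below. This is only a hedged sketch: the executable name "musubi", the configuration file "musubi.lua", and the mpirun invocation are assumptions for illustration, not the suite's actual interface.

```python
# Hypothetical sketch of a minimal 1_run_test.py; the executable name
# "musubi", the config "musubi.lua", and the mpirun call are assumptions.
import subprocess

def build_command(executable="musubi", config="musubi.lua", nprocs=1):
    """Assemble the command line for one parameter set."""
    if nprocs > 1:
        return ["mpirun", "-np", str(nprocs), executable, config]
    return [executable, config]

def run_case(dry_run=True, **kwargs):
    """Run (or, in a dry run, just print) the solver command."""
    cmd = build_command(**kwargs)
    if dry_run:
        print("would run:", " ".join(cmd))
        return 0
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    run_case(dry_run=True, nprocs=4)
```

A real run script would loop over the parameter sets it wants to test and call run_case once per set with dry_run=False.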





Folder structure:
---------------------------------------------------------

- physics     Contains physically meaningful test cases which
              are evaluated against a known solution






Adding a new testcase to the regression check:
------------------------------------------------------------------------------------------

Use the format below to add your testcase to one of the files named "testcaselist.txt".
These files can be found in Apes and in all three solvers (musubi, ngnear, fidisol).
If no such file exists, create one with this name and copy the template from below.

An example for the "lidcavity" testcase, with "musubi.lua" as input file located
in "musubi/testsuite/physics/lidcavity", can be written as:

musubi = "lidcavity":{"dir":"./musubi/testsuite/physics/lidcavity/","input":"musubi.lua",
"output":"timing.res","position":["-","6"],"interval":"1","category":"validation"}

where:
dir - path to the testcase input (and to your validated files)
    - in this case "./testsuite/physics/lidcavity/" (when the regression check is run from musubi)
    - "./musubi/testsuite/physics/lidcavity/" (when the regression check is run from apes)

input - input filename (lua configuration file)
      - in this case "musubi.lua"

output - name of an output file that you want to compare
       - in this case "timing.res"

position - a list holding the line number and the column number in the output file
           that are used to validate the results,
           e.g. ["<line number>","<column number>"]; a "-" as line number selects the last line
         - in the case above: the validation compares the value in the last line and
           6th column of "timing.res"

interval - how often you want the test to run (unit = days)
         - in this case "1", i.e. once per day

category - categorises the testcase. Default categories are [validation,scaling,performance]
         - in this case "validation"

Please check the following before running your testcase:
1. You have commented out any visualization output (like vtk).
2. In case you do performance measurements, you have disabled "Tracking".
3. The directory containing the "input" file has a folder named "mesh" (with the required mesh).
4. The directory containing the "input" file has a file for validating the result, named
   "ref_<your output file>". In the above example this would be "ref_timing.res".
5. Position <yet to be implemented>
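The comparison against "ref_<output file>" could, in spirit, look like the sketch below. The relative tolerance and the helper name are illustrative assumptions; the actual check may compare the values differently.

```python
# Hedged sketch of comparing one value from the output file against the
# same position in "ref_<output>"; tolerance and names are assumptions.
import math

def values_match(out_text, ref_text, rel_tol=1e-6):
    """Compare the last token of the last line of both files' contents."""
    out_val = float(out_text.strip().splitlines()[-1].split()[-1])
    ref_val = float(ref_text.strip().splitlines()[-1].split()[-1])
    return math.isclose(out_val, ref_val, rel_tol=rel_tol)
```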

-------------------------------------------------------------------------------------------

!******************************************************************
