Merge pull request #3 from travissluka/bugfix/clarify_docs
update docs to address feedback
travissluka authored Mar 27, 2024
2 parents f7a9b4e + d7056b3 commit 622ea95
Showing 2 changed files with 8 additions and 2 deletions.
4 changes: 4 additions & 0 deletions README.md
@@ -127,6 +127,10 @@ make -j 5

Assuming SOCA compiled correctly, you should be able to run the ctests, which are simple tests using a 5 degree ocean grid. See the notes [here](https://jointcenterforsatellitedataassimilation-jedi-docs.readthedocs-hosted.com/en/7.0.0/using/running_skylab/HPC_users_guide.html) about obtaining a compute node before running the tests. Assuming you are within the `build/soca` directory, running `ctest` will run only the tests for SOCA (there are hundreds of other tests for the other JEDI components that you probably don't care about).

If for some reason a test fails, you can rerun a given test and view the output with `ctest -R <test name> -V`.

If for some reason ALL of the tests fail, it's possible that the data files were not downloaded correctly with git lfs; double check that git lfs was set up correctly. (Look at the netCDF files in `./soca/test/Data/`: they should be actual netCDF files, not text files describing which data file git lfs should download.)
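A quick way to tell the two apart is to look at the first bytes of a file: a git lfs pointer is a small text file beginning with `version https://git-lfs...`, while real netCDF data is binary. A minimal sketch, using a fabricated pointer file for illustration (inspect the real files in `./soca/test/Data/` instead):

```shell
# Fabricate what a git-lfs pointer file looks like (illustration only).
printf 'version https://git-lfs.github.com/spec/v1\noid sha256:deadbeef\nsize 12345\n' > example.nc

# A pointer file is plain text starting with "version"; a real netCDF
# file is binary (starting with "CDF" or, for netCDF-4/HDF5, "\x89HDF").
if [ "$(head -c 7 example.nc)" = "version" ]; then
    echo "git-lfs pointer: the real data was NOT downloaded"
else
    echo "binary data: looks like a real netCDF file"
fi
```

If you do find pointer files, running `git lfs install` followed by `git lfs pull` in the repository should replace them with the real data.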

## Tutorial Experiments

The files needed for a single cycle of several DA methods are provided (observations, background, static files, and yaml configurations). To get the binary data, download the input data from our [Google drive here](https://drive.google.com/uc?export=download&id=15dpIwXWXU72hYQy-wGLuYnrVB-J0eIb4). Unpack the file with the following command and you should then have a `soca-tutorial/input_data` directory.
6 changes: 4 additions & 2 deletions init/README.md
@@ -68,7 +68,7 @@ The first step in calibrating the correlation operator is to generate the desire
> ./calc_scales.py diffusion_setscales.yaml
> ```
You can look at the resulting `scales.nc` file. You should notice that the vertical scales are deeper in the Southern Hemisphere and shallower in the Northern Hemisphere, which is appropriate for the date of the initial conditions (Aug 1). (Note: your vertical scales will look different. The plot shown is without any clipping of the vertical scales. However, the resulting values of >50 levels are too large for explicit diffusion to be efficient, so they are clipped to 10 levels in the given configuration file.)
| hz scales (0-300km) | vt scales (0-50 lvls) |
| :--: | :--: |
@@ -94,11 +94,13 @@ The operator is split in this way so that the calculation of the horizontal diff
Open the configuration file, `diffusion_parameters.yaml`, to see the structure of the yaml file. You'll notice that the vertical and horizontal parameters are specified and calculated separately as two distinct `group` items, and they use the scales that were generated in the previous step.
> [!IMPORTANT]
> Run the diffusion operator calibration, replace `-n 10` with the actual number of cores you have available:
>
> ```bash
> mpirun -n 10 ./soca_error_covariance_toolbox.x diffusion_parameters.yaml
> ```
>
> (Note: if you run with too few cores, you may need to increase `domains_stack_size` in the `mom_input.nml` configuration file.)
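For reference, a sketch of what such a namelist entry can look like (the group name and value shown are illustrative assumptions, not a recommendation; check the `mom_input.nml` shipped with the tutorial for the actual structure):

```fortran
&fms_nml
  domains_stack_size = 2000000   ! increase if mpp_domains reports a stack overflow
/
```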
For each group, the log file will display some important information. One important thing to note is how many iterations of the diffusion operator will be required. This is a function of the length scale and the grid size; the number of iterations is chosen to be large enough to keep the system stable.
