Case configuration

WAM2layers uses case configuration files to store the settings for an experiment. This makes it possible to run various experiments without changing the model code. The configuration files are written in YAML format. When you run WAM2layers, these settings are loaded into a Config object. The options in your YAML file should correspond to the attributes of the Config class listed below.

class wam2layers.config.Config
model_config: ClassVar[ConfigDict] = {'validate_assignment': True}

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

filename_template: str

The filename pattern of the raw input data.

Used to find the input files during preprocessing. The pattern will be interpreted during execution of the model to find the input data for each date and variable.

For example, the following pattern:

filename_template: /ERA5data/{year}/{month:02}/ERA5_{year}-{month:02d}-{day:02d}{levtype}_{variable}.nc

will be converted to

/ERA5data/2021/07/ERA5_2021-07-15_ml_u.nc

for date 2021-07-15, variable u, and levtype “_ml” (note the leading underscore).
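The placeholder syntax follows Python's standard string formatting. As a rough sketch of how such a template could be expanded (the exact mechanism inside WAM2layers may differ):

```python
# Illustrative only: expand a filename_template for one date and variable
# using Python's str.format; WAM2layers' internal handling may differ.
template = (
    "/ERA5data/{year}/{month:02}/"
    "ERA5_{year}-{month:02d}-{day:02d}{levtype}_{variable}.nc"
)

# Fill in the placeholders for a single date/variable combination.
path = template.format(year=2021, month=7, day=15, levtype="_ml", variable="u")
print(path)  # /ERA5data/2021/07/ERA5_2021-07-15_ml_u.nc
```

The `{month:02}` and `{day:02d}` specifiers zero-pad to two digits, which is why July appears as `07` in the resolved path.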

preprocessed_data_folder: Path

Location where the pre-processed data should be stored.

If it does not exist, it will be created during pre-processing.

For example:

preprocessed_data_folder: ~/floodcase_202107/preprocessed_data
tracking_direction: Literal['forward', 'backward']

The tracking direction, either forward or backward.

You must specify whether forward or backward tracking should be performed.

For example:

tracking_direction: backward
tagging_region: Path | BoundingBox

Subdomain delimiting the source/sink regions for tagged moisture.

You can either specify the path of a netCDF file, or a bounding box of the form [west, south, east, north].

The bounding box should lie within [-180, -80, 180, 80]; if west > east, the coordinates will be rolled to retain a continuous longitude.

The file should exist. If it has a time dimension, the field nearest in time will be used as the tagging region, and that time should still fall between tagging_start_date and tagging_end_date.

For example:

tagging_region: /data/volume_2/era5_2021/tagging_region_global.nc
tagging_region: [0, 50, 10, 55]
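A bounding box with west > east wraps across the 180° meridian. As an illustrative sketch (a hypothetical helper, not WAM2layers' actual implementation) of how a grid point could be tested against such a box:

```python
def in_box(lon, lat, box):
    """Check whether (lon, lat) lies inside [west, south, east, north].

    If west > east, the box is assumed to wrap across the 180 degree
    meridian, matching the "rolled" longitude behaviour described above.
    Illustrative sketch only, not the library's implementation.
    """
    west, south, east, north = box
    if not (south <= lat <= north):
        return False
    if west <= east:
        return west <= lon <= east
    # Wrapped box: inside if east of `west` OR west of `east`.
    return lon >= west or lon <= east

print(in_box(5, 52, [0, 50, 10, 55]))        # True: inside the example box
print(in_box(179, 0, [170, -10, -170, 10]))  # True: box wraps the dateline
```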
tracking_domain: BoundingBox | None

Subdomain delimiting the region considered during tracking.

This is useful when you have global pre-processed data but you don’t need global tracking.

You can specify a bounding box of the form [west, south, east, north].

The bounding box should lie within [-180, -80, 180, 80]; if west > east, the coordinates will be rolled to retain a continuous longitude.

If it is set to null, the full domain of the preprocessed data will be used.

Note that you should always set periodic_boundary to false if you use a subdomain.

For example:

tracking_domain: [0, 50, 10, 55]
output_folder: Path

Location where output of tracking and analysis should be written.

For example:

output_folder: ~/floodcase_202107/output_data
preprocess_start_date: datetime

Start date for preprocessing.

Should be formatted as: “YYYY-MM-DD[T]HH:MM”. Start date < end date. The preprocess_start_date is included in the preprocessing.

For example:

preprocess_start_date: "2021-07-01T00:00"
preprocess_end_date: datetime

End date for preprocessing.

Should be formatted as: “YYYY-MM-DD[T]HH:MM”. Start date < end date. The preprocess_end_date is included in the preprocessing.

For example:

preprocess_end_date: "2021-07-15T23:00"
tracking_start_date: datetime

Start date for tracking.

Should be formatted as: “YYYY-MM-DD[T]HH:MM”. Start date < end date, even when tracking backward. When tracking backward, the tracking_start_date is not included as an output date.

For example:

tracking_start_date: "2021-07-01T00:00"
tracking_end_date: datetime

End date for tracking.

Should be formatted as: “YYYY-MM-DD[T]HH:MM”. Start date < end date, even if backtracking.

For example:

tracking_end_date: "2021-07-15T23:00"
tagging_start_date: datetime

Start date for tagging.

For tracking individual events (e.g. heavy precipitation), you can set the tagging start and end dates to a narrower window than the total tracking start and end dates; you can also indicate the hours that you want to track. The start date is included.

Should be formatted as: “YYYY-MM-DD[T]HH:MM”. Start date < end date, even if backtracking.

For example:

tagging_start_date: "2021-07-13T00:00"
tagging_end_date: datetime

End date for tagging.

For tracking individual events (e.g. heavy precipitation), you can set the tagging start and end dates to a narrower window than the total tracking start and end dates; you can also indicate the hours that you want to track. The end date is included.

Should be formatted as: “YYYY-MM-DD[T]HH:MM”. Start date < end date, even if backtracking.

For example:

tagging_end_date: "2021-07-14T23:00"
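The date constraints stated above can be sketched with plain datetime comparisons. This is a hypothetical helper, not part of the package, assuming (as implied above) that the tagging window must fall within the tracking window:

```python
from datetime import datetime

def check_windows(cfg):
    """Hypothetical sanity check mirroring the rules stated above:
    start < end for every window, and the tagging window is assumed
    to fall within the tracking window. Illustrative sketch only."""
    assert cfg["tracking_start_date"] < cfg["tracking_end_date"]
    assert cfg["tagging_start_date"] < cfg["tagging_end_date"]
    assert cfg["tracking_start_date"] <= cfg["tagging_start_date"]
    assert cfg["tagging_end_date"] <= cfg["tracking_end_date"]

# The example dates from this section satisfy all four constraints.
check_windows({
    "tracking_start_date": datetime.fromisoformat("2021-07-01T00:00"),
    "tracking_end_date": datetime.fromisoformat("2021-07-15T23:00"),
    "tagging_start_date": datetime.fromisoformat("2021-07-13T00:00"),
    "tagging_end_date": datetime.fromisoformat("2021-07-14T23:00"),
})
```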
model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_fields: ClassVar[dict[str, FieldInfo]] = {
    'filename_template': FieldInfo(annotation=str, required=True),
    'input_frequency': FieldInfo(annotation=str, required=True),
    'kvf': FieldInfo(annotation=float, required=True),
    'level_type': FieldInfo(annotation=Literal['model_levels', 'pressure_levels'], required=True),
    'levels': FieldInfo(annotation=Union[List[int], Literal['All']], required=True),
    'output_folder': FieldInfo(annotation=Path, required=True),
    'output_frequency': FieldInfo(annotation=str, required=True),
    'periodic_boundary': FieldInfo(annotation=bool, required=True),
    'preprocess_end_date': FieldInfo(annotation=datetime, required=True),
    'preprocess_start_date': FieldInfo(annotation=datetime, required=True),
    'preprocessed_data_folder': FieldInfo(annotation=Path, required=True),
    'restart': FieldInfo(annotation=bool, required=True),
    'tagging_end_date': FieldInfo(annotation=datetime, required=True),
    'tagging_region': FieldInfo(annotation=Union[Annotated[Path, PathType], BoundingBox], required=True, metadata=[AfterValidator(func=<function validate_region>)]),
    'tagging_start_date': FieldInfo(annotation=datetime, required=True),
    'timestep': FieldInfo(annotation=int, required=True),
    'tracking_direction': FieldInfo(annotation=Literal['forward', 'backward'], required=True),
    'tracking_domain': FieldInfo(annotation=Union[Annotated[BoundingBox, AfterValidator], NoneType], required=False, default=None),
    'tracking_end_date': FieldInfo(annotation=datetime, required=True),
    'tracking_start_date': FieldInfo(annotation=datetime, required=True),
}

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

input_frequency: str

Frequency of the raw input data.

Used to calculate water volumes.

For example:

input_frequency: '1h'
timestep: int

Timestep in seconds with which to perform the tracking.

The data will be interpolated during model execution. Too large a timestep will violate the CFL criterion; too small a timestep will lead to excessive numerical diffusion and slow progress. For best performance, the input frequency should be divisible by the timestep.

For example:

timestep: 600  # timestep in seconds
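The divisibility advice above can be checked with simple integer arithmetic. The helper below is hypothetical (not part of WAM2layers), and frequency parsing is simplified to seconds:

```python
def substeps_per_input(input_frequency_s: int, timestep_s: int) -> int:
    """Number of tracking timesteps per raw input interval.

    A clean division means the input frequency is divisible by the
    timestep, as recommended above. Hypothetical helper, with the
    input frequency already converted to seconds.
    """
    if input_frequency_s % timestep_s != 0:
        raise ValueError(
            f"input frequency ({input_frequency_s}s) is not divisible "
            f"by timestep ({timestep_s}s)"
        )
    return input_frequency_s // timestep_s

# '1h' of input data with a 600 s timestep -> 6 substeps per interval.
print(substeps_per_input(3600, 600))  # 6
```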
output_frequency: str

Frequency at which to write output to file.

For example, for daily output files:

output_frequency: '1d'
level_type: Literal['model_levels', 'pressure_levels']

Type of vertical levels in the raw input data.

Can be either model_levels or pressure_levels.

For example:

level_type: model_levels
levels: List[int] | Literal['All']

Which levels to use from the raw input data.

A list of integers corresponding to the levels in the input data, or a subset thereof. The shorthand “All” will attempt to use all 137 ERA5 levels.

For example:

levels: [20,40,60,80,90,95,100,105,110,115,120,123,125,128,130,131,132,133,134,135,136,137]
restart: bool

Whether to restart from a previous run.

If set to true, this will attempt to read the output from a previous model run and continue from there. The output from the previous timestep must be available for this to work.

For example:

restart: false
periodic_boundary: bool

Whether to use periodic boundaries in the zonal direction.

This should be used when working with global datasets.

For example:

periodic_boundary: true
kvf: float

Net-to-gross vertical flux multiplication parameter.

For example:

kvf: 3
classmethod from_yaml(config_file)

Read settings from a configuration.yaml file.

For example:

from wam2layers.config import Config
config = Config.from_yaml('../../cases/floodcase_2021.yaml')
check_date_order()

Verify that each configured start date precedes its corresponding end date.
to_file(fname: str | Path) None

Export the configuration to a file.

Note that any comments and formatting from an original YAML file are lost.