main_recipe

Classes:

Name Description
MainRecipe

Main class for DSCIM execution.

dscim.menu.main_recipe.MainRecipe

Bases: StackedDamages, ABC

Main class for DSCIM execution.

Parameters:

Name Type Description Default
discounting_type str

Choice of discounting: euler_gwr, euler_ramsey, constant, constant_model_collapsed, naive_ramsey, naive_gwr, gwr_gwr.

None
discrete_discounting bool

Discounting is discrete if True, else continuous (default is False).

False
fit_type str

Type of damage function estimation: 'ols' or 'quantreg'.

'ols'
weitzman_parameter list of float or None

If <= 1: the share of global consumption below which bottom coding is implemented. If > 1: the absolute dollar value of global consumption below which bottom coding is implemented. Default is [0.1, 0.5].

None
fair_aggregation list of str or None

How to value climate uncertainty from FAIR: ce, mean, gwr_mean, median, median_params. Default is ["ce", "mean", "gwr_mean", "median", "median_params"].

None
rho float

Pure rate of time preference parameter.

0.00461878399
fair_dims list of str or None

List of dimensions over which the FAIR CE/mean/median options should be collapsed. Default value is ["simulation"], but lists such as ["simulation", "rcp", "ssp"] can be passed. Note: If dimensions other than 'simulation' are passed, 'median_params' fair aggregation cannot be passed.

None
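
MainRecipe is abstract, so it is used through a concrete recipe subclass. The sketch below is hypothetical: `MyRecipe` stands in for a concrete subclass, and the paths and prepared `econ_vars`/`climate_vars` objects are placeholders, not part of this API.

    # Hypothetical construction sketch; MyRecipe is a stand-in for a concrete recipe subclass.
    menu = MyRecipe(
        econ_vars=econ_vars,        # prepared economic variables (placeholder)
        climate_vars=climate_vars,  # prepared climate variables (placeholder)
        sector="coastal",           # placeholder sector name
        formula="damages ~ -1 + gmsl + np.power(gmsl, 2)",  # must be one of MainRecipe.FORMULAS
        discounting_type="euler_ramsey",  # must be one of MainRecipe.DISCOUNT_TYPES
        save_path="/tmp/dscim_results",   # placeholder output directory
    )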

Methods:

Name Description
calculate_discount_factors

Calculates the stream of discount factors based on the Euler equation that defines an optimal intertemporal consumption allocation.

calculate_scc

Calculate range of FAIR-aggregated SCCs

calculate_stream_discount_factors

Stream of discount factors

calculated_damages

Calculate damages (difference between CEs) for collapsing

ce

Rechunk data appropriately and apply the certainty equivalence

ce_cc_calculation

Calculate CE damages depending on discount type

ce_no_cc_calculation

Calculate GDP CE depending on discount type.

collapsed_pop

Collapse population according to discount type.

damage_function

Calls damage function calculation method.

damage_function_calculation

The damage function model fit may be: (1) SSP specific, (2) SSP-model specific, or (3) unique across SSP-model.

damage_function_points

Global damages by RCP/GCM or SLR

discounted_damages

Discount marginal damages. Distinguishes between the constant discount rate method and non-constant discount rates.

full_uncertainty_iqr

Calculate the distribution of quantile-weighted SCCs produced from quantile regressions.

global_consumption

Global consumption without climate change

global_consumption_calculation

Calculation of global consumption without climate change

global_consumption_no_pulse

Global consumption under FAIR control scenario.

global_consumption_per_capita

Global consumption per capita

global_consumption_pulse

Global consumption under FAIR pulse scenario.

global_damages_calculation

Calculate global collapsed damages for a desired discount type

marginal_damages

Marginal damages due to additional pulse

order_plate

Execute menu option section and save results

order_scc

Execute menu option section and save results

stat_uncertainty_iqr

Calculate the distribution of quantile-weighted SCCs produced from quantile regressions that have already been collapsed across other dimensions to give statistical-only uncertainty.

uncollapsed_sccs

Calculate full distribution of SCCs without FAIR aggregation

weitzman_min

Implements bottom coding that fixes marginal utility below a threshold to the marginal utility at that threshold.

Attributes:

Name Type Description
ce_cc

Certainty equivalent of consumption with climate change damages

ce_fair_no_pulse

Certainty equivalent of global consumption under FAIR control scenario

ce_fair_pulse

Certainty equivalent of global consumption under FAIR pulse scenario

ce_no_cc

Certainty equivalent of consumption without climate change damages

damage_function_coefficients Dataset

Load damage function coefficients if the coefficients are provided by the user. Otherwise, compute them.

damage_function_fit Dataset

Load fitted damage function if the fit is provided by the user. Otherwise, compute it.

gmsl_max

This function finds the GMSL value at which the damage function reaches its local maximum along the GMSL dimension.

median_params_marginal_damages

Calculate marginal damages due to a pulse using a FAIR simulation calculated with the median climate parameters.

output_attrs

Return dict with class attributes for output metadata
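
Once a concrete recipe is instantiated (see the hypothetical construction sketch above), `order_plate` and `order_scc` are the main entry points. A minimal usage sketch, reusing the placeholder `menu` object:

    # fit and save all damage function elements to save_path
    menu.order_plate("damage_function")

    # run the full SCC calculation; intermediate elements are also saved
    sccs = menu.order_scc()  # returns an xr.Dataset of SCCs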

Source code in src/dscim/menu/main_recipe.py
class MainRecipe(StackedDamages, ABC):
    """Main class for DSCIM execution.

    Parameters
    ----------
    discounting_type : str
        Choice of discounting: ``euler_gwr``, ``euler_ramsey``, ``constant``,
        ``constant_model_collapsed``, ``naive_ramsey``, ``naive_gwr``, ``gwr_gwr``.
    discrete_discounting : bool
        Discounting is discrete if ``True``, else continuous (default is ``False``).
    fit_type : str
        Type of damage function estimation: ``'ols'`` or ``'quantreg'``.
    weitzman_parameter : list of float or None, optional
        If <= 1: the share of global consumption below which bottom coding is
        implemented. If > 1: the absolute dollar value of global consumption
        below which bottom coding is implemented. Default is ``[0.1, 0.5]``.
    fair_aggregation : list of str or None, optional
        How to value climate uncertainty from FAIR: ``ce``, ``mean``,
        ``gwr_mean``, ``median``, ``median_params``. Default is
        ``["ce", "mean", "gwr_mean", "median", "median_params"]``.
    rho : float
        Pure rate of time preference parameter.
    fair_dims : list of str or None, optional
        List of dimensions over which the FAIR CE/mean/median options should be
        collapsed. Default value is ``["simulation"]``, but lists such as
        ``["simulation", "rcp", "ssp"]`` can be passed. Note: If dimensions
        other than 'simulation' are passed, 'median_params' fair aggregation
        cannot be passed.
    """

    NAME = ""
    CONST_DISC_RATES = [0.01, 0.015, 0.02, 0.025, 0.03, 0.05]
    DISCOUNT_TYPES = [
        "constant",
        "constant_model_collapsed",
        "naive_ramsey",
        "euler_ramsey",
        "naive_gwr",
        "gwr_gwr",
        "euler_gwr",
    ]
    FORMULAS = [
        "damages ~ -1 + np.power(anomaly, 2)",
        "damages ~ gmsl + np.power(gmsl, 2)",
        "damages ~ -1 + gmsl + np.power(gmsl, 2)",
        "damages ~ -1 + gmsl",
        "damages ~ anomaly + np.power(anomaly, 2)",
        "damages ~ -1 + anomaly + np.power(anomaly, 2)",
        "damages ~ -1 + gmsl + anomaly + np.power(anomaly, 2)",
        "damages ~ -1 + anomaly + np.power(anomaly, 2) + gmsl + np.power(gmsl, 2)",
        "damages ~ -1 + anomaly * gmsl + anomaly * np.power(gmsl, 2) + gmsl * np.power(anomaly, 2) + np.power(anomaly, 2) * np.power(gmsl, 2)",
        "damages ~ anomaly + np.power(anomaly, 2) + gmsl + np.power(gmsl, 2)",
        "damages ~ -1 + anomaly:gmsl + anomaly:np.power(gmsl, 2) + gmsl:np.power(anomaly, 2) + np.power(anomaly, 2):np.power(gmsl, 2)",
        "damages ~ -1 + gmsl:anomaly + gmsl:np.power(anomaly, 2)",
    ]

    def __init__(
        self,
        econ_vars,
        climate_vars,
        sector,
        formula,
        sector_path=None,
        save_path=None,
        rho=0.00461878399,
        eta=1.421158116,
        fit_type="ols",
        discounting_type=None,
        ext_method="global_c_ratio",
        ext_subset_start_year=2085,
        ext_subset_end_year=2099,
        ext_end_year=2300,
        subset_dict=None,
        ce_path=None,
        damage_function_path=None,
        clip_gmsl=False,
        gdppc_bottom_code=39.39265060424805,
        scc_quantiles=None,
        scenario_dimensions=None,
        weitzman_parameter=None,
        fair_aggregation=None,
        filename_suffix="",
        discrete_discounting=False,
        quantreg_quantiles=None,
        quantreg_weights=None,
        full_uncertainty_quantiles=None,
        extrap_formula=None,
        fair_dims=None,
        save_files=None,
        **kwargs,
    ):
        if scc_quantiles is None:
            scc_quantiles = [0.05, 0.17, 0.25, 0.5, 0.75, 0.83, 0.95]

        if weitzman_parameter is None:
            weitzman_parameter = [0.1, 0.5]

        if fair_aggregation is None:
            fair_aggregation = ["ce", "mean", "gwr_mean", "median", "median_params"]

        if quantreg_quantiles is None:
            quantreg_quantiles = [
                0.05,
                0.1,
                0.15,
                0.2,
                0.25,
                0.3,
                0.35,
                0.4,
                0.45,
                0.5,
                0.55,
                0.6,
                0.65,
                0.7,
                0.75,
                0.8,
                0.85,
                0.9,
                0.95,
            ]

        if quantreg_weights is None:
            quantreg_weights = [
                0.075,
                0.05,
                0.05,
                0.05,
                0.05,
                0.05,
                0.05,
                0.05,
                0.05,
                0.05,
                0.05,
                0.05,
                0.05,
                0.05,
                0.05,
                0.05,
                0.05,
                0.05,
                0.075,
            ]

        if full_uncertainty_quantiles is None:
            full_uncertainty_quantiles = [
                0.01,
                0.05,
                0.17,
                0.25,
                0.5,
                0.75,
                0.83,
                0.95,
                0.99,
            ]

        if fair_dims is None:
            fair_dims = ["simulation"]

        if save_files is None:
            save_files = [
                "damage_function_points",
                "damage_function_coefficients",
                "damage_function_fit",
                "marginal_damages",
                "discount_factors",
                "uncollapsed_sccs",
                "scc",
                "uncollapsed_discount_factors",
                "uncollapsed_marginal_damages",
                "global_consumption",
                "global_consumption_no_pulse",
            ]

        super().__init__(
            sector_path=sector_path,
            save_path=save_path,
            econ_vars=econ_vars,
            climate_vars=climate_vars,
            gdppc_bottom_code=gdppc_bottom_code,
            eta=eta,
            subset_dict=subset_dict,
            ce_path=ce_path,
        )

        self.rho = rho
        self.eta = eta
        self.fit_type = fit_type
        self.fair_aggregation = fair_aggregation
        self.filename_suffix = filename_suffix
        self.weitzman_parameter = weitzman_parameter
        self.discrete_discounting = discrete_discounting
        self.discounting_type = discounting_type
        self.sector = sector
        self.save_path = save_path
        self.damage_function_path = damage_function_path
        self.ext_subset_start_year = ext_subset_start_year
        self.ext_subset_end_year = ext_subset_end_year
        self.ext_end_year = ext_end_year
        self.ext_method = ext_method
        self.clip_gmsl = clip_gmsl
        self.scenario_dimensions = scenario_dimensions
        self.scc_quantiles = scc_quantiles
        self.quantreg_quantiles = quantreg_quantiles
        self.quantreg_weights = quantreg_weights
        self.full_uncertainty_quantiles = full_uncertainty_quantiles
        self.formula = formula
        self.ce_path = ce_path
        self.extrap_formula = extrap_formula
        self.fair_dims = fair_dims
        self.save_files = save_files
        self.__dict__.update(**kwargs)
        self.kwargs = kwargs

        self.logger = logging.getLogger(__name__)

        if self.quantreg_quantiles is not None:
            assert len(self.quantreg_quantiles) == len(
                self.quantreg_weights
            ), "Length of quantreg quantiles does not match length of weights."

        assert (
            self.discounting_type in self.DISCOUNT_TYPES
        ), f"Discount type not implemented. Try one of {self.DISCOUNT_TYPES}."

        assert (
            self.formula in self.FORMULAS
        ), f"Formula not implemented. Try one of {self.FORMULAS}."

        # Set stream of discounts to None if discounting_type is 'constant'
        # 'constant_model_collapsed' should be here except that we allow
        # for a collapsed-model Ramsey rate to be calculated (for labour
        # and energy purposes)
        if self.discounting_type in ["constant", "constant_model_collapsed"]:
            self.stream_discount_factors = None

        # assert formulas for which clip_gmsl is implemented
        if self.clip_gmsl:
            assert self.formula in [
                "damages ~ -1 + anomaly + np.power(anomaly, 2) + gmsl + np.power(gmsl, 2)",
                "damages ~ -1 + gmsl + np.power(gmsl, 2)",
            ]

    def __repr__(self):
        return f"""
        Running {self.NAME}
        sector: {self.sector}
        discounting: {self.discounting_type}
        eta: {self.eta}
        rho: {self.rho}
        """

    def order_plate(self, course):
        """
        Execute menu option section and save results

        This method is an entry point to the class and allows the user to
        calculate different elements of a specific menu option. These elements
        will automatically be saved in the path defined in `save_path`.

        Parameters
        ----------
        course : str
            Output to be calculated. Options are:
                - `damage_function`: Return and save all damage function
                  elements including damage function points, coefficients, and
                  fitted values.
                - `scc`: Return Social Cost of Carbon calculation. All elements
                  from `damage_function` are saved and returned.

        Returns
        -------
        None. Saves all elements to `save_path`.

        """

        self.logger.info(f"\n Executing {self.__repr__()}")

        def damage_function():
            self.logger.info("Processing damage functions ...")
            if self.damage_function_path is None:
                self.logger.info(
                    "Existing damage functions not found. Damage points will be loaded."
                )
                self.damage_function_points
            self.damage_function_coefficients
            try:
                self.damage_function_fit
            except FileNotFoundError:
                pass

        def scc():
            damage_function()
            self.global_consumption
            self.global_consumption_no_pulse
            self.logger.info("Processing SCC calculation ...")
            if self.fit_type == "quantreg":
                self.full_uncertainty_iqr
                self.calculate_scc
                self.stat_uncertainty_iqr
            else:
                if len(self.fair_aggregation) > 0:
                    self.stream_discount_factors
                    self.calculate_scc
                self.uncollapsed_sccs
                self.uncollapsed_marginal_damages
                self.uncollapsed_discount_factors

        course_dict = {"damage_function": damage_function, "scc": scc}

        try:
            course_dict[course]()
            self.logger.info(f"Results available: {self.save_path}")
        except KeyError as e:
            self.logger.error(f"{course} is not a valid option: {e}")
            raise e
        except Exception as e:
            self.logger.error("Error detected.")
            raise e

        return None

    def order_scc(self):
        """
        Execute menu option section and save results

        This method is a wrapper around `order_plate` that calls the "scc" course,
        which is the Social Cost of Carbon calculation. Elements involved in the calculation
        (`fair` and `damage_function`) will automatically be saved in the path
        defined in `save_path`.

        Parameters
        ----------
        None

        Returns
        -------
        xr.Dataset of SCCs

        """

        self.logger.info(f"\n Executing {self.__repr__()}")

        try:
            sccds = self.calculate_scc
            self.logger.info(f"Results available: {self.save_path}")
        except Exception as e:
            self.logger.error("Error detected.")
            raise e

        if ("rcp45" in sccds.rcp) or ("rcp85" in sccds.rcp):
            # leave the dataset alone if there are already rcp scenario names
            pass
        else:
            # rename the CMIP6 scenario names that start with "ssp*"
            sccds = sccds.sortby(sccds.rcp)

            rcpdt = {
                "ssp126": "RCP2.6",
                "ssp245": "RCP4.5",
                "ssp370": "RCP7.0",
                "ssp460": "RCP6.0",
                "ssp585": "RCP8.5",
            }
            rlst = []
            for rcp in sccds.rcp.values:
                rlst.append(rcpdt[rcp])
            sccds.coords["rcp"] = rlst
            sccds = sccds.sortby(sccds.rcp)

        return sccds.squeeze(drop=True)

    @property
    def output_attrs(self):
        """Return dict with class attributes for output metadata

        Returns
        -------
            A dict of class metadata.
        """

        # find machine name
        machine_name = os.getenv("HOSTNAME")
        if machine_name is None:
            try:
                machine_name = os.uname()[1]
            except AttributeError:
                machine_name = "unknown"

        # find git commit hash
        try:
            label = subprocess.check_output(["git", "describe", "--always"]).strip()
        except CalledProcessError:
            label = "unknown"

        meta = {}
        for attr_dict in [
            vars(self),
            vars(vars(self)["climate"]),
            vars(vars(self)["econ_vars"]),
        ]:
            meta.update(
                {
                    k: v
                    for k, v in attr_dict.items()
                    if (type(v) not in [xr.DataArray, xr.Dataset, pd.DataFrame])
                    and k not in ["damage_function", "logger"]
                }
            )

        # update with git hash and machine name
        meta.update(dict(machine=machine_name, commit=label))

        # convert to strs
        meta = {k: v if type(v) in [int, float] else str(v) for k, v in meta.items()}

        return meta

    @cachedproperty
    def collapsed_pop(self):
        """Collapse population according to discount type."""
        if (self.discounting_type == "constant") or ("ramsey" in self.discounting_type):
            pop = self.pop
        elif self.discounting_type == "constant_model_collapsed":
            pop = self.pop.mean("model")
        elif "gwr" in self.discounting_type:
            pop = self.pop.mean(["model", "ssp"])
        return pop

    @abstractmethod
    def ce_cc_calculation(self):
        """Calculate CE damages depending on discount type"""

    @abstractmethod
    def ce_no_cc_calculation(self):
        """Calculate GDP CE depending on discount type."""

    @abstractmethod
    def calculated_damages(self):
        """Calculate damages (difference between CEs) for collapsing"""

    @abstractmethod
    def global_damages_calculation(self):
        """Calculate global collapsed damages for a desired discount type"""

    @property
    def ce_cc(self):
        """Certainty equivalent of consumption with climate change damages"""
        return self.ce_cc_calculation()

    @property
    def ce_no_cc(self):
        """Certainty equivalent of consumption without climate change damages"""
        return self.ce_no_cc_calculation()

    @cachedproperty
    @save(name="damage_function_points")
    def damage_function_points(self) -> pd.DataFrame:
        """Global damages by RCP/GCM or SLR

        Returns
        -------
            pd.DataFrame
        """
        df = self.global_damages_calculation()

        if "slr" in df.columns:
            df = df.merge(self.climate.gmsl, on=["year", "slr"])
        if "gcm" in df.columns:
            df = df.merge(self.climate.gmst, on=["year", "gcm", "rcp"])

        # removing illegal combinations from estimation
        if any([i in df.ssp.unique() for i in ["SSP1", "SSP5"]]):
            self.logger.info("Dropping illegal model combinations.")
            for var in [i for i in df.columns if i in ["anomaly", "gmsl"]]:
                df.loc[
                    ((df.ssp == "SSP1") & (df.rcp == "rcp85"))
                    | ((df.ssp == "SSP5") & (df.rcp == "rcp45")),
                    var,
                ] = np.nan

        # agriculture lacks ACCESS0-1/rcp85 combo
        if "agriculture" in self.sector:
            self.logger.info("Dropping illegal model combinations for agriculture.")
            df.loc[(df.gcm == "ACCESS1-0") & (df.rcp == "rcp85"), "anomaly"] = np.nan

        return df

    def damage_function_calculation(self, damage_function_points, global_consumption):
        """The damage function model fit may be : (1) ssp specific, (2) ssp-model specific, (3) unique across ssp-model.
        This depends on the type of discounting. In each case the input data passed to the fitting functions and the formatting of the returned
        output is different because dimensions are different. This function handles this and returns the model fit.

        Returns
        ------
        dict with two xr.Datasets, 'params' (model fit) and 'preds' (predictions from model fit), with dimensions depending
        on self.discounting_type.
        """

        yrs = range(self.climate.pulse_year, self.ext_subset_end_year + 1)

        params_list, preds_list = [], []

        if self.discounting_type == "constant_model_collapsed":
            for ssp in damage_function_points["ssp"].unique():
                # Subset dataframe to specific SSP
                fit_subset = damage_function_points[
                    damage_function_points["ssp"] == ssp
                ]

                global_c_subset = global_consumption.sel({"ssp": ssp})
                # Fit damage function curves using the data subset
                damage_function = model_outputs(
                    damage_function=fit_subset,
                    formula=self.formula,
                    type_estimation=self.fit_type,
                    global_c=global_c_subset,
                    extrapolation_type=self.ext_method,
                    quantiles=self.quantreg_quantiles,
                    year_range=yrs,
                    year_start_pred=self.ext_subset_end_year + 1,
                )

                # Add variables
                params = damage_function["parameters"].expand_dims(
                    dict(
                        discount_type=[self.discounting_type],
                        ssp=[ssp],
                        model=[str(list(self.gdp.model.values))],
                    )
                )

                preds = damage_function["preds"].expand_dims(
                    dict(
                        discount_type=[self.discounting_type],
                        ssp=[ssp],
                        model=[str(list(self.gdp.model.values))],
                    )
                )

                params_list.append(params)
                preds_list.append(preds)

        elif (self.discounting_type == "constant") or (
            "ramsey" in self.discounting_type
        ):
            for ssp, model in list(
                product(
                    damage_function_points.ssp.unique(),
                    damage_function_points.model.unique(),
                )
            ):
                # Subset dataframe to specific SSP-IAM combination.
                fit_subset = damage_function_points[
                    (damage_function_points["ssp"] == ssp)
                    & (damage_function_points["model"] == model)
                ]

                global_c_subset = global_consumption.sel({"ssp": ssp, "model": model})

                # Fit damage function curves using the data subset
                damage_function = model_outputs(
                    damage_function=fit_subset,
                    formula=self.formula,
                    type_estimation=self.fit_type,
                    global_c=global_c_subset,
                    extrapolation_type=self.ext_method,
                    quantiles=self.quantreg_quantiles,
                    year_range=yrs,
                    year_start_pred=self.ext_subset_end_year + 1,
                )

                # Add variables
                params = damage_function["parameters"].expand_dims(
                    dict(
                        discount_type=[self.discounting_type], ssp=[ssp], model=[model]
                    )
                )

                preds = damage_function["preds"].expand_dims(
                    dict(
                        discount_type=[self.discounting_type], ssp=[ssp], model=[model]
                    )
                )

                params_list.append(params)
                preds_list.append(preds)

        elif "gwr" in self.discounting_type:
            # Fit damage function across all SSP-IAM combinations, as expected
            # from the Weitzman-Ramsey discounting
            fit_subset = damage_function_points

            # Fit damage function curves using the data subset
            damage_function = model_outputs(
                damage_function=fit_subset,
                type_estimation=self.fit_type,
                formula=self.formula,
                global_c=global_consumption,
                extrapolation_type=self.ext_method,
                quantiles=self.quantreg_quantiles,
                year_range=yrs,
                year_start_pred=self.ext_subset_end_year + 1,
            )

            # Add variables
            params = damage_function["parameters"].expand_dims(
                dict(
                    discount_type=[self.discounting_type],
                    ssp=[str(list(self.gdp.ssp.values))],
                    model=[str(list(self.gdp.model.values))],
                )
            )

            preds = damage_function["preds"].expand_dims(
                dict(
                    discount_type=[self.discounting_type],
                    ssp=[str(list(self.gdp.ssp.values))],
                    model=[str(list(self.gdp.model.values))],
                )
            )

            params_list.append(params)
            preds_list.append(preds)

        return dict(
            params=xr.combine_by_coords(params_list),
            preds=xr.combine_by_coords(preds_list),
        )

    @cachedproperty
    def damage_function(self):
        """Calls damage function calculation method.

        This function calls the damage function calculation in
        model_outputs(). It calculates a damage function for each
        passed `scenario_dimension` based on subsets of
        self.damage_function_points and extrapolates this function
        using the specified method for all years after `ext_subset_end_year`.

        Returns
        -------
        dict
            dict['params'] is a dataframe of betas for each year
            dict['preds'] is a dataframe of predicted y hat for each
                year and anomaly
        """
        if self.scenario_dimensions is None:
            # this only occurs for global discounting
            # with a single scenario passed
            damage_function = self.damage_function_calculation(
                damage_function_points=self.damage_function_points,
                global_consumption=self.global_consumption,
            )

        else:
            # cycle through the different combinations of the scenario dims
            subset = self.damage_function_points.groupby(self.scenario_dimensions)
            damage_function, dict_list = {}, []

            for name, dt in subset:
                # turn single-dim into a tuple to make indexing easier later
                if len(self.scenario_dimensions) == 1:
                    name = tuple([name])

                df = self.damage_function_calculation(
                    damage_function_points=dt,
                    global_consumption=self.global_consumption,
                )
                # assigning dimensions to each dataArray in the dictionary
                for key in df.keys():
                    df[key] = df[key].expand_dims(
                        {
                            var: [val]
                            for var, val in zip(self.scenario_dimensions, list(name))
                        }
                    )
                dict_list.append(df)

            # concatenate different scenarios into one big dataArray
            damage_function["params"] = xr.combine_by_coords(
                [x["params"] for x in dict_list]
            )

            damage_function["preds"] = xr.combine_by_coords(
                [x["preds"] for x in dict_list]
            )

        return damage_function

    @property
    @save(name="damage_function_coefficients")
    def damage_function_coefficients(self) -> xr.Dataset:
        """
        Load damage function coefficients if the coefficients are provided by the user.
        Otherwise, compute them.
        """
        if self.damage_function_path is not None:
            return xr.open_dataset(
                f"{self.damage_function_path}/{self.NAME}_{self.discounting_type}_eta{self.eta}_rho{self.rho}_damage_function_coefficients.nc4"
            )
        else:
            return self.damage_function["params"]

    @property
    @save(name="damage_function_fit")
    def damage_function_fit(self) -> xr.Dataset:
        """
        Load fitted damage function if the fit is provided by the user.
        Otherwise, compute them.
        """
        if self.damage_function_path is not None:
            return xr.open_dataset(
                f"{self.damage_function_path}/{self.NAME}_{self.discounting_type}_eta{self.eta}_rho{self.rho}_damage_function_fit.nc4"
            )
        else:
            return self.damage_function["preds"]

    @property
    def median_params_marginal_damages(self):
        """Calculate marginal damages due to a pulse using a FAIR simulation
        calculated with the median climate parameters.
        """

        fair_control = self.climate.fair_median_params_control
        fair_pulse = self.climate.fair_median_params_pulse

        if self.clip_gmsl:
            fair_control["gmsl"] = np.minimum(fair_control["gmsl"], self.gmsl_max)
            fair_pulse["gmsl"] = np.minimum(fair_pulse["gmsl"], self.gmsl_max)

        damages_pulse = compute_damages(
            fair_pulse,
            betas=self.damage_function_coefficients,
            formula=self.formula,
        )

        damages_no_pulse = compute_damages(
            fair_control,
            betas=self.damage_function_coefficients,
            formula=self.formula,
        )

        median_params_marginal_damages = damages_pulse - damages_no_pulse

        # collapse further if further collapsing dimensions are provided
        if len(self.fair_dims) > 1:
            median_params_marginal_damages = median_params_marginal_damages.mean(
                [i for i in self.fair_dims if i in median_params_marginal_damages.dims]
            )

        # add a Weitzman parameter dimension so this dataset can be concatenated
        # with the other FAIR aggregation results
        median_params_marginal_damages = median_params_marginal_damages.expand_dims(
            {"weitzman_parameter": [str(i) for i in self.weitzman_parameter]}
        )

        return median_params_marginal_damages

    @abstractmethod
    def global_consumption_calculation(self, disc_type):
        """Calculation of global consumption without climate change

        Returns
        -------
            xr.DataArray
        """

    def global_consumption_per_capita(self, disc_type):
        """Global consumption per capita

        Returns
        -------
            xr.DataArray
        """

        # Calculate global consumption per capita
        array_pc = self.global_consumption_calculation(
            disc_type
        ) / self.collapsed_pop.sum("region")

        if self.NAME == "equity":
            # equity recipe's growth is capped to
            # risk aversion recipe's growth rates
            extrapolated = extrapolate(
                xr_array=array_pc,
                start_year=self.ext_subset_start_year,
                end_year=self.ext_subset_end_year,
                interp_year=self.ext_end_year,
                method="growth_constant",
                cap=self.risk_aversion_growth_rates(),
            )

        else:
            extrapolated = extrapolate(
                xr_array=array_pc,
                start_year=self.ext_subset_start_year,
                end_year=self.ext_subset_end_year,
                interp_year=self.ext_end_year,
                method="growth_constant",
            )

        complete_array = xr.concat([array_pc, extrapolated], dim="year")

        return complete_array

    @cachedproperty
    @save(name="global_consumption")
    def global_consumption(self):
        """Global consumption without climate change"""

        # rff simulation means that GDP already exists out to 2300
        if 2300 in self.gdp.year:
            self.logger.debug("Global consumption found up to 2300.")
            global_cons = self.gdp.sum("region").rename("global_consumption")
        else:
            self.logger.info("Extrapolating global consumption.")

            # holding population constant
            # from 2100 to 2300 with 2099 values
            pop = self.collapsed_pop.sum("region")
            pop = pop.reindex(
                year=range(pop.year.min().values, self.ext_end_year + 1),
                method="ffill",
            )

            # Calculate global consumption by multiplying extrapolated
            # per capita consumption back up by population
            global_cons = (
                self.global_consumption_per_capita(self.discounting_type) * pop
            )

        # Add dimension
        # @TODO: remove this line altogether
        global_cons = global_cons.expand_dims(
            {"discount_type": [self.discounting_type]}
        )

        return global_cons

    def weitzman_min(self, no_cc_consumption, cc_consumption, parameter):
        """
        Implements bottom coding that fixes marginal utility below a threshold
        to the marginal utility at that threshold. The threshold is the Weitzman
        parameter.

        Parameters
        ----------
        no_cc_consumption : xr.DataArray
            Consumption array of which the share will be used to calculate
            the absolute Weitzman parameter, only if parameter <= 1.
        cc_consumption : xr.DataArray
            Consumption array to be bottom-coded.
        parameter : float
            A positive number representing the Weitzman parameter, below which
            marginal utility will be top coded; i.e., 0.01 implies marginal
            utility is top coded to the value of marginal utility at 1% of
            no-climate-change global consumption. If parameter > 1, it is
            assumed to be an absolute value. If parameter <= 1, it is assumed
            to be a share of future global consumption (without climate change).

        Returns
        -------
            xr.Dataset

        """
        # if parameter is share of consumption,
        # multiply by no-climate-change consumption
        if parameter <= 1:
            parameter = parameter * no_cc_consumption

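        # Below the threshold w (= parameter), utility is linearized at the
        # threshold's marginal utility, u(c~) = u(w) + u'(w) * (c - w), then
        # inverted to recover the bottom-coded consumption c~. The eta == 1
        # branch uses log utility; the general branch uses CRRA utility.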
        if self.eta == 1:
            w_utility = np.log(parameter)
            bottom_utility = np.power(parameter, -1) * (parameter - cc_consumption)
            bottom_coded_cons = np.exp(w_utility - bottom_utility)

            clipped_cons = xr.where(
                cc_consumption > parameter, cc_consumption, bottom_coded_cons
            )
        else:
            w_utility = np.power(parameter, (1 - self.eta)) / (1 - self.eta)
            bottom_utility = np.power(parameter, -self.eta) * (
                parameter - cc_consumption
            )
            bottom_coded_cons = power(
                ((1 - self.eta) * (w_utility - bottom_utility)), (1 / (1 - self.eta))
            )

            clipped_cons = xr.where(
                cc_consumption > parameter, cc_consumption, bottom_coded_cons
            )

        return clipped_cons

    @property
    def gmsl_max(self):
        """
        This function finds the GMSL value at which the damage function
        reaches its local maximum along the GMSL dimension.

        Returns
        -------
        xr.DataArray
            the array of GMSL values at which the local maximum is located, for all
            years, ssps, models, and if applicable, values of GMST
        """

        if self.formula in [
            "damages ~ -1 + anomaly + np.power(anomaly, 2) + gmsl + np.power(gmsl, 2)",
            "damages ~ -1 + gmsl + np.power(gmsl, 2)",
        ]:
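            # The quadratic damage term b*gmsl + c*np.power(gmsl, 2) peaks at
            # the vertex gmsl = -b / (2 * c) when c < 0; that vertex is the
            # local maximum found below.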
            gmsl_max = -self.damage_function_coefficients["gmsl"] / (
                2 * self.damage_function_coefficients["np.power(gmsl, 2)"]
            )

            # if the damage function curves up, there is no local max.
            # Thus there will be no linearization.
            gmsl_max = xr.where(
                self.damage_function_coefficients["np.power(gmsl, 2)"] > 0,
                np.inf,
                gmsl_max,
            )

            # confirm that all max values are positive, which is expected
            # for our damage functions
            assert len(gmsl_max.where(gmsl_max < 0, drop=True)) == 0

        else:
            raise NotImplementedError(
                f"The first derivative w.r.t. gmsl for {self.formula} has not been implemented in the menu."
            )

        return gmsl_max

    @cachedproperty
    @save(name="global_consumption_no_pulse")
    def global_consumption_no_pulse(self):
        """Global consumption under FAIR control scenario."""

        fair_control = self.climate.fair_control

        if self.clip_gmsl:
            fair_control["gmsl"] = np.minimum(fair_control["gmsl"], self.gmsl_max)

        damages = compute_damages(
            fair_control,
            betas=self.damage_function_coefficients,
            formula=self.formula,
        )

        cc_cons = self.global_consumption - damages

        gc_no_pulse = []
        for wp in self.weitzman_parameter:
            gc = self.weitzman_min(
                no_cc_consumption=self.global_consumption,
                cc_consumption=cc_cons,
                parameter=wp,
            )
            gc = gc.assign_coords({"weitzman_parameter": str(wp)})

            gc_no_pulse.append(gc)

        return xr.concat(gc_no_pulse, dim="weitzman_parameter")

    @cachedproperty
    @save(name="global_consumption_pulse")
    def global_consumption_pulse(self):
        """Global consumption under FAIR pulse scenario."""

        fair_pulse = self.climate.fair_pulse

        if self.clip_gmsl:
            fair_pulse["gmsl"] = np.minimum(fair_pulse["gmsl"], self.gmsl_max)

        damages = compute_damages(
            fair_pulse,
            betas=self.damage_function_coefficients,
            formula=self.formula,
        )

        cc_cons = self.global_consumption - damages
        gc_no_pulse = []
        for wp in self.weitzman_parameter:
            gc = self.weitzman_min(
                no_cc_consumption=self.global_consumption,
                cc_consumption=cc_cons,
                parameter=wp,
            )
            gc = gc.assign_coords({"weitzman_parameter": str(wp)})
            gc_no_pulse.append(gc)
        return xr.concat(gc_no_pulse, dim="weitzman_parameter")

    @property
    @save(name="ce_fair_pulse")
    def ce_fair_pulse(self):
        """Certainty equivalent of global consumption under FAIR pulse scenario"""
        ce_array = self.ce(self.global_consumption_pulse, dims=self.fair_dims)

        return ce_array.rename("ce_fair_pulse")

    @property
    @save(name="ce_fair_no_pulse")
    def ce_fair_no_pulse(self):
        """Certainty equivalent of global consumption under FAIR control scenario"""

        ce_array = self.ce(self.global_consumption_no_pulse, dims=self.fair_dims)

        return ce_array.rename("ce_fair_no_pulse")

    @cachedproperty
    @save(name="marginal_damages")
    def marginal_damages(self):
        """Marginal damages due to additional pulse"""

        marginal_damages = []

        for agg in [i for i in self.fair_aggregation if i != "median"]:
            if agg == "ce":
                md = self.ce_fair_no_pulse - self.ce_fair_pulse
            elif agg in ["mean", "gwr_mean"]:
                md = self.global_consumption_no_pulse.mean(
                    self.fair_dims
                ) - self.global_consumption_pulse.mean(self.fair_dims)
            elif agg == "median_params":
                md = self.median_params_marginal_damages
            else:
                raise NotImplementedError(
                    f"{agg} is not available. Enter a list including "
                    '["ce", "mean", "gwr_mean", "median", "median_params"]'
                )

            md = md.assign_coords({"fair_aggregation": agg}).expand_dims(
                "fair_aggregation"
            )

            # convert to the marginal damages from a single tonne
            md = md * self.climate.conversion
            marginal_damages.append(md)

        return xr.concat(marginal_damages, dim="fair_aggregation")

    @cachedproperty
    @save(name="scc")
    def calculate_scc(self):
        """Calculate range of FAIR-aggregated SCCs"""

        discounted_damages = self.discounted_damages(
            damages=self.marginal_damages, discrate=self.discounting_type
        )
        discounted_damages = discounted_damages.sum(dim="year").rename("scc")

        if "median" in self.fair_aggregation:
            median = self.uncollapsed_sccs.median(self.fair_dims)
            median["fair_aggregation"] = ["median"]
            discounted_damages = xr.merge([median.rename("scc"), discounted_damages])

        return discounted_damages

    def discounted_damages(self, damages, discrate):
        """Discount marginal damages. Distinguishes between constant discount rates method and non-constant discount rates.

        Parameters
        ----------
        damages : xr.DataArray or xr.Dataset
            Array of damages with a `discount_type` dimension to subset the damages.
        discrate : str
            Discounting type. Be aware that the constant rates are defined
            class-wide. If this str is either 'constant' or
            'constant_model_collapsed', the predetermined constant discount
            rates are used; otherwise, the stream of (non-constant) discount
            factors from self.calculate_stream_discount_factors() is used.

        Returns
        -------
            xr.Dataset
        """

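        # Illustrative check (assumed numbers): with discrete discounting and
        # r = 0.02, a damage occurring 10 years after the pulse year is
        # scaled by (1 / 1.02) ** 10, roughly 0.82.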
        if discrate in ["constant", "constant_model_collapsed"]:
            if self.discrete_discounting:
                discrate_damages = [
                    damages * (1 / (1 + r)) ** (damages.year - self.climate.pulse_year)
                    for r in self.CONST_DISC_RATES
                ]
            else:
                discrate_damages = [
                    damages * np.exp(-r * (damages.year - self.climate.pulse_year))
                    for r in self.CONST_DISC_RATES
                ]

            pvd_damages = xr.concat(
                discrate_damages,
                dim=pd.Index(self.CONST_DISC_RATES, name="discrate"),
            )
        else:
            factors = self.calculate_stream_discount_factors(
                discounting_type=self.discounting_type,
                fair_aggregation=damages.fair_aggregation,
            )
            pvd_damages = factors * damages

        return pvd_damages

    @cachedproperty
    @save(name="uncollapsed_sccs")
    def uncollapsed_sccs(self):
        """Calculate full distribution of SCCs without FAIR aggregation"""

        md = (
            self.global_consumption_no_pulse - self.global_consumption_pulse
        )  # this is for full uncertainty

        # convert to the marginal damages from a single pulse
        md = md * self.climate.conversion

        md = md.expand_dims({"fair_aggregation": ["uncollapsed"]})

        sccs = self.discounted_damages(
            damages=md,
            discrate=self.discounting_type,
        ).sum(dim="year")

        return sccs

    @cachedproperty
    @save(name="stat_uncertainty_iqr")
    def stat_uncertainty_iqr(self):
        """Calculate the distribution of quantile-weighted SCCs produced from
        quantile regressions that have already been collapsed across other dimensions to give statistical-only uncertainty.
        """
        return quantile_weight_quantilereg(
            self.calculate_scc,
            fair_dims=self.fair_dims,
            quantiles=self.full_uncertainty_quantiles,
        )

    @cachedproperty
    @save(name="full_uncertainty_iqr")
    def full_uncertainty_iqr(self):
        """Calculate the distribution of quantile-weighted SCCs produced from
        quantile regressions.
        """
        return quantile_weight_quantilereg(
            self.uncollapsed_sccs,
            fair_dims=self.fair_dims,
            quantiles=self.full_uncertainty_quantiles,
        )

    def calculate_discount_factors(self, cons_pc):
        """Calculates the stream of discount factors based on the Euler equation that defines an optimal
        intertemporal consumption allocation. Rearranging that equation shows that an outcome that will occur at the
        period t will be converted into today's, period 0 value, the following way :

        Discrete discounting: discount_factor_t = [ 1/(1+rho)^t ] * [ U'(C(t)) / U'(C(0)) ]

        where rho is the pure rate of time preference, U() is the utility function, U'() the first derivative,
        C(0) and C(t) today's and the future consumption respectively. The higher rho, the higher the importance
        of the present relative to the future so the lower the discount factor, and, if the utility function is concave,
        the higher the growth of consumption, the lower the importance of the future consumption relative to today, and
        therefore again the lower the discount factor.

        Using a CRRA utility function and plugging in the first derivative:

        discount_factor_t = [ 1/(1+rho)^t ] * [ C(0)^eta / C(t)^eta ]

        eta represents the degree of concavity of the utility function.

        With continuous discounting,

        discount_factor_t = Product_{s=1}^{t} e^-(rho + eta * g_s),
            where g_s = ln(C(s)/C(s-1))

        Rearranging yields the discrete-equivalent rate rho_discrete = e^rho_continuous - 1,
        which is what the implementation below plugs into the discrete formula.

        Parameters
        ----------
        cons_pc : array
            Array of per capita consumption from pulse year to end of time period.

        Returns
        -------
        `xarray.DataArray` with discount factors computed following the last equation in the above description.
        """

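        # Illustrative check (assumed numbers): with discrete discounting,
        # rho = 0.005, eta = 1.45, and per capita consumption growing 2% in
        # year 1, the year-1 factor is (1 / 1.005) * (1 / 1.02) ** 1.45,
        # roughly 0.967.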
        # subset to pulse year onward
        cons_pc = cons_pc.sel(year=slice(self.climate.pulse_year, self.ext_end_year))

        # calculate the time preference component of the discount factors for each period.
        if self.discrete_discounting:
            # plug the unique rho in an array
            rhos = xr.DataArray(self.rho, coords=[cons_pc.year])
        else:
            # plug the unique rho in an array, and compute e^rho - 1
            rhos = np.expm1(xr.DataArray(self.rho, coords=[cons_pc.year]))

        stream_rhos = np.divide(
            1, np.multiply.accumulate((rhos.values + 1), rhos.dims.index("year"))
        )

        # calculate the marginal utility component of the discount factors for each period.
        ratio = cons_pc.sel(year=self.climate.pulse_year) ** (self.eta) / cons_pc ** (
            self.eta
        )

        # the discount factor is the product of the two components.
        factors = stream_rhos * ratio

        return factors

    def calculate_stream_discount_factors(self, discounting_type, fair_aggregation):
        """Stream of discount factors
        Returns specified Ramsey or Weitzman-Ramsey discount factors.

        Parameters
        ----------
        discounting_type : str
            Type of discounting to implement. Typically,
            self.discounting_type is passed. However, for local Euler rates,
            this changes depending on the option.
        fair_aggregation : list of str
            FAIR aggregation options for which discount factors are computed.

        Returns
        -------
        `xarray.DataArray`
            Discount factors indexed by year and (if Ramsey) SSP/model
        """

        # holding population constant
        # from 2100 to 2300 with 2099 values
        full_pop = self.collapsed_pop.sum("region")
        full_pop = full_pop.reindex(
            year=range(full_pop.year.min().values, self.ext_end_year + 1),
            method="ffill",
        )

        # for aggregations other than uncollapsed,
        # we need to collapse over pop dimensions
        if len(self.fair_dims) > 1:
            pop = full_pop.mean([i for i in self.fair_dims if i in full_pop.dims])
        else:
            pop = full_pop

        # for gwr_gwr, we need to calculate regular naive_ramsey rates, get
        # discount factors, and then average them at the end
        if discounting_type == "gwr_gwr":
            array = self.global_consumption_per_capita("naive_ramsey")

            # average discount factors and expand dims to match Euler
            discount_factors = (
                self.calculate_discount_factors(array)
                .mean(dim=["ssp", "model"])
                .expand_dims({"fair_aggregation": fair_aggregation})
            )

        # naive options are calculated from the no-climate-change consumption growth rate
        elif "naive" in discounting_type:
            array = self.global_consumption_per_capita(self.discounting_type)

            # expand dims to match Euler
            discount_factors = self.calculate_discount_factors(array).expand_dims(
                {"fair_aggregation": fair_aggregation}
            )

        elif "euler" in discounting_type:
            discount_factors = []
            for agg in [i for i in fair_aggregation if i != "median"]:
                if agg == "ce":
                    factors = self.calculate_discount_factors(
                        self.ce_fair_no_pulse / pop
                    )
                elif agg == "mean":
                    factors = self.calculate_discount_factors(
                        self.global_consumption_no_pulse.mean(self.fair_dims) / pop
                    )
                elif agg == "gwr_mean":
                    factors = self.calculate_discount_factors(
                        self.global_consumption_no_pulse / full_pop
                    ).mean(self.fair_dims)
                elif agg == "median_params":
                    median_params_damages = compute_damages(
                        self.climate.fair_median_params_control,
                        betas=self.damage_function_coefficients,
                        formula=self.formula,
                    )

                    median_params_consumption = (
                        self.global_consumption - median_params_damages
                    ).expand_dims(
                        {
                            "weitzman_parameter": [
                                str(i) for i in self.weitzman_parameter
                            ]
                        }
                    )

                    if len(self.fair_dims) > 1:
                        median_params_consumption = median_params_consumption.mean(
                            [
                                i
                                for i in self.fair_dims
                                if i in median_params_consumption.dims
                            ]
                        )

                    factors = self.calculate_discount_factors(
                        median_params_consumption / pop
                    )
                elif agg == "uncollapsed":
                    factors = self.calculate_discount_factors(
                        self.global_consumption_no_pulse / full_pop
                    )

                factors = factors.assign_coords({"fair_aggregation": agg})
                discount_factors.append(factors)

            discount_factors = xr.concat(discount_factors, dim="fair_aggregation")

        return discount_factors

    @cachedproperty
    @save("discount_factors")
    def stream_discount_factors(self):
        return self.calculate_stream_discount_factors(
            discounting_type=self.discounting_type,
            fair_aggregation=self.fair_aggregation,
        )

    @cachedproperty
    @save("uncollapsed_discount_factors")
    def uncollapsed_discount_factors(self):
        pop = self.collapsed_pop.sum("region")
        pop = pop.reindex(
            year=range(pop.year.min().values, self.ext_end_year + 1),
            method="ffill",
        )
        f = self.calculate_discount_factors(
            self.global_consumption_no_pulse / pop
        ).to_dataset(name="discount_factor")
        for var in f.variables:
            f[var].encoding.clear()

        return f

    @cachedproperty
    @save("uncollapsed_marginal_damages")
    def uncollapsed_marginal_damages(self):
        md = (
            (
                (self.global_consumption_no_pulse - self.global_consumption_pulse)
                * self.climate.conversion
            )
            .rename("marginal_damages")
            .to_dataset()
        )

        for var in md.variables:
            md[var].encoding.clear()

        return md

    def ce(self, obj, dims):
        """Rechunk data appropriately and apply the certainty equivalence
        calculation. This is done in a loop to avoid memory crashes.
        Note that the data MUST be chunked; otherwise Dask will take a CE over each
        chunk and sum the results.

        *** IMPORTANT NOTE ***
        This wrapper function CANNOT execute with weights as it uses a map_blocks
        function which is unable to determine how to match weight dimensions with
        its chunk. If you must weight, `c_equivalence` function must be used directly
        on the data.
        """
        for dim in dims:
            obj = obj.chunk({dim: len(obj[dim])})
            obj = obj.map_blocks(c_equivalence, kwargs=dict(dims=dim, eta=self.eta))
        return obj

dscim.menu.main_recipe.MainRecipe.ce_cc property

ce_cc

Certainty equivalent of consumption with climate change damages

dscim.menu.main_recipe.MainRecipe.ce_fair_no_pulse property

ce_fair_no_pulse

Certainty equivalent of global consumption under FAIR control scenario

dscim.menu.main_recipe.MainRecipe.ce_fair_pulse property

ce_fair_pulse

Certainty equivalent of global consumption under FAIR pulse scenario

dscim.menu.main_recipe.MainRecipe.ce_no_cc property

ce_no_cc

Certainty equivalent of consumption without climate change damages

dscim.menu.main_recipe.MainRecipe.damage_function_coefficients property

damage_function_coefficients

Load damage function coefficients if the coefficients are provided by the user. Otherwise, compute them.

dscim.menu.main_recipe.MainRecipe.damage_function_fit property

damage_function_fit

Load the fitted damage function if the fit is provided by the user. Otherwise, compute it.

dscim.menu.main_recipe.MainRecipe.gmsl_max property

gmsl_max

This function finds the GMSL value at which the damage function reaches its local maximum along the GMSL dimension.

Returns:

Type Description
DataArray

the array of GMSL values at which the local maximum is located, for all years, ssps, models, and if applicable, values of GMST

dscim.menu.main_recipe.MainRecipe.median_params_marginal_damages property

median_params_marginal_damages

Calculate marginal damages due to a pulse using a FAIR simulation calculated with the median climate parameters.

dscim.menu.main_recipe.MainRecipe.output_attrs property

output_attrs

Return dict with class attributes for output metadata

Returns:

Type Description
dict

Class metadata

dscim.menu.main_recipe.MainRecipe.calculate_discount_factors

calculate_discount_factors(cons_pc)

Calculates the stream of discount factors based on the Euler equation that defines an optimal intertemporal consumption allocation. Rearranging that equation shows that an outcome occurring in period t is converted into today's (period 0) value as follows:

Discrete discounting: discount_factor_t = [ 1/(1+rho)^t ] * [ U'(C(t)) / U'(C(0)) ]

where rho is the pure rate of time preference, U() is the utility function, U'() its first derivative, and C(0) and C(t) are today's and future consumption, respectively. The higher rho, the more the present is valued relative to the future, and so the lower the discount factor. If the utility function is concave, faster consumption growth likewise lowers the value of future consumption relative to today's consumption, again lowering the discount factor.

Using a CRRA utility function and plugging in its first derivative:

discount_factor_t = [ 1/(1+rho)^t ] * [ C(0)^eta / C(t)^eta ]

eta represents the degree of concavity of the utility function.

With continuous discounting, discount_factor_t = Product_1^t [ e^-(rho + eta * g) ], where g = ln(C(t)/C(t-1))

Rearranging yields rho_discrete = e^rho_continuous - 1.

Parameters:

Name Type Description Default
cons_pc array

Array of per capita consumption from pulse year to end of time period.

required

Returns:

Type Description
`xarray.DataArray` with discount factors computed following the last equation in the above description.
Source code in src/dscim/menu/main_recipe.py
def calculate_discount_factors(self, cons_pc):
    """Calculates the stream of discount factors based on the Euler equation that defines an optimal
    intertemporal consumption allocation. Rearranging that equation shows that an outcome that will occur at the
    period t will be converted into today's, period 0 value, the following way :

    Discrete discounting: discount_factor_t = [ 1/(1+rho)^t ] * [ U'(C(t)) / U'(C(0)) ]

    where rho is the pure rate of time preference, U() is the utility function, U'() its first derivative,
    and C(0) and C(t) are today's and future consumption, respectively. The higher rho, the more the present
    is valued relative to the future, and so the lower the discount factor. If the utility function is
    concave, faster consumption growth likewise lowers the value of future consumption relative to today's
    consumption, again lowering the discount factor.

    Using a CRRA utility function and plugging in its first derivative:

    discount_factor_t = [ 1/(1+rho)^t ] * [ C(0)^eta / C(t)^eta ]

    eta represents the degree of concavity of the utility function.

    With continuous discounting,
    discount_factor_t = Product_1^t [ e^-(rho + eta * g) ],
        where g = ln(C(t)/C(t-1))

    rearranging yields rho_discrete = e^rho_continuous - 1

    Parameters
    ----------
    cons_pc : array
        Array of per capita consumption from pulse year to end of time period.

    Returns
    -------
    `xarray.DataArray` with discount factors computed following the last equation in the above description.
    """

    # subset to pulse year onward
    cons_pc = cons_pc.sel(year=slice(self.climate.pulse_year, self.ext_end_year))

    # calculate the time preference component of the discount factors for each period.
    if self.discrete_discounting:
        # plug the unique rho in an array
        rhos = xr.DataArray(self.rho, coords=[cons_pc.year])
    else:
        # plug the unique rho in an array, and compute e^rho - 1
        rhos = np.expm1(xr.DataArray(self.rho, coords=[cons_pc.year]))

    stream_rhos = np.divide(
        1, np.multiply.accumulate((rhos.values + 1), rhos.dims.index("year"))
    )

    # calculate the marginal utility component of the discount factors for each period.
    ratio = cons_pc.sel(year=self.climate.pulse_year) ** (self.eta) / cons_pc ** (
        self.eta
    )

    # the discount factor is the product of the two components.
    factors = stream_rhos * ratio

    return factors
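
To make the two discounting conventions concrete, here is a minimal, self-contained sketch of the same two-component calculation with hypothetical consumption and parameter values (the eta value and the 2%/year growth path below are illustrative, not dscim defaults):

import numpy as np
import xarray as xr

# Hypothetical inputs: a 2020 pulse year and per capita consumption
# growing 2%/year out to 2030.
years = np.arange(2020, 2031)
cons_pc = xr.DataArray(
    50_000.0 * 1.02 ** (years - 2020), coords={"year": years}, dims="year"
)
rho, eta = 0.00461878399, 1.45

for label, r in [("discrete", rho), ("continuous", np.expm1(rho))]:
    # Time-preference component: inverted cumulative product of (1 + r).
    # With r = e^rho - 1, 1/(1 + r)^t equals the continuous factor e^(-rho*t).
    stream_rhos = 1.0 / np.multiply.accumulate(np.full(years.size, r) + 1.0)

    # Marginal utility component under CRRA utility: C(0)^eta / C(t)^eta.
    ratio = cons_pc.isel(year=0) ** eta / cons_pc**eta

    # The discount factor is the product of the two components.
    factors = ratio * stream_rhos
    print(label, factors.values[:3].round(6))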

dscim.menu.main_recipe.MainRecipe.calculate_scc

calculate_scc()

Calculate range of FAIR-aggregated SCCs

Source code in src/dscim/menu/main_recipe.py
@cachedproperty
@save(name="scc")
def calculate_scc(self):
    """Calculate range of FAIR-aggregated SCCs"""

    discounted_damages = self.discounted_damages(
        damages=self.marginal_damages, discrate=self.discounting_type
    )
    discounted_damages = discounted_damages.sum(dim="year").rename("scc")

    if "median" in self.fair_aggregation:
        median = self.uncollapsed_sccs.median(self.fair_dims)
        median["fair_aggregation"] = ["median"]
        discounted_damages = xr.merge([median.rename("scc"), discounted_damages])

    return discounted_damages

dscim.menu.main_recipe.MainRecipe.calculate_stream_discount_factors

calculate_stream_discount_factors(discounting_type, fair_aggregation)

Stream of discount factors. Returns the specified Ramsey or Weitzman-Ramsey discount factors.

Parameters:

Name Type Description Default
discounting_type str

Type of discounting to implement. Typically, self.discounting_type is passed. However, for local Euler rates, this changes depending on the option.

required

fair_aggregation list of str

FAIR aggregation options for which to compute discount factors. Typically, self.fair_aggregation is passed.

required

Returns:

Type Description
`xarray.DataArray`

Discount factors indexed by year and (if Ramsey) SSP/model

Source code in src/dscim/menu/main_recipe.py
def calculate_stream_discount_factors(self, discounting_type, fair_aggregation):
    """Stream of discount factors
    Returns specified Ramsey or Weitzman-Ramsey discount factors.

    Parameters
    ----------

    discounting_type : str
        Type of discounting to implement. Typically,
        self.discounting_type is passed. However, for local Euler rates,
        this changes depending on the option.
    fair_aggregation : list of str
        FAIR aggregation options for which to compute discount factors.
        Typically, self.fair_aggregation is passed.

    Returns
    -------
    `xarray.DataArray`
        Discount factors indexed by year and (if Ramsey) SSP/model
    """

    # holding population constant
    # from 2100 to 2300 with 2099 values
    full_pop = self.collapsed_pop.sum("region")
    full_pop = full_pop.reindex(
        year=range(full_pop.year.min().values, self.ext_end_year + 1),
        method="ffill",
    )

    # for aggregations other than uncollapsed,
    # we need to collapse over pop dimensions
    if len(self.fair_dims) > 1:
        pop = full_pop.mean([i for i in self.fair_dims if i in full_pop.dims])
    else:
        pop = full_pop

    # for gwr_gwr, we need to calculate regular naive_ramsey rates, get
    # discount factors, and then average them at the end
    if discounting_type == "gwr_gwr":
        array = self.global_consumption_per_capita("naive_ramsey")

        # average discount factors and expand dims to match Euler
        discount_factors = (
            self.calculate_discount_factors(array)
            .mean(dim=["ssp", "model"])
            .expand_dims({"fair_aggregation": fair_aggregation})
        )

    # naive options are calculated from the no-climate-change consumption growth rate
    elif "naive" in discounting_type:
        array = self.global_consumption_per_capita(self.discounting_type)

        # expand dims to match Euler
        discount_factors = self.calculate_discount_factors(array).expand_dims(
            {"fair_aggregation": fair_aggregation}
        )

    elif "euler" in discounting_type:
        discount_factors = []
        for agg in [i for i in fair_aggregation if i != "median"]:
            if agg == "ce":
                factors = self.calculate_discount_factors(
                    self.ce_fair_no_pulse / pop
                )
            elif agg == "mean":
                factors = self.calculate_discount_factors(
                    self.global_consumption_no_pulse.mean(self.fair_dims) / pop
                )
            elif agg == "gwr_mean":
                factors = self.calculate_discount_factors(
                    self.global_consumption_no_pulse / full_pop
                ).mean(self.fair_dims)
            elif agg == "median_params":
                median_params_damages = compute_damages(
                    self.climate.fair_median_params_control,
                    betas=self.damage_function_coefficients,
                    formula=self.formula,
                )

                median_params_consumption = (
                    self.global_consumption - median_params_damages
                ).expand_dims(
                    {
                        "weitzman_parameter": [
                            str(i) for i in self.weitzman_parameter
                        ]
                    }
                )

                if len(self.fair_dims) > 1:
                    median_params_consumption = median_params_consumption.mean(
                        [
                            i
                            for i in self.fair_dims
                            if i in median_params_consumption.dims
                        ]
                    )

                factors = self.calculate_discount_factors(
                    median_params_consumption / pop
                )
            elif agg == "uncollapsed":
                factors = self.calculate_discount_factors(
                    self.global_consumption_no_pulse / full_pop
                )

            factors = factors.assign_coords({"fair_aggregation": agg})
            discount_factors.append(factors)

        discount_factors = xr.concat(discount_factors, dim="fair_aggregation")

    return discount_factors

dscim.menu.main_recipe.MainRecipe.calculated_damages abstractmethod

calculated_damages()

Calculate damages (difference between CEs) for collapsing

Source code in src/dscim/menu/main_recipe.py
@abstractmethod
def calculated_damages(self):
    """Calculate damages (difference between CEs) for collapsing"""

dscim.menu.main_recipe.MainRecipe.ce

ce(obj, dims)

Rechunk data appropriately and apply the certainty equivalence calculation. This is done in a loop to avoid memory crashes. Note that the data MUST be chunked; otherwise Dask will take a CE over each chunk and sum the results.

*** IMPORTANT NOTE *** This wrapper function CANNOT execute with weights, as it uses map_blocks, which cannot determine how to match weight dimensions with its chunks. If you must weight, the c_equivalence function must be used directly on the data.

Source code in src/dscim/menu/main_recipe.py
def ce(self, obj, dims):
    """Rechunk data appropriately and apply the certainty equivalence
    calculation. This is done in a loop to avoid memory crashes.
    Note that the data MUST be chunked; otherwise Dask will take a CE over each
    chunk and sum the results.

    *** IMPORTANT NOTE ***
    This wrapper function CANNOT execute with weights as it uses a map_blocks
    function which is unable to determine how to match weight dimensions with
    its chunk. If you must weight, `c_equivalence` function must be used directly
    on the data.
    """
    for dim in dims:
        obj = obj.chunk({dim: len(obj[dim])})
        obj = obj.map_blocks(c_equivalence, kwargs=dict(dims=dim, eta=self.eta))
    return obj
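
Below is a minimal sketch of the rechunk-then-map_blocks pattern, with a plain mean standing in for dscim's c_equivalence (fake_ce is a hypothetical stand-in; only the chunking logic mirrors the wrapper above):

import dask.array as da
import numpy as np
import xarray as xr

def fake_ce(block, dims, eta):
    # Hypothetical stand-in for c_equivalence: any reduction over `dims`
    # demonstrates the chunking requirement.
    return block.mean(dims)

obj = xr.DataArray(
    da.random.random((4, 10), chunks=(2, 5)),
    dims=("simulation", "year"),
    coords={"simulation": np.arange(4), "year": np.arange(2020, 2030)},
)

# One chunk along the reduced dimension keeps the reduction global; if
# "simulation" stayed split across chunks, map_blocks would reduce each
# chunk separately.
for dim in ["simulation"]:
    obj = obj.chunk({dim: len(obj[dim])})
    obj = obj.map_blocks(fake_ce, kwargs=dict(dims=dim, eta=1.45))

print(obj.compute())  # reduced over "simulation", still chunked over "year"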

dscim.menu.main_recipe.MainRecipe.ce_cc_calculation abstractmethod

ce_cc_calculation()

Calculate CE damages depending on discount type

Source code in src/dscim/menu/main_recipe.py
@abstractmethod
def ce_cc_calculation(self):
    """Calculate CE damages depending on discount type"""

dscim.menu.main_recipe.MainRecipe.ce_no_cc_calculation abstractmethod

ce_no_cc_calculation()

Calculate GDP CE depending on discount type.

Source code in src/dscim/menu/main_recipe.py
@abstractmethod
def ce_no_cc_calculation(self):
    """Calculate GDP CE depending on discount type."""

dscim.menu.main_recipe.MainRecipe.collapsed_pop

collapsed_pop()

Collapse population according to discount type.

Source code in src/dscim/menu/main_recipe.py
@cachedproperty
def collapsed_pop(self):
    """Collapse population according to discount type."""
    if (self.discounting_type == "constant") or ("ramsey" in self.discounting_type):
        pop = self.pop
    elif self.discounting_type == "constant_model_collapsed":
        pop = self.pop.mean("model")
    elif "gwr" in self.discounting_type:
        pop = self.pop.mean(["model", "ssp"])
    return pop

dscim.menu.main_recipe.MainRecipe.damage_function

damage_function()

Calls damage function calculation method.

This function calls the damage function calculation in model_outputs(). It calculates a damage function for each passed scenario_dimension based on subsets of self.damage_function_points and extrapolates this function using the specified method for all years past ext_subset_end_year.

Returns:

Type Description
dict

dict['params'] is a dataframe of betas for each year; dict['preds'] is a dataframe of predicted y-hat values for each year and anomaly.

Source code in src/dscim/menu/main_recipe.py
@cachedproperty
def damage_function(self):
    """Calls damage function calculation method.

    This function calls the damage function calculation in
    model_outputs(). It calculates a damage function for each
    passed `scenario_dimension` based on subsets of
    self.damage_function_points and extrapolates this function
    using the specified method for all years past `ext_subset_end_year`.

    Returns
    -------
    dict
        dict['params'] is a dataframe of betas for each year
        dict['preds'] is a dataframe of predicted y hat for each
            year and anomaly
    """
    if self.scenario_dimensions is None:
        # this only occurs for global discounting
        # with a single scenario passed
        damage_function = self.damage_function_calculation(
            damage_function_points=self.damage_function_points,
            global_consumption=self.global_consumption,
        )

    else:
        # cycle through the different combinations of the scenario dims
        subset = self.damage_function_points.groupby(self.scenario_dimensions)
        damage_function, dict_list = {}, []

        for name, dt in subset:
            # turn single-dim into a tuple to make indexing easier later
            if len(self.scenario_dimensions) == 1:
                name = tuple([name])

            df = self.damage_function_calculation(
                damage_function_points=dt,
                global_consumption=self.global_consumption,
            )
            # assigning dimensions to each dataArray in the dictionary
            for key in df.keys():
                df[key] = df[key].expand_dims(
                    {
                        var: [val]
                        for var, val in zip(self.scenario_dimensions, list(name))
                    }
                )
            dict_list.append(df)

        # concatenate different scenarios into one big dataArray
        damage_function["params"] = xr.combine_by_coords(
            [x["params"] for x in dict_list]
        )

        damage_function["preds"] = xr.combine_by_coords(
            [x["preds"] for x in dict_list]
        )

    return damage_function
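
A small sketch of the recombination step at the end of this property: per-scenario results are tagged with their scenario coordinates via expand_dims and then merged into one object with combine_by_coords (toy data; "params" is just a placeholder variable name):

import xarray as xr

pieces = []
for ssp in ["SSP2", "SSP3"]:
    # Toy per-scenario result: betas for two years.
    da = xr.DataArray([1.0, 2.0], coords={"year": [2020, 2021]}, dims="year")
    # Tag the result with the scenario coordinate it came from.
    pieces.append(da.expand_dims({"ssp": [ssp]}).to_dataset(name="params"))

# Concatenate the scenario-tagged pieces along their coordinates.
combined = xr.combine_by_coords(pieces)
print(combined["params"].dims, combined["params"].shape)  # ('ssp', 'year') (2, 2)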

dscim.menu.main_recipe.MainRecipe.damage_function_calculation

damage_function_calculation(damage_function_points, global_consumption)

The damage function model fit may be: (1) ssp-specific, (2) ssp-model-specific, or (3) unique across ssp-model. This depends on the type of discounting. In each case, the input data passed to the fitting functions and the formatting of the returned output differ because the dimensions differ. This function handles these cases and returns the model fit.

Returns:

Type Description
dict with two xr.Datasets, 'params' (model fit) and 'preds' (predictions from model fit), with dimensions depending
on self.discounting_type.
Source code in src/dscim/menu/main_recipe.py
def damage_function_calculation(self, damage_function_points, global_consumption):
    """The damage function model fit may be : (1) ssp specific, (2) ssp-model specific, (3) unique across ssp-model.
    This depends on the type of discounting. In each case the input data passed to the fitting functions and the formatting of the returned
    output is different because dimensions are different. This function handles this and returns the model fit.

    Returns
    -------
    dict with two xr.Datasets, 'params' (model fit) and 'preds' (predictions from model fit), with dimensions depending
    on self.discounting_type.
    """

    yrs = range(self.climate.pulse_year, self.ext_subset_end_year + 1)

    params_list, preds_list = [], []

    if self.discounting_type == "constant_model_collapsed":
        for ssp in damage_function_points["ssp"].unique():
            # Subset dataframe to specific SSP
            fit_subset = damage_function_points[
                damage_function_points["ssp"] == ssp
            ]

            global_c_subset = global_consumption.sel({"ssp": ssp})
            # Fit damage function curves using the data subset
            damage_function = model_outputs(
                damage_function=fit_subset,
                formula=self.formula,
                type_estimation=self.fit_type,
                global_c=global_c_subset,
                extrapolation_type=self.ext_method,
                quantiles=self.quantreg_quantiles,
                year_range=yrs,
                year_start_pred=self.ext_subset_end_year + 1,
            )

            # Add variables
            params = damage_function["parameters"].expand_dims(
                dict(
                    discount_type=[self.discounting_type],
                    ssp=[ssp],
                    model=[str(list(self.gdp.model.values))],
                )
            )

            preds = damage_function["preds"].expand_dims(
                dict(
                    discount_type=[self.discounting_type],
                    ssp=[ssp],
                    model=[str(list(self.gdp.model.values))],
                )
            )

            params_list.append(params)
            preds_list.append(preds)

    elif (self.discounting_type == "constant") or (
        "ramsey" in self.discounting_type
    ):
        for ssp, model in list(
            product(
                damage_function_points.ssp.unique(),
                damage_function_points.model.unique(),
            )
        ):
            # Subset dataframe to specific SSP-IAM combination.
            fit_subset = damage_function_points[
                (damage_function_points["ssp"] == ssp)
                & (damage_function_points["model"] == model)
            ]

            global_c_subset = global_consumption.sel({"ssp": ssp, "model": model})

            # Fit damage function curves using the data subset
            damage_function = model_outputs(
                damage_function=fit_subset,
                formula=self.formula,
                type_estimation=self.fit_type,
                global_c=global_c_subset,
                extrapolation_type=self.ext_method,
                quantiles=self.quantreg_quantiles,
                year_range=yrs,
                year_start_pred=self.ext_subset_end_year + 1,
            )

            # Add variables
            params = damage_function["parameters"].expand_dims(
                dict(
                    discount_type=[self.discounting_type], ssp=[ssp], model=[model]
                )
            )

            preds = damage_function["preds"].expand_dims(
                dict(
                    discount_type=[self.discounting_type], ssp=[ssp], model=[model]
                )
            )

            params_list.append(params)
            preds_list.append(preds)

    elif "gwr" in self.discounting_type:
        # Fit damage function across all SSP-IAM combinations, as expected
        # from the Weitzman-Ramsey discounting
        fit_subset = damage_function_points

        # Fit damage function curves using the data subset
        damage_function = model_outputs(
            damage_function=fit_subset,
            type_estimation=self.fit_type,
            formula=self.formula,
            global_c=global_consumption,
            extrapolation_type=self.ext_method,
            quantiles=self.quantreg_quantiles,
            year_range=yrs,
            year_start_pred=self.ext_subset_end_year + 1,
        )

        # Add variables
        params = damage_function["parameters"].expand_dims(
            dict(
                discount_type=[self.discounting_type],
                ssp=[str(list(self.gdp.ssp.values))],
                model=[str(list(self.gdp.model.values))],
            )
        )

        preds = damage_function["preds"].expand_dims(
            dict(
                discount_type=[self.discounting_type],
                ssp=[str(list(self.gdp.ssp.values))],
                model=[str(list(self.gdp.model.values))],
            )
        )

        params_list.append(params)
        preds_list.append(preds)

    return dict(
        params=xr.combine_by_coords(params_list),
        preds=xr.combine_by_coords(preds_list),
    )
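
A toy sketch of the per-(ssp, model) subsetting loop used in the constant/Ramsey branch above; model_outputs is dscim-internal, so a print stands in for the fit (the ssp/model labels are illustrative):

from itertools import product

import pandas as pd

# Toy damage function points: one row per (ssp, model) combination.
damage_function_points = pd.DataFrame(
    {
        "ssp": ["SSP2", "SSP2", "SSP3", "SSP3"],
        "model": ["IIASA GDP", "OECD Env-Growth", "IIASA GDP", "OECD Env-Growth"],
        "damages": [1.0, 1.2, 1.4, 1.1],
    }
)

for ssp, model in product(
    damage_function_points.ssp.unique(),
    damage_function_points.model.unique(),
):
    # Each subset would be passed to model_outputs() as fit_subset.
    fit_subset = damage_function_points[
        (damage_function_points["ssp"] == ssp)
        & (damage_function_points["model"] == model)
    ]
    print(ssp, model, len(fit_subset))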

dscim.menu.main_recipe.MainRecipe.damage_function_points

damage_function_points()

Global damages by RCP/GCM or SLR

Returns:

Type Description
pd.DataFrame
Source code in src/dscim/menu/main_recipe.py
@cachedproperty
@save(name="damage_function_points")
def damage_function_points(self) -> pd.DataFrame:
    """Global damages by RCP/GCM or SLR

    Returns
    --------
        pd.DataFrame
    """
    df = self.global_damages_calculation()

    if "slr" in df.columns:
        df = df.merge(self.climate.gmsl, on=["year", "slr"])
    if "gcm" in df.columns:
        df = df.merge(self.climate.gmst, on=["year", "gcm", "rcp"])

    # removing illegal combinations from estimation
    if any([i in df.ssp.unique() for i in ["SSP1", "SSP5"]]):
        self.logger.info("Dropping illegal model combinations.")
        for var in [i for i in df.columns if i in ["anomaly", "gmsl"]]:
            df.loc[
                ((df.ssp == "SSP1") & (df.rcp == "rcp85"))
                | ((df.ssp == "SSP5") & (df.rcp == "rcp45")),
                var,
            ] = np.nan

    # agriculture lacks ACCESS0-1/rcp85 combo
    if "agriculture" in self.sector:
        self.logger.info("Dropping illegal model combinations for agriculture.")
        df.loc[(df.gcm == "ACCESS1-0") & (df.rcp == "rcp85"), "anomaly"] = np.nan

    return df

dscim.menu.main_recipe.MainRecipe.discounted_damages

discounted_damages(damages, discrate)

Discount marginal damages. Distinguishes between constant discount rates method and non-constant discount rates.

Parameters:

Name Type Description Default
damages DataArray or Dataset

Array of damages with a discount_type dimension used to subset the damages.

required
discrate str

Discounting type. Be aware that the constant rates are defined class-wide. If this str is either 'constant' or 'constant_model_collapsed', the predetermined constant discount rates are used; otherwise, the stream of (non-constant) discount factors from self.stream_discount_factors() is used.

required

Returns:

Type Description
xr.Dataset
Source code in src/dscim/menu/main_recipe.py
def discounted_damages(self, damages, discrate):
    """Discount marginal damages. Distinguishes between constant discount rates method and non-constant discount rates.

    Parameters
    ----------
    damages : xr.DataArray or xr.Dataset
        Array of damages with a `discount_type` dimension used to subset the damages.
    discrate : str
        Discounting type. Be aware that the constant rates are defined class-wide. If this str is either 'constant' or 'constant_model_collapsed', the predetermined constant discount rates are used; otherwise, the stream of (non-constant) discount factors from self.stream_discount_factors() is used.

    Returns
    -------
        xr.Dataset
    """

    if discrate in ["constant", "constant_model_collapsed"]:
        if self.discrete_discounting:
            discrate_damages = [
                damages * (1 / (1 + r)) ** (damages.year - self.climate.pulse_year)
                for r in self.CONST_DISC_RATES
            ]
        else:
            discrate_damages = [
                damages * np.exp(-r * (damages.year - self.climate.pulse_year))
                for r in self.CONST_DISC_RATES
            ]

        pvd_damages = xr.concat(
            discrate_damages,
            dim=pd.Index(self.CONST_DISC_RATES, name="discrate"),
        )
    else:
        factors = self.calculate_stream_discount_factors(
            discounting_type=self.discounting_type,
            fair_aggregation=damages.fair_aggregation,
        )
        pvd_damages = factors * damages

    return pvd_damages
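
A numerical sketch of the constant-rate branch under continuous discounting, with hypothetical damages and rates (the rates below stand in for CONST_DISC_RATES and are illustrative):

import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical marginal damages of $10/year from a 2020 pulse year to 2030.
years = np.arange(2020, 2031)
damages = xr.DataArray(np.full(years.size, 10.0), coords={"year": years}, dims="year")
pulse_year, const_disc_rates = 2020, [0.02, 0.025, 0.03]

# Continuous discounting: multiply by e^(-r * t) for each constant rate r.
discrate_damages = [
    damages * np.exp(-r * (damages.year - pulse_year)) for r in const_disc_rates
]
pvd_damages = xr.concat(
    discrate_damages, dim=pd.Index(const_disc_rates, name="discrate")
)

print(pvd_damages.sum(dim="year").values)  # one present value per discount rate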

dscim.menu.main_recipe.MainRecipe.full_uncertainty_iqr

full_uncertainty_iqr()

Calculate the distribution of quantile-weighted SCCs produced from quantile regressions.

Source code in src/dscim/menu/main_recipe.py
@cachedproperty
@save(name="full_uncertainty_iqr")
def full_uncertainty_iqr(self):
    """Calculate the distribution of quantile-weighted SCCs produced from
    quantile regressions.
    """
    return quantile_weight_quantilereg(
        self.uncollapsed_sccs,
        fair_dims=self.fair_dims,
        quantiles=self.full_uncertainty_quantiles,
    )

dscim.menu.main_recipe.MainRecipe.global_consumption

global_consumption()

Global consumption without climate change

Source code in src/dscim/menu/main_recipe.py
@cachedproperty
@save(name="global_consumption")
def global_consumption(self):
    """Global consumption without climate change"""

    # rff simulation means that GDP already exists out to 2300
    if 2300 in self.gdp.year:
        self.logger.debug("Global consumption found up to 2300.")
        global_cons = self.gdp.sum("region").rename("global_consumption")
    else:
        self.logger.info("Extrapolating global consumption.")

        # holding population constant
        # from 2100 to 2300 with 2099 values
        pop = self.collapsed_pop.sum("region")
        pop = pop.reindex(
            year=range(pop.year.min().values, self.ext_end_year + 1),
            method="ffill",
        )

        # Calculate global consumption by multiplying per capita consumption by population
        global_cons = (
            self.global_consumption_per_capita(self.discounting_type) * pop
        )

    # Add dimension
    # @TODO: remove this line altogether
    global_cons = global_cons.expand_dims(
        {"discount_type": [self.discounting_type]}
    )

    return global_cons
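
The "holding population constant" step above relies on reindex with forward-fill; a minimal sketch with toy years and values:

import xarray as xr

# Toy population series ending in 2099; 2103 stands in for ext_end_year.
pop = xr.DataArray([7.0, 8.0, 9.0], coords={"year": [2097, 2098, 2099]}, dims="year")

# Forward-fill extends the series, holding the last observed value constant.
pop_ext = pop.reindex(year=range(2097, 2103 + 1), method="ffill")
print(pop_ext.values)  # [7. 8. 9. 9. 9. 9. 9.]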

dscim.menu.main_recipe.MainRecipe.global_consumption_calculation abstractmethod

global_consumption_calculation(disc_type)

Calculation of global consumption without climate change

Returns:

Type Description
xr.DataArray
Source code in src/dscim/menu/main_recipe.py
@abstractmethod
def global_consumption_calculation(self, disc_type):
    """Calculation of global consumption without climate change

    Returns
    -------
        xr.DataArray
    """

dscim.menu.main_recipe.MainRecipe.global_consumption_no_pulse

global_consumption_no_pulse()

Global consumption under FAIR control scenario.

Source code in src/dscim/menu/main_recipe.py
@cachedproperty
@save(name="global_consumption_no_pulse")
def global_consumption_no_pulse(self):
    """Global consumption under FAIR control scenario."""

    fair_control = self.climate.fair_control

    if self.clip_gmsl:
        fair_control["gmsl"] = np.minimum(fair_control["gmsl"], self.gmsl_max)

    damages = compute_damages(
        fair_control,
        betas=self.damage_function_coefficients,
        formula=self.formula,
    )

    cc_cons = self.global_consumption - damages

    gc_no_pulse = []
    for wp in self.weitzman_parameter:
        gc = self.weitzman_min(
            no_cc_consumption=self.global_consumption,
            cc_consumption=cc_cons,
            parameter=wp,
        )
        gc = gc.assign_coords({"weitzman_parameter": str(wp)})

        gc_no_pulse.append(gc)

    return xr.concat(gc_no_pulse, dim="weitzman_parameter")

dscim.menu.main_recipe.MainRecipe.global_consumption_per_capita

global_consumption_per_capita(disc_type)

Global consumption per capita

Returns:

Type Description
xr.DataArray
Source code in src/dscim/menu/main_recipe.py
def global_consumption_per_capita(self, disc_type):
    """Global consumption per capita

    Returns
    -------
        xr.DataArray
    """

    # Calculate global consumption per capita
    array_pc = self.global_consumption_calculation(
        disc_type
    ) / self.collapsed_pop.sum("region")

    if self.NAME == "equity":
        # equity recipe's growth is capped to
        # risk aversion recipe's growth rates
        extrapolated = extrapolate(
            xr_array=array_pc,
            start_year=self.ext_subset_start_year,
            end_year=self.ext_subset_end_year,
            interp_year=self.ext_end_year,
            method="growth_constant",
            cap=self.risk_aversion_growth_rates(),
        )

    else:
        extrapolated = extrapolate(
            xr_array=array_pc,
            start_year=self.ext_subset_start_year,
            end_year=self.ext_subset_end_year,
            interp_year=self.ext_end_year,
            method="growth_constant",
        )

    complete_array = xr.concat([array_pc, extrapolated], dim="year")

    return complete_array
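
dscim's extrapolate() is internal, so the following sketch only mimics the "growth_constant" idea under toy values: hold the subset's average growth rate fixed and compound it forward from the last observed year.

import numpy as np
import xarray as xr

# Toy per capita consumption over the extrapolation-subset years.
pc = xr.DataArray(
    [100.0, 103.0, 106.1], coords={"year": [2097, 2098, 2099]}, dims="year"
)

# Average gross growth over the subset (shift compares year t+1 to year t).
growth = (pc.shift(year=-1) / pc).mean().item()

# Compound that growth forward from the last observed year.
future_years = np.arange(2100, 2104)
extrapolated = xr.DataArray(
    pc.sel(year=2099).item() * growth ** (future_years - 2099),
    coords={"year": future_years},
    dims="year",
)

print(xr.concat([pc, extrapolated], dim="year").values.round(2))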

dscim.menu.main_recipe.MainRecipe.global_consumption_pulse

global_consumption_pulse()

Global consumption under FAIR pulse scenario.

Source code in src/dscim/menu/main_recipe.py
@cachedproperty
@save(name="global_consumption_pulse")
def global_consumption_pulse(self):
    """Global consumption under FAIR pulse scenario."""

    fair_pulse = self.climate.fair_pulse

    if self.clip_gmsl:
        fair_pulse["gmsl"] = np.minimum(fair_pulse["gmsl"], self.gmsl_max)

    damages = compute_damages(
        fair_pulse,
        betas=self.damage_function_coefficients,
        formula=self.formula,
    )

    cc_cons = self.global_consumption - damages
    gc_no_pulse = []
    for wp in self.weitzman_parameter:
        gc = self.weitzman_min(
            no_cc_consumption=self.global_consumption,
            cc_consumption=cc_cons,
            parameter=wp,
        )
        gc = gc.assign_coords({"weitzman_parameter": str(wp)})
        gc_no_pulse.append(gc)
    return xr.concat(gc_no_pulse, dim="weitzman_parameter")

dscim.menu.main_recipe.MainRecipe.global_damages_calculation abstractmethod

global_damages_calculation()

Calculate global collapsed damages for a desired discount type

Source code in src/dscim/menu/main_recipe.py
@abstractmethod
def global_damages_calculation(self):
    """Calculate global collapsed damages for a desired discount type"""

dscim.menu.main_recipe.MainRecipe.marginal_damages

marginal_damages()

Marginal damages due to additional pulse

Source code in src/dscim/menu/main_recipe.py
@cachedproperty
@save(name="marginal_damages")
def marginal_damages(self):
    """Marginal damages due to additional pulse"""

    marginal_damages = []

    for agg in [i for i in self.fair_aggregation if i != "median"]:
        if agg == "ce":
            md = self.ce_fair_no_pulse - self.ce_fair_pulse
        elif agg in ["mean", "gwr_mean"]:
            md = self.global_consumption_no_pulse.mean(
                self.fair_dims
            ) - self.global_consumption_pulse.mean(self.fair_dims)
        elif agg == "median_params":
            md = self.median_params_marginal_damages
        else:
            raise NotImplementedError(
                f"{agg} is not available. Enter list including"
                '["ce", "fair", "median", "median_params"]'
            )

        md = md.assign_coords({"fair_aggregation": agg}).expand_dims(
            "fair_aggregation"
        )

        # convert to the marginal damages from a single tonne
        md = md * self.climate.conversion
        marginal_damages.append(md)

    return xr.concat(marginal_damages, dim="fair_aggregation")

dscim.menu.main_recipe.MainRecipe.order_plate

order_plate(course)

Execute menu option section and save results

This method is an entry point to the class and allows the user to calculate different elements of a specific menu option. These elements will automatically be saved in the path defined in save_path.

Parameters:

Name Type Description Default
course

Output to be calculated. Options are:

- damage_function: Return and save all damage function elements, including damage function points, coefficients, and fitted values.
- scc: Return the Social Cost of Carbon calculation. All elements from damage_function are saved and returned.

required

Returns:

Type Description
None. Saves all elements to `save_path`
Source code in src/dscim/menu/main_recipe.py
def order_plate(self, course):
    """
    Execute menu option section and save results

    This method is an entry point to the class and allows the user to
    calculate different elements of a specific menu option. These elements
    will automatically be saved in the path defined in `save_path`.

    Parameters
    ----------
    course str
        Output to be calculated. Options are:
            - `damage_function`: Return and save all damage function
              elements including damage function points, coefficients, and
              fitted values.
            - `scc`: Return Social Cost of Carbon calculation. All elements
            from `damage_function` are saved and returned.

    Returns
    -------
    None. Saves all elements to `save_path`

    """

    self.logger.info(f"\n Executing {self.__repr__()}")

    def damage_function():
        self.logger.info("Processing damage functions ...")
        if self.damage_function_path is None:
            self.logger.info(
                "Existing damage functions not found. Damage points will be loaded."
            )
            self.damage_function_points
        self.damage_function_coefficients
        try:
            self.damage_function_fit
        except FileNotFoundError:
            pass

    def scc():
        damage_function()
        self.global_consumption
        self.global_consumption_no_pulse
        self.logger.info("Processing SCC calculation ...")
        if self.fit_type == "quantreg":
            self.full_uncertainty_iqr
            self.calculate_scc
            self.stat_uncertainty_iqr
        else:
            if len(self.fair_aggregation) > 0:
                self.stream_discount_factors
                self.calculate_scc
            self.uncollapsed_sccs
            self.uncollapsed_marginal_damages
            self.uncollapsed_discount_factors

    course_dict = {"damage_function": damage_function, "scc": scc}

    try:
        course_dict[course]()
        self.logger.info(f"Results available: {self.save_path}")
    except KeyError as e:
        self.logger.error(f"{course} is not a valid option: {e}")
        raise e
    except Exception as e:
        self.logger.error("Error detected.")
        raise e

    return None

dscim.menu.main_recipe.MainRecipe.order_scc

order_scc()

Execute menu option section and save results

This method is a wrapper around order_plate that calls the "scc" course, which is the Social Cost of Carbon calculation. Elements involved in the calculation (fair and damage_function) will automatically be saved in the path defined in save_path.

Parameters:

None

Returns:

Type Description
xr.Dataset of SCCs
Source code in src/dscim/menu/main_recipe.py
def order_scc(self):
    """
    Execute menu option section and save results

    This method is a wrapper around `order_plate` that calls the "scc" course,
    which is the Social Cost of Carbon calculation. Elements involved in the calculation
    (`fair` and `damage_function`) will automatically be saved in the path
    defined in `save_path`.

    Parameters
    ----------
    None

    Returns
    -------
    xr.Dataset of SCCs

    """

    self.logger.info(f"\n Executing {self.__repr__()}")

    try:
        sccds = self.calculate_scc
        self.logger.info(f"Results available: {self.save_path}")
    except Exception as e:
        self.logger.error("Error detected.")
        raise e

    if ("rcp45" in sccds.rcp) or ("rcp85" in sccds.rcp):
        # leave the dataset alone if there are already rcp scenario names
        pass
    else:
        # rename the CMIP6 scenario names that start with "ssp*"
        sccds = sccds.sortby(sccds.rcp)

        rcpdt = {
            "ssp126": "RCP2.6",
            "ssp245": "RCP4.5",
            "ssp370": "RCP7.0",
            "ssp460": "RCP6.0",
            "ssp585": "RCP8.5",
        }
        rlst = []
        for rcp in sccds.rcp.values:
            rlst.append(rcpdt[rcp])
        sccds.coords["rcp"] = rlst
        sccds = sccds.sortby(sccds.rcp)

    return sccds.squeeze(drop=True)

dscim.menu.main_recipe.MainRecipe.stat_uncertainty_iqr

stat_uncertainty_iqr()

Calculate the distribution of quantile-weighted SCCs produced from quantile regressions that have already been collapsed across other dimensions to give statistical-only uncertainty.

Source code in src/dscim/menu/main_recipe.py
@cachedproperty
@save(name="stat_uncertainty_iqr")
def stat_uncertainty_iqr(self):
    """Calculate the distribution of quantile-weighted SCCs produced from
    quantile regressions that have already been collapsed across other dimensions to give statistical-only uncertainty.
    """
    return quantile_weight_quantilereg(
        self.calculate_scc,
        fair_dims=self.fair_dims,
        quantiles=self.full_uncertainty_quantiles,
    )

dscim.menu.main_recipe.MainRecipe.uncollapsed_sccs

uncollapsed_sccs()

Calculate full distribution of SCCs without FAIR aggregation

Source code in src/dscim/menu/main_recipe.py
@cachedproperty
@save(name="uncollapsed_sccs")
def uncollapsed_sccs(self):
    """Calculate full distribution of SCCs without FAIR aggregation"""

    md = (
        self.global_consumption_no_pulse - self.global_consumption_pulse
    )  # this is for full uncertainty

    # convert to the marginal damages from a single pulse
    md = md * self.climate.conversion

    md = md.expand_dims({"fair_aggregation": ["uncollapsed"]})

    sccs = self.discounted_damages(
        damages=md,
        discrate=self.discounting_type,
    ).sum(dim="year")

    return sccs

dscim.menu.main_recipe.MainRecipe.weitzman_min

weitzman_min(no_cc_consumption, cc_consumption, parameter)

Implements bottom coding that fixes marginal utility below a threshold to the marginal utility at that threshold. The threshold is the Weitzman parameter.

Parameters:

Name Type Description Default
no_cc_consumption

Consumption array whose share is used to calculate the absolute Weitzman parameter (only if parameter <= 1).

required
cc_consumption

Consumption array to be bottom-coded.

required
parameter

A positive number representing the Weitzman parameter, below which marginal utility will be top-coded; i.e., 0.01 implies marginal utility is top-coded to the value of marginal utility at 1% of no-climate-change global consumption. If parameter > 1, it is assumed to be an absolute value. If parameter <= 1, it is assumed to be a share of future global consumption (without climate change).

required

Returns:

Type Description
xr.Dataset
Source code in src/dscim/menu/main_recipe.py
def weitzman_min(self, no_cc_consumption, cc_consumption, parameter):
    """
    Implements bottom coding that fixes marginal utility below a threshold
    to the marginal utility at that threshold. The threshold is the Weitzman
    parameter.

    Parameters
    ----------
    no_cc_consumption: xr.DataArray
        Consumption array of which the share will be used to calculate
        the absolute Weitzman parameter, only if parameter <= 1.
    cc_consumption: xr.DataArray
        Consumption array to be bottom-coded.
    parameter: float
        A positive number representing the Weitzman parameter, below which marginal utility will be
        top-coded; i.e., 0.01 implies marginal utility is top-coded to the
        value of marginal utility at 1% of no-climate-change global consumption.
        If parameter > 1, it is assumed to be an absolute value.
        If parameter <= 1, it is assumed to be a share of future global consumption
        (without climate change).

    Returns
    -------
        xr.Dataset

    """
    # if parameter is share of consumption,
    # multiply by no-climate-change consumption
    if parameter <= 1:
        parameter = parameter * no_cc_consumption

    if self.eta == 1:
        w_utility = np.log(parameter)
        bottom_utility = np.power(parameter, -1) * (parameter - cc_consumption)
        bottom_coded_cons = np.exp(w_utility - bottom_utility)

        clipped_cons = xr.where(
            cc_consumption > parameter, cc_consumption, bottom_coded_cons
        )
    else:
        w_utility = np.power(parameter, (1 - self.eta)) / (1 - self.eta)
        bottom_utility = np.power(parameter, -self.eta) * (
            parameter - cc_consumption
        )
        bottom_coded_cons = power(
            ((1 - self.eta) * (w_utility - bottom_utility)), (1 / (1 - self.eta))
        )

        clipped_cons = xr.where(
            cc_consumption > parameter, cc_consumption, bottom_coded_cons
        )

    return clipped_cons
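
A numerical sketch of the eta = 1 (log-utility) branch with toy values; the threshold here is already absolute (in dscim, a parameter <= 1 is first multiplied by no-climate-change consumption):

import numpy as np
import xarray as xr

# Toy consumption-with-climate-change values and an absolute threshold.
threshold = 0.5
cc_consumption = xr.DataArray([0.2, 0.4, 0.6, 1.0], dims="state")

# Utility at the threshold, plus a linear continuation of utility below it,
# caps marginal utility at its value at the threshold.
w_utility = np.log(threshold)
bottom_utility = np.power(threshold, -1) * (threshold - cc_consumption)
bottom_coded_cons = np.exp(w_utility - bottom_utility)

# Only values below the threshold are replaced.
clipped = xr.where(cc_consumption > threshold, cc_consumption, bottom_coded_cons)
print(clipped.values.round(3))  # [0.274 0.409 0.6   1.   ]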