Modelling of Insured Losses of Natural Catastrophes Using Block Maxima Model

Catastrophic events have a huge impact on society as a whole. Insurance or reinsurance is one way of reducing the economic consequences of catastrophic events. According to the Sigma Swiss Re criteria, an event is classified as a catastrophe when the economic losses, insured claims or casualties associated with it exceed at least one of the thresholds; these thresholds are updated every year. We can observe a growing trend in the number of catastrophic events as well as in total economic losses and insured losses. Risk managers of insurance and reinsurance companies need relevant information for estimating and adjusting the premiums that cover these risks. The aim of this article is to present one useful method, the block maxima method. This method uses information about insured losses from historical natural catastrophes to estimate future insured losses. These estimates are very important for actuaries and risk managers, as they are one of the bases for calculating and adjusting the premiums of products covering these types of risk.


Introduction
Catastrophic events are characterised by three main features: they are relatively rare, they are statistically unexpected, and they have a huge impact on the whole society. To be classified as a catastrophe according to the Sigma Swiss Re criteria [1][2][3][4], the economic losses, insured claims or casualties associated with an event must exceed at least one of the thresholds, which are shown in Table 1 for the years 2014-2017.
In Table 1 we can see that the values of these criteria are updated over time. Based on the values in Table 1, we see a growing trend in the thresholds for insured and economic losses during the period under review. For example, the total economic losses threshold increased from USD 97.6 million in 2014 to USD 101 million in 2017, an increase of USD 3.4 million. The thresholds for the number of casualties remained constant over this period: for example, an event is classified as catastrophic if the number of dead or missing persons exceeds 20.
We divide catastrophic events into two groups according to their cause. The first group consists of natural catastrophes caused by natural influences, such as geological and hydrological disasters. The second group consists of catastrophic events caused by human activity, i.e. man-made disasters, such as industrial and traffic disasters. The figure shows the 10-year moving average of total economic losses and of insured losses. We can see an increasing trend in both series, but also an increasing difference between them.
Source: Sigma Swiss Re, [4]
Catastrophe modelling helps insurers and reinsurers assess the potential losses caused by natural and man-made catastrophes. The Pareto model is very often used as a basis for Excess of Loss quotations, as it gives a fairly good description of the random behaviour of large losses [5]. Quantile methods in particular provide an appropriate and flexible approach to the probability modelling needed to obtain well-fitted tails [6][7]. The application of quantile-based modelling methods has its foundation in the theory of order statistics [8].
Extreme value theory (EVT) [9][10][11] is a promising class of approaches to modelling catastrophe losses, although it was originally applied in other fields such as hydrology or operational risk [12]. There are two main kinds of models in EVT: block maxima models and peaks over threshold (POT) models. The more traditional block maxima models work with the largest observations collected from large samples: the whole sample is divided into equal non-overlapping time intervals, and the biggest loss from each interval is used for modelling [13][14][15][16][17]. In the more modern POT (threshold exceedances) approach, a sufficiently large threshold is determined and the observations above it are considered [18][19][20][21]. Extreme value methods do not predict the future with certainty, but they do offer models for explaining the extreme events we have seen in the past. These models are not arbitrary but are based on rigorous mathematical theory concerning the behaviour of extrema [22][23][24].
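As a concrete illustration of the block maxima step described above, the following sketch divides a loss series into equal non-overlapping blocks and keeps the largest loss of each. The function name `block_maxima` and the loss values are ours, chosen for illustration only:

```python
import numpy as np

def block_maxima(losses, block_size):
    """Split a loss series into consecutive non-overlapping blocks of
    `block_size` observations and keep only the largest loss in each."""
    n_blocks = len(losses) // block_size            # drop an incomplete tail
    trimmed = np.asarray(losses[:n_blocks * block_size], dtype=float)
    return trimmed.reshape(n_blocks, block_size).max(axis=1)

# Hypothetical insured losses (USD million), in chronological order.
losses = [12.0, 300.0, 45.0, 980.0, 5.0, 60.0, 2100.0, 33.0, 410.0, 75.0]
print(block_maxima(losses, block_size=5))   # one maximum per block of 5
```

The resulting series of block maxima, rather than the full loss series, is what the GEV distribution is then fitted to.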
For the purposes of this paper, the Block maxima model has been chosen based on real data of insured losses of natural catastrophes published by Swiss Re Sigma [1][2][3][4].

Block Maxima Models
The block maxima models are models for the largest observations collected from large samples of identically distributed observations. The Fisher-Tippett theorem [22] is the fundamental result in Extreme Value Theory (EVT) and can be considered to have the same status in EVT as the central limit theorem has in the study of sums. The theorem describes the limiting behaviour of appropriately normalized sample maxima.
Suppose catastrophe losses are denoted by the independent, identically distributed random variables X_1, X_2, ..., with common distribution function F_X(x).

Extreme Value Theorem (Fisher-Tippett) [9]: Suppose X_1, X_2, ... are independent, identically distributed with distribution function F_X(x). If there exist constants c_n > 0 and d_n \in \mathbb{R} such that

\[ \frac{M_n - d_n}{c_n} \xrightarrow{d} Y, \qquad n \to \infty, \]

where M_n = \max(X_1, \dots, X_n) and Y is non-degenerate with distribution function G, then G is of one of the following types:

1. Gumbel: \( \Lambda(x) = \exp(-e^{-x}), \; x \in \mathbb{R} \);
2. Fréchet: \( \Phi_\alpha(x) = \exp(-x^{-\alpha}), \; x > 0, \; \alpha > 0 \);
3. Weibull: \( \Psi_\alpha(x) = \exp(-(-x)^{\alpha}), \; x \le 0, \; \alpha > 0 \).

These three types of limiting distribution are given in standard form; we can parameterize them within location and scale families by replacing x with (x - \mu)/\sigma. The Gumbel, Fréchet and Weibull families can be combined into the single family of generalized extreme value (GEV) distributions of the form

\[ G(x) = \exp\left\{ -\left[ 1 + \xi \frac{x - \mu}{\sigma} \right]^{-1/\xi} \right\}, \qquad 1 + \xi \frac{x - \mu}{\sigma} > 0, \tag{1} \]

with location \mu \in \mathbb{R}, scale \sigma > 0 and shape \xi \in \mathbb{R}. Letting \xi \to 0 yields the Gumbel form

\[ G(x) = \exp\left\{ -\exp\left( -\frac{x - \mu}{\sigma} \right) \right\}, \tag{2} \]

and it is straightforward to check that \xi > 0 gives the Fréchet type and \xi < 0 the Weibull type.

Modelling of insured losses of natural catastrophes using block maxima model on real data
For modelling with the block maxima model we use real data. The analysis focuses on a chronological list of 479 insured losses (in USD million) from natural catastrophes in the period from January 2010 to December 2016, published in Swiss Re Sigma 2011-2017. Fig. 4 shows the time series plot of these data. Table 2 shows summary statistics of the insured losses caused by natural catastrophes. In this table we can see, for example, that the mean, 827.02, is higher than the median, 300. The skewness is greater than 10 and the kurtosis is very high, at 130.45.
Fig. 4. Chronologically arranged insured losses from natural catastrophes, in USD million.

Source: Own processing by Sigma Swiss Re, No 1/2011-2/2017
We can also compare the values of the lower and upper quartiles. The summary statistics in Table 2 show that there are many small losses and a few very large ones. It follows that we need to find a long-tailed distribution that provides a suitable model for the variation among the catastrophe loss data. We divided the catastrophe loss data presented in Fig. 4 into n blocks (Table 3).

Source: Own calculations
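The pattern reported in Table 2, a mean far above the median together with large skewness and kurtosis, can be reproduced with standard tools. The sketch below uses a short hypothetical loss sample, not the paper's 479-loss series:

```python
import numpy as np
from scipy import stats

# Hypothetical insured losses (USD million): many small values, few huge ones.
losses = np.array([5, 12, 30, 45, 60, 75, 120, 300, 410, 980, 2100, 15000.0])

print("mean    :", losses.mean())            # pulled up by the large losses
print("median  :", np.median(losses))        # stays with the small losses
print("skewness:", stats.skew(losses))       # strongly positive: long right tail
print("kurtosis:", stats.kurtosis(losses))   # high excess kurtosis
```

It is exactly this combination, mean well above the median with extreme skewness and kurtosis, that motivates fitting a long-tailed distribution such as the GEV.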
For these blocks of data we fitted the generalized extreme value (GEV) distribution, using the software Statistica 12 to estimate its parameters. The estimated GEV parameter values, based on formulas (1) and (2), for the different numbers of blocks are shown in Table 4.
Fig. 5. The fitted GEV distribution function and the empirical distribution function with a 95% confidence interval.
Fig. 5 shows that the model fits our data well. For the best estimated GEV model, obtained for n = 25, we can calculate quantiles; some of them are shown in Table 5. From these values we can estimate that, for example, in the future 50% of extreme insured losses from natural catastrophes will exceed USD 4 632.03 million and 1% will exceed USD 39 104 million.
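Quantiles such as those in Table 5 follow directly from the fitted GEV by inverting its distribution function: the loss exceeded with probability p is the (1 − p)-quantile. The sketch below uses hypothetical parameter values (again with SciPy's shape convention c = −ξ), not the Statistica estimates from the paper:

```python
from scipy.stats import genextreme

# Hypothetical fitted GEV parameters (USD million); c = -0.3 means xi = 0.3.
c, loc, scale = -0.3, 3000.0, 2500.0

for p in (0.50, 0.10, 0.01):
    x_p = genextreme.ppf(1.0 - p, c, loc=loc, scale=scale)
    print(f"loss exceeded with probability {p:.0%}: {x_p:,.1f} USD million")
```

Reading the output the same way as Table 5: half of future block-maximum losses would exceed the 50% line, while only one in a hundred would exceed the 1% line, which grows very quickly for a heavy-tailed shape.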

Conclusion
Catastrophic events have a huge impact on society as a whole. We can observe a growing trend in the number of catastrophic events as well as in total and insured losses. Insurance and reinsurance undertakings must be prepared to pay insured losses resulting from catastrophic events. A number of methods are used to help estimate and refine the cover for future claims; the block maxima method is one of them. This article presented the use of this method on real data on insured losses caused by natural catastrophes from the period 2010-2016. After finding the appropriate distribution, the GEV, and estimating its parameters, its quantiles can be calculated. The results of our analysis of real data have shown that insurance and reinsurance companies can expect future insured losses that in 50% of cases exceed USD 4 632.03 million, in 10% exceed USD 17 118.3 million, and in 1% exceed even USD 39 104 million. This information is very important for risk management, as it is one of the bases for calculating and adjusting premiums.