The flashcards below were created by user EExam8 on FreezingBlue Flashcards.

Excess Loss Factor
Excess Loss Factor = Excess Ratio * PLR + Risk Loading
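The formula above can be sketched numerically; the excess ratio, PLR, and risk loading values below are illustrative only, not from any rating manual.

```python
# Hypothetical numeric sketch of the ELF formula; all input values are made up.
def excess_loss_factor(excess_ratio, plr, risk_loading):
    """ELF = Excess Ratio * PLR + Risk Loading."""
    return excess_ratio * plr + risk_loading

elf = excess_loss_factor(excess_ratio=0.10, plr=0.65, risk_loading=0.005)
print(round(elf, 4))  # 0.10 * 0.65 + 0.005 = 0.07
```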

Initial adjustments to the data
 develop, trend, and adjust the data for legislative changes
 group claims into occurrences (accidents), since ELFs are based on per-occurrence limits, not per-claim limits
 note that this grouping prevents us from looking at accident-level data by injury type
 an alternative approximation would have been to divide claim-level limits by 1.1 (not used)
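The claims-to-occurrences grouping above can be sketched as follows; the `accident_id` keys and loss amounts are hypothetical.

```python
# Hedged sketch: roll claim-level losses up to occurrences (accidents), since
# ELFs apply per occurrence, not per claim. Data below is illustrative.
from collections import defaultdict

claims = [  # (accident_id, claim_amount)
    ("A1", 40_000), ("A1", 25_000),             # two claims, one accident
    ("A2", 90_000),
    ("A3", 10_000), ("A3", 5_000), ("A3", 15_000),
]

occurrences = defaultdict(float)
for accident_id, amount in claims:
    occurrences[accident_id] += amount  # per-occurrence loss = sum of its claims

print(dict(occurrences))  # {'A1': 65000.0, 'A2': 90000.0, 'A3': 30000.0}
```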

Additional adjustments for curve fitting
 combine data for 3^{rd}, 4^{th} and 5^{th} reports
 truncate and shift data: Y = X − T ⇒ R(L) = R_{Y}(L − T) * R(T)
 normalize the truncated and shifted data for each HG to have a mean of 1
 → all these adjustments make the 4 HGs comparable
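The truncate-and-shift identity and the mean-1 normalization can be checked on made-up data; `excess_ratio` here is the empirical excess ratio (dollars excess of the limit over total dollars), and all losses and points below are illustrative.

```python
# Sketch of truncate-and-shift: verify R(L) = R_Y(L - T) * R(T) empirically,
# then normalize the truncated-and-shifted data to a mean of 1.
def excess_ratio(losses, limit):
    """Empirical excess ratio: dollars excess of the limit / total dollars."""
    return sum(max(x - limit, 0.0) for x in losses) / sum(losses)

losses = [20_000, 50_000, 120_000, 180_000, 400_000]  # illustrative losses
T = 100_000                            # truncation point
Y = [x - T for x in losses if x > T]   # truncated and shifted data
L = 250_000                            # a limit above the truncation point

lhs = excess_ratio(losses, L)
rhs = excess_ratio(Y, L - T) * excess_ratio(losses, T)
print(abs(lhs - rhs) < 1e-12)          # the identity holds for L >= T

Y_bar = sum(Y) / len(Y)
Y_norm = [y / Y_bar for y in Y]        # normalize to mean 1, per hazard group
```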

Methodology
 fit a mixture to differentiate loss development at different limits
 Mahler uses a mix of Exponential (works well just above truncation point) and Pareto (thick tail makes it ideal at very high limits)
 uses mean residual life ($XS / #XS, i.e. excess dollars divided by the number of excess claims) to examine the tail of the severity distribution
 pick a truncation point (e.g. $100K): below it we rely on the data for values of R(L), above it we rely on the fitted curve
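The mean residual life statistic can be sketched as below (illustrative data); for an Exponential, e(x) is constant, while for a Pareto it increases linearly in x, which is what makes it useful for diagnosing the tail.

```python
# Sketch of mean residual life e(x) = $XS / #XS: dollars of loss excess of x
# divided by the number of losses above x. Loss data is illustrative.
def mean_residual_life(losses, x):
    xs_losses = [loss - x for loss in losses if loss > x]
    return sum(xs_losses) / len(xs_losses) if xs_losses else 0.0

losses = [20_000, 50_000, 120_000, 180_000, 400_000]
print(mean_residual_life(losses, 100_000))  # (20k + 80k + 300k) / 3
```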

Application to actual data
 below the truncation point, use the empirical R(L) from the data
 above it, use R(L) = R_{data}(L_{trunc}) * R_{fit}((L − L_{trunc}) / Y)
 Y = avg value of the truncated and shifted losses
 R_{fit}(L) = ∑ p_{j}E_{j}(X)R_{j}(L) / ∑ p_{j}E_{j}(X)
 i.e. R_{fit}(L) is a weighted avg of the R_{j}(L), with weights p_{j}E_{j}(X) / ∑ p_{j}E_{j}(X)
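A minimal sketch of the mixture excess ratio and the splice above the truncation point, assuming one Exponential and one (Lomax) Pareto component; every parameter below (weights, means, Pareto shape/scale, R_data, Y) is illustrative, not a fitted value from the paper.

```python
# Hedged sketch of R_fit as a mixture and the splice
# R(L) = R_data(T) * R_fit((L - T) / Y_bar). All parameters are made up.
import math

def exp_excess_ratio(L, theta):
    """Excess ratio of an Exponential with mean theta: R(L) = exp(-L/theta)."""
    return math.exp(-L / theta)

def pareto_excess_ratio(L, alpha, theta):
    """Excess ratio of a (Lomax) Pareto, alpha > 1: (theta/(theta+L))**(alpha-1)."""
    return (theta / (theta + L)) ** (alpha - 1)

def r_fit(L, components):
    """R_fit(L) = sum(p_j * E_j * R_j(L)) / sum(p_j * E_j)."""
    num = sum(p * mean * r(L) for p, mean, r in components)
    den = sum(p * mean for p, mean, _ in components)
    return num / den

# components: (weight p_j, mean E_j(X), excess-ratio function R_j)
components = [
    (0.7, 1.0, lambda L: exp_excess_ratio(L, 1.0)),          # body of the curve
    (0.3, 2.0, lambda L: pareto_excess_ratio(L, 2.5, 3.0)),  # thick tail
]

# splice with the data below the truncation point T (values illustrative)
R_data_T, T, Y_bar = 0.15, 100_000, 60_000
L = 250_000
R_L = R_data_T * r_fit((L - T) / Y_bar, components)
print(R_L)
```

Note the Pareto mean θ/(α−1) = 3.0/1.5 = 2.0 matches the E_j listed for that component, keeping the weights internally consistent.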

Selection of a truncation point
 permit the maximum reliance on reported data
 retain enough data above the truncation point to permit reasonable fitting
 in general, should be a round number prior to the "thinning out" of the data
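The "thinning out" check can be sketched by counting losses above candidate round-number truncation points; the loss data below is made up.

```python
# Illustrative sketch of the trade-off: count losses above each candidate
# round-number truncation point to see where the data starts thinning out.
losses = [5_000] * 400 + [30_000] * 80 + [100_000] * 15 + [500_000] * 3

for T in (25_000, 50_000, 100_000, 250_000):
    n_above = sum(1 for x in losses if x > T)
    print(f"T = {T:>7,}: {n_above} losses above")
```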

