A.5. Robertson - HG Mapping

Card Set Information

Author: EExam8
ID: 305202
Filename: A.5. Robertson - HG Mapping
Updated: 2015-09-05 09:42:25
Tags: Robertson Hazard Group Mapping
Description: Robertson NCCI 2007 Hazard Group Mapping

  1. Hazard Group
    Collection of WC classifications that have relatively similar expected excess loss factors over a broad range of limits.
  2. Original classification methodology
    • grouped 7 variables (indicative of excess loss potential) into 3 subsets based on correlation
    • ran principal components analysis to determine a single representative variable
    • the first principal component was used to assign classes to hazard groups
  3. WCIRB classification methodology
    • used 2 statistics to sort classes into HG
    • first statistic = % of claims XS of $150,000 (proxy for large loss potential)
    • second statistic = difference between class loss distribution and average loss distribution
  4. New classification methodology
    • sorted classes into HGs based on their XS ratios
    • note: a distribution is characterized by its excess ratios so there’s no loss of information 
    • used cluster analysis to group classes with similar XS ratios
    • determined the optimal # of HG
    • compared the new HG assignments with the prior assignments
  5. Selection of loss limits - why 5 instead of the prior 17
    • excess ratios at different limits are highly correlated
    • there were initially too many limits below $100,000
    • a single limit didn't capture the full variability
    • matches range commonly used for retro-rating
  6. Class excess ratios
    • j = HG, Xi = loss for injury type i, L = loss limit
    • Si(r) = normalized state excess ratio for injury type i = E[max(Xi/μi - r, 0)], where r = L/μi is the entry ratio
    • Rj(L) = excess ratio for group j = ∑i wi,j Si(L/μi,j)
    • wi,j = % of losses due to injury type i in group j and μi,j = average cost per case for injury type i in group j (see the sketch below)
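A minimal Python sketch of the Rj(L) = ∑ wi,j Si(L/μi,j) calculation above. The injury types, weights, average costs per case, and the exponential form assumed for Si are illustrative placeholders, not NCCI values.

import math

def hg_excess_ratio(L, weights, avg_cost, S):
    """Excess ratio at limit L: sum over injury types i of w_i * S_i(L / mu_i).

    weights[i]  : share of losses due to injury type i (weights sum to 1)
    avg_cost[i] : average cost per case mu_i for injury type i
    S[i]        : normalized excess ratio function S_i(r), evaluated at r = L / mu_i
    """
    return sum(w * S[i](L / avg_cost[i]) for i, w in weights.items())

# Toy example: two injury types with exponential severities, for which the
# normalized excess ratio is S(r) = exp(-r) (a convenient stand-in only).
S        = {"serious": lambda r: math.exp(-r), "non_serious": lambda r: math.exp(-r)}
weights  = {"serious": 0.6, "non_serious": 0.4}
avg_cost = {"serious": 200_000, "non_serious": 20_000}
print(hg_excess_ratio(500_000, weights, avg_cost, S))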
  7. Credibility
    • previous: z = min[(n / (n + k)) × 1.5, 1], n = # claims in class, k = average # of claims per class
    • this gave 72% of classes a credibility of 100%, which was perceived as an issue
    • considered excluding medical-only claims, or including only serious claims
    • considered using the median rather than the mean for k, restricting k to classes with some minimal number of claims, a square root method, and an advanced square root method (by injury type)
    • → no alternative was compelling enough to warrant a change, so NCCI stuck with the original formula (sketched below)
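A quick sketch of the retained credibility formula, z = min[(n / (n + k)) × 1.5, 1]. The claim counts below are made up, and k is simply the average of this toy list.

def credibility(n, k):
    # n = number of claims in the class, k = average number of claims per class
    return min(1.5 * n / (n + k), 1.0)

claim_counts = [50, 300, 1200, 6000]          # hypothetical class claim counts
k = sum(claim_counts) / len(claim_counts)     # average claims per class
for n in claim_counts:
    print(n, round(credibility(n, k), 3))     # large classes hit the cap of 1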
  8. Steps of cluster analysis
    • selection of loss limits
    • metrics to evaluate cluster distances
    • standardization
  9. Metrics used
    • Euclidean distance L2 = ‖x - y‖2 = √∑(xi - yi)²; penalizes large deviations
    • L1 = ‖x - y‖1 = ∑|xi - yi|; minimizes the relative error in excess premium = PLR × |Rj(L) - Rc(L)| (both metrics sketched below)
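A sketch of the two candidate distance metrics between excess ratio vectors (one component per loss limit); the vectors below are illustrative only.

import math

def l2_distance(x, y):
    # Euclidean distance: squaring penalizes a large deviation at any single limit
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def l1_distance(x, y):
    # Sum of absolute deviations: ties directly to the error in excess premium
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

r_class    = [0.60, 0.45, 0.30, 0.18, 0.10]   # class excess ratios at the 5 limits
r_centroid = [0.55, 0.40, 0.28, 0.20, 0.12]   # hazard group centroid
print(l2_distance(r_class, r_centroid), l1_distance(r_class, r_centroid))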
  10. Standardization
    • applied to prevent a variable with large values from exerting undue influence on the results (each variable has a similar impact on the clusters). However:
    • the excess ratios already have a common denominator (the limit), which standardization would filter out
    • wanted to keep excess ratios between 0 and 1
    • analysis didn’t produce significantly different results with or without standardization
    • without it, excess ratios at lower loss limits have more influence on the clusters; this is not undesirable, since those ratios rely more on observed values than on a fitted distribution (see the standardization sketch below)
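For illustration, a sketch of column-wise standardization (z-scores) of an excess ratio matrix, rows = classes, columns = loss limits. The numbers are placeholders; NCCI ultimately did not standardize, for the reasons above.

import statistics

def standardize_columns(matrix):
    # Give every loss limit (column) mean 0 and standard deviation 1, so each
    # limit would influence the clustering equally.
    cols  = list(zip(*matrix))
    means = [statistics.mean(c) for c in cols]
    sds   = [statistics.pstdev(c) for c in cols]
    return [[(x - m) / s for x, m, s in zip(row, means, sds)] for row in matrix]

excess_ratios = [[0.60, 0.45, 0.30, 0.18, 0.10],
                 [0.55, 0.40, 0.28, 0.20, 0.12],
                 [0.70, 0.52, 0.35, 0.22, 0.13]]
print(standardize_columns(excess_ratios))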
  11. k-means clustering technique
    • k-means groups classes into k HGs so as to minimize ∑i ∑c∈HGi ‖Rc - Ri‖₂², where the centroid Ri = (1/|HGi|) ∑c∈HGi Rc is the average XS ratio vector of the ith HG
    • k-means algorithm: start with random clusters. Compute the centroid of each cluster and assign each class to the cluster with the closest centroid. Repeat until no class is re-assigned
    • weights: to avoid letting small classes have undue influence, premium-weight each class (see the k-means sketch below)
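A minimal, premium-weighted k-means sketch on excess ratio vectors. The data are random placeholders and the initialization is naive; this only illustrates the assign/update loop and the premium weighting described above, not NCCI's actual implementation.

import random

def kmeans(vectors, premiums, k, iters=100):
    centroids = random.sample(vectors, k)            # naive start: k random classes
    assign = [None] * len(vectors)
    for _ in range(iters):
        # assignment step: each class joins the cluster with the closest centroid
        new_assign = [
            min(range(k),
                key=lambda j: sum((v - c) ** 2 for v, c in zip(vec, centroids[j])))
            for vec in vectors
        ]
        if new_assign == assign:                     # no class re-assigned: stop
            break
        assign = new_assign
        # update step: premium-weighted centroid of each cluster's member classes
        for j in range(k):
            members = [i for i, a in enumerate(assign) if a == j]
            if not members:
                continue
            total = sum(premiums[i] for i in members)
            centroids[j] = [
                sum(premiums[i] * vectors[i][d] for i in members) / total
                for d in range(len(vectors[0]))
            ]
    return assign, centroids

# Toy data: 30 classes, excess ratios at 5 limits (decreasing in the limit), random premiums.
random.seed(0)
vectors   = [sorted((random.random() for _ in range(5)), reverse=True) for _ in range(30)]
premiums  = [random.uniform(1e5, 1e7) for _ in range(30)]
assign, _ = kmeans(vectors, premiums, k=7)
print(assign)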
  12. Cooper and Milligan tests to find the best k
    • Calinski and Harabasz: higher values indicate a higher between-cluster distance (B) and a lower within-cluster distance (W); aka the Pseudo-F test (sketched below)
    • Cubic Clustering Criterion (CCC): compares the variance explained by a given set of clusters to that of random clusters; less reliable when the data are elongated (highly correlated variables), which is the case here since excess ratios are correlated across limits
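A sketch of the Calinski-Harabasz (pseudo-F) statistic, (B / (k - 1)) / (W / (n - k)), where B is the between-cluster and W the within-cluster sum of squared distances. This is the plain, unweighted form with made-up data; in practice it would be computed on the clusters produced for each candidate k (e.g. from the k-means sketch above), with higher values favoring that k.

def calinski_harabasz(vectors, assign, centroids):
    n, k = len(vectors), len(centroids)
    dims = range(len(vectors[0]))
    overall = [sum(v[d] for v in vectors) / n for d in dims]   # grand mean vector
    # W: within-cluster sum of squared distances from each class to its centroid
    W = sum(sum((vectors[i][d] - centroids[a][d]) ** 2 for d in dims)
            for i, a in enumerate(assign))
    # B: between-cluster sum of squared distances, weighted by cluster size
    B = sum(assign.count(j) * sum((centroids[j][d] - overall[d]) ** 2 for d in dims)
            for j in range(k))
    return (B / (k - 1)) / (W / (n - k))

# Tiny made-up example: 6 excess ratio vectors (2 limits) split into 2 clusters.
vectors = [[0.60, 0.30], [0.58, 0.28], [0.62, 0.33],
           [0.30, 0.12], [0.28, 0.10], [0.33, 0.15]]
assign = [0, 0, 0, 1, 1, 1]
centroids = [[sum(vectors[i][d] for i in range(6) if assign[i] == j) / 3 for d in range(2)]
             for j in range(2)]
print(calinski_harabasz(vectors, assign, centroids))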
  13. Underwriting criteria used to modify HGs
    • similarity between class codes that were in different groups
    • degree of exposure to automobile accidents by class
    • extent to which heavy machinery is used in a given class
  14. Comparison to old hazard groups
    • new hazard groups have a more even distribution of claims & premium across groups
    • the complement of credibility was the class's prior hazard group, so many small classes stayed in their prior group
    • new hazard groups show less within-group variance and more between-group variance
