ISSN 1862-2941




A field examination of the inter-rater reliability of the Static-99 and STABLE-2007 scored by Correctional Program Officers

Yolanda M. Fernandez1, L. Maaike Helmus2

1Correctional Services of Canada
2University of Saskatchewan

[Sexual Offender Treatment, Volume 12 (2017), Issue 2]

Abstract

Aim: For measures to be considered useful they must be: 1) reasonably standardized; 2) valid; and 3) reliable. The third of these principles, reliability, concerns the extent to which the results/scores on a measure are replicable over time and across different raters. Previous research has demonstrated good reliability for scoring of the Static-99 and preliminary support for the reliability of the STABLE-2000. However, the majority of this research has been based on coding by well-trained researchers rather than front-line practitioners, which may result in higher inter-rater reliability than would be found in an applied setting. To date, no research on inter-rater reliability has been conducted using the STABLE-2007.
Method: The present study was designed to establish the inter-rater reliability of the Static-99 and STABLE-2007 scored independently by trained Correctional Program Officers at a sex offender assessment unit within Correctional Services of Canada.
Results: Overall the inter-rater reliability for field coding of the Static-99 and STABLE-2007 was excellent for total scores and ranged from acceptable to excellent for all individual items of both measures.
Conclusions: Establishing good inter-rater reliability on assessment measures has the beneficial effect of ensuring accurate and high quality assessments and lends support to the credibility of the assessments when they are reviewed by external bodies for decision making.

Keywords: Static-99, Stable-2007, interrater reliability, field trial


The potential danger to public safety posed by those already known to have committed a sexual offense is a major concern for courts, policing organizations, and releasing institutions such as jails and prisons. As a consequence, risk assessment has become a critical part of many legal decisions (e.g., civil commitment evaluations, end-of-sentence evaluations, and allocation of treatment) and is considered important information for decision makers involved in the management of sex offenders, both in releasing organizations (e.g., jails and prisons) and among community supervision personnel. Risk assessment and risk measures have evolved considerably over the last two decades (e.g., Harris & Hanson, 2010; Hanson, 2005; Mann, Hanson, & Thornton, 2010). Distinct approaches to, and different generations of, risk assessment have been identified and discussed as part of this evolution (Andrews & Bonta, 2010; Bonta, 1996; Heilbrun, 1997). Actuarial risk assessment measures (static or dynamic) help identify which offenders are at highest risk to reoffend, while dynamic risk assessment measures add information regarding areas of concern and potential treatment targets, and provide focus for supervision efforts.

Actuarial measures from the STATIC family1, including the Static-99 (Hanson & Thornton, 2000), and the associated dynamic risk measure, the STABLE-2007 (Hanson, Harris, Scott, & Helmus, 2007), are among the most widely used tools for assessing static and dynamic risk among sexual offenders in Canada and the United States (McGrath, Cumming, Burchard, Zeoli, & Ellerby, 2010). Given the gravity of the decisions informed by risk assessment, it is vital for these measures to be: 1) reasonably standardized; 2) valid; and 3) reliable. Comprehensive coding manuals for both the Static-99 (Harris, Phenix, Hanson, & Thornton, 2003) and the STABLE-2007 (Fernandez, Harris, Hanson, & Sparks, 2014) contribute to improved standardization.

Various studies have demonstrated the validity of the Static-99 (see www.static99.org for an overview of over 60 replications) and the STABLE-2007 (Hanson, Harris, Scott, & Helmus, 2007) for predicting sexual reoffence. Research regarding the reliability of the Static-99 has consistently demonstrated high intraclass correlations (ICC) for the total score ranging from .90 to .98 (Barbaree, Seto, Langton & Peacock, 2001; Rettenberger, Matthes, Boer, & Eher, 2009) and acceptable ICCs for individual Static-99 items (Harris et al., 2003).

However, most of these studies have been based on the measure being scored under research conditions, which often results in higher reliability estimates. Research studies often present best-case estimates of reliability, as the raters generally have the same level of experience and training, and score the scale based on identical information and conditions (e.g., time constraints).

Coding of risk measures in applied settings may reasonably be expected to result in lower levels of reliability. In clinical settings, risk measures are typically completed by front-line staff who often have limited formal training in psychometrics or risk assessment. Additionally, raters may have access to different information, especially if they conduct their own interviews. Particularly for scales such as the STABLE-2007 or the PCL-R, different raters may ask different questions in an interview and consequently elicit meaningfully different information. Offenders' responses may also be influenced by the interpersonal style of the interviewer, by the offender's mood, or by the time of day. Even with the same sources of information accessible, real-world raters may selectively attend to different information (for example, some raters may directly look up institutional violations or notes in the system, whereas others may assume anything notable will be mentioned in other reports).

Consequently, real-world reliability studies of risk scales may conflate inter-rater reliability (the extent to which two raters can agree on a score) with some features of test-retest reliability (the extent to which the offender provides similar information at two different points in time). Although the concept becomes somewhat broader as a result, it remains imperative to understand the extent to which we can expect consistency in scores across diverse raters in real-world settings.

To date, three studies have examined the inter-rater reliability of the Static-99 in field applications. Boccaccini et al. (2012) found high levels of inter-rater reliability in correctional settings in Texas and New Jersey, but noted different total scores in 45% of cases. Murrie et al. (2009) found reasonable inter-rater reliability between the Static-99 scores of professionals working for the petitioner and respondent (ICC = .64) in SVP proceedings, but noted that score differences varied in the direction one would expect based on the evaluator's role (i.e., whether they represented the State or the offender). Notably, Quesada, Calkins, and Jeglic (2013) compared field raters' Static-99 scores to scores derived by researchers; inter-rater agreement for the total score was excellent (ICC = .92) and ranged from acceptable to excellent for individual items (.62 to .94). An earlier version of the STABLE tool, the STABLE-2000, demonstrated good reliability (ICC = .89) for total scores (Hanson et al., 2007). To date, we are aware of no published studies of the inter-rater reliability of the STABLE-2007.

The present study was designed to establish the inter-rater reliability of the Static-99 and STABLE-2007 at a sex offender assessment unit within Correctional Services of Canada, and to determine whether scoring of these measures in an applied setting met acceptable levels of inter-rater reliability.

Method

Participants

Participants were 55 adult males who had been convicted of a sexual offence under the Criminal Code of Canada and given a federal sentence (two years or more). Offenders serving sentences of two years or more fall under the jurisdiction of Correctional Services of Canada (CSC). During the study period (August 1, 2007 to August 1, 2008), all male federal offenders in the Ontario region of CSC were initially housed at an assessment unit, where they underwent various assessments to determine the most appropriate security level and program requirements. Sexual offenders were offered the opportunity to participate in a specialized assessment of their sexual behaviour, which was used to assess risk for sexual recidivism and to make appropriate treatment intensity recommendations. These assessments were completed by Correctional Program Officers. In the present study, sexual offenders undergoing the specialized assessment were asked whether they would be willing to participate in a study examining whether the measures used as part of the assessment produced repeatable scores when completed by a different rater for the same individual. Only those who agreed and signed the study consent form were interviewed for the study.

Procedure

Participants were asked if they would be willing to participate in a study examining the inter-rater reliability of the measures used by Correctional Program Officers as part of the intake specialized sex offender assessments. It was explained that a second Correctional Program Officer, blind to the Static-99 and STABLE-2007 scores completed by the first, would conduct a second interview and review of the participant's institutional files, and then score the Static-99 and STABLE-2007 based on that independent file review and interview information. Participants were informed that the study scoring of the measures would not be included in their official assessment report. Correctional Program Officers alternated the order of the study interview and the official assessment interview across subjects to mitigate order effects. On average, interviews lasted approximately two hours. File information available to the Correctional Program Officers included police reports for both sexual and non-sexual charges and convictions, documents from any previous incarcerations in a provincial correctional facility, court documents including the reasons for judgment and any victim impact statements, any prior psychiatric and/or psychological reports, and prior CSC documentation. All offenders had an official criminal record, relevant police reports, and, if they had ever served a federal sentence, prior CSC documentation. Most participants (> 90%) had additional collateral information from family member interviews.

Six Correctional Program Officers took part in the reliability study. Each possessed either an undergraduate degree in psychology or a college diploma in behavioral assessment and intervention, and each was trained on the Static-99 and STABLE-2007 by a certified Static-99/STABLE-2007 trainer. All but one of the assessors had at least one year of experience in the unit, meaning they would each have scored approximately 40 Static-99s and 40 STABLE-2007s prior to participating in the study. One assessor had less field experience but was part of the development team for the scales. The Correctional Program Officers rotated through the role of second assessor for the reliability study and were blind to each other's scores.

Measures

Static-99. The Static-99 is an instrument designed to assist in the prediction of sexual and violent recidivism among sexual offenders. Hanson and Thornton (1999) developed the measure based on follow-up studies from Canada and the United Kingdom with a total sample size of 1,301 sexual offenders. The Static-99 consists of 10 items and produces estimates of future risk based upon the number of risk factors present in any one individual. The risk factors include prior sexual offences, current (index) non-sexual violence, a history of non-sexual violence, number of previous sentencing dates, age less than 25 years, having male victims, never having lived with a lover for two continuous years, a history of non-contact sexual offences, having unrelated victims, and having stranger victims.

STABLE-2007. The STABLE-2007 was developed to assess change in intermediate-term risk status, identify assessment needs, and help predict recidivism in sexual offenders. Hanson and Harris (2000; Hanson et al., 2007) developed this risk assessment instrument based on a large prospective study from Canada and the U.S. states of Alaska and Iowa with a total sample size of 997 sexual offenders. The STABLE-2007 consists of 13 items and produces estimates of stable dynamic risk based upon the number of stable dynamic risk factors present in any one individual. The risk factors included are the presence of significant social influences, capacity for relationship stability, emotional identification with children, hostility toward women, general social rejection, lack of concern for others, impulsivity, poor problem solving skills, negative emotionality, sex drive and preoccupation, sex as coping, deviant sexual preference, and cooperation with supervision.

Overview of Analyses

Planned analyses were intraclass correlations (ICCs) and percent agreement (identical scores) between the two independent Correctional Program Officers across the 55 subjects, for both the individual items and the total scores of the Static-99 and STABLE-2007. A one-way random-effects ICC was used because the raters differed across cases (note that in one-way random-effects models there is no separate ICC for consistency versus absolute agreement). Single-measures ICCs are reported (average-measures ICCs apply when the risk score would be estimated as the average of multiple raters, which is not typically how these scales are used or reported in practice). For dichotomous Static-99 items, kappa is traditionally reported as the inter-rater reliability statistic, but it does not account for variation in raters. ICCs can be used for dichotomous variables (kappas were calculated for comparison purposes and were always within .01 of the ICC; e.g., .88 versus .89). For the STABLE-2007, given that the coding rules are somewhat more subjective and there is a much wider range of total scores, percent agreement is presented for exact total scores as well as for agreement within 1 and 2 points. Following Cicchetti (1994), ICC values of .75 and above were considered excellent, values between .60 and .74 good, and values between .40 and .59 fair.
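The statistics above can be computed directly from their textbook definitions. The following sketch (plain Python; the function names are ours and the rater scores are hypothetical, for illustration only) shows the one-way random-effects, single-measures ICC, the within-k-points agreement used for STABLE-2007 total scores, and Cohen's kappa for a dichotomous item:

```python
def icc_oneway(r1, r2):
    """One-way random-effects, single-measures ICC (ICC(1)) for two raters.

    r1 and r2 each hold one rater's scores for the same n offenders.
    """
    assert len(r1) == len(r2)
    n, k = len(r1), 2
    pairs = list(zip(r1, r2))
    grand = sum(r1 + r2) / (n * k)
    means = [(a + b) / k for a, b in pairs]
    # Between-targets and within-target mean squares from the one-way ANOVA.
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((a - m) ** 2 + (b - m) ** 2
              for (a, b), m in zip(pairs, means)) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def agreement_within(r1, r2, tol=0):
    """Percent of cases where the two raters' scores differ by at most tol."""
    hits = sum(abs(a - b) <= tol for a, b in zip(r1, r2))
    return 100 * hits / len(r1)

def kappa(r1, r2):
    """Cohen's kappa for two raters on a dichotomous (0/1) item."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    p1a, p1b = sum(r1) / n, sum(r2) / n
    pe = p1a * p1b + (1 - p1a) * (1 - p1b)                # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical scores for four offenders (illustration only).
rater1 = [0, 1, 2, 3]
rater2 = [1, 1, 2, 4]
print(round(icc_oneway(rater1, rater2), 2))     # 0.87
print(agreement_within(rater1, rater2, tol=0))  # 50.0 (identical scores)
print(agreement_within(rater1, rater2, tol=1))  # 100.0 (within 1 point)
```

With only two raters per case and no rater crossed with all cases, the one-way model is the appropriate choice, which is why no consistency/absolute distinction arises.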

Results

 

Table 1: Intra-class correlations for the Static-99 (k = 55)

Item                         ICC    95% CI     % agreement
Total Score                  .95    .92-.97    78
Young                        .90    .84-.94    98
Ever lived with              .91    .85-.95    96
Index non-sexual violence    .57    .36-.72    82
Prior non-sexual violence    .81    .69-.88    91
Prior sex offences           .90    .83-.94    89
Prior sentencing dates       .93    .88-.96    96
Non-contact sex offences     .71    .55-.82    93
Unrelated victims            .76    .63-.85    89
Stranger victims             .89    .82-.93    96
Male victims                 .86    .77-.91    96


Table 2: Intra-class correlations for the STABLE-2007 (k = 54)

Item                                  ICC    95% CI     % agreement
Total Score                           .86    .77-.92    72 (within 2 points); 48 (within 1 point); 13 (identical score)
Significant social influences         .37    .12-.58    61
Capacity for relationship stability   .83    .73-.90    91
Emotional ID with children            .81    .67-.90    90
Hostility toward women                .71    .55-.82    76
General social rejection              .61    .41-.75    66
Lack of concern for others            .70    .53-.81    72
Impulsive                             .79    .66-.87    72
Poor problem solving                  .55    .34-.71    63
Negative emotionality                 .51    .28-.68    65
Sex drive/preoccupation               .72    .57-.83    74
Sex as coping                         .67    .49-.89    70
Deviant sexual preferences            .77    .63-.86    80
Cooperation with supervision          .66    .48-.79    72

Note. Due to one rater leaving some items blank, the sample size was 53 cases for General social rejection, Sex as coping, and Sex drive/preoccupation. For Emotional identification with children, only 39 cases were analyzed, as this item is not scored for offenders with no victims under 14 years old; for this item, in three cases one rater scored the item and the other omitted the rating, and analyses are presented only for cases in which both raters scored the item.


As can be seen from Tables 1 and 2, agreement between the two independent raters was high for total scores on both the Static-99 (ICC = .95) and the STABLE-2007 (ICC = .86). Total scores were identical between raters in 78% of cases on the Static-99 and were within 2 points in 72% of cases on the STABLE-2007 (within 1 point in 48% of cases, and identical in 13% of cases). ICCs for the individual Static-99 items ranged from .57 to .93 with a median of .88. Scores were identical between raters over 90% of the time on 7 of the 10 individual items. The two Static-99 items with the lowest inter-rater reliability were Non-contact sexual offences (ICC = .71) and Index non-sexual violence (ICC = .57); all other items had ICCs in the excellent range.

The ICCs for the individual STABLE-2007 items ranged from .37 to .83 with a median of .70. Four item ICCs were in the excellent range, six were good, two were fair (Poor problem solving, ICC = .55; Negative emotionality, ICC = .51), and one fell just below fair (Significant social influences, ICC = .37). Despite the more modest ICCs for the STABLE items, percent agreement was still generally good (although lower than for Static-99 items). Nine of the 13 items had identical scores more than 70% of the time (median percent agreement = 72%). Capacity for relationship stability had the highest percent agreement, with identical scores between assessors 91% of the time; the lowest agreement was for Significant social influences (61%). Overall, the inter-rater reliability for field coding of the Static-99 and STABLE-2007 was excellent for total scores and generally ranged from acceptable to excellent for individual items, with a few STABLE items falling in the fair and below-fair regions.

Discussion

Establishing good interrater reliability on assessment measures has the beneficial effect of ensuring accurate and high quality sex offender assessments. Additionally, evidence of high interrater reliability lends support to the credibility of the assessments completed when they are reviewed by external bodies, such as the National Parole Board or the courts during Dangerous Offender Hearings or Civil Commitment Hearings.

The present study provides evidence of the inter-rater reliability of the Static-99 and STABLE-2007 completed in an applied setting by Correctional Program Officers. Notable are the high levels of agreement for total scores on both measures and for many individual items. Admittedly, inter-rater reliability fell below the "good" threshold for three STABLE-2007 items. In general, reliability of individual items was more modest on the STABLE-2007 than on the Static-99, suggesting greater variability among raters. This is not surprising, given that STABLE-2007 items require more discretion and subjectivity than Static-99 items (and rely more heavily on the interview, which differed across raters). Despite the lower reliability of some individual items, however, STABLE-2007 total scores still demonstrated excellent reliability.

For illustrative purposes, the two items on each measure with the lowest levels of inter-rater agreement are discussed in more detail. On the Static-99, Non-contact sexual offences and Index non-sexual violence had the lowest levels of agreement. An examination of the cases in which raters were discrepant on these items suggested that raters would have benefited from referring to the Static-99 coding manual while completing the scale. For example, it appeared that one rater believed a conviction for "Uttering Threats" did not meet the criteria for Index non-sexual violence, despite the inclusion of "Threatening" on the list of example convictions for non-sexual violence in the Static-99 coding manual (p. 28). Similarly, the discrepancy on Non-contact sexual offences appeared to be primarily due to confusion over the difference between convictions for Possession of Child Pornography (a non-contact sexual offence) and Making Child Pornography (a contact offence).

These findings have important implications because in some jurisdictions Static-99 scores weigh heavily in decisions about sex offender placement, treatment, and management. A review of the discrepant total scores on Static-99 revealed that the discrepancy resulted in a different nominal risk category (low, moderate-low, moderate-high, high) in 3 out of 55 cases (5%).

In one case, one assessor's total score placed the subject in the moderate-high risk category while the other assessor's total score placed him in the high risk category. In two cases, the difference between the two assessors' scores spanned the moderate-low and moderate-high risk categories.
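Such category flips arise whenever a scoring disagreement straddles a cut-off. As a sketch, the commonly cited original Static-99 bands (low 0-1, moderate-low 2-3, moderate-high 4-5, high 6+; these cut-offs are our assumption here and should be confirmed against the current coding manual before any operational use) can be expressed as:

```python
def static99_category(total: int) -> str:
    """Map a Static-99 total score to a nominal risk category.

    Cut-offs follow the commonly cited original Static-99 bands
    (assumed for illustration; confirm against the current manual).
    """
    if total <= 1:
        return "low"
    if total <= 3:
        return "moderate-low"
    if total <= 5:
        return "moderate-high"
    return "high"

# A one-point disagreement between raters can straddle a boundary:
print(static99_category(3))  # moderate-low
print(static99_category(4))  # moderate-high
```

This illustrates why even high ICCs do not guarantee identical nominal labels: a single point of disagreement near a boundary changes the category reported to decision makers.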

On the STABLE-2007, the items with the lowest inter-rater agreement were Significant social influences and Negative emotionality. Both items appeared to be strongly influenced by discrepant information provided during the independent interviews. Participants identified dissimilar significant social influences to different interviewers, or identified several social influences to one interviewer and few or none to the other. The discrepancies on Negative emotionality also appeared related to discrepant interview information: some participants presented as hostile and ruminative during one interview and then denied similar thoughts and emotions during the next.

It is possible that the demand characteristics of the two interviews affected the quality and type of information provided. A more extensive evaluation of the interview notes from both sessions may reveal whether offenders interviewed for the study felt less compelled to present themselves in a favourable light because they were aware that the information would not be used in the official report or to make decisions about their case within Correctional Services of Canada. It is also possible that negative emotionality is more susceptible to natural mood fluctuations, or that the differences reflected reactions to the raters' different interview styles. This underscores the importance of having good collateral information when completing assessments using the STABLE-2007. Overall, several of the differences in STABLE scoring appeared attributable to differences in information rather than differences in interpretation or application of the coding rules. Consequently, studies in which raters score the scale from identical information would likely achieve higher inter-rater reliability. The present study, however, is more representative of typical field settings, where different evaluators tend to conduct their own interviews, which is likely to affect scoring decisions.

A review of the discrepant STABLE-2007 total scores revealed that the discrepancy resulted in a different nominal risk/need category (low, moderate, high) in 6 of 54 cases (11%). In all but one case, the difference between the two assessors' scores fell between the moderate and high risk/need categories (in the remaining case, a one-point scoring difference placed the offender in the low risk/need category for one assessor and the moderate risk/need category for the other).

Similar to Quesada et al. (2013), the present study provides evidence that practitioners can obtain inter-rater reliability on par with researchers. While encouraging with respect to field coding of the Static-99 and STABLE-2007, the present study is based on a small sample (n = 55) in a unit that specializes in sex offender assessments using these tools, and replication with a larger sample is needed. Further, as Quesada et al. (2013) note, future research should evaluate whether there are systematic differences in the way field practitioners and researchers score (e.g., whether researchers tend to score higher).

It should also be noted that the Correctional Program Officers in the current study had all been trained by a certified trainer, all but one had previously scored at least 40 Static-99s and STABLE-2007s, and all worked under the direct supervision of a highly experienced psychologist who was a certified trainer (although the ratings used for this study were completed before the cases were discussed with the supervisor). This could explain the higher reliability than that found by Murrie et al. (2009). It also speaks to the importance of safeguarding quality by ensuring assessors are properly trained, receive clinical supervision from an experienced assessor, and preferably have a mentorship or peer review system to help address potential rater drift. Hanson, Helmus, and Harris (2015) found that conscientious evaluators in the Dynamic Supervision Project (i.e., those who completed and submitted all forms) produced more accurate evaluations (AUCs in the 0.76 to 0.80 range across the various versions of the STABLE and Static; n = 343) than less conscientious evaluators (AUCs in the 0.58 to 0.68 range; n = 421). It appears, then, that when attention is given to training, supervision, and the quality of assessments, and when evaluators are committed to the process, risk assessment instruments are both more reliable and potentially more accurate at predicting sexual reoffence. Both qualities are critical to maximizing public safety when decisions rest on actuarial and dynamic risk assessment tools, and both increase the fairness and consistency of assessments, with profound implications for the civil liberties of offenders.

References

  1. Andrews, D. A., & Bonta, J. (2010). The psychology of criminal conduct. Cincinnati: Anderson Publishing Co.
  2. Barbaree, H., Seto, M., Langton, C., & Peacock, E. (2001). Evaluating the predictive accuracy of six risk assessment instruments for adult sex offenders. Criminal Justice and Behavior, 28, 490-521. doi: 10.1177/009385480102800406
  3. Boccaccini, M. T., Murrie, D. C., Mercado, C., Quesada, S., Hawes, S., Rice, A. K., & Jeglic, E. (2012). Implications of Static-99 field reliability findings for score use and reporting. Criminal Justice and Behavior, 39, 42-58. doi:10.1177/0093854811427131
  4. Bonta, J. (1996). Risk-needs assessment and treatment. In A. T. Harland (Ed.), Choosing correctional options that work: Defining the demand and evaluating the supply (pp. 18-32). Thousand Oaks, CA: Sage.
  5. Cicchetti, D. V. (1994). Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment, 6, 284-290.
  6. Fernandez, Y., Harris, A. J. R., Hanson, R. K., & Sparks, J. (2014). STABLE-2007 coding manual - revised 2014. Unpublished report. Ottawa, ON: Public Safety Canada.
  7. Hanson, R. K. (2005). Twenty years of progress in violence risk assessment. Journal of Interpersonal Violence, 20, 212-217. doi:10.1177/0886260504267740
  8. Hanson, R. K., & Harris, A. J. R. (2000). The sex offender need assessment rating (SONAR): A method for measuring change in risk levels. User Report 2000-01. Department of the Solicitor General of Canada, Ottawa. Available at www.ps-sp.gc.ca/res/cor/rep.
  9. Hanson, R. K., Harris, A. J. R., Scott, T. L., & Helmus, L. (2007). Assessing the risk of sexual offenders on community supervision: The Dynamic Supervision Project. Ottawa, ON: Public Safety Canada.
  10. Hanson, R. K., Helmus, L. M., & Harris, A. J. R. (2015). Assessing the risk and needs of supervised sexual offenders: A prospective study using STABLE-2007, Static-99R, and Static-2002R. Criminal Justice and Behavior, 42, 1205-1224. doi:10.1177/0093854815602094
  11. Hanson, R. K., & Thornton, D. (2000). Improving risk assessments for sex offenders: A comparison of three actuarial scales. Law and Human Behavior, 24, 119-136. doi:10.1023/A:1005482921333
  12. Hanson, R. K., & Thornton, D. (1999). STATIC-99: Improving actuarial risk assessments for sex offenders. User Report 1999-02. Department of the Solicitor General of Canada, Ottawa, Canada. Available at www.ps-sp.gc.ca/res/cire/rep.
  13. Harris, A. J. R., & Hanson, R. K. (2010). Clinical, actuarial and dynamic risk assessment of sexual offenders: Why do things keep changing? Journal of Sexual Aggression, 16, 296-310. doi:10.1080/13552600.2010.494772
  14. Harris, A., Phenix, A., Hanson, R. K., & Thornton, D. (2003). Static-99: Coding rules revised 2003. Ottawa, ON: Solicitor General Canada.
  15. Heilbrun, K. (1997). Prediction versus management models relevant to risk assessment: The importance of legal decision-making context. Law and Human Behavior, 21, 347-359. doi:10.1023/A:1024851017947
  16. Mann, R. E., Hanson, R. K., & Thornton, D. (2010). Assessing risk for sexual recidivism: Some proposals on the nature of psychologically meaningful risk factors. Sexual Abuse: A Journal of Research and Treatment, 22, 191-217. doi:10.1177/1079063210366039
  17. McGrath, R. J., Cumming, G. F., Burchard, B. L., Zeoli, S., & Ellerby, E. (2010). Current practices and emerging trends in sexual abuser management: The Safer Society 2009 North American Survey. Brandon, VT: Safer Society Press.
  18. Murrie, D., Boccaccini, M., Turner, D., Meeks, M., Woods, C., & Tussey, C. (2009). Rater (dis)agreement on risk assessment measures in sexually violent predator proceedings: Evidence of adversarial allegiance in forensic evaluation? Psychology, Public Policy, and Law, 15, 19-53. doi:10.1037/a0014897
  19. Quesada, S. P., Calkins, C., & Jeglic, E. L. (2013). An examination of the interrater reliability between practitioners and researchers on the Static-99. International Journal of Offender Therapy and Comparative Criminology, 58, 1364-1375. doi: 10.1177/0306624X13495504
  20. Rettenberger, M., Matthes, A., Boer, D., & Eher, R. (2009). Prospective actuarial risk assessment: A comparison of five risk assessment instruments in different sexual offender subtypes. International Journal of Offender Therapy and Comparative Criminology, 2, 169-186. doi:10.1177/0306624X08328755

Footnote

1The STATIC family of scales refers collectively to Static-99, Static-99R, Static-2002, and Static-2002R.

Author address

Yolanda M. Fernandez
Correctional Services of Canada
Regional Headquarters (Ontario)
433 Union St.
Kingston, Ontario
K7L 4Y8
Yolanda.fernandez@csc-scc.gc.ca



 
