Reliability of the Tuck Jump Assessment Using Standardized Rater Training


Bibliographic Details
Main Authors: Kevin Racine, Meghan Warren, Craig Smith, Monica R. Lininger
Format: Article
Language: English
Published: North American Sports Medicine Institute, 2021
Online Access: https://doaj.org/article/83cbc60515844ecc99c221791c05b658
Summary:

# BACKGROUND

The Tuck Jump Assessment (TJA) is a test used to assess technique flaws during a 10-second, high-intensity jumping bout. Although the TJA has broad clinical applicability, there is no standardized training to maximize the TJA's measurement properties.

# HYPOTHESIS/PURPOSE

To determine the reliability of the TJA among varied healthcare professionals following an online standardized training program. The authors hypothesized that the total score would have moderate to excellent levels of intra- and interrater reliability.

# STUDY DESIGN

Cross-sectional reliability study.

# METHODS

A website was created by a physical therapist (PT) with videos, written descriptors of the 10 TJA technique flaws, and examples of what constituted no flaw, a minor flaw, or a major flaw (scored 0, 1, or 2) using published standards. The website was then validated, for both face and content validity, by four experts. Three raters of different professions, a PT, an athletic trainer (AT), and a Strength and Conditioning Coach Certified (SCCC), were selected for their expertise with injury and movement. The raters completed the online standardized training, scored 41 videos of participants' TJAs, and then scored the same videos again two weeks later. Reliability estimates were determined using intraclass correlation coefficients (ICCs) for the total score of the 10 technique flaws and Krippendorff's α (K α) for the individual technique flaws (ordinal).

# RESULTS

Eleven of 50 individual technique-flaw estimates were above the acceptable level (K α = 0.80). The total score had moderate interrater reliability in both sessions (Session 1: ICC~2,2~ = 0.64; 95% confidence interval (CI) 0.34-0.81; standard error of measurement (SEM) = 0.66 technique flaws; Session 2: ICC~2,2~ = 0.56; 95% CI 0.04-0.79; SEM = 1.30). Rater 1 had good reliability (ICC~2,2~ = 0.76; 95% CI 0.54-0.87; SEM = 0.26), rater 2 had moderate reliability (ICC~2,2~ = 0.62; 95% CI 0.24-0.80; SEM = 0.41), and rater 3 had excellent reliability (ICC~2,2~ = 0.98; 95% CI 0.97-0.99; SEM = 0.01).

# CONCLUSION

All raters had at least good reliability estimates for the total score. The same level of consistency was not seen when evaluating each individual technique flaw. These findings suggest that the total score may not be as accurate as the individual technique flaws and should be used with caution.

# LEVEL OF EVIDENCE

3b
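
As context for the statistics reported above, the sketch below shows one common way to compute an average-measures ICC for total scores and Krippendorff's α for a single ordinal technique flaw. It is a minimal illustration, not the authors' analysis: it assumes the third-party Python packages `pingouin` and `krippendorff`, all scores are synthetic placeholders rather than study data, and the SEM line uses the common SD × √(1 − ICC) definition, which the abstract does not specify.

```python
# Minimal sketch of the reliability statistics named in the abstract.
# Assumes: pip install pingouin krippendorff. All data are hypothetical.
import numpy as np
import pandas as pd
import pingouin as pg
import krippendorff

rng = np.random.default_rng(0)
n_videos, n_raters = 41, 3  # matches the study design described above

# Hypothetical total scores: sum of 10 flaws each scored 0/1/2 -> 0..20.
scores = rng.integers(0, 21, size=(n_raters, n_videos))

# ICC for the total score needs a long-format table (video, rater, score).
long = pd.DataFrame({
    "video": np.tile(np.arange(n_videos), n_raters),
    "rater": np.repeat(np.arange(n_raters), n_videos),
    "score": scores.ravel(),
})
icc = pg.intraclass_corr(data=long, targets="video",
                         raters="rater", ratings="score")
# The "ICC2k" row is the two-way random-effects, average-measures ICC.
icc2k = icc.set_index("Type").loc["ICC2k", "ICC"]
print(icc.set_index("Type").loc["ICC2k", ["ICC", "CI95%"]])

# SEM under one common definition: SD of scores * sqrt(1 - ICC).
sem = long["score"].std(ddof=1) * np.sqrt(1 - icc2k)
print(f"SEM (total score): {sem:.2f}")

# Krippendorff's alpha for one ordinal flaw (0/1/2 per rater per video);
# reliability_data is shaped (raters, units), with np.nan for missing.
flaw = rng.integers(0, 3, size=(n_raters, n_videos))
alpha = krippendorff.alpha(reliability_data=flaw,
                           level_of_measurement="ordinal")
print(f"Krippendorff alpha (ordinal): {alpha:.2f}")
```

With synthetic random scores both estimates will hover near zero; the point of the sketch is the data layout each statistic expects (long format for the ICC, a raters-by-units matrix for α), which mirrors how the three raters' two scoring sessions of 41 videos would be organized.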