
Transcript of ILRHR555: 6 Common Mistakes to Avoid When Interpreting Data

© 2013 eCornell. All rights reserved. All other copyrights, trademarks, trade names, and logos are the sole property of their respective owners.

 


   

1. Reading too much into averages
Summarizing data using averages can be useful, but also misleading. For example, if 2 of 3 business units have turnover rates near 10% and the third has a turnover rate of 100%, the average turnover rate (about 40%) does not describe turnover in any of the business units accurately. In this case, we might just want to report the three turnover rates separately rather than report the average.
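A quick numeric sketch of the example above (the 10%, 10%, and 100% turnover rates come from the text; the point is that the average lands far from every individual unit):

```python
# Turnover rates for the three business units in the example above.
rates = [0.10, 0.10, 1.00]

average = sum(rates) / len(rates)  # 0.40, i.e. about 40%

# The average describes none of the units well, so report each rate separately.
print(f"Average turnover: {average:.0%}")
for unit, rate in enumerate(rates, start=1):
    print(f"Unit {unit} turnover: {rate:.0%}")
```

Here the average (40%) is at least 30 percentage points away from every unit's actual rate.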

2. Extrapolating beyond the data
Sometimes people go beyond the data at hand to make unsupported conclusions. For example, if we look at performance output in the first 6 months on the job, we might see that performance increases twice as fast for those who received training. However, this does not mean that performance will continue to increase twice as fast for the next 6 months. This would be extrapolating beyond the data. We would need to follow employees for another 6 months to see if this pattern continues.
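A minimal sketch of what naive extrapolation looks like, using made-up monthly output numbers (not from the text):

```python
# Hypothetical monthly output for trained employees over the observed 6 months.
months = [1, 2, 3, 4, 5, 6]
output = [10, 14, 18, 22, 26, 30]  # grows 4 units per month in this window

# Trend observed WITHIN the data.
slope = (output[-1] - output[0]) / (months[-1] - months[0])  # 4.0 units/month

# A straight-line forecast for month 12 silently assumes the trend continues
# unchanged -- the data only support conclusions inside months 1-6.
naive_month_12 = output[-1] + slope * (12 - months[-1])  # 54.0, unsupported
```

The arithmetic is fine; the mistake is treating `naive_month_12` as a finding rather than an assumption. Only collecting another 6 months of data would tell us whether the slope holds.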

3. Accepting results based on small samples
It is usually helpful to break out results (e.g., engagement survey data) by particular groups (e.g., men vs. women, hourly vs. salaried). However, we have to be careful when making comparisons when some of the groups are small. For example, if we see that 65% of male managers are highly engaged, but only 33% of female managers are highly engaged, we should ask how many managers are in each group. There may be 80 male managers but only 3 female managers, in which case the percentage for females is highly unstable because it is based on such a small number.
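The instability is easy to see numerically. Using the group sizes from the example (80 male managers at 65% engaged, so 52 people, and 1 of 3 female managers engaged):

```python
# Group sizes from the example above.
male_engaged, male_total = 52, 80    # 52/80 = 65% highly engaged
female_engaged, female_total = 1, 3  # 1/3 = about 33% highly engaged

def engagement_pct(engaged, total):
    return 100 * engaged / total

# With only 3 people, one manager answering differently moves the group's
# percentage by more than 33 points -- the statistic is highly unstable.
swing = (engagement_pct(female_engaged + 1, female_total)
         - engagement_pct(female_engaged, female_total))
```

One changed answer among 80 male managers moves their rate by 1.25 points; one changed answer among 3 female managers moves theirs by over 33 points.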

ILRHR555: HR Analytics for Business Decisions Cornell University ILR School
Tool: The Purpose of Analysis 6 Common Mistakes to Avoid When Interpreting Data



4. Forgetting that most analyses provide estimates vs. precise values
People might use statistics to make statements such as “our data suggest that if we can reduce turnover by 10%, we expect to see a 15% increase in gross sales.” Our data are almost never that exact, so it is more reasonable to think about these as estimates rather than precise values. A more cautious statement might be “our data suggest that if we can reduce turnover by 10%, we expect to see an increase in gross sales. Our best guess is that the increase is 15%, but it could be anywhere between 2% and 25%.”
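A small sketch of reporting the estimate together with its plausible range, using the numbers from the example (the 2%–25% interval is illustrative, taken from the text rather than computed):

```python
# Best-guess estimate and plausible range from the example above.
best_guess = 15.0
low, high = 2.0, 25.0

# Phrase the result as an estimate with a range, not a precise value.
claim = (f"If turnover falls by 10%, we expect gross sales to rise by roughly "
         f"{best_guess:.0f}%, though it could be anywhere from "
         f"{low:.0f}% to {high:.0f}%.")
print(claim)
```

Carrying the range through to the final sentence keeps the audience from treating 15% as a guarantee.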

5. Reading too much into top vs. bottom comparisons
It is common for companies to report key metrics on a scorecard or dashboard. One way this is done is by reporting who the top and bottom performers are on a given metric. For example, a company might report names of employees who ranked in the “top 10” and “bottom 10” in total sales volume for the last quarter. You might conclude the names at the top are your best performers and those at the bottom are your poor performers. But if the salespeople have different territories, for example, the high performers might simply have larger territories with more customers, making it easier for them to sell more than individuals who have smaller sales areas.
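A toy illustration with made-up numbers (the names and figures are hypothetical, not from the text): ranking by raw totals can reverse once territory size is taken into account.

```python
# Hypothetical sales data: raw totals favor reps with larger territories.
reps = {
    "Ana": {"sales": 500, "customers": 1000},  # large territory
    "Ben": {"sales": 300, "customers": 400},   # small territory
}

# "Top performer" by raw total sales.
top_by_total = max(reps, key=lambda r: reps[r]["sales"])

# "Top performer" by sales per customer in the territory.
top_by_rate = max(reps, key=lambda r: reps[r]["sales"] / reps[r]["customers"])
```

Ana leads on raw totals (500 vs. 300), but Ben converts 0.75 sales per customer to Ana's 0.50, so the per-opportunity ranking flips. Which metric is "right" depends on what the scorecard is meant to capture.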

6. Failing to consider whether results are due to chance
Most HR data contain an element of “random error,” meaning that the data we collect only approximate the “true” value. For example, different people might interpret engagement survey questions a little differently, and this adds some element of random error to our data. We use statistics to test whether differences are “real” versus simply due to chance caused by random error. Be careful not to accept simple “eyeball” comparisons between two values – the difference could be random error rather than a real difference.
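A small simulation makes the point concrete (the 50% engagement rate and group size of 30 are illustrative assumptions, not from the text): even when two groups have the *same* true engagement rate, sampling alone frequently produces gaps large enough to look meaningful by eye.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Both groups share the SAME true engagement rate (50%). How often does
# chance alone produce an "eyeball" gap of 10+ percentage points between
# two samples of 30 people each?
def observed_rate(n, true_rate=0.5):
    """Fraction of n simulated respondents who report being engaged."""
    return sum(random.random() < true_rate for _ in range(n)) / n

trials = 10_000
big_gaps = sum(abs(observed_rate(30) - observed_rate(30)) >= 0.10
               for _ in range(trials))
chance_gap_share = big_gaps / trials  # a large share, despite no real difference
```

With no real difference at all, a double-digit gap still appears in a large fraction of the simulated comparisons, which is exactly why a formal statistical test should back up any eyeball comparison.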