Survey participation is important because it means employees are sharing their insights on the areas that are going well and, most importantly, on those that need attention. This article explains how to interpret the participation data available in the dashboard. For information on how to improve survey participation, see Tips for improving survey participation.
This article covers:
- Aggregated participation rate
- Survey participation per round
- Engagement question participation
- Participation rate benchmark
- Aggregate score accuracy
To get started, open the Engagement overview.
Aggregated participation rate
The top part of the participation panel shows the aggregated survey participation rate over a length of time set by the account administrator. This length of time is determined by the “Time for all data” setting, which is either 3, 6, or 12 months, and is counted back from the latest survey round. For more information on data aggregation settings, see Data aggregation resources.
This means that if an employee has participated in a survey during that length of time, they are counted in the participation rate. In other words, this feature answers the question: "How many employees have participated in a survey at least once in the last 3/6/12 months?" (depending on your configuration).
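Peakon's actual implementation is not public, but the counting logic described above can be sketched as follows. All names and the sample data are hypothetical; the only rule taken from the article is "an employee counts if they participated at least once within the aggregation window".

```python
from datetime import date, timedelta

def aggregated_participation_rate(surveyed, responded, window_months, latest_round):
    """Share of surveyed employees who answered at least one survey
    within the aggregation window ending at the latest survey round.

    surveyed / responded: dicts mapping employee id -> list of survey dates.
    """
    # Approximate the 3/6/12-month window counted back from the latest round.
    window_start = latest_round - timedelta(days=30 * window_months)

    def in_window(dates):
        return any(window_start <= d <= latest_round for d in dates)

    surveyed_ids = {e for e, dates in surveyed.items() if in_window(dates)}
    responded_ids = {e for e, dates in responded.items() if in_window(dates)}
    return len(responded_ids & surveyed_ids) / len(surveyed_ids)

# Made-up example: 4 employees surveyed in the window, 3 answered at least once.
latest = date(2024, 6, 1)
surveyed = {"a": [latest], "b": [latest], "c": [latest], "d": [latest]}
responded = {"a": [latest], "b": [latest], "c": [date(2024, 5, 1)]}
print(aggregated_participation_rate(surveyed, responded, 3, latest))  # 0.75
```

Note that employee "c" answered an earlier round but not the latest one; they still count toward the aggregated rate because they participated at least once in the window.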
When viewing the Health & Wellbeing (H&W) or Diversity & Inclusion (D&I) dashboards, the same logic applies.
For more information on the H&W and D&I dashboards, see The Health & Wellbeing (H&W) dashboard and Overview of the Diversity & Inclusion (D&I) Dashboard.
Survey participation per round
To view the non-aggregated participation rate of each survey round, click the Expand icon.
Because these rates are not aggregated, you can see exactly how many employees were surveyed and how many answered in each individual round.
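The per-round view is a straightforward ratio for each round in isolation. A minimal sketch with invented round data:

```python
# Hypothetical per-round data: round label -> (employees surveyed, employees who answered).
rounds = {
    "2024-04": (120, 84),
    "2024-05": (118, 90),
    "2024-06": (125, 95),
}

# Each round's rate is computed independently; no aggregation window is involved.
per_round_rates = {label: answered / surveyed
                   for label, (surveyed, answered) in rounds.items()}

for label, rate in sorted(per_round_rates.items()):
    print(f"{label}: {rate:.0%}")
```

Contrast this with the aggregated rate above, where one employee answering any round in the window counts once across the whole window.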
Engagement question participation
This section of the participation panel shows the number of people who have been asked an engagement question within the 'time for all data' window. It provides more context for your team's engagement score, since the aggregation windows are consistent across the entire product.
In this case, the engagement question refers to one of the following questions:
- Main engagement question: How likely is it you would recommend [Company Name] as a place to work?
Additional engagement questions that can be activated:
- ‘Loyalty’ outcome question: If you were offered the same job at another organization, how likely is it that you would stay with [Company Name]?
- ‘Satisfaction’ outcome question: Overall, how satisfied are you working for [Company Name]?
- ‘Belief’ outcome question: How likely is it you would recommend [Company Name] products or services to friends and family?
When enabled, clicking on surveyed employees reveals the list of employees surveyed in the given segment. This list refers to the employees who have been asked an engagement question over the 'time for all data' aggregation period. When an employee leaves, their name remains in this list for as long as your 'time for former employees' aggregation period dictates (1, 3, or 6 months).
Participation rate benchmark
Benchmark for participation works the same as the benchmark for scores and is determined by the context selected in the left menu. When viewing the company scores as a whole, the benchmark is based on the industry or Peakon average, depending on the setting chosen by the administrator.
If the context is set to a particular segment, the benchmark is based on the company participation rate (or whatever benchmark has been configured for that segment by the administrator).
Typically, as a manager logging into Peakon and viewing your team dashboard, your team's participation rate is benchmarked against the company participation rate.
Aggregate score accuracy
The aggregated score accuracy indicates how reliable the scores are, taking participation rates into account. The score accuracy is rated low, medium, or high.
- High - Peakon expects that if all employees answered, the score would change by no more than 0.2
- Medium - Peakon expects that if all employees answered, the score would change by between 0.2 and 0.5
- Low - Peakon expects that if all employees answered, the score would change by more than 0.5
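The three bands above can be expressed as a simple threshold function. This is only an illustration of the published thresholds; how Peakon estimates the expected change itself is not described in this article.

```python
def score_accuracy(expected_change):
    """Classify score accuracy from the expected change in score
    if every employee had answered (thresholds from the article)."""
    if expected_change <= 0.2:
        return "high"
    elif expected_change <= 0.5:
        return "medium"
    return "low"

print(score_accuracy(0.1))  # high
print(score_accuracy(0.3))  # medium
print(score_accuracy(0.7))  # low
```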
It may be the case that you have a low participation rate and high score accuracy. This can happen for larger segments, where only a small number of responses is needed to produce an accurate score. A good analogy is polls for general elections: they normally sample a very small percentage of the population (< 0.1%), yet are extremely accurate.
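The polling analogy rests on standard sampling theory: for a simple random sample, the margin of error depends on the number of responses, not on the size of the segment. A quick illustration of the conventional 95% margin of error for a proportion (this formula is textbook statistics, not Peakon's documented method):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion,
    assuming a simple random sample (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Even 400 responses give roughly a +/- 5 percentage point margin,
# whether the segment has 1,000 employees or 100,000.
print(f"{margin_of_error(400):.3f}")  # 0.049
```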
Score accuracy is designed to encourage action planning even when participation rates are lower; in such situations, it is a useful indicator of how reliable the scores are for action planning initiatives.