The idea of comparing one's own work with that of another organization often provokes discomfort – either because one's own project is perceived as unique or project managers fear (justifiably, unfortunately) that donors will make their funding decisions based on comparative results.
Indeed, purely quantitative comparisons alone do not provide a good basis for such decisions. Rather, it is important to interpret the data and to embed it in context using qualitative statements.
For example, students’ rates of transition into vocational training in structurally weak regions cannot be usefully compared with transition rates in regions where apprenticeships are more widely available. If you want to interpret the figures, you’ll need to take labor-market conditions into account.
Nevertheless, comparisons are a key element of data analysis. Findings can be assessed, discussed and further developed only with the help of comparisons.
The mentor-group supervisors are tasked with collecting monitoring data at regular intervals, and with forwarding this to the project leaders.
To ensure consistent data quality, the mentor-group supervisors are given support, particularly before and shortly after the project’s start. The project management team reviews the data while consolidating it and again when checking it for plausibility. Discussions within the leadership team serve as an additional quality check.
1. Before-after comparison
In before-after comparisons, changes over time are presented.
Example: You want to determine whether grades among students in a class that took part in YEA’s math tutoring services have changed – and in which direction.
Possible conclusions: The class’s overall average grade has improved by 0.6 points within two school years (from 3.5 to 2.9). To determine the degree to which this can be attributed to YEA, a comparison is needed. For example, grades can be compared within the class (column on the left), between students who participated in the project (column in the middle) and those who did not (column on the right). Without such a comparison, it could also be that the whole class’s grades improved simply because it has a new math teacher.
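A before-after comparison with a participant/non-participant split can be sketched in a few lines of Python. All grades below are invented for illustration (German grading scale, lower is better):

```python
# Before-after comparison of average grades, split by project participation.
# All numbers are invented for illustration; German grades (lower = better).
def average(grades):
    return sum(grades) / len(grades)

# (grade_before, grade_after, participated_in_tutoring)
students = [
    (4.0, 3.0, True), (3.5, 2.8, True), (4.2, 3.4, True),
    (3.0, 2.9, False), (2.8, 2.7, False), (3.5, 3.3, False),
]

def improvement(group):
    before = average([s[0] for s in group])
    after = average([s[1] for s in group])
    return round(before - after, 2)  # positive value = grades got better

participants = [s for s in students if s[2]]
others = [s for s in students if not s[2]]

print("Improvement, participants:    ", improvement(participants))
print("Improvement, non-participants:", improvement(others))
```

A real analysis would also check the group sizes and whether the two groups are comparable to begin with.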
2. Target-actual comparison
Target-actual comparisons contrast the desired objectives with the actual results.
Example 1: You want to determine whether the share of youths able to find a vocational-training place directly after finishing secondary school has developed as hoped.
In the first year, the project did not achieve its objectives. Thus, you might consider recalibrating the project’s content and processes.
At the same time, you should consider whether the objectives were too ambitious and may need to be adjusted. Here too, a comparison with similar projects can be helpful.
After the target value was achieved in the previous year, the transition rate declines again. This needn’t be cause for alarm, but it does warrant attention.
The reasons for failing to reach the goal could lie either inside or outside the project itself: Was there a decline in quality? Have the target group’s needs changed in a way that demands a response? Has the number of apprenticeships offered in the region decreased?
A look back at the problem tree can also be helpful here.
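A simple target-actual comparison can be sketched as follows; the target value and the yearly transition rates are invented for illustration:

```python
# Target-actual comparison of transition rates into vocational training.
# The target and the yearly values are invented for illustration.
target_rate = 0.60  # hypothetical objective: 60% find a training place

actual_by_year = {2019: 0.48, 2020: 0.62, 2021: 0.55}

for year, actual in sorted(actual_by_year.items()):
    gap = round(actual - target_rate, 2)
    status = "target met" if actual >= target_rate else "target missed"
    print(f"{year}: actual {actual:.0%} vs target {target_rate:.0%} "
          f"-> {status} (gap {gap:+.2f})")
```

A table like this makes the pattern described above visible at a glance: the target missed in the first year, met in the second, missed again in the third.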
Example 2: You want to determine the degree to which mentors are satisfied with the mentor support team.
A total of 40% of the mentors say they’re dissatisfied with the support they receive (columns 3+4). Even without an explicit target value, that share is too high.
Where is the source of this dissatisfaction? Can it be determined whether it is shared by mentors who are in the same mentor group, for example? What can be done to increase their satisfaction?
Example 3: You want to compare participant and mentor expectations with the actual results.
At YEA, the mentors and participating youths together set improvement goals with regard to mathematics and language and literature grades.
Given the results, the mentors and their students could together consider: “Which of our goals have we achieved? Where can we pat ourselves on the back, and where do we need to do more? Would additional measures such as more tutoring offers help?”
3. Comparisons between different project designs
In most cases, a project can be realized in various ways. Comparing these different paths allows for inferences to be drawn regarding the factors contributing to success.
For example, if the project design has changed after an initial evaluation, the results of the first evaluation can be compared with those of the second: Which changes have led to different or better results?
Example: You want to find out whether mentored youths who took part in additional job-application training were more successful in finding vocational-training positions.
Possible conclusions: The job-application training seems to have a positive effect on the transition rate. The project managers had previously suspected that this might be the case; the data now support this conjecture.
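The comparison between the two project variants boils down to contrasting transition rates; all counts below are invented for illustration:

```python
# Comparison of two project variants: mentoring with vs. without an
# additional job-application training. All counts are invented.
def transition_rate(found_place, total):
    return round(found_place / total, 2)

with_training = transition_rate(found_place=18, total=25)
without_training = transition_rate(found_place=12, total=25)

print("With application training:   ", with_training)
print("Without application training:", without_training)
```

Note that a gap like this only suggests, rather than proves, an effect — the youths who chose the training may differ from those who didn’t.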
If it’s initially unclear which factors are in fact relevant to the analysis, closer examination is necessary:
- What different elements (job-application training, tutoring) and quality factors (trained teachers) make up the project? Can you draw conclusions regarding how each contributes to the project’s success?
- Have the youths who found a place in an apprenticeship program gone through the job-application training?
- Were the youths whose school grades improved significantly those who received tutoring assistance from a trained teacher?
The answers to such questions are important for deriving quality criteria.
In YEA’s case, the project managers decided to use fully trained teachers instead of education students for the tutoring sessions, at least to the extent possible, given available financial resources.
4. Comparisons between target groups
For certain analysis questions, differentiating between target groups or subgroups can be useful during data evaluation and analysis.
Example: You want to find out whether all target groups and subgroups are being reached as planned.
Possible conclusions: During its needs assessment, YEA identified youths with a migrant background as a target group with heightened needs. The evaluation shows that YEA is primarily reaching boys, while having significantly less success with girls.
Here, it’s important to ask why. Do the figures reflect the distribution and the needs in the classes currently being served? It may well be that in one class year, there are fewer female students with a migrant background eligible for participation in the mentoring project. If not, project managers should consider how to provide this subgroup with better-targeted support, thus enabling its participation in the project.
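A subgroup comparison of this kind amounts to computing the reach per subgroup. The subgroup labels and counts below are invented for illustration:

```python
# Reach per subgroup: participants relative to eligible youths, split by
# gender within the migrant-background target group. All numbers invented.
participants = {"boys": 34, "girls": 11}
eligible = {"boys": 40, "girls": 38}

for group in participants:
    reach = round(participants[group] / eligible[group], 2)
    print(f"{group}: {participants[group]}/{eligible[group]} "
          f"reached ({reach:.0%})")
```

Dividing by the number of *eligible* youths, not the total population, is what makes the comparison fair when the subgroups differ in size.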
5. Comparisons between projects / Benchmarking
Benchmarking is a special form of cross-organizational learning. A benchmark can be regarded as a standard or point of reference, and benchmarking accordingly focuses on such standards.
Benchmarking involves a continuous comparison of costs, results or effects with comparable projects either conducted by the same organization or others. Benchmarking can encompass the entire project, or – often more usefully – only a certain aspect.
It is rare to find a project that cannot be compared with another, at least in part. It’s therefore useful to keep your eyes open and to seek contact with similar projects.
If you want to carry out a methodologically rigorous benchmarking process, this will quickly become expensive and complex (and is beyond the scope of what we can present here). However, we’d like to highlight a basic idea: Benchmarking can provide a guide for your own project’s further development, because it can be used for cooperative internal learning.
Example: You want to compare the rates of transition into vocational training across three projects.
Possible conclusions: The comparison with two other projects shows that these achieve higher transition rates than YEA does.
First, however, it must be determined whether the approaches are in fact comparable: Are the projects working in the same region or in comparable regions? Are the results representative, or was the year under study an exception?
In a second step, YEA talks with the other projects to find out which factors produced their success, and the degree to which these can be adopted. For example, YEA could implement additional training sessions, internship offerings, or a longer or more intensive personal-support period.
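A minimal benchmarking sketch; the project names and transition rates below are invented for illustration:

```python
# Benchmarking sketch: compare transition rates across three projects
# and identify the current benchmark. All names and rates are invented.
rates = {"YEA": 0.55, "Project B": 0.63, "Project C": 0.60}

benchmark = max(rates, key=rates.get)
for name, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {rate:.0%}")
print("Benchmark (highest transition rate):", benchmark)
```

The ranking itself is trivial; the real work, as described above, lies in checking comparability and learning why the benchmark project performs better.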
Personal assumptions and value judgments often come into play during analysis. For example, someone who attributes higher value to an academic career may regard one young person’s decision to become a hairdresser as an unsatisfactory result of a vocationally oriented measure. Such biases can never be entirely avoided, but it helps to keep your own in check.