Procurement Using 50% Scoring Ratio

This describes a typical limited tender process using standard methods of price/quality measurement, with the pricing ratio set at 50%. It demonstrates that this scoring ratio will almost certainly result in the cheapest bid winning the project, even with a very low quality score.

The sample scores used to test this model are as follows:

Bidder Name    Fee (£)   Quality Score (out of 100)
Practice A     102,450   82
Practice B      78,000   75
Practice C     125,150   85
Practice D      98,500   68
Practice E      25,000   25
Practice F     107,000   76

Note that the scoring for Practice E has deliberately been set very low: just 25% for quality, with a fee of less than a third of the next cheapest bid. Unfortunately, such wild variations in pricing are not unusual when bidding for public sector work. There are few other sectors in which any sensible person would accept a tender so far below the broad average of the others; yet, for architectural services, such low-ball bidding is common, and rarely rejected, despite the Public Contracts Regulations allowing commissioning bodies to reject “abnormally low” bids. Given that architectural salaries are broadly similar, the only explanation for a very low fee is that the bidding practice anticipates spending far less time working on the project than the others. There are no innovations in the market which enable practices to significantly reduce the cost of delivering their services without reducing the amount of time spent performing them, and therefore the quality of the design which derives from that effort.

For the purpose of this exercise, the most expensive practice has also scored the highest for quality. This is useful to demonstrate how different scoring methods can achieve a reasonable balance between quality and price, delivering best value for the client.

The following sections explore different methods of scoring and, using the figures above, illustrate how different ratios and scoring methods result in very different outcomes.

Relative to Cheapest Method of Scoring

In our example, the lowest financial bid was £25,000, and the highest £125,150. Scoring was based on a quality / cost ratio of 50:50.

The highest quality score was 85% which, when adjusted to the quality ratio of 50%, results in a quality component of 42.5%.

Using this method of scoring, Practice E (the cheapest) is the winning bidder. Clearly, any practice securing work with a fee of less than a third of the nearest bidder is either going to be unable to service the project properly or will be making a significant loss. Nobody in their right mind would accept such a low tender from, say, a builder, as clearly the quality of the work would be commensurately poor. Yet this happens all the time when it comes to commissioning architectural services.

Ranking   Bidder Name           Fee (£)   Price (max. 50.00)   Quality (max. 50.00)   Total (%)
1         Practice E (WINNER)    25,000   50.00                12.50                  62.50
2         Practice B             78,000   16.03                37.50                  53.53
3         Practice A            102,450   12.20                41.00                  53.20
4         Practice C            125,150    9.99                42.50                  52.49
5         Practice F            107,000   11.68                38.00                  49.68
6         Practice D             98,500   12.69                34.00                  46.69
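As a quick check, the relative-to-cheapest calculation can be sketched in a few lines of Python. The data comes from the tables above; the function name and structure are my own:

```python
# Tender data from the table above: (fee in £, quality score out of 100).
bids = {
    "Practice A": (102_450, 82),
    "Practice B": (78_000, 75),
    "Practice C": (125_150, 85),
    "Practice D": (98_500, 68),
    "Practice E": (25_000, 25),
    "Practice F": (107_000, 76),
}

def relative_to_cheapest(bids, price_ratio=50.0, quality_ratio=50.0):
    """The cheapest bid takes the full price ratio; others score pro rata.

    Quality is simply the raw score scaled to the quality ratio.
    """
    cheapest = min(fee for fee, _ in bids.values())
    return {
        name: round(price_ratio * cheapest / fee + quality_ratio * quality / 100, 2)
        for name, (fee, quality) in bids.items()
    }

totals = relative_to_cheapest(bids)
# Practice E tops the ranking at 62.50 despite the lowest quality score.
```

Calling the function with `quality_ratio=70.0, price_ratio=30.0` reproduces the alternative 70:30 split discussed below.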

Out of interest, let’s test the same figures using an alternative ratio: 70% quality and 30% price. This gives us the following results:

Ranking   Bidder Name           Fee (£)   Price (max. 30.00)   Quality (max. 70.00)   Total (%)
1         Practice C (WINNER)   125,150    5.99                59.50                  65.49
2         Practice A            102,450    7.32                57.40                  64.72
3         Practice B             78,000    9.62                52.50                  62.12
4         Practice F            107,000    7.01                53.20                  60.21
5         Practice D             98,500    7.61                47.60                  55.21
6         Practice E             25,000   30.00                17.50                  47.50

This result isn’t ideal either, as now the most expensive bidder has won the day, with a quality score that’s only marginally higher than the nearest competitor’s, but a fee that’s more than a fifth higher.

Perhaps this suggests that the relative-to-cheapest method of scoring is never the best one to use?

Relative to Best Method of Scoring

An alternative way of assessing quality is to award all of the available quality points to the best submission. Having established a shortlist of what are, presumably, the most capable qualifying competitors on the market, it is nonsensical that the cheapest tender receives the full 50% available for price while the best submission does not receive the full 50% available for quality.

It may be that assessors have already given the best submission the full available score for quality, but if not, this method assesses all quality scores relative to the maximum percentage available, as well as giving the maximum marks for price to the cheapest bid. In other words, the best quality submission receives the whole 50% available, with all the remaining scores calculated proportionately to this.

It goes some way to preventing the cheapest bid “buying” a project with an inferior submission accompanied by an abnormally low financial submission—but does it ensure that the client is receiving the best value for money?

In this example, and using the same 50:50 ratio, Practice E still wins, having scored 50.00% for price and 14.71% for quality. So, pursuing this method doesn’t seem to make much difference.

Ranking   Bidder Name           Fee (£)   Price (max. 50.00)   Quality (max. 50.00)   Total (%)
1         Practice E (WINNER)    25,000   50.00                14.71                  64.71
2         Practice A            102,450   12.20                48.24                  60.44
3         Practice B             78,000   16.03                44.12                  60.14
4         Practice C            125,150    9.99                50.00                  59.99
5         Practice F            107,000   11.68                44.71                  56.39
6         Practice D             98,500   12.69                40.00                  52.69
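The only difference from relative-to-cheapest scoring is the quality denominator: scores are normalised against the best submission rather than against 100. A minimal sketch, using the data from the tables above (the function name is my own):

```python
# Tender data from the tables above: (fee in £, quality score out of 100).
bids = {
    "Practice A": (102_450, 82),
    "Practice B": (78_000, 75),
    "Practice C": (125_150, 85),
    "Practice D": (98_500, 68),
    "Practice E": (25_000, 25),
    "Practice F": (107_000, 76),
}

def relative_to_best(bids, price_ratio=50.0, quality_ratio=50.0):
    """The best quality submission takes the full quality ratio;
    the cheapest bid takes the full price ratio."""
    cheapest = min(fee for fee, _ in bids.values())
    best_quality = max(quality for _, quality in bids.values())
    return {
        name: round(price_ratio * cheapest / fee
                    + quality_ratio * quality / best_quality, 2)
        for name, (fee, quality) in bids.items()
    }

totals = relative_to_best(bids)
# Practice E still wins: 50.00 for price plus 14.71 for quality.
```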

Mean Narrow Average Method of Scoring

The mean narrow average (MNA) method of scoring discards the highest and lowest tenders, establishes the mean value of those that remain, and scores every tender price according to its proximity to that mean value. Fee bids which are less than half, or more than double, the mean value receive a price score of zero.

With Mean Narrow Average scoring, bidders are compelled to identify the appropriate fee required to service the project rather than cutting prices to buy the job, which could lead to underperformance or claims for additional fees later in the programme. Excessively low—or high—fees are penalised.

For these pricing figures, the mean (average) of all six bids, including the lowest and highest fee submissions, was £89,350, and the median was £100,475.

For the MNA calculation, the highest and lowest fee bids are excluded, giving a narrow mean of £96,487.50.

Using Mean Narrow Average with a price ratio of 50% results in Practice A being the winning bidder. Intuitively, that seems like a reasonable result: Practice A’s fee was very close to the median (there were two more expensive bids, and three cheaper ones), and it scored second highest for quality. The full rankings are as follows:

Ranking   Bidder Name           Fee (£)   Price (max. 50.00)   Quality (max. 50.00)   Total (%)
1         Practice A (WINNER)   102,450   46.91                41.00                  87.91
2         Practice D             98,500   48.96                34.00                  82.96
3         Practice F            107,000   44.55                38.00                  82.55
4         Practice B             78,000   40.42                37.50                  77.92
5         Practice C            125,150   35.15                42.50                  77.65
6         Practice E             25,000    0.00                12.50                  12.50
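The text above describes the method but not its exact formula. The sketch below uses one plausible interpretation, deducting price points in proportion to each bid’s deviation from the narrow mean, with a zero score below half or above double that mean. It reproduces the figures in the table above, but treat the formula, like the function names, as my assumption:

```python
import statistics

# Tender data from the tables above: (fee in £, quality score out of 100).
bids = {
    "Practice A": (102_450, 82),
    "Practice B": (78_000, 75),
    "Practice C": (125_150, 85),
    "Practice D": (98_500, 68),
    "Practice E": (25_000, 25),
    "Practice F": (107_000, 76),
}

def mean_narrow_average(bids, price_ratio=50.0, quality_ratio=50.0):
    """Score price against the mean of the bids remaining after the
    highest and lowest tenders are discarded (the "narrow mean")."""
    fees = sorted(fee for fee, _ in bids.values())
    narrow_mean = statistics.mean(fees[1:-1])  # drop cheapest and dearest
    totals = {}
    for name, (fee, quality) in bids.items():
        if fee < narrow_mean / 2 or fee > narrow_mean * 2:
            price_score = 0.0  # abnormally low or high bids get nothing
        else:
            price_score = price_ratio * (1 - abs(fee - narrow_mean) / narrow_mean)
        totals[name] = round(price_score + quality_ratio * quality / 100, 2)
    return totals

totals = mean_narrow_average(bids)
# Practice A wins on 87.91; Practice E drops to the bottom on 12.50.
```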

Alternative Ratios

To test a few alternative scenarios, I’ve run the same figures as above, but using different price/quality ratios. In most cases, the outcome is the same: Practice A wins, right up to the point where price comprises just 10%. At that point, the highest-scoring quality submission, which is also the most expensive bid, is the one that succeeds.

This suggests that Mean Narrow Average is probably best deployed with a quality/cost ratio of around 60% to 70%.
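To see where the crossover happens, the sweep below runs the same figures at several quality ratios. The MNA price formula used here (points deducted in proportion to deviation from the narrow mean, zero outside the half-to-double band) matches the tables in this section, but it is my interpretation rather than a published one, and the names are mine:

```python
import statistics

# Tender data from the tables above: (fee in £, quality score out of 100).
bids = {
    "Practice A": (102_450, 82),
    "Practice B": (78_000, 75),
    "Practice C": (125_150, 85),
    "Practice D": (98_500, 68),
    "Practice E": (25_000, 25),
    "Practice F": (107_000, 76),
}

fees = sorted(fee for fee, _ in bids.values())
narrow_mean = statistics.mean(fees[1:-1])  # drop cheapest and dearest

def winner(quality_ratio):
    """Return the winning bidder for a given quality/price split."""
    price_ratio = 100 - quality_ratio
    totals = {}
    for name, (fee, quality) in bids.items():
        in_band = narrow_mean / 2 <= fee <= narrow_mean * 2
        price = (price_ratio * (1 - abs(fee - narrow_mean) / narrow_mean)
                 if in_band else 0.0)
        totals[name] = price + quality_ratio * quality / 100
    return max(totals, key=totals.get)

for q in (50, 60, 70, 80, 90):
    print(f"Quality {q}%: {winner(q)}")
# Practice A wins at every split until quality reaches 90%,
# where Practice C (the highest quality score) takes over.
```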

Quality: 60%, Price: 40%

Ranking   Bidder Name           Fee (£)   Price (max. 40.00)   Quality (max. 60.00)   Total (%)
1         Practice A (WINNER)   102,450   37.53                49.20                  86.73
2         Practice F            107,000   35.64                45.60                  81.24
3         Practice D             98,500   39.17                40.80                  79.97
4         Practice C            125,150   28.12                51.00                  79.12
5         Practice B             78,000   32.34                45.00                  77.34
6         Practice E             25,000    0.00                15.00                  15.00

Quality: 70%, Price: 30%

Ranking   Bidder Name           Fee (£)   Price (max. 30.00)   Quality (max. 70.00)   Total (%)
1         Practice A (WINNER)   102,450   28.15                57.40                  85.55
2         Practice C            125,150   21.09                59.50                  80.59
3         Practice F            107,000   26.73                53.20                  79.93
4         Practice D             98,500   29.37                47.60                  76.97
5         Practice B             78,000   24.25                52.50                  76.75
6         Practice E             25,000    0.00                17.50                  17.50

Quality: 80%, Price: 20%

Ranking   Bidder Name           Fee (£)   Price (max. 20.00)   Quality (max. 80.00)   Total (%)
1         Practice A (WINNER)   102,450   18.76                65.60                  84.36
2         Practice C            125,150   14.06                68.00                  82.06
3         Practice F            107,000   17.82                60.80                  78.62
4         Practice B             78,000   16.17                60.00                  76.17
5         Practice D             98,500   19.58                54.40                  73.98
6         Practice E             25,000    0.00                20.00                  20.00

Quality: 90%, Price: 10%

Ranking   Bidder Name           Fee (£)   Price (max. 10.00)   Quality (max. 90.00)   Total (%)
1         Practice C (WINNER)   125,150    7.03                76.50                  83.53
2         Practice A            102,450    9.38                73.80                  83.18
3         Practice F            107,000    8.91                68.40                  77.31
4         Practice B             78,000    8.08                67.50                  75.58
5         Practice D             98,500    9.79                61.20                  70.99
6         Practice E             25,000    0.00                22.50                  22.50

Out of interest, what happens if we reverse the ratio to prioritise cost over quality, using the Mean Narrow Average scoring method? Well, here we go:

Quality: 20%, Price: 80%

Ranking   Bidder Name           Fee (£)   Price (max. 80.00)   Quality (max. 20.00)   Total (%)
1         Practice D (WINNER)    98,500   78.33                13.60                  91.93
2         Practice A            102,450   75.06                16.40                  91.46
3         Practice F            107,000   71.28                15.20                  86.48
4         Practice B             78,000   64.67                15.00                  79.67
5         Practice C            125,150   56.24                17.00                  73.24
6         Practice E             25,000    0.00                 5.00                   5.00

Surprisingly (at least to me), Practice A still scores very highly, coming second to Practice D, which had a similar but slightly lower price and the second-lowest quality score. Nobody in their right mind would advocate commissioning architectural services on such a skewed ratio, but it reinforces our earlier conclusion that a quality ratio of between 60% and 70% is likely to yield the best outcome for everyone.

A combination of Mean Narrow Average (MNA) and Relative to Best scoring methods could also be used, i.e. the price score calculated using MNA and the highest quality score receiving all of the available quality points, but given the success of the simple MNA method, it’s probably unnecessary.

All of these figures have been generated using a live model which you can test with different figures of your choice, here. And if you’re a procurement officer or public client, try putting some real-life tender figures you’ve received into it too, and see whether the outcome would have been any different.


After posting this article on LinkedIn, I’ve been directed to a comprehensive analysis of the various pricing models available to the public sector, written by Rebecca Rees of Trowers & Hamlins, which sets these out far more comprehensively than I could ever hope to do.

You can download the document using the button below.