Open access peer-reviewed chapter - ONLINE FIRST

Best-Value in the Procurement of Highway Projects: Lessons Learned from US Design-Build Projects

Written By

Maria Calahorra-Jimenez and Gustavo Garcia-Melero

Submitted: 02 May 2024 Reviewed: 02 May 2024 Published: 05 June 2024

DOI: 10.5772/intechopen.1005528


From the Edited Volume

Recent Topics in Highway Engineering - Up-to-date Overview of Practical Knowledge [Working Title]

Dr. Salvatore Antonio Biancardo


Abstract

Best value has been used for more than two decades in the procurement of design-build highway projects in the United States. Several authors have questioned whether this type of procurement actually selects best-value proposers. Drawing on procurement data from 128 projects procured between 2000 and 2022, this chapter provides insights into the impact that different types of best-value award algorithms have on the selection of design-builders, and addresses the question of whether or not best-value procurement is actually selecting the best-value proposers. Finally, the chapter reflects on lessons learned that might help highway administrators who plan to use, or already use, best value in procuring highway projects.

Keywords

  • best-value procurement
  • design-build
  • highways
  • award algorithms
  • Monte Carlo simulation

1. Introduction

The design and construction of highway projects have traditionally been delivered using design-bid-build (DBB) and low-bid procurement. In contrast with traditional low-bid procurement, best value enables Departments of Transportation (DOTs) to consider non-cost criteria (e.g., technical qualifications), in addition to cost, when evaluating project proposals [1]. Best-value procurement in design-build (DB) highway projects is “a procurement process where price and other key factors are considered in the evaluation and selection of design-builders to enhance performance and value of construction” [2]. It has been used to select the firms that develop design-build highway projects in the United States [3], allowing DOTs to select a proposal by assessing cost and non-cost evaluation criteria [4]. Criteria other than cost are incorporated to reach a selection that better aligns with each DOT’s objectives, given that selecting based only on price is one of “the greatest barriers for improvement” [5, 6].

The formula used to aggregate cost and technical criteria is usually named the award algorithm. According to Molenaar and Tran [7], in highway projects, the award algorithms most commonly used in best-value procurements are three: weighted criteria, adjusted bid, and adjusted score. In all of them, proposers submit a technical and a price proposal.
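For orientation, the three algorithms can be sketched in a few lines. This is a minimal Python illustration under assumed conventions (each DOT defines its own exact formula; the function names and normalization here are not taken from the chapter):

```python
def weighted_criteria(cost_score, tech_score, w_cost=0.5, w_tech=0.5):
    """Weighted criteria: weighted sum of normalized scores (higher wins)."""
    return w_cost * cost_score + w_tech * tech_score

def adjusted_bid(bid_price, tech_score):
    """Adjusted bid: price divided by technical score (lower wins)."""
    return bid_price / tech_score

def adjusted_score(bid_price, tech_score):
    """Adjusted score: technical score per unit of price (higher wins)."""
    return tech_score / bid_price
```

For example, under the adjusted bid formula a proposer bidding 100 (in millions) with a 0.8 technical score would be evaluated as if it had bid 125.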

Different award algorithms might impact expectations about the importance of cost and technical components in the design-builder’s evaluation. For example, Ballesteros-Pérez et al. [8] found that award algorithms are one of the variables that influence bidders’ competitiveness.

After two decades of using best-value procurement in US transportation projects, research has shown that 80% of the projects procured using best-value were awarded to the lowest bidder [9, 10]. Based on these findings, it might be argued that best-value leads to a low-bid selection—and not to a best-value selection [11].

As shown in Figure 1, this chapter draws on historical procurement data to explore how the type of award algorithm might influence the probability of awarding the contract to the lowest bidder (Section 2), and whether or not best-value procurement is leading to a low-bid selection (Section 3).

Figure 1.

Chapter structure.


2. How do best-value award algorithms influence the selection?

The authors posed two research questions to understand how the type of award algorithm might affect the probability of the lowest bidder winning the contract. First, does the type of award algorithm (weighted criteria or adjusted bid/score) impact the probability of awarding the contract to the lowest bidder? Second, does the number of components (two or three) in the weighted criteria algorithm impact that probability?

To answer these questions, the authors followed a four-step approach. First, score data were collected from DOTs across the United States. The score data included the scores given to design-build proposals for the cost, technical, and qualification components. The aim of collecting these data was to build (in step 2) empirical probabilistic distributions for the cost, technical, and qualification scores. Second, exploratory and goodness of fit analyses were conducted. Third, a Monte Carlo simulation was run using the probabilistic distributions obtained in step 2. Finally, results from the different simulated scenarios were compared.

2.1 Data collection

Best-value procurement scores were obtained from 128 design-build projects across 13 states in the United States (Table 1). The projects were procured between 2002 and 2020. Procurement scores are generally public information, made available on Departments of Transportation’s websites or upon request.

State            # Projects
Georgia          7
Kentucky         7
Louisiana        5
Minnesota        20
Mississippi      12
North Carolina   25
South Carolina   14
Texas            3
Virginia         4
Washington       24
California       2
Connecticut      2
Ohio             3
Total            128

Table 1.

Number of projects analyzed per state.

Each project’s procurement might include between two and five bids. Each bid contains one score for the cost criterion, CostScore; one score for a non-cost criterion, NonCostScore1; and, in some cases, another non-cost criterion score, NonCostScore2. In this research, NonCostScore1 is the score given to the technical criterion (i.e., the score given to the technical component of the proposal), which might consider aspects such as “project delivery approach,” “conceptual road plans,” “innovation and added value,” “environmental impacts and public outreach,” etc., depending on the DOT [12].

NonCostScore2 relates to the score given to the qualifications criterion (i.e., the score given to the qualifications component of the proposal), which is the score that proposers receive in the first phase of the best-value procurement. Some states, such as the South Carolina Department of Transportation (SCDOT), include the qualifications score in the weighted criteria award algorithm as an additional criterion besides technical aspects [12].

In this study, each score is an observation. Thus, a total of 406 observations were analyzed from the 128 projects collected. Scores from different procurements and criteria were harmonized to a scale ranging from zero to one, following the approach of Calahorra-Jimenez et al. [11].
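As a simple illustration of such harmonization, one might divide each raw score by the maximum achievable score in its procurement. This Python sketch is an assumption for illustration only; the study's exact harmonization follows [11] and may differ:

```python
def harmonize(raw_scores, max_possible):
    """Rescale raw procurement scores to a common 0-1 range by dividing
    by the maximum achievable score in that procurement."""
    return [s / max_possible for s in raw_scores]

# A procurement scored out of 100 points:
# harmonize([70, 85, 100], 100) -> [0.7, 0.85, 1.0]
```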

2.2 Data analysis

2.2.1 Variables definition

The authors defined nine (9) variables, establishing a difference between the scores given to the lowest bidder (lb) and the non-lowest bidders (nlb) (Table 2).

Scores aggregated   Scores disaggregated
                    Lowest bidders     Non-lowest bidders
CostScore           1                  CostScore_nlb
NonCostScore1       NonCostScore1_lb   NonCostScore1_nlb
NonCostScore2       NonCostScore2_lb   NonCostScore2_nlb

Table 2.

Research variables.

Disaggregating the data enables a more accurate representation of the proposers’ behavior that will be captured in the probabilistic distributions created in the second step of the data analysis and, subsequently, used in the simulation.

CostScore for the lowest bidders will not have a probabilistic distribution because, in common practice, it always adopts the value of 1. That is, for the cost criterion, the lowest bidder always receives one (1), the maximum score for this criterion [13].
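A common convention consistent with this practice (assumed here for illustration, not prescribed by the chapter) computes each cost score as the lowest bid divided by that proposer's bid, which guarantees the lowest bidder a score of 1:

```python
def cost_scores(bids):
    """Lowest bid divided by each bid: the lowest bidder always scores 1,
    and costlier bids score proportionally less."""
    lowest = min(bids)
    return [lowest / b for b in bids]

# Bids of 100 and 125 (in millions) -> cost scores of 1.0 and 0.8
```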

An exploratory data analysis (EDA) was conducted to analyze the variables in aggregated and disaggregated terms. The authors obtained the variables’ statistics, checked missing values, removed inconsistent values, and calculated the correlation between aggregated variables.

2.2.2 Goodness of fit analysis

Statistical analysis was conducted to obtain the probabilistic distributions that best fit each of the variables defined in Table 2. The Akaike Information Criterion (AIC) was used to measure the sample’s fit to a set of hypothesized distributions (Tables 10, 12 and 14). The authors utilized the gamlss R package [14] to determine the empirical distributions that best fit the data according to the AIC. The analysis resulted in one probability distribution for each of five variables: NonCostScore1_lb (Figure 5, Table 14), NonCostScore2_lb (Table 9), CostScore_nlb (Figure 3, Table 11), NonCostScore1_nlb (Figure 4, Table 13), and NonCostScore2_nlb (Table 9).
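To illustrate the AIC idea with something self-contained, the sketch below compares just two closed-form candidates (normal vs. uniform) in Python. The study itself fit a much richer family of distributions with the gamlss R package, so this is only a conceptual stand-in:

```python
import math

def aic_normal(data):
    # AIC = 2k - 2 ln(L), using the normal MLE (k = 2 parameters).
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n  # MLE (biased) variance
    loglik = -0.5 * n * (math.log(2 * math.pi * var) + 1)
    return 2 * 2 - 2 * loglik

def aic_uniform(data):
    # Uniform MLE on [min, max]; also k = 2 parameters.
    loglik = -len(data) * math.log(max(data) - min(data))
    return 2 * 2 - 2 * loglik

def best_fit(data):
    """Return the candidate with the lowest AIC (lower AIC fits better)."""
    aics = {"normal": aic_normal(data), "uniform": aic_uniform(data)}
    return min(aics, key=aics.get)
```

Data peaked around a central value favors the normal candidate, while evenly spread data favors the uniform one, mirroring how the AIC ranked the candidate families in Tables 10, 12, and 14.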

The simulation process in the next step used these probabilistic distributions as input for the analysis of scenarios.

2.3 Simulation

First, the weighted criteria and adjusted bid/adjusted score award algorithms were simulated. Second, the case of the weighted criteria award algorithm with two and with three components was explored.

2.3.1 Weighted criteria and adjusted bid/score award algorithms

This section aims to determine whether using the weighted criteria or the adjusted bid algorithm influences the probability of awarding the contract to the lowest bidder. To this end, the authors created a theoretical procurement process with two bidders. One bidder was the lowest, and the other was the non-lowest bidder.

This situation was analyzed in two different cases: first, the case of the weighted criteria award algorithm and, second, the case of the adjusted bid.

Case 1: Weighted criteria.

In this case, the best-value weighted criteria award algorithm is defined with two components, one for cost and the other a non-cost criterion. The final score (FS) for each proposer is the result of the award algorithm formula in each case, as indicated by Eqs. (1) and (2).

FS_lb = Wc × 1 + Wnc1 × NonCostScore1_lb    (Eq. 1)
FS_nlb = Wc × CostScore_nlb + Wnc1 × NonCostScore1_nlb    (Eq. 2)

Where:

FS_lb is the final score for the lowest bidder.

FS_nlb is the final score for the non-lowest bidder.

NonCostScore1_lb, CostScore_nlb, NonCostScore1_nlb are the scores’ probability distributions obtained from the goodness of fit analysis.

Wc and Wnc1 are the weights given to each criterion. The weights are fixed for each scenario simulation. Four scenarios were simulated, varying the weight of cost between 70%, 60%, 50%, and 40%.

In each weight scenario, five sub-scenarios were created to allow comparison with the adjusted bid case. Each sub-scenario represents a different relationship between the cost proposal of the lowest bidder and that of the non-lowest bidder, as indicated in Table 3:

Sub-scenario   The non-lowest bidder’s cost proposal [CostScore_nlb] is
1              5% higher than the lowest bidder’s
2              10% higher than the lowest bidder’s
3              15% higher than the lowest bidder’s
4              20% higher than the lowest bidder’s
5              25% higher than the lowest bidder’s

Table 3.

Sub-scenarios for CostScore_nlb.

The simulation was coded in R and utilized 10,000 final score observations in each weight scenario to calculate the probability of the lowest bidder winning using Eq. 3.

Prob(FS_lb - FS_nlb > 0)    (Eq. 3)
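The weighted criteria simulation can be sketched as follows. This Python version (the authors worked in R) uses Beta draws as stand-ins for the fitted score distributions and assumes the non-lowest bidder's cost score equals the lowest bid divided by its bid; both are illustrative assumptions:

```python
import random

def win_probability(w_cost, cost_ratio, n=10_000, seed=0):
    """Monte Carlo estimate of Prob(FS_lb - FS_nlb > 0) under the
    two-component weighted criteria algorithm (Eqs. 1-3)."""
    rng = random.Random(seed)
    w_tech = 1.0 - w_cost
    cost_score_nlb = 1.0 / (1.0 + cost_ratio)  # e.g. a +10% bid scores 1/1.10
    wins = 0
    for _ in range(n):
        tech_lb = rng.betavariate(8, 2)   # stand-in for NonCostScore1_lb
        tech_nlb = rng.betavariate(8, 2)  # stand-in for NonCostScore1_nlb
        fs_lb = w_cost * 1.0 + w_tech * tech_lb               # Eq. 1
        fs_nlb = w_cost * cost_score_nlb + w_tech * tech_nlb  # Eq. 2
        wins += fs_lb > fs_nlb                                # Eq. 3
    return wins / n
```

Even with these stand-in distributions, the estimated probability rises with both the weight of cost and the cost gap, matching the direction of the trends in Table 5.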

Case 2: Adjusted bid award algorithm.

In this case, the adjusted bid award algorithm considers two components, one for cost and the other a non-cost criterion. The final score (FS) for each proposer is the result of the award algorithm formula in each case, as indicated by Eqs. (4) and (5).

FS_lb = 1 / NonCostScore1_lb    (Eq. 4)
FS_nlb = Cost_nlb / NonCostScore1_nlb    (Eq. 5)

Where:

FS_lb is the final score for the lowest bidder.

FS_nlb is the final score for the non-lowest bidder.

NonCostScore1_lb, NonCostScore1_nlb are the scores’ probability distributions obtained from the goodness of fit analysis.

Cost_nlb is determined by the cost sub-scenarios listed in Table 3.

The simulation was coded in R and utilized 10,000 final score observations in each cost sub-scenario to calculate the probability of the lowest bidder winning using Eq. 6.

Prob(FS_nlb - FS_lb > 0)    (Eq. 6)
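The adjusted bid case can be sketched the same way. Note that under this algorithm a lower final score wins, so the lowest bidder wins when FS_nlb − FS_lb > 0. Again, the Beta draws are stand-ins for the fitted technical-score distributions, an assumption for illustration:

```python
import random

def adjusted_bid_win_probability(cost_ratio, n=10_000, seed=0):
    """Monte Carlo estimate of Prob(FS_nlb - FS_lb > 0) under the adjusted
    bid algorithm (Eqs. 4-6), where a LOWER final score wins."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        tech_lb = rng.betavariate(8, 2)         # NonCostScore1_lb stand-in
        tech_nlb = rng.betavariate(8, 2)        # NonCostScore1_nlb stand-in
        fs_lb = 1.0 / tech_lb                   # Eq. 4 (lowest bid normalized to 1)
        fs_nlb = (1.0 + cost_ratio) / tech_nlb  # Eq. 5
        wins += fs_nlb - fs_lb > 0              # Eq. 6
    return wins / n
```

Because no weights appear in Eqs. (4)-(6), the adjusted bid results do not vary with the weight-of-cost scenarios, which is why the adjusted bid column in Table 5 is constant across them.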

2.3.2 Weighted criteria two components and three components

This section aims to determine whether including two non-cost criteria instead of one in the weighted criteria award algorithm influences the probability of awarding the contract to the lowest bidder. To this end, the authors created a theoretical procurement process with two bidders. One bidder was the lowest, and the other was the non-lowest bidder.

This situation was analyzed in two different cases: first, the case of two non-cost criteria and, second, the case of one non-cost criterion.

Case 1: Two non-cost criteria.

In this case, the best-value weighted criteria award algorithm considers three components, one for cost and the other two for two non-cost criteria. The final score (FS) for each proposer is the result of the award algorithm formula in each case, as indicated by Eqs. (7) and (8):

FS_lb = Wc × 1 + Wnc1 × NonCostScore1_lb + Wnc2 × NonCostScore2_lb    (Eq. 7)
FS_nlb = Wc × CostScore_nlb + Wnc1 × NonCostScore1_nlb + Wnc2 × NonCostScore2_nlb    (Eq. 8)

Where:

FS_lb is the final score for the lowest bidder.

FS_nlb is the final score for the non-lowest bidder.

NonCostScore1_lb, NonCostScore2_lb, CostScore_nlb, NonCostScore1_nlb, and NonCostScore2_nlb are the scores’ probability distributions obtained from the goodness of fit analysis.

Wc, Wnc1, and Wnc2 are the weights given to each criterion. The weights are fixed for each scenario simulation. In each case, 14 scenarios were simulated, varying the weights according to the values included in Table 4.

Weight     Case 1: Two non-cost criteria                    Case 2: One non-cost criterion
scenario   Cost [Wc]   NonCost1 [Wnc1]   NonCost2 [Wnc2]    Cost [Wc]   NonCost1 [Wnc1]
1          40%         10%               50%                40%         60%
2          40%         20%               40%                40%         60%
3          40%         30%               30%                40%         60%
4          40%         40%               20%                40%         60%
5          40%         50%               10%                40%         60%
6          50%         10%               40%                50%         50%
7          50%         20%               30%                50%         50%
8          50%         30%               20%                50%         50%
9          50%         40%               10%                50%         50%
10         60%         10%               30%                60%         40%
11         60%         20%               20%                60%         40%
12         60%         30%               10%                60%         40%
13         70%         10%               20%                70%         30%
14         70%         20%               10%                70%         30%

Table 4.

Weight scenarios.

In each weight scenario, the probability of the lowest bidder winning was calculated using Eq. 9.

Prob(FS_lb - FS_nlb > 0)    (Eq. 9)
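A sketch of the three-component simulation, again in Python with Beta draws standing in for the fitted technical-score distributions (the triangular parameters for the qualification score mirror those reported in Table 9; everything else is an illustrative assumption):

```python
import random

def win_probability_3(w_cost, w_nc1, w_nc2, cost_ratio=0.10, n=10_000, seed=0):
    """Monte Carlo estimate of Prob(FS_lb - FS_nlb > 0) under the
    three-component weighted criteria algorithm (Eqs. 7-9)."""
    rng = random.Random(seed)
    cost_score_nlb = 1.0 / (1.0 + cost_ratio)
    wins = 0
    for _ in range(n):
        nc1_lb = rng.betavariate(8, 2)                 # NonCostScore1_lb stand-in
        nc1_nlb = rng.betavariate(8, 2)                # NonCostScore1_nlb stand-in
        nc2_lb = rng.triangular(0.5620, 0.7110, 0.60)  # Table 9, NonCostScore2_lb
        nc2_nlb = rng.triangular(0.5770, 0.6800, 0.64) # Table 9, NonCostScore2_nlb
        fs_lb = w_cost * 1.0 + w_nc1 * nc1_lb + w_nc2 * nc2_lb           # Eq. 7
        fs_nlb = (w_cost * cost_score_nlb + w_nc1 * nc1_nlb
                  + w_nc2 * nc2_nlb)                                     # Eq. 8
        wins += fs_lb > fs_nlb                                           # Eq. 9
    return wins / n
```

Because the qualification-score distributions are narrow, weight shifted onto NonCost2 leaves less room for the technical score to overturn the lowest bidder's cost advantage, which is the mechanism behind the pattern in Table 6.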

Case 2: One non-cost criterion.

In this case, the best-value weighted criteria award algorithm considers two components, one for cost and the other a non-cost criterion. The final score (FS) for each proposer is the result of the award algorithm formula in each case, as indicated by Eqs. (10) and (11).

FS_lb = Wc × 1 + Wnc1 × NonCostScore1_lb    (Eq. 10)
FS_nlb = Wc × CostScore_nlb + Wnc1 × NonCostScore1_nlb    (Eq. 11)

Where:

FS_lb is the final score for the lowest bidder.

FS_nlb is the final score for the non-lowest bidder.

NonCostScore1_lb, CostScore_nlb, and NonCostScore1_nlb are the scores’ probability distributions obtained from the goodness of fit analysis.

Wc and Wnc1 are the weights given to each criterion. The weights are fixed for each scenario simulation. Fourteen scenarios were simulated, varying the weights according to the values included in Table 4.

The simulation was coded in R and utilized 10,000 final score observations in each weight scenario to calculate the probability of the lowest bidder winning using Eq. 9.

2.4 Results

Results from the simulations run for the weighted criteria and adjusted bid/score algorithms, and for the weighted criteria algorithm with two and with three components, are summarized in the sections below.

Table 5 summarizes the results from the Monte Carlo simulation for the weighted criteria and adjusted bid/score award algorithms.

Weight of cost   Cost of non-lowest bidder     Probability of the lowest bidder winning
                 relative to the lowest        Case 1: Weighted          Case 2: Adjusted
                 bidder                        criteria algorithm        bid algorithm
40%              +5%                           66.67%                    69.10%
                 +10%                          75.60%                    78.90%
                 +15%                          80.20%                    83.60%
                 +20%                          87.00%                    89.20%
                 +25%                          92.50%                    93.60%
50%              +5%                           70.40%                    69.10%
                 +10%                          81.30%                    78.90%
                 +15%                          89.40%                    83.60%
                 +20%                          94.50%                    89.20%
                 +25%                          97.10%                    93.60%
60%              +5%                           76.50%                    69.10%
                 +10%                          89.50%                    78.90%
                 +15%                          95.30%                    83.60%
                 +20%                          98.00%                    89.20%
                 +25%                          99.30%                    93.60%
70%              +5%                           84.90%                    69.10%
                 +10%                          95.70%                    78.90%
                 +15%                          98.80%                    83.60%
                 +20%                          99.60%                    89.20%
                 +25%                          100.00%                   93.60%

Table 5.

Simulation results.

Table 5 shows that the weighted criteria award algorithm leads to a lower probability of the lowest bidder winning when the weight of cost is 40%. In the remaining weight scenarios (50% and above), the adjusted bid algorithm yields the lower probability of the lowest bidder winning across the cost sub-scenarios.

Table 6 summarizes the results from the Monte Carlo simulation related to the weighted criteria with two- and three-component cases.

Scenario   Case 1: Two non-cost criteria [Eqs. 7, 8]     Case 2: One non-cost criterion [Eqs. 10, 11]   Variation
           Cost    NonCost1   NonCost2   P win [PC1]     Cost    NonCost1   P win [PC2]                 ([PC1]-[PC2])/[PC2]
1          40%     10%        50%        88.50%          40%     60%        72.85%                      21.48%
2          40%     20%        40%        87.30%          40%     60%        72.85%                      19.84%
3          40%     30%        30%        83.40%          40%     60%        72.85%                      14.48%
4          40%     40%        20%        81.40%          40%     60%        72.85%                      11.74%
5          40%     50%        10%        77.70%          40%     60%        72.85%                      6.66%
6          50%     10%        40%        93.70%          50%     50%        78.40%                      19.52%
7          50%     20%        30%        90.60%          50%     50%        78.40%                      15.56%
8          50%     30%        20%        86.30%          50%     50%        78.40%                      10.08%
9          50%     40%        10%        82.60%          50%     50%        78.40%                      5.36%
10         60%     10%        30%        97.70%          60%     40%        85.07%                      14.85%
11         60%     20%        20%        93.80%          60%     40%        85.07%                      10.26%
12         60%     30%        10%        89.90%          60%     40%        85.07%                      5.68%
13         70%     10%        20%        99.50%          70%     30%        93.60%                      6.30%
14         70%     20%        10%        96.20%          70%     30%        93.60%                      2.78%

Table 6.

Simulation results 2.

Table 6 shows that considering two non-cost criteria increases the probability of awarding the contract to the lowest bidder in all the scenarios. The highest increase, 21.48%, occurs when cost is weighted 40%, NonCost1 10%, and NonCost2 50%. As the weight of cost increases and the weight of NonCost2 decreases, the difference between the probabilities in case 1 and case 2 shrinks to a minimum of 2.78%.

2.5 Discussion

This section addresses the questions posed at the beginning of the chapter.

2.5.1 Does the type of award algorithm impact the probability of awarding the contract to the lowest bidder?

Based on the simulation results, DOTs aiming to use a weighted criteria algorithm to integrate cost and non-cost evaluation criteria are advised to weight cost below 50%. Otherwise, the effort that this type of evaluation requires is not worthwhile: with higher cost weights, the use of weighted criteria makes little difference to the impact of non-cost criteria in the evaluation.

2.5.2 Does the number of components in the equation impact the probability of awarding the contract to the lowest bidder?

This study aimed to determine whether including two non-cost criteria instead of one in the weighted criteria award algorithm influences the probability of awarding the contract to the lowest bidder. Results from the Monte Carlo simulation indicated that including a second non-cost criterion changes the probability of awarding the contract to the lowest bidder by between 2.78% and 21.48%, depending on the weights given to the different criteria. This result suggests that the number of non-cost criteria influences the likelihood of the lowest bidder being awarded the contract, as do the ranges of the weights and scores applied to those criteria, in line with previous research [11].

Having around 80% of best-value highway projects awarded to the lowest bidder is a reason to explore the impact that different settings of the award algorithm might have on the likelihood of the lowest bidder being awarded the contract. Regarding the number of non-cost criteria to include in the weighted criteria formula, this research found that if the weight given to cost is lower than 50%, including two non-cost evaluation criteria instead of one increases the probability of awarding the contract to the lowest bidder by around 20.5%.

Further, Pomerol and Barba-Romero [15] indicated that the weighted sum algorithm implies that one unit lost in one criterion is exactly compensated by one unit gained in another. Thus, having multiple non-cost criteria might make it difficult for decision-makers to evaluate the trade-off between cost and non-cost criteria.

Notwithstanding these results, DOTs might select several non-cost criteria in the award algorithm formula to obtain information about the proposers in all those areas. This research adds new data for decision-makers to consider when deciding how many criteria to include in the award algorithm.

As this research’s results suggest, the number of criteria considered in the weighted criteria award algorithm influences the probability of awarding the contract to the lowest bidder. However, these results depend on the probabilistic distributions (for cost and non-cost scores) obtained from historical data. Future research should explore the results under different score distributions. Further, if other distributions are found to decrease the probability of the lowest bidder winning, future research should address what steps might be taken in actual practice to move the general trend toward those distributions.

This research’s findings contribute to the knowledge of alternative project delivery and procurement methods by showing how the number of criteria considered in the weighted criteria award algorithm influences the probability of awarding the contract to the lowest bidder. These results fill a gap in current practice regarding recommendations on the number of non-cost criteria to consider in the weighted criteria award algorithm, and they are helpful for DOTs crafting DB requests for proposals for highway construction projects.


3. Is best-value leading to a low-bid selection?

The authors posed one research question to explore whether best-value procurement is leading to a low-bid selection.

To answer this question, the research followed a three-step approach—first, data collection; second, best-value selection categorization; and third, descriptive statistical analysis.

3.1 Data collected

The authors collected data from design-build best-value bidding results for 275 transportation projects. The projects ranged from 2002 to 2021 and spanned 14 states across the United States. Best-value bidding results are publicly available on DOTs’ websites and include the following data for each project: state, year, project name, proposed cost and technical scores given to the proposals submitted in the project’s procurement, and the winner in each case. From the data collected, the researchers reviewed and filtered the projects, selecting only those with complete information about the procurement and the selection decision. As a result, 113 projects were considered. Of those, 20% were procured using a weighted criteria award algorithm and 80% using adjusted bid/adjusted score.

3.2 Best-value selection type categorization

To characterize the type of selection conducted in each procurement, the researchers defined three categories:

  • Cost-driven selection: the proposer that won the contract was the lowest bidder, and its technical proposal was not given the highest technical score among the proposers in that procurement.
  • Best-value selection: the proposer that won the contract was the lowest bidder, and its technical proposal was given the highest technical score among the proposers in that procurement.
  • Quality-driven selection: the proposer that won the contract was not the lowest bidder, and its technical proposal was given the highest technical score among all the proposers in that procurement.
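The three categories can be expressed as a small Python helper (an illustrative sketch; the final "other" branch, covering a winner that is neither the lowest bidder nor the top technical scorer, is an addition not defined in the chapter):

```python
def selection_type(winner_is_lowest_bid, winner_has_top_tech_score):
    """Map a procurement outcome to the selection categories above."""
    if winner_is_lowest_bid and winner_has_top_tech_score:
        return "best-value"
    if winner_is_lowest_bid:
        return "cost-driven"
    if winner_has_top_tech_score:
        return "quality-driven"
    return "other"  # not defined in the chapter; added for completeness
```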

3.3 Descriptive statistical analysis

The descriptive statistical analysis considered two nominal variables: the type of award algorithm, which could take two values (weighted criteria or adjusted bid/score), and the type of selection, which could take three values (cost-driven, best-value, or quality-driven). Cross-tabulation tables were used to understand the relationship between types of award algorithms and types of selection.
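A minimal cross-tabulation can be built with the Python standard library; the five records below are hypothetical and only illustrate the mechanics, not the study's data:

```python
from collections import Counter

# (award algorithm, selection type) pairs -- hypothetical examples
records = [
    ("weighted criteria", "cost-driven"),
    ("weighted criteria", "best-value"),
    ("adjusted bid/score", "best-value"),
    ("adjusted bid/score", "cost-driven"),
    ("adjusted bid/score", "best-value"),
]

crosstab = Counter(records)                                   # cell counts
share = {cell: n / len(records) for cell, n in crosstab.items()}  # cell shares
# crosstab[("adjusted bid/score", "best-value")] counts 2 of the 5 records
```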

3.4 Findings and discussion

Eighty-one percent of the 113 projects analyzed were awarded to the lowest bidder in either a cost-driven selection or best-value selection. Sixty-three percent of the projects were awarded to the firm that provided the best technical proposal.

Further, Figure 2 shows two main trends:

Figure 2.

Type of selection per type of best-value award algorithm. Note: Percentages calculated from the total sample of 113 projects.

Within each award algorithm, the percentages of cost-driven and best-value-driven selections are similar. In weighted criteria, both are 9%. In adjusted bid/adjusted score, they range between 29% and 34%. These values are substantially higher than those obtained for quality-driven selections (3% and 17% for weighted criteria and adjusted bid/score, respectively).

The weighted criteria algorithm provides similar percentages of cost-driven and best-value-driven selections (9%). However, the adjusted bid/adjusted score algorithms provide a higher percentage of best-value selections as compared with cost-driven selections (34% versus 29%).

Previous research has shown that after two decades of using best-value procurement, 80% of the projects were awarded to the lowest bidder [9, 10]. Based on these data, it might be argued that best-value procurement works as low-bid procurement, with a selection based only on cost.

The analysis of 113 best-value design-build projects supports previous findings that around 80% of projects are awarded to the lowest bidder (in this research, the percentage was 81%). However, this research refutes the argument that best value works as low-bid procurement, based on the following results:

  • Independently of the award algorithm, less than 30% of the projects procured with best value resulted in a cost-driven selection.
  • When using the weighted criteria award algorithm, in 50% of the projects where the lowest bidder was selected, that firm also received the highest score for its technical proposal.
  • In the case of the adjusted bid/score algorithm, 53% of the projects awarded to the lowest bidder went to firms that received the highest technical score.

The aggregate percentage (weighted criteria plus adjusted bid/score) of cost-driven selections versus best-value and quality-driven selections together is 38% versus 62%.


4. Conclusions

Results of these analyses show that the weighted criteria award algorithm with two components provides lower probabilities for the lowest bidder to win the contract than the adjusted bid algorithm or the weighted criteria algorithm with three components. It is important to note that this holds when the weight given to the cost component is lower than 50%.

Further, the analysis shows that 43% of the best-value design-build projects led to an actual best-value selection, with the best technical and economic proposal being chosen. Thus, the results suggest that there is still room for improvement, and public administrators need to be cognizant of the effect that the type of award algorithm, the weights given to each criterion, and the scoring systems utilized have on selection.


Acknowledgments

The authors wish to acknowledge the state highway agency personnel who supported this research. The authors would also like to thank California State University, Fresno, and specifically the Lyles College of Engineering, for the financial support and assigned time provided.


A. Appendices

A.1 Statistics summary and probability distributions

This section includes a description of the data collected based on the exploratory data analysis (EDA) conducted and the results of the goodness of fit analysis that provided the probabilistic distributions best fitting the cost and non-cost scores used in the analysis (Tables 7 and 8). Following the description and goodness of fit analysis, this section shows the fitted distributions used as inputs to the Monte Carlo simulation of cases 1 and 2 described in Table 4.

                 CostScore   NonCostScore1   NonCostScore2
n                369         406             18
Minimum          0.4068      0.1653          0.5600
First quartile   0.8425      0.7811          0.5782
Median           0.9341      0.8600          0.6145
Mean             0.9069      0.8199          0.6202
Third quartile   1           0.9195          0.6580
Maximum          1           1               0.7110

Table 7.

Aggregated variables. Statistics summary.

                 CostScore_nlb   NonCostScore1_nlb   NonCostScore1_lb   NonCostScore2_nlb   NonCostScore2_lb
n                254             254                 115                10                  5
Minimum          0.4068          0.1653              0.2475             0.5770              0.5620
First quartile   0.8013          0.7844              0.8035             0.6112              0.5630
Median           0.8844          0.8560              0.8768             0.6460              0.5740
Mean             0.8647          0.8188              0.8424             0.6369              0.5984
Third quartile   0.9376          0.9250              0.9225             0.6662              0.5820
Maximum          0.9997          0.9952              1                  0.6800              0.7110

Table 8.

Disaggregated variables: Statistical summary.

CostScore and NonCostScore1 distributions were found left-skewed (median > mean).

In the case of the disaggregated variables, the statistical summary is shown in Table 8. CostScore_lb is not included in the table, given that it always adopts the value of 1.

CostScore_nlb, NonCostScore1_nlb, and NonCostScore1_lb distributions were found left-skewed (median > mean). The small sample of NonCostScore2 did not allow for creating an empirical distribution.

The goodness of fit analysis resulted in the probabilistic distributions that best fit the different variables (Table 9). AIC values and fitted distribution parameters are included in Section A.2. In the case of the variable NonCostScore2, a triangular distribution was adopted, given the sample size. The distribution was truncated between 0 and 1. The authors adopted a triangular distribution because, according to Johnson [16], it can be considered a proxy for the beta distribution, and the beta distribution has been suggested by Law and Kelton [17] as “a rough model in the absence of data.”
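Sampling the adopted triangular distribution is straightforward in most environments; the Python sketch below uses the NonCostScore2_lb parameters reported in Table 9:

```python
import random

rng = random.Random(0)

# Triangular(min, max, mode) stand-in for NonCostScore2_lb (Table 9).
samples = [rng.triangular(0.5620, 0.7110, 0.60) for _ in range(5000)]

# Since min and max already lie inside [0, 1], the truncation to [0, 1]
# mentioned in the text is automatically satisfied by these draws.
assert all(0.0 <= s <= 1.0 for s in samples)
```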

CostScore
  Lowest bidders: fixed value of 1 (no distribution)
  Non-lowest bidders (CostScore_nlb): Box-Cox Power Exponential
    Mu: -0.116678; Sigma: -2.437; Nu: 5.5734; Tau: 2.0596

NonCostScore1
  Lowest bidders (NonCostScore1_lb): Box-Cox Cole and Green
    Mu: 0.865307; Sigma: -2.2464; Nu: 4.8876
  Non-lowest bidders (NonCostScore1_nlb): Generalized Gamma
    Mu: -0.07610; Sigma: -2.6457; Nu: 39.29

NonCostScore2
  Lowest bidders (NonCostScore2_lb): Triangular
    Min: 0.5620; Max: 0.7110; Mode: 0.60
  Non-lowest bidders (NonCostScore2_nlb): Triangular
    Min: 0.5770; Max: 0.6800; Mode: 0.64

Table 9.

Probability distributions for the cost score and non-cost score variables.

A.2 Fitted distributions

A.2.1 Variable: CostScore_nlb

See Tables 10 and 11, Figure 3.

Distribution                          AIC
Box-Cox Power Exponential Original    -529
Box-Cox Power Exponential             -529
Generalized Gamma                     -521
Generalized Beta Type 2               -519
Box-Cox Cole and Green Original       -509

Table 10.

Top 5 distributions.

Parameter   Estimate    Std. Error   t value   Pr(>|t|)
Mu          -0.116678   0.007052     -16.55    <2e-16
Sigma       -2.437      0.053        -45.98    <2e-16
Nu          5.5734      0.5386       10.35     <2e-16
Tau         2.0596      0.3273       6.293     1.39e-09

Table 11.

Fitted distribution.

Figure 3.

Box-Cox Power Exponential (original) distribution and CDF.

A.2.2 NonCostScore1_nlb

See Tables 12 and 13, Figure 4.

Distribution                          AIC
Generalized Gamma                     -383
Box-Cox Power Exponential             -381
Box-Cox Power Exponential Original    -381
Generalized Beta Type 2               -381
Box-Cox Cole and Green Original       -360

Table 12.

Top 5 distributions.

Parameter   Estimate   Std. Error   t value   Pr(>|t|)
Mu          -0.07610   0.01068      -7.127    1.09e-11
Sigma       -2.6457    0.1265       -20.92    <2e-16
Nu          39.29      10.44        3.764     0.000208

Table 13.

Fitted distribution.

Figure 4.

Generalized gamma distribution and CDF.

A.2.3 NonCostScore1_lb

See Tables 14 and 15, Figure 5.

Distribution                          AIC
Box-Cox Cole and Green                -197
Box-Cox Cole and Green Original       -197
Box-Cox Power Exponential             -195
Box-Cox Power Exponential Original    -195
Box-Cox t                             -195

Table 14.

Top 5 distributions.

Parameter   Estimate   Std. Error   t value   Pr(>|t|)
Mu          0.865307   0.009517     90.92     <2e-16
Sigma       -2.2464    0.1114       -20.17    <2e-16
Nu          4.8876     0.7838       6.236     8.17e-09

Table 15.

Fitted distribution.

Figure 5.

Box-Cox Cole and Green distribution and CDF.

References

  1. Gransberg DD, Molenaar KR, Scott S, Smith N. Implementing best-value procurement in highway construction projects. In: Alternative Project Delivery, Procurement, and Contracting Methods for Highways. Reston, VA: ASCE Press; 2007. pp. 60-79
  2. Scott S, Molenaar KR, Gransberg DD, Smith NC. NCHRP Report 561: Best-Value Procurement Methods for Highway Construction Projects. Washington, D.C.: Transportation Research Board of the National Academies; 2006
  3. Anderson SD, Russell JS. NCHRP Report 451: Guidelines for Warranty, Multi-Parameter, and Best Value Contracting. Washington, D.C.: Transportation Research Board of the National Academies; 2001
  4. Yu WD, Wang KW. Best value or lowest bid? A quantitative perspective. Journal of Construction Engineering and Management. 2012;138(1):128-134
  5. Egan J. Rethinking Construction: The Report of the Construction Task Force. London: HMSO; 1998
  6. Clemen RT, Reilly T. Making Hard Decisions. 3rd ed. Mason, OH: Cengage Learning; 2014
  7. Molenaar KR, Tran D. NCHRP Synthesis 471: Practices for Developing Transparent Best Value Selection Procedures. Washington, D.C.: Transportation Research Board of the National Academies; 2015
  8. Ballesteros-Pérez P, Skitmore M, Pellicer E, Zhang X. Scoring rules and competitive behavior in best-value construction auctions. Journal of Construction Engineering and Management. 2016;142(9):04016035
  9. FMI (Fails Management Institute). Design-Build Utilization: Combined Market Study. 2018. Available from: https://dbia.org/wp-content/uploads/2018/06/Design-Build-Market-Research-FMI-2018.pdf
  10. Gaikwad SV, Calahorra-Jimenez M, Molenaar K, Torres-Machi C. Challenges in engineering estimates for best-value design-build projects: An analysis of bid dispersion in U.S. highway projects. Journal of Construction Engineering and Management. 2021;147(7):04021065
  11. Calahorra-Jimenez M, Torres-Machi C, Chamorro A, Alarcón LF, Molenaar K. Importance of noncost criteria weighting in best-value design-build US highway projects. Journal of Management in Engineering. 2021;37(4):1-12
  12. Molenaar K, Torres-Machi C. Efficiency Study of Design-Build Program. 2020. Available from: https://rosap.ntl.bts.gov/view/dot/56640
  13. Scott S, Molenaar KR, Gransberg DD, Smith NC. Best-Value Procurement Methods for Highway Construction Projects. Washington, D.C.: Transportation Research Board; 2006
  14. CRAN. gamlss.dist: Distributions for Generalized Additive Models for Location Scale and Shape. R package version 5.3-2. 2021
  15. Pomerol JC, Barba-Romero S. Multicriterion Decision in Management: Principles and Practice. New York: Springer; 2000
  16. Johnson D. The triangular distribution as a proxy for the beta distribution in risk analysis. Journal of the Royal Statistical Society: Series D (The Statistician). 1997;46(3):387-398
  17. Law AM, Kelton WD. Simulation Modeling and Analysis. New York: McGraw-Hill; 1982
