
13 August 2020

Webinar - Life sciences capital project benchmarking

Download Report

Linesight recently hosted a life sciences webinar, in which Jeff Peragallo (Head of Life Sciences – US) and Nigel Barnes (Head of Life Sciences – EMEA) were joined by Steve White, Leader of Global Estimating and Control at Merck, to discuss how benchmarking provides insights into peer and market alignment, identifies gaps and opportunities for improvement in project scope, cost, schedule and quality, and provides a clear baseline from which to improve project performance.

Below are the questions and answers from the session, including a number of questions that we didn’t get to during the webinar. The recording of the session is available to download using the button above.

Q: You mention cost and schedule; are you considering area classification metrics as well, i.e. production space vs. plant room?

A: Yes – we are collecting data on area types, usage, and classifications.

Q: How do you weight different factors (area, location, construction duration and year, etc.) in benchmarking?

A: We will make use of an internationally recognized index, as well as drawing on our own global experience and looking at the data at an elemental level.

Q: Is there a geographical priority or project type priority for this effort?

A: No, there is no particular focus, though the pharma industry tends to be co-located in the same geographies and to be focused on similar projects right now.

Q: When during the project cycle is the benchmarking data captured?

A: Preferably at project completion, but information from key stage gate reviews is also of interest, as it allows growth comparisons.

Q: How long is this process and when will data be available to view?

A: We are currently collecting the data and are looking to have a first report out to participants in Q4. It will then be an ongoing activity, including participant roundtables.

Q: Given that no two projects are alike, and company accounting processes vary, can you really expect to provide effective benchmarking?

A: Yes – the more project data we have, the greater the chance of identifying similar projects. Furthermore, if we look at projects on an elemental basis, we can compare elements that are the same or similar, rather than whole projects, and build up a full picture.

Q: What types of projects are being factored in?

A: Though we are interested in all types, there is a focus on bio/manufacturing, labs (quality and research), and warehousing/cold rooms.

Q: Will users be able to identify their own projects against others? And do you have a minimum number of projects for a metric to be generated (i.e. the need to be statistically relevant)?

A: To maintain confidentiality, the data will be presented without any client identifiers. We see a minimum of five projects as needed for a metric to be relevant, but obviously more than this is always beneficial and offers better insight.

Q: Will the benchmarking consider both conventional build and modular/mobile construction?

A: Yes, we are collecting this information.

Q: How do you plan to validate the cost/schedule information being provided? Misinformation will distort the data.

A: Right now, we are dependent on the clients providing the information. However, the data should quickly show outliers, and inquiries will then be made to better understand the data point.

Q: What cost structure will be used (UniFormat, CSI)?

A: CSI.

Q: In regard to older project data uploaded to the system, is automatic escalation applied by region to show potential current costs?

A: We use data from projects five years old or younger, and the data is escalated using industry norms.
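To illustrate how index-based escalation typically works (the figures below are assumed for illustration, not from the webinar): the historical cost is multiplied by the ratio of the current index value to the index value at project completion. For example, a project completed three years ago at $50m, in a region where the relevant index has moved from 100 to 110, would be presented at approximately $50m × (110 ÷ 100) = $55m in current terms.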

Q: Is there an established set of standard definitions that all participants must measure against when providing their data? People measure things differently.

A: Yes, we have provided some definitions, and going forward we would like to align participants on standard measures that we can all agree on.

Q: Are you going to track or obtain information on projects’ usage of cost and schedule contingency, in order to categorize the success or distress that a project encountered, as reflected in the data?

A: Yes. As noted in response to another question, if we compare costs from the stage gate reviews against final outturn costs, we can measure the use of contingency.
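As a simple illustration (figures assumed, not from the webinar): a project sanctioned at a stage gate for $100m, including $10m of contingency, that closes at a final outturn cost of $106m has consumed $6m – 60 per cent – of its contingency, giving a straightforward measure of the distress or success the project encountered.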

Q: Do you anticipate this being a one-time collection scenario, or is the plan for this to be an ongoing program?

A: This is planned to be an ongoing, continuous exercise.

Q: How do you ensure the authenticity of the input data, so that it doesn’t skew the overall benchmark data?

A: Right now, we are dependent on the clients providing the information and working with us in roundtable discussions. As noted in response to another question, the data should quickly show outliers, and inquiries will then be made to better understand the data point.

Q: Will filters be included to allow for execution strategies, i.e. single source, single or two-stage tender?

A: Regarding execution strategies, right now we are collecting information on project delivery (specifically design-bid-build, design-build, EPCM/CM (at risk)/GC), so we can also get into the contracting and payment strategies.

Q: How do you adjust benchmark values based on data collected to date to cater for ‘social distancing’ on the negative side and greater use of off-site fabricated process system skids on the positive side?

A: By looking at both cost and schedule benchmark data at the correct granularity, we will be able to see the impacts of the points noted.

Q: How will you address the lack of detail in final cost data? Projects are funded based on detailed estimates, but run as contracts.

A: The majority of life sciences projects allow clients to request the required level of detail from their delivery partners, which means we receive the data at the appropriate level of detail.

Q: How does the benchmarking track market factors relative to tender award values?

A: By looking at the costs at the stage gate reviews versus final outturn cost, we can track market tender factors.

Q: Is the benchmarking for greenfield or brownfield projects?

A: Both – we are currently looking at greenfield, brownfield, and retrofit projects.

Q: When looking at soft costs, and given the different methods of accounting, can these be reflected solely in hours rather than costs, as a way of normalizing?

A: Yes, we are requesting both hours and costs accordingly.

Q: How will you ensure 'anonymous' projects are truly anonymous (e.g. an anonymous project in Macclesfield)?

A: We will present projects by region within each country; no specific cities will be mentioned.

Q: Will you be looking to capture sustainability levels within projects, given that carbon neutrality is becoming such a hot topic?

A: Not in this initial round. There are items in the questionnaire where we would expect the participant to identify sustainability, if it had an impact on the scope or delivery of the project.

Should you have any further queries or if you would like more information on participating, please don’t hesitate to reach out to Jeff and/or Nigel.
