Evaluation

Showcase

Below are some examples of projects, listed by type of activity or by the need expressed by clients.

SCIENCE AND TECHNOLOGY PRODUCERS

Summative Evaluation of the Alberta Prion Research Institute (2016)

Client: Alberta Prion Research Institute

The Prion Institute provides financial support for research capacity building, for research and development projects (including funding of facilities and equipment), and for knowledge mobilization activities in the fields of prion diseases and other neurological diseases with prion-like characteristics. A total of $46.7 million was invested in research projects, infrastructure and personnel support through the Prion Institute between its creation in April 2005 and August 2015.

Science-Metrix was mandated to undertake the first-ever evaluation of the Alberta Prion Research Institute. This summative evaluation, conducted in 2015/16, focused on identifying successes, impacts and challenges associated with the delivery of the Prion Institute’s research funding programs over the course of its existence. The evaluation was commissioned by the Prion Institute in accordance with the reporting requirements of the 2012–2015 funding agreement between the Ministry of Innovation and Advanced Education and AI Bio. The findings and recommendations contained in the final report will help inform strategic planning and decision-making for the Prion Institute going forward.

Science-Metrix was responsible for carrying out an external and independent evaluation, while the Prion Institute and AI Bio provided advice and oversight. A mix of quantitative and qualitative data was collected through five evaluation methods: a document and literature review, a bibliometric analysis, semi-structured interviews, case studies and a survey of funded researchers.

Methodologies

The Prion Institute established a set of 10 outcomes at its inception in 2005. These outcomes have changed over time, primarily in 2012 to coincide with the establishment of the Institute’s second funding agreement with the Government of Alberta. There are currently three expected outcomes for the Prion Institute, and the 10 originally established outcomes align with the following three themes of the current outcomes:

  1. Capacity-building
  2. Knowledge generation
  3. Knowledge transfer, exchange and translation

Given the summative nature of the evaluation, the focus was on assessing the extent to which the Institute’s three expected outcomes had been met. The evaluation also addressed a number of process-oriented questions of interest to the Prion Institute. A combination of five qualitative and quantitative methods was employed, as follows:

  • Documentation and literature review
  • Bibliometric analysis
  • Interviews
  • E-survey
  • Case studies

The methodologies were each purposefully chosen to fit the nature of the data to be collected and the composition of the data sources, including the individuals to be interviewed or surveyed.

Read the report

Evaluation of Forest Ecosystems Science and Application Program Sub-Activity (2013)

Client: Natural Resources Canada (NRCan)

Description: The objective of the Forest Ecosystems Science and Application (FESA) Program Sub-Activity is to increase scientific knowledge on forest ecosystems and support stakeholders in their sustainable forest management policies and practices. As part of this Program Sub-Activity, NRCan conducts research, national assessments and monitoring to develop, synthesize and integrate scientific knowledge. This knowledge is used by the appropriate government jurisdictions, industry and other stakeholders to develop forest management practices and policies, and by NRCan and other federal government departments to meet international reporting obligations, form Canada’s negotiating positions on international environmental issues related to forests, and counter misconceptions about Canada’s forest practices. The evaluation encompassed the five-year period from fiscal year 2007/08 to 2011/12 and examined FESA’s objectives and activities, focusing on relevance and performance issues.

Multiple indicators and lines of evidence were used to address the main evaluation questions, and the evaluation proceeded in three phases:

Phase I (Design): Prior to the evaluation, an evaluation assessment was conducted by Science-Metrix to develop a program profile for FESA, including the adjustment/confirmation of the logic model. The evaluation assessment and subsequent planning phase also served to collect, analyze and assess available data for use in the subsequent evaluation (i.e., data quality assessment), including the identification of potential interviewees, financial data, and project-level data. The evaluation assessment not only ensured that the information necessary to complete the work was collected and reviewed, but also improved the buy-in and understanding of various participants who would be involved in the evaluation. During the design phase, further development and refinement of the evaluation framework, questions, and program logic model were made. Most of the evaluation tools were developed during this phase.

Phase II (Fieldwork and Technical Reporting): Based on the use of multiple indicators and lines of evidence to address the main evaluation questions, the second phase consisted of the implementation of the evaluation tools. Four data-collection methodologies were developed and implemented in order to provide evidence from multiple types (quantitative and qualitative) and sources (primary and secondary) of data. Findings identified from each line of evidence were then summarized and presented by evaluation question in four separate technical reports. In addition, a technical report summarizing findings from the database analysis conducted by NRCan was provided to Science-Metrix.

Phase III (Integration and Reporting): All data collected as part of the fieldwork, including that already collected during the evaluation assessment phase and by NRCan (database analysis), were integrated into a final evaluation report.

The following lines of evidence were used in this evaluation:

Database Analysis: NRCan’s Strategic and Evaluation Division conducted a review of quantitative data (including financial information) contained within FESA’s internal planning tool to provide insight into the efficiency and economy of the sub-activity.

Document Review: A thorough review of approximately 200 documents, secondary literature, files and sub-activity data was conducted to provide an overall understanding of the FESA Program sub-activity, as well as details on program needs, alignment with government priorities and other information to inform relevance issues. These documents primarily consisted of departmental documents (strategic plans, annual reports, audits, etc.) and program documents (financial and administrative databases, performance monitoring, communication outputs, etc.).

Survey of Component Leads: A web survey was conducted with 56 component leads out of a total survey population of 93. The main objective of the survey was to collect information regarding achievement of component objectives and subsequent outcomes, participation of partners/collaborators, level of stakeholder satisfaction, etc. The survey primarily addressed performance issues to fill data gaps relating to outcomes, but relevance issues were also briefly examined. Views on cost-effectiveness and program delivery were also collected.

Interviews and Focus Groups: A total of 97 interviews were conducted with stakeholders both internal (i.e., individuals who played a significant role in the design and delivery of FESA) and external (including federal and provincial representatives; industry; international experts; academia; NGOs). Internal stakeholders provided valuable insight on relevance, efficiency and economy, as well as on factors that impacted program performance and lessons learned. External stakeholders helped evaluators gauge the levels of awareness and stakeholder support for the program and offered valuable observations of achievements, impacts, and alternatives. Additionally, five focus groups were conducted to gain an in-depth understanding of the needs of external stakeholders and how FESA activities relate to these needs.

Bibliometrics and Webmetrics: Bibliometric and webmetric methods were used to analyze the scientific performance and use of peer-reviewed publications, grey literature and other types of outputs (e.g., tools, frameworks, databases) of the FESA Program Sub-Activity.

The evidence suggested that while there is an ongoing need for the FESA sub-activity, a few adjustments should be made to best fulfill that need. As such, six recommendations were made, notably to clarify its mandate and objectives in certain areas and improve communication with both internal and external stakeholders.

Read the report

SCIENCE-BASED POLICY AND REGULATION

Evaluation of the Office of Energy Efficiency (2014)

Client: Natural Resources Canada

Description: This report presents the findings of the evaluation of Natural Resources Canada’s (NRCan’s) Energy Efficiency Programs. The programs are administered by NRCan’s Office of Energy Efficiency (OEE), part of the Energy Sector. These programs were last evaluated in 2009/10. The evaluation covers program activities from 2009/10 to 2013/14, comprising $1.1 billion in NRCan funding. The objective of the Sub-Program is to increase energy efficiency, resulting in energy savings and reduced greenhouse gas emissions across targeted sectors of the Canadian economy (housing, buildings, vehicles, equipment, and industry). To do this, the Sub-Program has activities that target the energy-using behaviour of consumers and businesses.

The evaluation employed the following multiple lines of evidence:

Document and Performance Measurement Data Review
Evaluators reviewed OEE’s Performance Measurement data, key program documents and completed impact studies and surveys that covered the evaluation period. The findings from this exercise assisted SED in calibrating the data collection instruments so as to focus primary data collection on specific evaluation questions, program activities, and issues where there were information gaps or where findings needed to be further explained. This information was used to inform the final data collection tools and the selection of interviews and case studies, and to corroborate evaluation results. As additional documents were collected through interviews, they were incorporated into the document review to ensure they were considered in the final evaluation analysis.

Interviews
Key informant interviews were conducted to respond to the majority of the evaluation questions and to complement the other methods. A total of 100 semi-structured interviews were conducted across the five program sectors to provide in-depth qualitative information on activities, context, relevance, progress towards outcomes, and economy and efficiency, including delivery alternatives. These comprised 20 interviews with internal program staff; 72 interviews with external partners, users and non-users of OEE programs; and 8 interviews with other organizations that are not directly involved in the programs but could speak to the energy efficiency needs and views of their membership vis-à-vis OEE Energy Efficiency programs. Interviews lasted approximately one hour.

Case Studies 
Fifteen case studies were conducted across the five OEE sectors. The case studies provided an in-depth review of individual initiatives to more fully understand, in particular instances, how program delivery translates into results. Case studies were selected to assess key causal mechanisms in the program theory and to confirm theories of implementation and change (e.g., the effectiveness of regulation in a particular instance, or the effectiveness of training).

  • Ongoing activities: Ten standard-effort case studies (i.e., 5 interviews, mostly external, plus a review of documents and data provided by OEE and interviewees) focused on ongoing activities that update key activities in the program’s theory of change: providing training, information development, and regulations, codes and standards; and
  • New activities: Given the need to provide information on activities that had not been evaluated before (i.e., those introduced under the 2011 mandate), five case studies were assigned a higher level of effort (i.e., 10 interviews, mostly external, plus a review of documents and data provided by OEE and interviewees) to assess the roll-out and implementation of new program activities identified in the 2011 mandate for the ecoENERGY Efficiency programs.

Surveys
Stakeholder and client surveys provided quantitative data on the views and experiences of target audiences that have used OEE products and services, including impacts of programming on their behaviour. SED worked closely with OEE to augment their already planned studies to capture data for the evaluation. To preserve neutrality, SED had final approval on specific evaluation questions added for the evaluation and advised on data collection. Target groups included building and industrial facilities managers, home builders, homeowners, ENERGY STAR program participants, and trucking companies (carriers).

Read the report

Evaluation of the Enhanced Feed Ban (2013)

Client: Canadian Food Inspection Agency (CFIA)

Description: In accordance with the Treasury Board Secretariat (TBS) Policy on Evaluation, the primary objective of this evaluation was to assess the relevance (continued need, alignment with government priorities, and alignment with federal government roles and responsibilities) and performance (achievement of expected outcomes, as well as demonstration of efficiency and economy) of the Enhanced Feed Ban (EFB) and provide recommendations to improve program effectiveness and efficiency, as necessary.

The Enhanced Feed Ban (EFB) is a part of the Government of Canada's Bovine Spongiform Encephalopathy (BSE) Program, which is a horizontal initiative led by the Canadian Food Inspection Agency (CFIA). The main objective of the EFB is to accelerate Canada's progress in BSE management by preventing more than 99% of potential infectivity from entering the feed system as well as to enhance risk management of transmission of BSE in the cattle herd. To this end, the EFB aims to:

  • Strengthen animal feed restrictions through amendments to the relevant regulations;
  • Ensure compliance with control measures around prohibited materials and specified risk material (SRM) removal; and
  • Increase the level of verification and confidence that SRM is segregated from feed, fertilizer and pet food and that prohibited materials are not fed to ruminants.

The removal of SRM from animal feed is an important animal health protection measure and an indirect public health protection measure.

Evaluation methodology

This evaluation used a combination of quantitative and qualitative methods, as follows:

  • Program data review
  • Document and literature review
  • Program documentation and file review
  • Internal and external stakeholder interviews
  • Analysis and integration of data

Read the report

Evaluation of Canadian Blood Services Grant and Contribution Programs (2013)

Client: Health Canada and the Public Health Agency of Canada

Description: Within the Canadian health system, a critical role of Health Canada is to act as funder and information provider, supporting organizations through grants and contributions to help meet overall health system objectives. The non-profit Canadian Blood Services (CBS) is one of these organizations supported through federal funds (as well as provincial and territorial funding), operating at arm’s length from government. Related to these broad lines of work, Health Canada supports two specific CBS programs through the Grant and Contribution Program under evaluation: the Organ and Tissue Donation and Transplantation (OTDT) Program (five-year contribution, $17.9 million total) and the Blood Research and Development (R&D) Program ($5 million annual grant). The evaluation period for both programs was from 2008/09 to 2012/13. The evaluation covered activities carried out by CBS under the federal components of the OTDT and R&D Programs (i.e., those that were funded from Health Canada’s Grant and Contribution Program).

In accordance with the 2009 Treasury Board Policy on Evaluation and related Directives, the evaluation focused on core issues of Relevance and Performance (effectiveness, efficiency and economy) of the two CBS programs.

Evaluation methodology

Multiple lines of evidence were used to address 10 evaluation questions.

Literature, document and file review (including the review and analysis of administrative data): This review of program documents, files, databases, and governmental and departmental documents (e.g., budgets, reports on plans and priorities, performance reports, RMAF, etc.) was a key line of evidence to address several core issues. For this particular evaluation, more than 700 documents and files were reviewed.

Web surveys (2): Two surveys were conducted with CBS and OTDT external stakeholders. These surveys were particularly important to gather information on the achievement of expected outcomes.

Key informant interviews (18): Key informant interviews were conducted with Health Canada management and staff (i.e., internal stakeholders who have played a significant role in the oversight of this grant and contribution program), as well as with representatives from CBS (i.e., the recipients). Key external stakeholder and/or expert groups outside of these two organizations, such as partners (e.g., CIHR, CIHI, associations, representatives from provinces/territories), beneficiaries, and subject experts (particularly those who have played a direct role in program activities), were also consulted.

Overall, the evaluation found that the CBS Grant and Contribution Programs continue to address a demonstrable need and remain responsive to the needs of Canadians. Three recommendations were made to Health Canada to strengthen the program’s objectives, develop contingency plans and facilitate the shift toward R&D.

Read the report

RESEARCH FUNDING

Genome Canada Five-Year Evaluation (2015)

Client: Genome Canada

Description: Genome Canada, established in 2000, is a not-for-profit organization that invests in large-scale genomics initiatives in sectors of strategic and economic importance to Canada, aiming to strengthen genomics research and technical capacity in Canada and to foster multi-sectoral partnerships nationally and globally. With a view to generating economic and social benefits for Canadians, its target sectors include health, agriculture, environment, forestry, fisheries, and energy and mining. Genome Canada has also worked to ensure that genomics research efforts consider underlying ethical, environmental, economic, legal or social aspects (GE3LS).

The purpose of the second five-year evaluation of Genome Canada was to assess the organization’s relevance and retrospective performance in the context of the Canadian research and innovation system from 2009/10 to 2013/14. The report also includes a prospective dimension, as it seeks to inform management and other stakeholders on how best to implement the organization’s strategic direction (Strategic Plan 2012–2017).

Evaluation methodology

Five data collection methods were used as part of this evaluation.

Management and delivery review: As part of the review, 24 interviews were conducted, and a document and file review of over 200 documents related to Genome Canada’s activities was performed. The purpose of the management and delivery review was to examine the history, processes and performance of Genome Canada across the evaluation period (2009–2014). Interviews were particularly useful to inform issues of continued need, efficiency and economy, whereas documents were one of the main sources of evidence – including quantitative evidence (i.e., financial and output data) – on the achievement of outcomes, as well as efficiency and economy issues.

Survey: Web surveys were conducted with five stakeholder groups, namely the Principal Investigators (PIs; n=53), co-applicants and other investigators (co-PIs; n=153), GE3LS PIs and leaders (n=52), highly qualified personnel (HQP; n=153) and other stakeholders (e.g., partners, collaborators, current and potential end-users; n=137) involved in Genome Canada-supported projects. These surveys mainly sought to collect information on performance issues to address data gaps relating to outcomes. Views regarding cost-effectiveness and delivery were also collected.

Bibliometrics: Using various indicators (e.g., scientific output, specialization, citation impact), Science-Metrix assessed the scientific performance of Genome Canada’s funded researchers in genomics and in each of the strategic sectors, namely agriculture, health, environment, fisheries/aquaculture, forestry and energy/mining. This performance was also examined within the broader Canadian context (e.g., comparison to non-funded researchers, benchmarking with other countries), in an attempt to determine Genome Canada’s contribution to the national standing in genomics over the years. The level of national and international scientific collaboration of Genome Canada and of individual researchers was also measured, as an indicator of the organization’s effectiveness in coordinating genomics research efforts.
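
By way of illustration, a specialization indicator of this kind typically compares an entity’s share of publications in a field with the world’s share in that same field. The sketch below is a minimal, hypothetical example in Python; the function name and the counts are illustrative assumptions, not drawn from the Genome Canada evaluation or from Science-Metrix’s actual tooling.

```python
def specialization_index(entity_in_field: int, entity_total: int,
                         world_in_field: int, world_total: int) -> float:
    """Ratio of the entity's share of papers in a field to the world's share.

    A value above 1 suggests the entity publishes proportionally more in the
    field than the world average; a value below 1, proportionally less.
    """
    entity_share = entity_in_field / entity_total
    world_share = world_in_field / world_total
    return entity_share / world_share


# Hypothetical counts, for illustration only
si = specialization_index(entity_in_field=420, entity_total=3_000,
                          world_in_field=150_000, world_total=2_500_000)
print(f"Specialization index: {si:.2f}")  # 2.33 -> above the world average
```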

Case studies: Eight projects funded by Genome Canada were examined in depth as part of the case study method. The selection of case studies focused on emerging and natural resource sectors, namely agriculture, environment/energy/mining, fisheries and forestry, in order to address particular needs relating to the implementation of the 2012–2017 Strategic Plan. Each case involved conducting two or three interviews with relevant stakeholders (e.g., project contributors, partners and/or end users). A project-level document review was also conducted, examining applications; quarterly, interim and annual reports; and information such as collaboration agreements, scientific publications and any other related outputs provided by Genome Canada and interviewees. The purpose of the case studies was to provide insight on key themes that cut across projects and relate to the success and impact of large-scale genomics projects. The resulting cross-case analysis is presented in Appendix 2 of the evaluation report.

International comparative review: The data collection and analysis for the international comparative review were based on two methods: a literature review of five organizations comparable to Genome Canada and targeted interviews with representatives from three of these organizations. The literature review component focused on identifying and extracting relevant information and data from the organizations’ websites and other relevant sources such as grey literature, funding announcements, etc. A total of four interviews were also conducted with individuals knowledgeable about their organization and the country’s overall support mechanisms for genomics research. This line of evidence was used to assess the continued need for national support of genomics research and the contribution of Genome Canada to Canada’s global leadership in this field, as well as to position Genome Canada in the global context according to elements such as operating environment, strategy development, design and delivery of programs or projects, and best practices.

Read the report

Evaluation of the Canada Excellence Research Chairs (CERC) Program (2014)

Client: Social Sciences and Humanities Research Council (SSHRC)

Description: Launched in 2008, the Canada Excellence Research Chairs (CERC) Program supports Canadian universities in building a critical mass of expertise targeted within the four priority research areas of the Government of Canada’s Science & Technology (S&T) Strategy, in support of Canada’s growing reputation as a global leader in research and innovation. As a tri-agency initiative of the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council (NSERC) and the Social Sciences and Humanities Research Council (SSHRC), the Program is administered by the Chairs Secretariat, housed within SSHRC.

The evaluation of the CERC Program was conducted by SSHRC in collaboration with Science-Metrix and covered the initial five years of the Program, from inception in 2008/09 to 2013/14 (the end of phase 1 of the second competition). The evaluation addressed key questions of relevance, performance and efficiency, in accordance with section 42.1 of the Financial Administration Act and the Treasury Board Policy on Evaluation (2009).

Evaluation methodology

Ten evaluation questions were addressed via eight data collection methods: document review, review of administrative and performance data, cost-efficiency analysis, interviews, international comparison study, web survey, bibliometric analysis, and case studies of CERC units. Methodological challenges were mitigated in a proactive manner, resulting in the collection of robust evaluation data that were triangulated across multiple lines of evidence for each evaluation question. The impact of any limitations (e.g., small population size, current stage in the program lifecycle) is noted in the report where related to specific findings or recommendations.

Read the report

Evaluation of Caltech Commitment for the Gordon and Betty Moore Foundation (2013)

Client: Gordon and Betty Moore Foundation

Description: In 2002, the Gordon and Betty Moore Foundation made a long-term commitment of $300 million in potential grants to the California Institute of Technology (Caltech). The stated goal of this Commitment was to support the institution in “advancing its position at the forefront of higher education, technological development, and scientific research.” In 2012, Science-Metrix conducted an external evaluation of this Commitment, examining 24 of the 29 grants in the portfolio to date. The Foundation especially sought to gain an understanding of the significance of scientific achievements as well as the catalytic value of the Commitment funding approach. Overall, this evaluation aimed to provide conclusions and recommendations regarding the Commitment’s impact for science, for Caltech and for the Foundation.

Evaluation methodology: Multiple lines of evidence were used to conduct the evaluation:

  • Document and file review: About 250 documents were reviewed including administrative and financial documents, annual reports, grant applications and related documentation, as well as scientific and grey literature.
  • Interviews (53): To validate documentary information and to further explore grant outcomes and other issues, interviews were conducted with the following groups:
    • 21 interviews with grant Principal Investigators
    • 9 interviews with management (at Caltech and at the Foundation)
    • 23 interviews with case-study-specific grant stakeholders
  • Case studies (8): Eight selected grants were explored in detail to generate a deeper understanding of the complexity of the funded initiatives, and the significance of their scientific achievements. The case studies involved:  
    • Targeted bibliometric analyses (4 of the 6 case studies)
    • Additional grant-specific document and literature reviews
    • Additional interviews with grant stakeholders

In addition to the above, Science-Metrix was responsible for preparing and facilitating an Expert Panel meeting. The Expert Panel provided expert judgement on the scientific achievement of the Caltech Commitment, reviewed and validated the preliminary findings of the evaluation, and elaborated on preliminary recommendations. Science-Metrix prepared the material for this panel to review, and analyzed the results of the panel meeting for inclusion into the final evaluation report.

Overall, the evaluation found that nearly all of the grants funded under the Commitment realized, or will realize, their originally stated outcomes. Of all of the grants examined, four have heightened potential to be transformational for science. The results stemming from the majority of grants were found to be highly attributable to the Commitment. The Commitment also had a remarkable impact on Caltech, enhancing the institution’s ability to concentrate its resources on specific research areas and directly allowing it to maintain or advance its position in education, research, and technology.

PERFORMANCE EVALUATION

Comprehensive Evaluation of the Bank’s Development Results 2004–2013 (2016)

Client: African Development Bank

At the end of 2013, the Independent Development Evaluation team of the African Development Bank Group (AfDB or the Bank) launched a Comprehensive Evaluation of the Bank’s Development Results (CEDR) covering the previous decade. The purpose of the CEDR was two-fold:

  1. Provide an independent, credible and evidence-based assessment of the Bank’s development results between 2004 and 2013 and, in particular, the extent to which the Bank’s interventions have made a difference in Africa (accountability).
  2. Identify lessons and recommendations on the Bank’s performance to inform the implementation of the Bank’s new strategic priorities known as the Hi 5s (learning).

Science-Metrix’s evaluation constituted the synthesis phase that completed the CEDR process. To ensure representativeness, the core of this synthesis was based on the conduct of 14 Country Strategy and Program Evaluations (CSPEs), which together account for close to 60% of the Bank’s portfolio (based on approvals) over the period 2004–2013. The evaluation covered all lending and non-lending operations during this period. The 14 countries were selected to match as closely as possible the composition of the Bank’s overall portfolio in terms of regions, language, income and fragility status.

The evaluation was theory-based. The theory of change (ToC) was developed based on a thorough review of relevant documentation, including the Bank’s policies, operational strategies and guidance documents, evaluations and assessments, and comparable documents from major development partners. The final report addressed two sets of issues. First, it identified factors enabling or hindering the achievement of results by responding to the question “How has the Bank managed itself?” This set of issues was examined by focusing on selectivity, efficiency, partnerships, leverage, analytical capacity, and managing for development results. Second, the report identified development results by responding to the question “What has been achieved by the Bank?” Based on these overarching issues, evaluation questions and indicators were set out in the evaluation matrix, with underlying assumptions drawn from the ToC as well as from the sector outcomes and impact pathways.

This synthesis relied on multiple lines of evidence to respond to the evaluation questions, as follows.

  • Context factor reviews consisted of reviews of the Bank’s performance based on the ToC. These reviews, equivalent to country performance case studies, were conducted as an integral part of the CSPE process in all 14 selected countries.
  • Detailed project results assessments (PRAs) were conducted for completed projects and for ongoing projects close to completion (169 projects) across the 14 countries.
  • Two background reports, a portfolio review and a qualitative review, complemented the context factor reviews and project results assessments as inputs into the synthesis.
  • Eight past topic or sector evaluation reports were also included and used to triangulate evidence from the core sources above.

Read the report

NETWORKING, PARTNERSHIP AND KNOWLEDGE TRANSFER

Evaluation of the Collaborative Research and Innovation Opportunities (CRIO) program (2016)

Client: Alberta Innovates – Health Solutions (AIHS)

The Collaborative Research and Innovation Opportunities (CRIO) program was designed by Alberta Innovates – Health Solutions (AIHS) to enable multidisciplinary research teams to address health issues and system challenges facing Alberta. CRIO aimed to support multidisciplinary teams and to involve end-users in knowledge transfer activities to enhance the applicability of the research results.

AIHS mandated Science-Metrix to conduct a formative evaluation of CRIO. The evaluation examined design and delivery mechanisms used by AIHS as well as progress towards outcomes at the mid-point of the program. The scope of the evaluation focused on the 2013–14 cohort of award recipients, covering a total of 45 awards.

Methods

The evaluation collected data via five methods.

  • A document review covered internal program data as well as a small sample of CRIO proposals and Research Management Plans. External peer-reviewed and grey literature was also scanned.
  • A bibliometric analysis was conducted to provide valuable insights regarding the production of peer-reviewed publications by CRIO-supported researchers from the 2013–14 cohort.
  • An online survey of CRIO leads and co-leads (63.2% response rate) was conducted to gather their perspectives on team structure, CRIO management, achievement of outcomes and prospects for their research.
  • Interviews were conducted with 31 individuals, both internal and external to AIHS.
  • Three case studies, one for each CRIO funding category (Project, Program, Team), focused on successes and the contribution of CRIO funding to case progress.

The evaluation concluded that CRIO’s objective to engage end-users was achieved, but that the extent of engagement depended greatly on the nature of the research. More specifically, grants with a public health focus tended to be more successful in this aim than projects with a clinical or commercial focus.

Although commercialization was not relevant for all CRIO research teams, some did advance their commercial potential through CRIO funding. Even in these cases, products were still far from market, and several suggestions were provided on how to facilitate more effective commercialization outcomes.

The evaluation also concluded that CRIO facilitated an improved capacity to collaborate and to conduct cross-sector research. This was clearly illustrated through increased interdisciplinarity, leveraged dollars, new opportunities for trainees, enhanced network interactions, frequent team meetings and the creation of high-impact knowledge products.

Practical applications of the research outputs mainly included “improved educational and skill level of the workforce” and “changes in efficiencies and effectiveness of public service delivery.” However, it was too early to identify system-wide policy or practice changes due to CRIO activities, and the majority of CRIO impact is anticipated in a period after funding has ended. Furthermore, there was limited evidence showing that CRIO grants emphasized a sustainability plan. This could jeopardize the extent to which long-term outcomes can be achieved.

Recommendations

In light of the above findings, five recommendations were made to help AIHS inform future decision-making regarding strategic alignment, relevance, design and performance.

  1. Clarify the intended objectives of CRIO and the CRIO Portfolio and communicate them to its diverse range of program stakeholders to ensure alignment between program and stakeholder objectives.
  2. Clearly articulate the current roles, responsibilities and expectations of program managers to relevant AIHS personnel and to CRIO grantees. Explore the literature for ideas on how to expand the current roles.
  3. Consider re-structuring performance data reporting/collection to enhance the utility of the data collected.
  4. Consider implementing process changes to improve progress towards desired outcomes.
  5. Consider conducting bibliometric analyses at the summative evaluation stage using a control group.

Such analyses would provide a comparison against the baseline established through this evaluation and more direct evidence upon which to base conclusions about the scientific impact of the CRIO investment.

The UK’s Performance in Physics Research—National and international perspectives (2014)

Client: The Institute of Physics (IOP)

Description: The Institute of Physics (IOP), in collaboration with the Engineering and Physical Science Research Council (EPSRC) and the Science and Technology Facilities Council (STFC), sought to commission a comprehensive assessment of the performance and impact of UK physics research in order to demonstrate its value at the highest level.

The objectives of the study were to:

  • Compare the UK’s physics research output with that of leading countries and other scientific disciplines
  • Compare the UK’s physics research impact with that of leading countries and other scientific disciplines
  • Explore patterns of international collaboration among UK physics researchers
  • Identify the determinants of UK physics research success and its impacts

This report, prepared by Science-Metrix, made use of a portfolio of established data collection methods, encompassing both primary and secondary sources, in order to answer nine study questions in full, supported by concrete evidence and rigorous analysis:

  • Bibliometric methods were used to compile robust data on research output, impact and collaboration in physics research, its sub-areas and related research areas, in the UK and other leading countries. The bibliometric study used indicators based on counts of scientific papers and citations to papers (e.g., number of papers, specialisation index (SI), average of relative citations (ARC), average of relative impact factors (ARIF), international and national collaboration rates); a simplified sketch of how a relative-citation indicator of this kind can be computed follows this list.
  • Interviews (12) were carried out with UK physics researchers, active in four UK physics research sub-areas, to discuss the impacts of their research.
  • Case studies (4) focused on these sub-areas in order to compile and compare information on the variety of impacts from UK-based research.
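
For readers unfamiliar with these indicators, the sketch below illustrates how a relative-citation measure such as the ARC is commonly derived: each paper’s citation count is divided by the average citations of papers in the same subfield and publication year, and the resulting ratios are averaged. This is a simplified, hypothetical example in Python, with invented data, and not the exact procedure or dataset used in the study.

```python
from statistics import mean

# Hypothetical papers: (citations received, expected citations for papers in
# the same subfield and publication year). In practice, expected values come
# from a large citation database such as the Web of Science.
papers = [
    (12, 8.0),   # cited above the subfield-year average
    (3, 6.0),    # cited below the subfield-year average
    (20, 10.0),
]

# Average of relative citations: ARC > 1 means the paper set is cited more
# than comparable papers worldwide; ARC < 1 means less.
arc = mean(citations / expected for citations, expected in papers)
print(f"Average of relative citations (ARC): {arc:.2f}")  # 1.33
```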

The project took place over a span of 16 months (December 2012 to March 2014). The bibliometric indicators are based on counts of scientific papers and citations to papers indexed in the Web of Science database produced by Thomson Reuters.

Read the report