Climate warning

Verra

More than 90% of rainforest carbon offsets found to be worthless!

A joint investigation by the Guardian, Die Zeit and SourceMaterial has found that carbon offset projects certified by Verra, the largest certifier of forest carbon offsets, were largely ineffective in reducing deforestation and had little impact on carbon emissions. The investigation found that only a small number of Verra's rainforest projects showed evidence of deforestation reductions, with further analysis indicating that 94% of the credits had no benefit to the climate. It also raised concerns about human rights issues in at least one of the offsetting projects.

Verra, which operates several leading environmental standards for climate action and sustainable development, disputes the findings, arguing that the conclusions reached by the studies are incorrect and questioning their methodology. The findings pose serious questions for companies that are depending on offsets as part of their net-zero strategies.

Excerpt from the Guardian article:
Barbara Haya, the director of the Berkeley Carbon Trading Project, has been researching carbon credits for 20 years, hoping to find a way to make the system function. She said: “The implications of this analysis are huge. Companies are using credits to make claims of reducing emissions when most of these credits don’t represent emissions reductions at all. Rainforest protection credits are the most common type on the market at the moment. And it’s exploding, so these findings really matter. But I’m starting to give up on that. I started studying carbon offsets 20 years ago studying problems with protocols and programs. Here I am, 20 years later having the same conversation. We need an alternative process. The offset market is broken.”






Pinned by We Don't Have Time

Verra

66 w

Greetings. Here is a longer and more nuanced response we sent to the reporters a few hours after their deadline passed but well before the stories posted:

After reviewing your questions and double-checking with members of our technical team, I can reiterate what I said initially: your conclusions about Verra baselines simply don’t add up. The academic exercises we discussed on our call, and which you cite, are philosophically interesting, but even the authors have layered in caveats that preclude your conclusions, which flow from six fundamental errors. Let me recap the errors as I see them.

Error #1: Mistaking New Tools for Magic Bullets

On our only call together, we discussed new synthetic modeling tools that Verra and others are experimenting with to assess the impact of human activities on forests. You seemed to be trying to understand the role these tools may and may not play, as well as their advantages, disadvantages, and how they may or may not inform the reassessment of REDD+ baselines going forward. In your questions, however, you’ve glommed onto a handful of academic exercises that use synthetic modeling in ways that are philosophically interesting but would never pass muster in a bona fide carbon methodology.

The fact is that synthetic controls, counterfactual pixels, and a slew of other new tools are meaningless when detached from the realities of what’s happening on the ground – the old GIGO principle. This does not mean the concepts have no value. Indeed, as we discussed:

- Verra already recognizes a variant of synthetic controls in two methodologies, but that recognition came after several years of piloting and public consultation, as I’ll loop back to below.
- Verra has also proposed the use of risk mapping based on variables that have proven to be generally predictive at scale, but that approach does not involve synthetic modeling.
- Finally, several groups have proposed ways of incorporating synthetic modeling in REDD+ baselines, but these are also facing GIGO challenges and are not ready for prime time.

The bottom line is that synthetic modeling has value but isn’t a magic bullet. Wrongly applied, it can serve the interests of ideologues and opportunists while sidelining pragmatists seeking viable solutions. You, however, have mislabeled these exercises as “real world” counterfactuals while dismissing methodologies built on decades of piloting, review, and consultation as “fairy tales.”

Error #2: Cherry-Picking and Magnifying the Minority

Related to the above: the basic concepts underlying these approaches are simple, but the tricky parts are:

1. identifying the indicators that accurately reflect local drivers of deforestation,
2. ensuring the data accurately captures those indicators, and
3. understanding the difference between causes, effects, and mere correlation.

The authors of the papers you’re citing have, by their own admission, given these components short shrift, which means they have not “estimated how much deforestation was prevented by the projects,” as you claim. They have merely shown that different approaches yield different results.

Error #3: Conflating the Measurement of Deforestation with Impact Evaluation

Some REDD+ projects are designed to impact only the project area, but most are designed to generate positive activities that spread into surrounding areas. These “positive externalities” are well-documented, but you’ll miss them if you rely on synthetic modeling without corresponding scenario analysis and process tracing.
Error #4: Ignoring the Mission of Standard-Setting Bodies

As a standard-setting body, Verra’s role is to review all available research in context and identify the sweet spot where most experts align. It is not to defend or attack individual studies. Although we have recently begun to propose new methodologies ourselves, we have traditionally acted as a forum through which entities that want to produce a methodology can do so by exposing their ideas to bona fide experts through iterative rounds of expert review and public consultation. By following this approach and then making its documentation publicly available, Verra provides foundational methodologies on which some buyers may layer additional filters – such as the Climate, Community, and Biodiversity (CCB) Standards, the proposed ABACUS label, or their own proprietary filters and preferences. Intermediaries such as South Pole and Cool Effect apply filters based on their own additional criteria, as do sophisticated buyers such as Salesforce and many others.

Error #5: Seeing Offsets as See-Saws

You’re continuing to insist that every reduction achieved through the Voluntary Carbon Market (VCM) results in an increase someplace else, which is simply not true. In a compliance market, offsetting is only permitted for residual emissions, and the voluntary market provides a vehicle for going above and beyond that to drive overall emissions down deeper and faster than companies can realistically achieve internally – not, as you seem to believe, an excuse for doing nothing. There is debate over what can be realistically achieved internally, and the Voluntary Carbon Markets Integrity (VCMI) initiative is working on identifying science-based criteria for what constitutes carbon neutrality. Verra supports that initiative as well as broader calls for more transparency in corporate disclosures, but you seem intent on holding Verra accountable for policing claims – which exhibits a profound (and, I suspect, willful) ignorance of the nature of that challenge. Ecosystem Marketplace conducted an analysis of buyers in 2016 and found that companies that voluntarily purchased offsets tended to do so as part of a structured reduction strategy, and the fundamental laws of supply and demand render it impossible for emitters to offset their way out of this mess.

Error #6: Ignoring the Nature of the Challenge

Building on the above, you have chosen to ignore the near-universal acceptance of the need to emphasize deep reductions now while gradually building up the capacity to pull greenhouse gases from the atmosphere. The Intergovernmental Panel on Climate Change (IPCC) tells us we must dramatically scale up Nature-Based Solutions (NBS) – and specifically REDD+ – to meet the climate challenge, and analysis shows that we’ll have to reforest 50 hectares for every hectare we lose in a given year to break even – or wait 50 years for that hectare to recover. REDD+ is a necessary transition mechanism, and scaling up requires, among other things, moving from site-specific modeling to a more standardized approach that incorporates the newest technologies. That’s the central challenge we’re dealing with here, but you keep insisting that an oversimplified application of new and evolving tools for developing standardized approaches automatically generates “findings” that are superior to site-specific modeling.
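To make the 50-to-1 arithmetic concrete, here is a rough back-of-the-envelope sketch. The per-hectare carbon stock is an illustrative assumption, not a Verra figure, and real forest uptake curves are not linear:

```python
# Back-of-the-envelope check of the 50:1 reforestation claim.
# Illustrative assumption (not a Verra figure): clearing one hectare of
# mature forest releases its full carbon stock at once, while a newly
# reforested hectare absorbs roughly 1/50 of that stock per year.
mature_stock_tco2 = 500.0                     # assumed tCO2 stored per mature hectare
annual_uptake_tco2 = mature_stock_tco2 / 50   # assumed tCO2/yr per reforested hectare

emitted = 1 * mature_stock_tco2               # one hectare lost this year

absorbed_50ha_1yr = 50 * annual_uptake_tco2 * 1   # plant 50 ha, wait 1 year
absorbed_1ha_50yr = 1 * annual_uptake_tco2 * 50   # plant 1 ha, wait 50 years

print(emitted, absorbed_50ha_1yr, absorbed_1ha_50yr)  # 500.0 500.0 500.0
```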
The Promise and Pitfalls of Synthetic Modeling

Synthetic modeling comes from the social sciences, where researchers have used it to isolate the effects of an “event or intervention of interest [on an] aggregate unit, such as a state or school district.” It works not by comparing the impacted city or state to a comparable unit but to a synthetic city or state modeled from multiple states, school districts, or other population centers.

VCM stakeholders have been experimenting with the application of synthetic modeling to deforestation for over a decade, and last year Verra approved a new methodology for projects that reduce emissions by promoting improved forest management (IFM) in the United States. Synthetic modeling proved effective with IFM because IFM consists of standardized interventions carried out across a relatively homogenous region – in contrast to REDD+ projects, which prescribe site-specific interventions for site-specific drivers of deforestation. Despite the relative simplicity of IFM, it still took several years of piloting and multiple rounds of expert review and public consultation for the American Forest Foundation and The Nature Conservancy to develop the dynamic performance benchmarks (DPBs) that were eventually approved under Verra.

Recent advancements in remote sensing and artificial intelligence have ushered in a new era of digital measurement, reporting, and verification (DMRV), which has enabled several groups to present strategies for incorporating DPBs into REDD+ methodologies. All these efforts, however, are struggling to overcome the “tricky” challenges alluded to above, because the drivers of deforestation are woven into local economies and thus vary greatly from country to country and region to region. Verra faced a similar challenge in developing the new risk mapping tool designed to underpin the nesting of projects in jurisdictional REDD+ programs. In this case, research shows some indicators are somewhat predictive globally in the short term, but the variability from region to region is such that local weighting will be necessary. Even then, risk mapping is one component in a larger methodology and not a methodology in itself.

Among the many challenges to implementing DPBs in REDD+ are data collection and the identification of reliable indicators – called “covariates” – that can be used to synthesize counterfactual rates of deforestation. If you look at the covariates in the IFM methodologies, you will see how specific they are.
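For readers who have not met the technique, here is a minimal synthetic-control sketch in the spirit of the description above. Every donor area, deforestation rate, and year below is invented purely for illustration; real applications add covariate matching, placebo tests, and uncertainty analysis:

```python
# Minimal synthetic-control sketch: build a "synthetic" comparison unit for
# one treated area as a weighted average of untreated donor areas, choosing
# weights that reproduce the treated area's PRE-intervention outcomes.
import numpy as np
from scipy.optimize import minimize

# Rows = donor areas, columns = annual pre-intervention deforestation rates (%).
donors_pre = np.array([
    [2.0, 2.2, 2.1],
    [1.0, 1.1, 0.9],
    [3.0, 2.8, 3.1],
])
treated_pre = np.array([1.8, 1.9, 1.8])  # treated area over the same years
donors_post = np.array([2.4, 1.2, 3.3])  # donors' post-intervention rates

def pre_period_gap(w):
    # Squared mismatch between the treated area and the weighted donor mix.
    return float(np.sum((treated_pre - w @ donors_pre) ** 2))

n_donors = donors_pre.shape[0]
result = minimize(
    pre_period_gap,
    x0=np.full(n_donors, 1.0 / n_donors),
    bounds=[(0.0, 1.0)] * n_donors,                              # weights in [0, 1]
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},  # weights sum to 1
)

counterfactual_post = result.x @ donors_post
print("donor weights:", result.x.round(3))
print("counterfactual post-intervention rate:", round(float(counterfactual_post), 3))
# The estimated effect is the observed post rate minus this counterfactual;
# as the GIGO point above stresses, it is only as credible as the donor
# areas and covariates used to build it.
```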
Limitations in the Literature

As I stated on our call, and will repeat here, Verra is reluctant to praise or critique academic exercises produced in good faith to inform broader discussion – especially if the authors already provide multiple caveats, as is the case here. Indeed, we welcome and encourage such exercises because they help us identify the sweet spots where most experts align. I’ll remind you that Guizar-Coutiño, while not evaluating baselines, found that deforestation was 47 percent lower in project areas than in their counterfactual pixels, while degradation rates were 58 percent lower. They concluded, “Our results indicate that incentivizing forest conservation through voluntary site-based projects can slow tropical deforestation and highlight the particular importance of prioritizing financing for areas at greater risk of deforestation.” On our call, I pointed out that this was different from West et al, but that neither was conclusive.

Nonetheless, you have insisted on presenting these baseline extrapolations as gospel, so we have no choice but to point out some obvious shortcomings – which, again, the authors mostly acknowledge.

First, all three papers skirt the three tricky issues that have so far prevented a wider incorporation of synthetic modeling into REDD+ baselines:

1. the data comes from low-resolution Landsat imagery, including, in the case of West et al, Global Forest Watch, which deviates substantially from “official” numbers and adamantly states that it wasn’t designed for that purpose;
2. the covariates are general and not tested for specific regions; and
3. the authors don’t claim to have looked at what is driving the changes.

Second, real-world evidence contradicts the synthetic models. Swallow et al, for example, pointed out that the true real-world rates of deforestation (as opposed to the synthetic models) in project reference regions exceeded the rates projected in the original baseline assessments. This deviates substantially from West’s synthetic controls and controverts his thesis. Indeed, West et al 2020 shows that the synthetic controls do a poor job of projecting deforestation in many of the projects.

Third, the authors select project areas based on how well they fit their approach rather than on objective criteria. West et al dropped about 25 percent of projects or project areas from the selection due to a poor fit of synthetic controls, and in the case of Guizar-Coutiño et al, the final analysis included less than half of the projects they initially looked at.

Fourth, West et al acknowledge that synthetic controls are derived from multiple, scattered sites that are smaller than the project area. In West et al 2020, they acknowledge that smaller sites cannot be said with certainty to behave the same as the large project area.

Fifth, even if the synthetic modeling were accurate, the findings wouldn’t hold up, because a project baseline is not the same as the number of credits issued.
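On that fifth point, here is a stylized sketch of why a baseline is not the number of credits issued. The quantities and deduction rates are assumptions chosen only for illustration, not figures from any actual project:

```python
# Stylized illustration of point five: issued credits are not the baseline.
baseline_emissions_tco2 = 100_000   # projected emissions without the project (assumed)
monitored_emissions_tco2 = 40_000   # emissions actually observed in the project area (assumed)

gross_reductions = baseline_emissions_tco2 - monitored_emissions_tco2

leakage_rate = 0.15  # assumed share of deforestation displaced outside the project
buffer_rate = 0.20   # assumed share withheld in a non-tradable buffer pool

credits_issued = gross_reductions * (1 - leakage_rate) * (1 - buffer_rate)

print(f"gross reductions: {gross_reductions:,} tCO2")   # 60,000 tCO2
print(f"credits issued:   {credits_issued:,.0f} tCO2")  # 40,800 tCO2
# Even with a perfectly accurate baseline, the credits issued sit well below
# the simple baseline-minus-observed gap once deductions are applied.
```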

8
    • Ingmar Rentzhog

      65 w

      Thank you so much for your long answer @Verra. It is very important that we provide as much transparency as possible about this topic.

      3
    • We Don't Have Time

      67 w

      Dear Kia Krond, Thank you for getting your climate warning to level 2! Verra responded to this article on their website; see this link: https://verra.org/verra-response-guardian-rainforest-carbon-offsets/ We have reached out to Verra and asked if they want to issue any further response. I will keep you updated on any progress! /Adam, We Don't Have Time

      1
      • Tabitha Kimani

        67 w

        That's why audits in the climate change realm are so necessary – to avoid misleading people.

        1
        • Marco Rodzynek

          67 w

          Once real-time data monitoring and digital verification happen, it will work – well, it has to.

          8
        • Marco Rodzynek

          67 w

          That’s why we need the nature data alliance.

          5
          • Ingmar Rentzhog

            67 w

            Thanks for sharing this informative post and article. I would really like to read Verra's own answer to this criticism. It is scary if this investigation is right.

            3
            • Ajema Lydiah

              67 w

              This is worth it. Stop the cruelty.

              • Peter Kamau

                67 w

                Why the lie???⚠️⚠️⚠️
