**Article Alert**: Lessons learned from invasive plant control experiments.

Although I haven't been around this year to highlight many of the articles that I had wanted to, I want to briefly summarize a meta-analysis that I have alluded to elsewhere on the blog.

Dr. Karin Kettenring and Dr. Carrie Reinhardt-Adams, both invasion biologists with academic roots in the American Upper Midwest, recently asked the question, "How are practitioners and scientists assessing invasion control?"  As a restorationist, I've wondered this almost as often as (and usually alongside) the related question of how restoration practitioners assess project success. Too often it seems that plants and herbicide meet in an idyllic setting, spend a few days changing one another's chemistry, and then the plants...and the people... move on.  There is usually a percent cover 'before and after,' but the state of knowledge about how effectively biological invasions have actually been suppressed rarely travels far outside of specific systems. So long and thanks for all the glyphosate...

A line from Hulme (2006), cited in the authors' lead-in, really frames the impetus for their assessment: "much research to date is primarily concerned with quantifying the scale of the problem rather than delivering robust solutions."

So onward, into the database world and in pursuit of their goal, the pair undertook the daunting task of searching ISI Web of Knowledge, Scopus, Ecology Abstracts, Science Direct, JSTOR, Digital Dissertations and assorted grey literature for invasive plant control studies.  Casting this wide a net in an age of online databases and hundreds of ecology journals is by itself commendable, but the authors also applied a fine filter, winnowing an initial pool of roughly 10,000 unique publications down to 355 papers published between 1960 and 2009 that met their full search criteria - loosely summarized, papers that quantitatively assessed the control of invasive plants.

Their cumulative findings were not surprising to anyone with either a foot in management or in research:

-Most invasion control studies are small scale and short term, with 51% of studies lasting one growing season or less. These are but two of many limitations on the inferences that can be drawn from in situ invasion control studies*.

-After invasive plants are controlled, revegetation is rarely used, and when it is, it goes unassessed.

-When one invader is removed, it is often replaced by another, or the initial species simply reinvades.

-Invasion control costs are poorly quantified, if at all. This includes both economic costs and collateral damage to native species.

So what to do about this state of affairs? The authors suggest involving land managers in the research process through adaptive management and other embedded collaborations.

My interpretation of these results is nearly identical to the authors': think bigger, make better friends, and work together to push forward the effectiveness of invasion control.

One thing I will note is that many invasion control projects never become peer-reviewed publications; instead, the results circulate within the organizations that use the information to guide their own adaptive management. For example, in Washington, the King County Noxious Weed Board does a variety of things to control state-listed noxious weeds on public and private land, but its results (and those of other conservation districts, noxious weed agencies and state agencies) would only rarely make it into the peer-reviewed press.

Full citation:

Kettenring, K.M. and C. Reinhardt-Adams. 2011. Lessons learned from invasive plant control experiments: a systematic review and meta-analysis. Journal of Applied Ecology 48(4): 970-979.


*On a personal note, I have recently leaped into research for my PhD project and have noticed that many of the experimental designs and analyses used in invasion control research suffer from several flaws. Because treatments are often large-scale, agriculture-style affairs, pseudoreplication is abundant, and sub-sampling without pooling is also common. Because there are often multiple treatment combinations, multiple years, and sometimes uneven sample sizes, researchers building linear models for these data have fallen into a few traps: using incorrect sums of squares, incorrect degrees of freedom, and not partitioning variability among blocks or the random factors associated with the treatment units. A rough sketch of one way to sidestep these traps follows below.
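As a minimal illustration (mine, not the paper's), the sketch below shows one way to handle these issues in Python with pandas and statsmodels: subsamples are pooled to the treatment unit before analysis, and block enters the model as a random factor so treatment effects are not tested against pseudoreplicated error. The file name and column names (block, plot, treatment, year, cover) are hypothetical stand-ins for whatever a given study actually records.

```python
# Sketch: pooling subsamples and fitting a mixed model with block as a
# random factor. Data layout is assumed: several quadrats (subsamples)
# per treated plot, plots nested within blocks, measured across years.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical raw data file with columns: block, plot, treatment, year, cover
raw = pd.read_csv("invasion_control_quadrats.csv")

# Pool subsamples: one mean cover value per plot per year, so the plot
# (the unit that actually received the treatment) is the unit of replication.
plots = (raw
         .groupby(["block", "plot", "treatment", "year"], as_index=False)
         ["cover"].mean())

# Mixed model: fixed treatment-by-year effects, random intercept for block,
# which partitions block-level variability instead of pseudoreplicating.
model = smf.mixedlm("cover ~ C(treatment) * C(year)",
                    data=plots, groups=plots["block"])
result = model.fit()
print(result.summary())
```

The key design choice is averaging quadrats within a plot before modeling; fitting quadrat-level data directly without a plot-level random term is exactly the sub-sampling-without-pooling trap described above.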


**This piece was largely authored in late August 2011 when the paper was brand spanking new. Lag times happen...but so do good papers in restoration ecology. Read, learn, be merry.
