Measuring Our Influence as Conservation Scientists

I am a conservation scientist.  Like any other scientist, I develop and test hypotheses, trying to figure out how the world works.  Once I learn something, I publish my results in academic journals where other scientists can evaluate and build upon what I’ve learned.  Because I’m a conservation scientist, however, I also need to make sure the people who directly impact prairie conservation (ranchers, land managers, policy makers, etc.) get my information and use it to improve the way grasslands are managed and restored.  If I fail to influence the actions of others in positive ways, I fail as a conservation scientist.

It doesn’t matter how much we learn about employing prescribed fire effectively if we’re not able to help others use the lessons we learn.

In science, keen observational skills and creativity often spark innovations, but rigorous collection of data is required to see whether a great idea actually makes sense or not.  While I’ve had some good ideas, I’ve also come up with plenty of grassland management and restoration strategies that turned out to be duds.  In each case, I learned a little more about prairie ecology and our land stewardship improved as a result.

I’m proud of the work I’ve done over the years to develop new and better ways of restoring and managing prairies.  I know those strategies are effective because I’ve spent a tremendous amount of time testing them, through both observation and rigorous data collection.  My computer is full of spreadsheets and graphs showing how prairie species and communities respond to various treatments.

I’m also proud of the work I’ve done to share what we’ve learned with others, but until recently, I’ve done very little to evaluate the effectiveness of that work.  I’m not alone – most of my colleagues in the world of conservation science do a great job of measuring the natural world and its responses to human activities, but do very little to evaluate whether their work is actually influencing conservation.  It’s fairly ridiculous when you think about it.  We would never think of devoting ourselves to a new invasive species control technique without testing its effectiveness, but for some reason we’re satisfied to rely on blind optimism that our outreach strategies are changing the world.

Come on, folks!  We’re scientists!  We love data, and we’re good at developing and testing ideas.  Why do we apply that passion and aptitude to only part of our work?  Why aren’t we testing whether our ideas are reaching the intended audience and influencing on-the-ground conservation work?  How can we adjust and improve our outreach strategies if we don’t have any data to work from?

To be fair, measuring outreach impacts requires a very different kind of scientific approach than most of us are comfortable with.  Instead of counting plants or observing behavior of birds, bees or bison, we have to assess the attitudes, motivations, and actions of people. Many of us took our career paths because we prefer the company of birds, bees and bison to people, but that doesn’t give us leave to just ignore people altogether – especially when the success or failure of our work hinges upon their actions.

Fortunately, we don’t have to work alone.  There are lots of scientists who are already good at studying people, and many of them are happy to work with us.  Those I’ve asked for advice have responded enthusiastically, and their input has been very helpful.

We should probably take some of the energy we spend studying animals and put it towards studying the way people respond to our outreach efforts.

Whether you’re a scientist who actively shares your results with your target audience, or someone who relies on others to translate and transmit that information, there are some basic questions we should all be trying to address.  This is far from a comprehensive list, but it’s a start.

Defining Audience and Message

What lessons and messages from my work are most important?

Who is the audience for those?

What messengers/media will best reach the audiences?

What are the current attitudes/actions of my audience?  What are the main drivers of those attitudes and actions?

Who are the credible voices my audience looks to for guidance?

How can I reach those credible voices?

Evaluating Success

Are my messages reaching my target audience?

How many people in that audience am I reaching?

Are my messages changing attitudes and/or actions?

At what scale, and to what degree, am I making a difference?

Which messages, messengers, and media are most effective for reaching each of my audiences?

Many of us host field days, at which we can share what we’re learning with others.  How many of us are assessing the effectiveness of those field days and other outreach strategies?

I’ve spent a lot of time thinking about audiences and messages, and it’s really helped me focus both my research and outreach more effectively.  Recently, I’ve also started trying to answer some of the questions in the above “Evaluating Success” category.  I’m making some progress, but I need to do much more.

I can tell you how many presentations I’ve given over the last two years (40) and how many people were in those audiences (3,447).  I’ve also been keeping track of calls and emails asking for advice on prairie restoration and management.  Unfortunately, while I have a lot of numbers, I can’t easily translate them into acres of improved management or enhanced habitat quality.

I have, however, made at least some progress toward measuring conservation impact on the ground.  Much of that success came from survey work by one of our first Hubbard Fellows, Eliza Perry.  Eliza conducted interviews with some land managers and private lands biologists who had attended field days at our Platte River Prairies.  Among her many findings: almost all respondents said what they learned from us had influenced their work, and they conservatively estimated that over 330,000 acres of land had been restored or managed differently because of that influence.  Beyond that, Eliza was able to identify key factors that led to our success and suggest ways to improve our effectiveness.

In addition, Eliza surveyed readers of The Prairie Ecologist Blog and I conducted a follow-up survey three years later.  Those surveys helped quantify the demographics of readers (e.g., about 2/3 of respondents have direct influence on prairie management).  The surveys also measured the degree of influence the blog has on readers’ understanding of prairies and approach to managing or restoring prairies (when applicable).  We even got a rough estimate of the number of acres on which management had been influenced by the blog (over 300,000).

Being able to quantify outreach impact, even when the numbers are fuzzy and incomplete, has been really helpful.  It helps me justify my job, for one thing, and assures both me and my supervisor that the time I spend writing, giving presentations, and consulting with others has value.  Most importantly, it helps me assess what is and isn’t working and adjust accordingly.

While it’s still not fully within my comfort zone, I’m trying hard to make sure I’m measuring the effectiveness of our outreach efforts, just as I do our prairie management and restoration work.  I would love to hear from people who are trying to do the same thing, especially if you’ve found effective evaluation strategies.  As more of us focus on measuring the success of our outreach work, we’ll be able to learn from each other and establish some common metrics.  Hopefully, we’ll also become more effective at translating what we’re learning into large scale and meaningful conservation impact!

How Science Works and Why It Matters

As a scientist and science writer, I’m concerned about the way science is perceived by the public.  I think some big misunderstandings about how science works are creating distrust and dismissal of important scientific findings.  That’s a huge problem, and I’d like to try to help fix it.

Let’s start with this: Science is a process that helps us understand and explain the world around us.  That process relies on repeated observations and experiments that continuously change our understanding of how things work.

Scientists often come up with results that conflict with those of other scientists.  That doesn’t indicate that something is wrong; it’s exactly how science is supposed to work.  When scientists disagree about something, more scientists get involved and keep testing ideas until a consensus starts to emerge.  Even at that point, ideas continue to be tested, and either gain more acceptance (because of more supporting evidence) or weaken (because conflicting results are found).

There is no endpoint in science.  Instead, ideas move through various steps of acceptance, depending upon how much evidence is collected to support them.  You can read much more about how the process works here.

We are lucky to have easy access to immense amounts of information today.  However, it can be very difficult to know which statements are supported by good science and which are just opinions amplified by people with an agenda and a prominent platform.  Today’s world, for example, still includes people who earnestly believe the earth is flat, despite overwhelming evidence to the contrary.

Media coverage of science often increases confusion.  How many times have you heard or read a media story about how a particular substance either cures or causes cancer?  In most cases, the scientist being interviewed tries to explain that their work is just one step in a long process of evidence gathering and doesn’t prove anything by itself.  That scientist might as well be talking into the void.  The headline has already told the story and pundits are shaking their heads and complaining about how scientists can’t ever agree.  (Please see paragraph three above.)

Unfortunately, confusion about how science works means the public often doesn’t pay attention when scientists actually do agree on things.  Loud voices can easily sway public opinion on important topics because it’s hard to know who to believe.  Often, we believe those who say things we want to be true.

Let me ask you three questions:

Do you believe that childhood immunizations are safe and effective?

Do you believe that rapid climate change is occurring as a result of human activity?

Do you believe that food derived from Genetically Modified Organisms (GMOs) is safe for human consumption?

The scientific community has clearly and strongly stated that the answer to all three of these questions should be yes.  Despite that, many people will answer yes to one or two of these questions, but not all three.  If you’re one of those people, I have another question for you.

If you trust the scientific community and the scientific process on one or two of these topics, why not on all of them?

This post is not about vaccines, global warming or GMOs.  I’m not trying to tell you what to think. Instead, I’m inviting you TO think.

If you’re a scientist, are you spending enough time thinking about how to talk to a public that is skeptical of science?  Being right isn’t enough when there are louder voices shouting that you’re wrong.  How do you expect the public to find the real story when your results are hidden in subscription-only journals and written in technical jargon-filled language?  What can you, personally, do to help others understand what science is, why it’s important, and what it can tell us?

If you’re someone who believes the science on some topics, but not others, are you comfortable with the reasons behind that?  Do you think science has been polluted by money and agendas, or do you think money and agendas are trying to discredit science?  Have you spent enough time reading articles that contradict your position and evaluating the credentials of those on each side?  Is it possible that long-held beliefs are preventing you from looking at evidence with clear eyes?

While individual scientists may have biases, the scientific process has no agenda other than discovery.  Scientists are strongly incentivized to go against the grain – both employers and journal publishers get most excited by research that contradicts mainstream ideas.  Because of that, ideas that gain overwhelming scientific consensus should be given extra credibility because they have withstood an onslaught of researchers trying to tear them down.

Can scientists be wrong?  Yes, of course – scientists are wrong all the time, and they argue back and forth in pursuit of knowledge.  That’s a good thing.  Saying that science is untrustworthy because not all scientists agree is like saying that we shouldn’t eat fruit because some of it isn’t ripe.

We desperately need credible science in order to survive and thrive on this earth.  Sustaining that credibility is the responsibility of both scientists and the public.  Scientists must provide accessible and clear information about what they’re learning, but the public also needs to be a receptive and discerning audience.

There is a torrent of news and data coming at us every day.  As you process that information, think like a scientist.  Question everything, including your own assumptions.  Form an opinion and then test it by looking for information that might disprove it.  Most importantly, even when you’re confident in your viewpoint, keep your mind open to new evidence and alternate perspectives.

Finally, remember that science is a continual and cumulative process.  Conflicting research results don’t indicate weakness, they drive scientists to keep looking for answers.  Science shouldn’t lose your trust when scientists disagree.  Instead, science should earn your trust when scientists reach consensus.
Special thanks to Anna Helzer for helpful feedback on this piece.