The Mindful Sceptic's Guide to Evaluating Research
10 essential steps for navigating complex evidence
These days, I must navigate a world awash with information, much of it claiming to be based on robust research. But how do I separate the wheat from the chaff? How do I ensure the evidence I use to inform my understanding of complex environmental and social issues is reliable and credible?
As a mindful sceptic, I am unlikely to wing it.
I prefer a systematic approach to evaluating research, one that is both practical and essential. So here is my ten-step critical analysis framework that goes beyond mere doubt. Each step embodies the essence of mindful scepticism—a balanced approach that combines rigorous scrutiny with open-minded inquiry.
By applying these steps, I can cultivate a deeper understanding of the evidence presented, whether it's a peer-reviewed paper on climate change, a report on sustainable urban development, or even a pronouncement from the Donald.
Evaluating research empowers me to engage more meaningfully with the pressing issues of our time, make informed decisions, and contribute to constructive dialogue.
It’s straightforward enough. I even managed to whittle the process down into ten steps.
If you want to know what they are without reading how to do them, here they are:
Authorship
Publication Source
Citations and References
Research Methodology
Peer Review
Currency and Timeliness
Objectivity and Bias
Peer Feedback and Impact Factor
Accessibility and Open Access
Consistency with Existing Knowledge
Read on to see the ten steps in action.
Was my academic research any good?
A peer-reviewed research paper is the primary way highly trained individuals communicate the outcome of their attempts to answer a question with evidence.
How they do this is a long process. The fastest publications appear in months; most take years from inception to publication. All along the way, there are endless opportunities for errors, some controllable, others in the lap of the probability gods.
There are errors of commission that involve taking inappropriate actions, doing something incorrectly, or making a wrong decision. For example, in the lab, a researcher might add 10ml of a titrant instead of 20ml by mistake, which is a direct and incorrect intervention.
Errors of omission occur when necessary actions or steps are neglected or left undone. For example, a researcher might fail to allocate replicates to treatments randomly, perhaps because they never learned that this is a crucial step in any experiment. Omission errors happen through a lack of action or oversight, leading to potential adverse consequences due to what was not done rather than what was done incorrectly.
And researchers are people. They make mistakes.
In other words, we can’t just assume. Evaluating research sources is crucial to ensure the credibility and reliability of the information a mindful sceptic might use to help make a considered decision.

Google Scholar has found 84 publications with my name on them. This is about right, as I used to religiously curate the number of peer-reviewed publications on my research list because publications are the core currency for advancement in an academic career.
A crude rule of thumb is that a productive academic should publish at least four papers annually. I met that rate across the 21 years of publishing with primarily data-driven publications—some observations were collected and analysed—so I guess I was productive.
Google also tells me that these papers have been cited 5,380 times. That means over 5,000 other publications have referred to one or more of my 84 research papers. That is a good result, considering my most recent paper was way back in 2008 when my academic career ended, and I went on to other currencies.
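As a quick back-of-the-envelope check on those numbers (using only the figures quoted above; the script and its variable names are mine, for illustration):

```python
# Back-of-the-envelope productivity check using the figures quoted above.
papers = 84          # publications found by Google Scholar
years = 21           # years of active publishing
citations = 5380     # total citations reported by Google Scholar

papers_per_year = papers / years
citations_per_paper = citations / papers

print(f"{papers_per_year:.1f} papers per year")        # 4.0, meeting the rule of thumb
print(f"{citations_per_paper:.0f} citations per paper")  # roughly 64 on average
```

Four papers a year on the nose, and an average of around 64 citations per paper.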
However, the question a mindful sceptic asks of this self-aggrandisement is this.
Was any of it any good?
Let’s see.
I will walk you through the basics of how to evaluate research and apply each evaluation step to my most cited paper, one where I was responsible for any errors:
Dangerfield, J. M., McCarthy, T. S., & Ellery, W. N. (1998). The mound-building termite Macrotermes michaelseni as an ecosystem engineer. Journal of Tropical Ecology, 14(4), 507-520.
Here is what the Abstract of the paper says it was about.
Many organisms create or alter resource flows that affect the composition and spatial arrangement of current and future organismal diversity. The phenomenon called ecosystem engineering is considered with a case study of the mound building termite Macrotermes michaelseni. It is argued that this species acts as an ecosystem engineer across a range of spatial scales, from alteration of local infiltration rates to the creation of landscape mosaics, and that its impacts accrue because of the initiation of biophysical processes that often include feedback mechanisms. These changes to resource flows are likely to persist for long periods and constrain the biological structure of the habitat. The value of ecosystem engineering is discussed as a holistic way of understanding the complexity of tropical ecology.

First, let’s see the ten steps in more detail.
How to Evaluate Research Sources
Step 1 | Authorship
Examine the author's credentials, academic qualifications, and subject expertise. Authors with relevant qualifications or affiliations with reputable institutions are more likely to produce credible research. Check the author's publication history because established researchers often have a track record of contributing to reputable journals and academic publications.
Author credibility is important because… it correlates with reliability, helping you determine whether the findings are trustworthy.
Step 2 | Publication Source
Assess the reputation of the journal or conference where the research is published. Peer-reviewed and reputable journals maintain high standards for the quality of research they publish. Consider the reputation of the publisher. Well-known publishers often have rigorous editorial processes, ensuring the reliability of the research they disseminate.
Publication source is important because… the reputation of the journal or conference acts as a first-line quality filter, ensuring that the research you're considering meets standards of rigour and relevance.
Step 3 | Citations and References
A well-researched paper should include a substantial number of citations from credible sources. Evaluate the quality of the sources cited to gauge the depth of the research. A comprehensive list of references indicates thorough research and reliance on established literature. Pro Tip—watch out for heavy use of references to the author's work.
Citations are important because… a well-cited paper demonstrates that the research is grounded in existing knowledge, allowing you to trace the development of ideas and verify claims made in the study.
Step 4 | Research Methodology
Examine the methodology section to ensure it is well-defined and appropriate for the research question. Rigorous methodologies enhance the study's reliability. If data is involved, assess the sample size and its representativeness. A larger, diverse sample strengthens the validity of the research findings.
Methodology is important because… it is the backbone of any study; understanding it helps you assess whether the conclusions drawn are justified and applicable to real-world environmental and social challenges.
Step 5 | Peer Review
If the research is published in a peer-reviewed journal, it has undergone scrutiny by experts in the field. Peer review enhances the credibility of the study. For conference papers, check if they underwent a peer-review process before acceptance. Some conferences have rigorous review procedures, but many do not.
Peer review is vital because… it is supposed to act as a quality control mechanism, assuring that experts in the field have scrutinised the research for accuracy and validity before it reaches you.
Step 6 | Currency and Timeliness
Consider the publication date to ensure the information is current. Recent research is often more relevant in rapidly evolving fields, but evidence should not be dismissed on date alone. Fine wines mature with age. Check for subsequent updates or follow-up studies. Research that builds upon or refines earlier work adds value to the overall understanding of the topic.
Timeliness is essential because… rapidly evolving fields like environmental science and social studies need current information for making informed decisions.
Step 7 | Objectivity and Bias
Assess the author's potential biases because we all have them. Objectivity is crucial in research, so be aware of any affiliations, funding sources, or personal biases that may influence the study. Look for a balanced presentation of evidence. Research that acknowledges alternative viewpoints demonstrates a commitment to objectivity.
Objectivity is important because… recognising potential biases helps you interpret the research findings more accurately, ensuring a balanced view of complex environmental and social issues.
Step 8 | Peer Feedback and Impact Factor
Evaluate the impact factor of the journal. Journals with higher impact factors are considered more influential in their respective fields. Consider the number of citations and downloads the research has received. High citation rates indicate that the research has impacted the academic community.
Impact factor is important because… the reception of research within the academic community can indicate its significance and potential influence on policy and practice in environmental and social spheres.
Step 9 | Accessibility and Open Access
Check the accessibility of the research. Open-access publications make research freely available, promoting wider dissemination and transparency.
Access is important because… open research promotes transparency and allows for wider scrutiny and application of findings, crucial for addressing global environmental and social challenges.
Step 10 | Consistency with Existing Knowledge
Assess how well the research aligns with existing knowledge in the field. Research that builds on established theories or challenges prevailing ideas contributes meaningfully to the discourse.
Consistency is important because… understanding how new research fits into the existing body of knowledge helps you build a comprehensive and nuanced view of environmental and social issues rather than relying on isolated studies.
So, how does my paper on an ecosystem engineering termite stack up against the ten evaluation steps?
Authorship—Pass
There are three authors on the paper; all have PhDs, one was a full Professor at the time of publication, and together, they have several hundred publications.
Publication Source—Pass
The Journal of Tropical Ecology is published by Cambridge University Press as a peer-reviewed journal with more than 40 volumes since 1985.
Citations and References—Pass
There are 65 references cited in the paper covering a good range of authors and knowledge of termites and ecosystem engineers.
Research Methodology—Fail
Turns out that there are no methods described in this paper. It is an elaborate hypothesis statement rather than an experimental or observational test. Nothing wrong with a deep dive to generate a hypothesis built on past knowledge but there is no new data.
Peer Review—Pass
Journal of Tropical Ecology is peer-reviewed.
Currency and Timeliness—Pass
Whilst the publication date makes this historical research, the topic was current at the time, and citations of the paper include 23 papers published in 2023.
Objectivity and Bias—Pass
There are no obvious reasons for bias beyond the obvious enthusiasm of the authors for systems thinking and the importance of organisms to biophysical processes.
Peer Feedback and Impact Factor—Pass
The Journal of Tropical Ecology has an impact factor of 1.6 (2022) making it a mid-ranking ecology journal. The paper has 434 citations with a slight increase in the annual rate over time.
Accessibility and Open Access—Partial Pass
It is not an open-access paper, but it is downloadable from archives.
Consistency with Existing Knowledge—Pass
It is consistent with what were then the new ideas of ecosystem engineering, a topic that remains under active research.
On a simple pass/fail metric, we get an 8.5/10, which suggests that the paper meets the evaluation criteria, lacks bias, and is relevant. The only problem is that the ‘fail’ is on Research Methodology, the most important aspect of research. If the methodology is wrong, the evidence loses credibility. However, the ‘wrong’ was not an error of omission or commission.
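That 8.5/10 is easy to reproduce as a tally. Here is a minimal sketch, scoring a pass as 1, a partial pass as 0.5, and a fail as 0 (those weights are my own convention, not a standard metric):

```python
# Tally of the ten evaluation steps for the termite paper.
# pass = 1.0, partial pass = 0.5, fail = 0.0 (this guide's own convention).
results = {
    "Authorship": 1.0,
    "Publication Source": 1.0,
    "Citations and References": 1.0,
    "Research Methodology": 0.0,            # no methods: a hypothesis paper, not a test
    "Peer Review": 1.0,
    "Currency and Timeliness": 1.0,
    "Objectivity and Bias": 1.0,
    "Peer Feedback and Impact Factor": 1.0,
    "Accessibility and Open Access": 0.5,   # not open access, but downloadable
    "Consistency with Existing Knowledge": 1.0,
}

score = sum(results.values())
print(f"{score}/{len(results)}")  # 8.5/10
```

The tally makes the weakness obvious at a glance: the single zero sits on the step that matters most.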
The paper intended to “argue that this species acts as an ecosystem engineer across a range of spatial scales” and as a way of “understanding the complexity of tropical ecology”. We didn’t make any mistakes because the purpose was to set out and justify a working hypothesis. That the paper has been cited over 400 times suggests other researchers use the argument to set up their research, which is always a good sign.
A mindful sceptic will always complete some version of this evaluation checklist.
However, as this example shows, evaluation takes effort and is challenging.
When all ten steps are taken, you will know whether a research paper is likely to be good. But there is also a shortcut. It’s called experience.
Once you have read some science literature, you begin to get a feel for it. Good papers read well, the writing feels knowledgeable and confident, the ideas are clear, and there is little pretence.
Authors who know their stuff sound like they do.
The likelihood is you can trust them.

Challenges in Research Evaluation
I completed the checklist in a short time.
That makes sense. I remember the work and wrote the paper. Completing the checklist without prior knowledge takes a lot of time.
You also need skills to complete the evaluation steps. The hardest part is understanding and evaluating the research methodology. There are books on it, but you can struggle without a deep understanding of the scientific method and some numeracy skills.
Objectivity and bias are another challenging evaluation step. Authors can be very good at hiding bias, sometimes consciously and sometimes because of dissonance. We are all human.
External factors also impact the reliability and credibility of scholarly work. There are many of these, but I’ll highlight three: the proliferation of predatory journals, the pressure to publish, which can tempt researchers to prioritise quantity over quality, and the accessibility and dissemination of research.
A mindful sceptic recognises these challenges and accepts that peer review is a time-saving shortcut to the full evaluation. Peer-reviewed research has a good chance of being more reliable.

Academia, Peer-review and Research Evaluation
I was an academic researcher and teacher for 25 years.
It was a blast and a huge privilege to hide away, sheltered in the ivory towers and free to scratch your intellectual itches.
My training and the institutions that employed me encouraged any research to undergo peer review. Peer review is the cornerstone of scholarly publishing, ensuring the quality, validity, and credibility of scientific contributions and providing some protection from the brutality of the real world. Your academic peers objectively critiqued your work, and you had to return the favour.
Nobody gets paid for this. We all did it because the quality control system was supposed to ensure evidence was as reliable as possible.
Typically, the peer-review process begins when an author submits a manuscript to a journal. The journal editor then assigns the manuscript to one or more expert reviewers, often researchers or academics with expertise in the relevant subject area. These reviewers carefully evaluate the manuscript based on various criteria, including the clarity of the research question, the soundness of the methodology, the appropriateness of data analysis, the validity of results, and the significance of the findings.
Peer reviewers assess the scientific content, the manuscript's adherence to ethical standards, and its overall contribution to the existing body of knowledge. Based on their evaluation, they may recommend acceptance, revision with major or minor changes, or rejection. The anonymity of the reviewers is a common practice to encourage unbiased assessments and open critique.
Authors receive reviewer feedback and may need to revise their manuscript in response to the comments and suggestions provided. This iterative process helps improve the overall quality of the research. Once the revisions are made, the manuscript undergoes a final review to ensure the author adequately addresses the reviewers' concerns.
The peer-review process acts as a filter, allowing only high-quality and methodologically sound research to be disseminated within the scholarly community. While it is not without its challenges, such as potential biases and time constraints, the peer-review process remains essential in upholding academic research's integrity and advancing knowledge within various disciplines.
It’s not perfect, but it usually works.
Practical Tip for Evaluating Research
If you don’t have time or the skills to complete the ten-step evaluation process but would still like one tool to apply a quick, mindful sceptic lens to the evidence, then establish if the research has been published in a peer-reviewed publication.
If yes, then it has most of the credibility you will need.
You could also follow our guidelines for Evidence Reviews, but that is another story.
Hope all this helps.
Applying Mindful Scepticism to Everyday Information
While I have focused here on evaluating academic research, the same skills of a mindful sceptic are invaluable in navigating the daily tsunami of information. News articles and social media posts, product claims, and political statements all need a once-over with a critical eye.
Consider your daily news feed. How often do you encounter headlines designed to provoke an emotional response rather than inform?
Apply the authorship and source evaluation steps here.
Who wrote the article?
What's their expertise?
Is the news outlet known for factual reporting or sensationalism?
Examine the reporting methodology just as you'd scrutinise a research paper's methodology. Are the claims backed by verifiable facts? Are multiple perspectives presented?
Social media has its own equivalent of the peer-review process in likes and shares, which only occasionally correlate with accuracy. For instance, when encountering a viral post about an environmental issue, apply the same rigour you would to a scientific claim. Check the source, look for citations, and consider the author's potential biases or agenda.
Product claims, especially those related to sustainability or health, warrant similar scrutiny. Is there evidence to support the claim? Has this evidence been independently verified? Remember, just as in academic publishing, there are "predatory journals" in the commercial world – entities that prioritise profit over truth.
In all these cases, consistency with existing knowledge is crucial. Does the information align with what you already know to be true? Is there compelling evidence for why this new information should supersede existing understanding?
Lastly, consider the power of your critical thinking. Just as researchers design experiments to test hypotheses, you can create small tests in your daily life. If a new "eco-friendly" product claims to be more effective than traditional alternatives, try it out, compare results, and draw your own conclusions.
Soon enough, it is second nature.
Something to try
The Daily News Challenge
For one week, choose a daily news article related to an environmental or social issue. Apply the following steps from our evaluation process:
Identify the author and their credentials
Determine the publication source and its reputation
Check for citations and references
Assess the objectivity and potential biases
Write a brief (2-3 sentence) summary of your findings at the end of each day. How reliable is the information? What questions remain unanswered? After a week, reflect on how this practice has changed your perception of daily news consumption.
If that is a bit arduous, try this one.
The Social Media Fact-Check
Next time you see a viral post about a scientific claim or environmental issue on social media, put on your mindful sceptic hat:
Trace the claim back to its source
Evaluate the credibility of that source using our ten-step process
Search for corroborating or conflicting evidence from other reputable sources
Assess how the social media post may have altered or sensationalised the original information
Write a short post summarising your findings. How does the reality compare to the viral claim? Share your analysis with friends or family to promote critical thinking in your network.
Key Points
Research evaluation is not just for academics. It's a crucial skill for navigating our information-rich world. By assessing the credibility of sources and the reliability of evidence, we can make better-informed decisions about everything from environmental policies to personal choices. This systematic approach to evaluation helps cut through the noise and misinformation that often clouds essential issues.
While rigorous evaluation processes exist, experience and peer review provide reliable shortcuts. Despite its imperfections, the academic peer review system remains one of our best tools for quality control in research. When time or expertise is limited, knowing that a piece of research has undergone peer review provides a reasonable assurance of its credibility, though not a guarantee of its infallibility.
Research methodology is the cornerstone of credibility. How research is conducted—its methods, sample sizes, controls, and analysis—determines the reliability of its conclusions. Without sound methodology, even the most impressive-looking results may be meaningless. Understanding this helps us distinguish between robust findings and questionable claims, which are particularly crucial in environmental and social science debates.
Critical thinking skills developed through research evaluation can be applied to everyday information consumption. The same principles that help us assess academic papers can be used to evaluate news articles, social media posts, and product claims. This systematic approach to information helps develop a more nuanced understanding of complex issues and supports better decision-making in all aspects of life.
In the next issue of Mindful Sceptic
What happens when a fish scientist declares koalas extinct?
Our next issue dives into the delicate art of intellectual humility, where we learn to respect expertise while keeping our critical thinking intact. Find out why the most competent people you'll meet are often the first to admit what they don't know and how this surprisingly powerful trait can transform your approach to complex environmental issues.
Science sources
One of the gifts that academics are given is the chance to discover new things and establish new evidence. I was incredibly fortunate because I could conduct research in some fantastic places and study some extraordinary creatures.
Going with the ecosystem engineers paper, here are some of the others I penned on termites.
Dangerfield J.M., Schuurman G. (2000) Foraging by fungus-growing termites (Isoptera, Macrotermitinae) in the Okavango Delta, Botswana. Journal of Tropical Ecology 16(5): 717-731
Ellery W.N., McCarthy T.S. & Dangerfield J.M. (2000) Floristic composition in the Okavango Delta, Botswana as an endogenous product of biological activity. In: Biodiversity in wetlands: assessment, function and conservation. B. Gopal, W.J. Junk & J.A. Davis (Eds) pp 195-226
McCarthy T.S., Ellery W.N., Dangerfield J.M. (1998) The role of soil biota in shaping flood plain morphology on the Okavango alluvial fan, Botswana. Earth Surface Processes and Landforms 23: 291-316
Dangerfield J.M., McCarthy T.S., Ellery W.N. (1998) The mound building termite Macrotermes michaelseni as an ecosystem engineer. Journal of Tropical Ecology 14(4): 507-520
Ellery W.N., McCarthy T.S. & Dangerfield J.M. (1998) Biotic factors in mima mound development: Evidence from floodplains of the Okavango Delta, Botswana. International Journal of Ecology and Environmental Sciences 24: 293-313
Schuurman G., Dangerfield, J.M. (1997) Dispersion and abundance of Macrotermes michaelseni (Isoptera: Macrotermitinae) colonies in the Okavango delta of Botswana: Is intraspecific competition important? Journal of Tropical Ecology 12: 39-49
Dangerfield J.M. (1997) Abundance and diversity of soil macrofauna in northern Botswana. Journal of Tropical Ecology 13: 527-538
Dangerfield J.M., Mosugelo D.K. (1997) Termite foraging on toilet roll baits in semi-arid savanna, south-east Botswana. Sociobiology 30(2): 133-143
Schuurman G., Dangerfield J.M. (1996) Mound dimensions, internal structure and potential colony size in the fungus growing termite Macrotermes michaelseni (Isoptera: Macrotermitinae). Sociobiology 27(1): 29-38
Dangerfield J.M., Veenendaal E., Riddoch B., Black H. (1993) Termites and land use in south-east Botswana: Variety and abundance of termite surface features. Botswana Notes & Records 24: 165-180
Dangerfield J.M. (1990) The distribution and abundance of Cubitermes sankurensis (Wassmann)(Isoptera; Termitidae) within a miombo woodland site in Zimbabwe. African Journal of Ecology 28: 15-20