Performing scientific review is a laborious and error-prone task. As it turns out, there are as many opinions about science as there are scientists, and reaching consensus can be challenging. The strategies below are at the core of our services at Traverse Science. We believe they enable teams to conduct reviews that are high quality, easily replicated, and 5x faster than internal R&D teams. These strategies are the foundational pillars of our work:
90% of our clients feel unable to keep up with the state of the science in their field, and with good reason: publication rates are accelerating, and the average publication contains more data than ever. So not only are there more papers to read, but each paper also has more in it to comprehend! Because of this, reviews are more important than ever. Good reviews condense and clarify the state of the science. The best reviews are objective, useful summaries that help you understand the landscape, and they require visual data.
1. Prioritize Visual Data
Visual data are easy to interpret; data that cannot be visualized are less valuable because they are harder to understand and communicate. When conducting a structured review, prioritize future visualization during database formation and data extraction. This requires each answer to a review question to be either a number or a word/phrase, not both. Organize notes about an answer in a separate cell.
For example, when collecting data about the sample size of a trial, don’t try to force all the details into a single cell. Instead, separate the data so that each cell holds only one relevant detail.
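As a sketch of what one-detail-per-cell extraction might look like (the field names and values here are illustrative, not a prescribed schema):

```python
# Hypothetical extraction record: each cell holds exactly one relevant
# detail, and free-text context lives in its own "notes" cell rather
# than alongside the numbers.
record = {
    "n_total": 120,
    "n_treatment": 60,
    "n_control": 60,
    "n_dropouts": 12,
    "notes": "dropouts occurred before the midpoint visit",
}

# Because each value is a single number, cells can be aggregated or
# plotted directly without any string parsing.
print(record["n_treatment"] + record["n_control"])  # 120
```

Had all of this been crammed into one cell as "n=120 (60/60), 12 dropouts", every downstream visualization would first need to parse that string.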
Prioritizing visual data enables readers to synthesize larger amounts of data intuitively. These visualizations can then translate results so that larger groups can easily understand the take-home message.
2. Systematize Objectivity
Objectivity is central to science but rarely codified. When conducting a structured review, objectivity means putting a system in place that enables multiple reviewers to answer the same question, the same way, every time. These systems make reviews more efficient and precise by reducing subjective judgment calls. This requires the ability not only to answer questions objectively but to ask objective questions in the first place. To do that, compartmentalize and describe each question fully.
Most questions are more complex than they appear. For example:
“Was the study double-blinded? Answer yes/no/not applicable”
This question appears simple but is not in practice. There is not enough context to be sure that every reviewer will answer the same way. If a reviewer were to search Google for a definition of double-blind trials, they might conclude that double-blinding requires both the study participants and the experimenters to be ignorant of who receives the placebo versus the real treatment.
In reality, “experimenters” is probably a large group of scientists, each with varying roles and blinding. This includes the principal investigator, administrative staff, statisticians, trial managers, laboratory technicians, research staff, and more. If the trial subjects and only one of the experimenter groups are blinded, does that mean the trial was “double-blinded?” Maybe, maybe not.
Instead, compartmentalize the question by its constituent parts. A compartmentalized version of this question could look like a checklist:
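As a sketch of such a checklist in an extraction database (the role names come from the groups listed above; the exact schema is illustrative):

```python
# Hypothetical compartmentalized blinding checklist: the single
# "double-blinded?" question becomes one answer per role, each
# restricted to "yes", "no", or "not specified".
blinding = {
    "participants": "yes",
    "principal_investigator": "yes",
    "statisticians": "yes",
    "administrative_staff": "yes",
    "trial_managers": "not specified",
    "research_staff": "not specified",
    "laboratory_technicians": "not specified",
}

# Roles the methods section never mentions default to "not specified".
unspecified = [role for role, answer in blinding.items()
               if answer == "not specified"]
print(len(blinding), unspecified)
```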
By increasing the level of compartmentalization in the data, we gain a much clearer view of who was and was not blinded. This gives us a stronger understanding of the study and its potential sources of error. For example, the blinding of several staff groups (trial managers, research staff, and lab technicians) is not mentioned. Does the study still count as double-blind?
Since the answer for those staff is neither yes nor no, the answer is “not specified”. And the true answer about double-blinding is “kinda”. Since researchers tend to omit the things they don’t do in their methods, “not specified” is one of the most common entries in our databases.
But doesn’t this blow up the scope of the review?
We did just turn one question into seven, after all. In our experience, the answer is no. Expanding questions like this improves internal agreement on the first round of extraction, which in turn reduces the time spent on conflict resolution and the decision fatigue that comes with it. Ultimately, we save time by being precise and compartmentalized from the start.
3. Be Descriptive
When developing extraction variables in a review, it is important to be as descriptive as possible. Clearly define variables of interest, including appropriate terminology and formatting. If compartmentalization is done well, being descriptive gets you 80% of the way there.
For example, when we asked the question “Did the study employ a cross-sectional design?” we encountered two different responses depending on the reviewer:
Some of our team members interpreted this as how the outcomes were measured (all at a single moment in time).
Others interpreted it with respect to its epidemiological definition, where both the exposure and outcome are measured at a single point in time.
This resulted in small but meaningful errors in our review, which we had to resolve by defining “cross-sectional” more descriptively.
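One way to record such a definition (a sketch; the field names and the validation helper are illustrative, not our exact protocol) is to keep the question, its full definition, and its allowed answers together, so the protocol itself can reject out-of-vocabulary entries:

```python
# Hypothetical variable definition, written out fully so every reviewer
# answers the same way.
cross_sectional = {
    "question": "Did the study employ a cross-sectional design?",
    "definition": (
        "Answer 'yes' only if BOTH the exposure and the outcome were "
        "measured at a single point in time (the epidemiological sense). "
        "Measuring outcomes alone at a single moment does not qualify."
    ),
    "allowed_answers": ["yes", "no", "not specified"],
    "format": "single word, lowercase",
}

def validate(answer: str) -> bool:
    """Reject any answer outside the defined vocabulary."""
    return answer in cross_sectional["allowed_answers"]

print(validate("yes"), validate("kinda"))
```

Checking answers against a fixed vocabulary catches drift between reviewers at entry time instead of during conflict resolution.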
It may feel burdensome to provide lengthy descriptions and definitions in the review protocol, but doing so prevents drift between reviewers and promotes objectivity. Even if it takes an entire page of written directions to adequately describe a small aspect of the protocol, write that page. Clear, descriptive definitions also improve transparency for the other stakeholders in your review: anyone evaluating it can understand exactly what was asked and how the data were extracted, which helps keep their interpretation in line with the reviewer’s intent.