After a year of arduous work you have just finished your systematic review and you are ready to celebrate being done with it... and right away they ask you to do it again. Yuck.
I have been doing (systematic) reviews for almost 20 years now, and each time I finish one I really enjoy it (the finish, I mean). I don't know about you, but to me the idea of repeating a review immediately after finishing it is boring. I need to do something else (even a review on another topic) for a year or two to become even slightly interested in updating a review I have previously done. Yet this is the very idea behind a living systematic review – perpetual updates.
Living systematic review = frequently updated systematic review.
How frequently is a matter of discussion. Some would say that a review updated once a year is living. Others would disagree and maintain that for a review to live it may never sit idle; it has to be updated continuously. And here is the main trouble with living reviews: there are not enough resources to do them.
You may be familiar with the project management triangle, the iron triangle, or some other name it travels under. It depicts the relationship between the scope of a project, the quality of the project’s result, the time it takes, and the resources required to complete it (see picture). The basic concept is that one of them always has to be sacrificed: a large-scope project with a high-quality product done fast requires large resources; a large-scope project with a high-quality product in a setting with limited resources will take more time to complete; a large-scope project done fast in a setting with limited resources requires lowering the quality of the final product; and so on. You get the idea.
There are still many questions about the best methods for doing living systematic reviews, but the main issue seems to be the time constraint: how do you do a resource-intensive and time-consuming systematic review repeatedly and fast?
Humans are not so great at completing repetitive tasks quickly (oh, so boring!), but computers are really good at it. For that very reason, there has been a lot of interest in the systematic review community in using computers to automate as many tasks in the review process as possible. Some tasks are relatively easy to automate – for instance, computers have been used to search bibliographic databases for over 30 years. Other tasks – such as study selection, including screening database records, or extracting data from published papers – have proven much harder to automate. In recent years, progress in applications of artificial intelligence (machine learning, natural language processing) has given us hope that if people can produce self-driving cars, they should be able to make computers read with "understanding" the relatively uncomplicated text of a scientific report.
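To give a flavour of what "computers screening database records" can mean in its very simplest form, here is a deliberately tiny sketch: it ranks candidate records by how often their abstracts mention terms taken from studies a reviewer has already included, so the most promising records surface first. All the record IDs, abstracts, and seed terms below are made up for illustration; real screening tools use trained machine-learning classifiers and active learning, not raw keyword counts.

```python
import re
from collections import Counter

def tokenize(text):
    # lowercase the text and split it into alphabetic word tokens
    return re.findall(r"[a-z]+", text.lower())

def relevance_score(abstract, seed_terms):
    # crude score: how many times terms from already-included
    # studies appear in this abstract
    counts = Counter(tokenize(abstract))
    return sum(counts[term] for term in seed_terms)

# hypothetical database records: (id, abstract)
records = [
    ("R1", "A randomized trial of statins for cardiovascular prevention"),
    ("R2", "Qualitative interviews on hospital food preferences"),
    ("R3", "Statin therapy and cardiovascular outcomes: a cohort study"),
]

# terms drawn (hypothetically) from studies the reviewer already included
seed_terms = {"statin", "statins", "cardiovascular", "trial"}

# present records to the human screener in order of likely relevance
ranked = sorted(records,
                key=lambda rec: relevance_score(rec[1], seed_terms),
                reverse=True)
for record_id, _ in ranked:
    print(record_id)
```

Even this toy version shows the appeal for living reviews: when the search is rerun next month, the same ranking step runs again in milliseconds, while a human still makes the final include/exclude decision.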
We humans are still not sure how to clear the obstacles that living reviews pose, but we are working on it. The main solution to the time constraint seems to be having machines perform the reviews for us.
If you are interested in the automation of literature reviews and the application of artificial intelligence in evidence synthesis, join the LinkedIn group on AI in Evidence-Based Healthcare.
Associate Professor at McMaster University, Health Research Methods, Evidence, and Impact, Member of GRADE Working Group.