
Conducting A Systematic Review with Laser AI

Complete a systematic review with Laser AI

Whether you're a current user or exploring Laser AI, learn how the tool can improve your systematic review workflow by watching our webinar and/or reading the Q&A from the event.

January 30, 2024

We wanted to start the new year off with a bang, so with new clients and new opportunities on board, we decided to provide an in-depth look at the Laser AI tool.

What was covered in the webinar:

Artur Nowak, co-founder & CTO, and Iga Czyz, product designer at Evidence Prime, went through:

  1. Each step of the systematic review process in Laser AI, highlighting how the tool makes each step easier.
  2. New updates and functionalities introduced in our latest Helium version.
  3. A Q&A session that closed the webinar; we are working on a blog post that will address any unanswered questions.

Why watch?

The webinar is a great chance for current and new users of the tool to get an in-depth look at Laser AI.

For Current Users: If you're already using Laser AI, this webinar will help improve your workflow by showing you new functionalities from our Helium version, including automatic reference distribution and Data Extraction Tables.

For New Users: This is a great opportunity to understand Laser AI’s potential for systematic reviews and decide if it aligns with your needs.

If you missed the live event, don't worry: we have uploaded the edited version to our YouTube channel, and the video is attached below.

We were thrilled to see the level of engagement from our attendees. There were many insightful questions, some of which were addressed live, while others remained unanswered. However, every query holds value to the Laser AI team, and we are committed to providing you with thorough responses. 

Below, we have compiled the questions posed by the attendees, along with the team's comprehensive answers.

First, we'll cover the questions that went unanswered during the webinar. At the end of this post, you'll find the questions that were answered live, with a link to the point in the webinar where you can watch the answer.

Q1 - Do we have to provide a reason for exclusion for the Title and Abstract Screening phase? Usually, we only list a reason from the Full-Text Screening stage. 

Multiple exclusion reasons are optional at the Title & Abstract Screening phase. However, we recommend adding more than one exclusion button with relevant highlights, as doing so speeds up the screening process. With more than one exclusion button, a user can add different positive and negative highlights that can be used to filter references and select a batch to include or exclude.

We understand the concern that having multiple exclusion buttons at the Title & Abstract Screening stage may create conflicts, but we have an automated feature that helps in this case. With our automatic conflict resolution method, managers won't need to worry about deciding on conflicts that arise here: the manager sets the rules for automatic conflict management once, and the tool applies them. Take a look at this example to see how to set up automatic conflict resolution and watch it in action.
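For illustration only, here is a minimal sketch (in Python, with made-up names and rules) of what rule-based conflict resolution between two screeners can look like in general; it is not Laser AI's actual rule engine or API.

```python
# Hypothetical sketch of rule-based conflict resolution between two screeners.
# All names and rules here are illustrative; this is not Laser AI's rule engine.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningDecision:
    reviewer: str
    verdict: str                       # "include" or "exclude"
    exclusion_reason: Optional[str] = None

def resolve_conflict(a: ScreeningDecision, b: ScreeningDecision,
                     rule: str = "prefer_include") -> str:
    """Return the project-level verdict, or flag the record for a human arbiter."""
    if a.verdict == b.verdict:
        return a.verdict               # no conflict to resolve
    if rule == "prefer_include":
        return "include"               # err on the side of inclusion at T&A stage
    if rule == "prefer_exclude":
        return "exclude"
    return "needs_arbitration"         # fall back to manual arbitration

# One include vs. one exclude is resolved automatically under "prefer_include".
print(resolve_conflict(ScreeningDecision("R1", "include"),
                       ScreeningDecision("R2", "exclude", "wrong population")))
```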

Q2 - How many users can access Laser AI per institution?

Laser AI does not impose any formal limit on the number of users per organisation at this point (February 2024).

Q3 - Is it possible to do the Title & Abstract screening from a smartphone or a tablet?

A user can log into Laser AI from a smartphone or tablet and view elements of the project; however, making decisions and screening are not possible.

Q4 - Can a form be exported and/or reused across projects?

Yes, the user can save an array of templates in the tool for future Laser AI projects, including:

Q5 - Is the extraction form automatically populated by AI for the articles, and the user verifies if it's correct?

Some concepts or data can be extracted using the AI model, but the user must verify and confirm them. In Laser AI, the user can extract data with the AI model in two ways:

In both instances, the user can accept or remove any suggestion. 

Q6 - How secure are the analyses created with Laser AI with regard to upcoming updates of the system, i.e. are updates developed in such a way that older versions of reviews are upwardly compatible?

New changes to Laser AI are always reviewed for compatibility with current projects; there is no threat of losing data in current projects. In certain cases, new features may only be available for new projects. 

Q7 - Is it suitable for medical devices?

Yes, Laser AI can be used for any review across multiple use cases. 

Q8 - Once the work is complete in Laser AI, is it easy to export the data, and import it to RevMan Web?

Laser AI can export the following files: 

Once the files are exported locally, the team can upload them to RevMan Web.

Q9 - Do we need to upload the study PDF in order to extract data? Or is it possible to use the template and enter the data using a PDF from an outside source?

At least one PDF has to be uploaded, but a user can extract data from an outside source e.g., a supplementary PDF. In this case, however, extraction will not be supported by the AI model.

Q10 - Some of the values read from the table are percentages, not variances. How do you get it to label them correctly?

During automatic data extraction from tables, the extractor determines the type of data presented in each row and column, matching it to existing data extraction fields.
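As a purely illustrative aside, a simple heuristic for deciding whether a table column holds percentages rather than variances might look like the sketch below; the names and rules are invented and this is not the model Laser AI uses.

```python
# Hypothetical heuristic for guessing what statistic a table column holds, so that
# percentages are not labelled as variances. Illustrative only; not Laser AI's model.
def guess_column_type(header: str, values: list[str]) -> str:
    header_l = header.lower()
    if "%" in header_l or any("%" in v for v in values):
        return "percentage"
    if any(key in header_l for key in ("sd", "variance", "std")):
        return "variance_or_sd"
    if any(key in header_l for key in ("n=", "sample size")):
        return "sample_size"
    return "unknown"                   # leave for the human extractor to label

print(guess_column_type("Response rate (%)", ["45.2", "38.9"]))      # percentage
print(guess_column_type("Mean (SD)", ["12.1 (3.4)", "11.8 (2.9)"]))  # variance_or_sd
```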

Q11 - What about data reporting after data extraction? Will this task be automated and AI-assisted?

We currently do not have reports supported in Laser AI.

Q12 - Has Laser AI had any challenges with reading (extracting text) from PDFs that the publisher protects?

If a PDF is password-protected, the protection will need to be removed before uploading it to Laser AI.

Q13 - It sounds like the AI embedded in the platform involves machine learning algorithms to assist and streamline processes, but not to analyse and interpret data. Is that correct? (When AI analyses or interprets the data, this opens up questions on whether we can have confidence that the systematic review upholds high standards so users of the evidence still have trust in it, and so lots of us would like to see extra validation, etc., but sounds like that's not the case for Laser AI?)

Trust and transparency are of high importance to us at Laser AI. The tool uses machine learning algorithms to assist and streamline the process. For example, by showing users the most likely records that could be included first or suggesting relevant parts of the text that should be extracted into the data extraction form. 

However, the tool does not make any judgment calls. The final decision is always left up to the team. 

Q14 - For the different elements of AI used within the platform, is there publicly available validation available?

We test various Laser AI features, as well as new AI-assisted approaches to systematic reviews. The results have been presented in prestigious scientific journals and at international conferences, including:
ISPOR
IQWiG
Cochrane
The Lancet
Elsevier

Additionally, you can find all of Artur Nowak's (co-founder of Evidence Prime) publications here.

Q15 - Have you compared AI extractions with manual extractions? What is the accuracy of the AI?

The average time to extract a single article depends on various factors, such as the complexity of the data extraction form, the models used, and user experience. We performed and published a study that compared the manual extraction process with a semi-automated extraction conducted in Laser AI.

Q16 - When extracting from tables, does it also pick up any table footnotes? I.e., where they specify why N is different, or if one line does not conform to the table titles?

The model extracts data from cells. A user should define what type of data belongs to each column and row. It is also possible to extract additional data (which could be something from footnotes) for all relevant extracted rows by adding it in the field below the table.

Q17 - Is there connectivity with EndNote? Will any PDFs on EndNote be linked as well?

We are not integrated with EndNote; however, if a user uploads an RIS file from EndNote together with a batch of PDFs from EndNote, our model will automatically match the PDFs to the appropriate references.

Q18 - Can you upload a lot of PDFs at the same time, or do you have to upload them individually like in Rayyan?

A user can import a batch of PDFs into Laser AI. Our model will automatically match the PDFs to the appropriate references.
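Since both of the answers above mention automatic matching of PDFs to references, here is a hedged sketch of the general idea, using DOI and fuzzy title matching; the reference data, names, and thresholds are invented, and this is not Laser AI's actual matching model.

```python
# Hypothetical sketch of matching PDF filenames to references by DOI suffix or
# fuzzy title similarity. Illustrative only; not Laser AI's matching model.
from difflib import SequenceMatcher

references = [
    {"id": 1, "title": "Aspirin for primary prevention", "doi": "10.1000/abc123"},
    {"id": 2, "title": "Statins in older adults", "doi": "10.1000/xyz789"},
]

def match_pdf(filename: str):
    stem = filename.rsplit(".", 1)[0].replace("_", " ").lower()
    # 1) exact DOI suffix found anywhere in the filename
    for ref in references:
        if ref["doi"].split("/")[-1] in stem:
            return ref["id"]
    # 2) otherwise fall back to fuzzy title similarity
    def similarity(ref):
        return SequenceMatcher(None, stem, ref["title"].lower()).ratio()
    best = max(references, key=similarity)
    return best["id"] if similarity(best) > 0.5 else None

print(match_pdf("aspirin_for_primary_prevention.pdf"))  # -> 1 (title match)
print(match_pdf("abc123.pdf"))                          # -> 1 (DOI suffix match)
```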

Q19 - How often is the model trained and updated?

During Title & Abstract Screening, the model is continuously learning from the decisions that the researcher makes.

In Data Extraction, the team are constantly working on and adding models that support the extraction of new fields and concepts. 

As a whole, the tool is constantly being updated by the Laser AI team. This is our latest update - Helium.

Q20 - Does it provide a PRISMA flow diagram to be used in systematic review protocols? Does it allow extracting the papers and citations?

Yes, we generate a PRISMA flow diagram based on the PRISMA 2009 version. We are working on implementing PRISMA 2020.
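For readers curious about what feeds such a diagram, here is a minimal sketch of the arithmetic behind a PRISMA 2009 flow, using hypothetical counts; Laser AI generates the diagram itself from your project data.

```python
# Minimal sketch of the arithmetic behind a PRISMA 2009 flow diagram, using
# hypothetical counts; Laser AI builds the diagram from your project data.
records_identified  = 1200   # records from database searches
additional_records  = 15     # records from other sources
after_deduplication = 980    # records left after duplicates are removed
full_text_assessed  = 120    # records taken forward to full-text assessment
full_text_excluded  = {"wrong population": 40, "wrong outcome": 25, "not an RCT": 30}
studies_included    = full_text_assessed - sum(full_text_excluded.values())

print(f"Records identified:         {records_identified + additional_records}")
print(f"Records screened:           {after_deduplication}")
print(f"Excluded at title/abstract: {after_deduplication - full_text_assessed}")
print(f"Full texts assessed:        {full_text_assessed}")
print(f"Full texts excluded:        {sum(full_text_excluded.values())}")
print(f"Studies included in review: {studies_included}")
```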

Q21 - Are you planning to make it possible to highlight text in the pdf already during full-text screening that will show up during data extraction?

Yes! This feature is on our roadmap. We are working on allowing users to tag references and save highlights in a similar way to how it works now in the Data Extraction stage.

Q22 - Is Laser AI able to track and merge/link individual studies that are mentioned in multiple papers?

Our AI model automatically extracts from PDFs the data commonly used in the studification process, such as study identifiers, sample size, and interventions. Currently, we don't have an option to merge studies, but we are designing a separate module for the studification phase, and it will be implemented in the near future.

Q23 - It is probably a great tool for RCTs. What about other designs? What if the data are buried in the text and are in a different format than required for extraction?

The Laser AI model performs well regardless of whether the data is extracted from RCTs or other studies. The model can extract data from tables and free text as well. A single data extraction field includes three sections:

Q24 - Could you please offer social media accounts on Bluesky (I can provide you a code for this) and also on Mastodon? Twitter and LinkedIn are no longer sources of choice.

We will look into these platforms, but for the time being, we post on the following social media platforms:

YouTube
LinkedIn 
Facebook
X

Q25 - Are the templates set up to assess bias by outcome?

As a project manager, a user can decide which data extraction fields should be connected with others. This means that the user can connect outcome data with fields related to the risk of bias assessment.

Q26 - Would it be possible to set rules for RoB tools and have them partially automated? Does it support quality evaluation of articles?

Our AI model extracts information related to sequence generation, allocation concealment, and blinding, making a part of the Risk of Bias assessment easier.

Q27 - Are there other built-in evaluation tools, such as the Mixed Methods Appraisal Tool or QUADAS-2? If not, can users create a template and reuse it in subsequent projects?

A user can create templates and reuse them in future projects. We are in the process of building reusable templates to help speed up the process.  

Here are the questions asked and answered live. The accompanying links will take you to the point in time of the webinar where the answer is provided.

  1. Does Laser AI take the GRADE methodology into account?
  2. Will Laser AI deduplicate across imported files?
  3. Is Laser AI focused on certain review types (e.g. drug studies), or can you create any review type, e.g. also for observational studies?
  4. For the screening (title & abstract), is this done by humans or by AI?
  5. Can you have it so you have multiple reasons for exclusion?
  6. How does Laser AI differ from Covidence, DistillerSR and CAPTIS?
  7. Are the results of the machine learning available to users, so users can set in their protocol and/or report in their final manuscripts threshold statistics for stopping screening?
  8. Interesting feature on filters for MeSH terms. Is it possible to assign records to be screened to specific screeners in the project? For example, studies on pediatrics to screeners who have such a clinical background.
  9. Is there any automated TiAb and full-text screening using AI?
  10. Are the different study formats available by default, or do they have to be created?
  11. This clear presentation illustrated many innovative features compared to other competitors for screening and data extraction. What is the (one) main strength of Laser AI?
  12. In extraction, what if you have multiple publications for the same review - how does AI extraction work? Will it "read" all associated publications?
  13. Is this compatible with Risk of Bias 2?
  14. Does Laser AI learn what to include or exclude along the process of selection?
  15. Can it also digitise data points from graphs?
  16. How are you looking at consensus of extraction - for double extraction?

We'd like to thank our community for the engagement and questions that help us grow. Stay tuned for more exciting webinars by signing up to our newsletter.

Speakers:

Iga Czyż
Product Designer

Product designer on the Laser AI team, where she strives to make the world of systematic reviews a little more user-friendly. Iga has a background in pharmacy and design.

Shelby Storme Kuhn
Digital Marketing Lead

As a passionate writer with a strong drive for strategic growth, Shelby leverages storytelling techniques to provide value for Evidence Prime's audience.

Ewelina Sadowska
MSc, Pharmacist

Evidence Synthesis Specialist at Evidence Prime. She is responsible for testing new solutions in Laser AI and conducting evidence synthesis research.

Ewa Borowiack
Evidence Synthesis Specialist

Evidence Synthesis Specialist with 15 years of experience in conducting HTA reports, systematic reviews, and targeted literature reviews. At Evidence Prime, she provides methodological knowledge to the designers and the software development team.

Artur Nowak
Co-founder, CTO

Co-founder and the CTO of Evidence Prime. He helps the brightest minds answer the most challenging questions in healthcare through his work in the area of artificial intelligence, especially in the context of systematic review automation. Meet Artur at ISPOR Europe 2023.