AI Trained on Social Media Videos Can Spot Rare Infant Seizure Disorder
Researchers have developed an AI model using social media videos to detect infantile epileptic spasms, offering a fast and accessible diagnostic tool.
Researchers have developed an AI model that detects infantile epileptic spasms (IES) from smartphone and social media videos. The tool could provide a fast, accessible diagnostic aid for this rare neurological disorder, supporting earlier detection and better patient outcomes.
Infantile epileptic spasms syndrome (IESS) affects approximately one in 2,000 to 2,500 infants. Early detection is crucial for effective treatment, but the spasms are often mistaken for ordinary infant movements, and this difficulty can lead to delayed diagnosis and poorer outcomes. Smartphone videos have been recognized as a valuable tool for identifying seizures, but a shortage of specialists to review them often limits their use.
Previous attempts to build AI that recognizes seizures have been hampered by the limited amount of training data available, especially for rare disorders like IESS. To address this, researchers at Universitätsmedizin Berlin turned to social media as an unconventional but rich data source. Their study, published in Nature, compiled 991 confirmed IES events from 141 videos and trained a vision transformer AI model to recognize seizure patterns.
The dataset was compiled from YouTube videos uploaded before 2022. Videos were carefully selected to include only children under the age of two showing clearly visible seizures, and a neurologist verified each case. The clips were divided into five-second segments labeled as seizure or non-seizure, with additional footage of healthy infants included to balance the training set.
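The clip-slicing step described above can be sketched in a few lines. The helper name and frame counts below are illustrative assumptions, not the authors' code; the only detail taken from the study is the five-second clip length.

```python
# Illustrative sketch: splitting a video's frame sequence into consecutive,
# fixed-length clips, as the study did with five-second segments.

def segment_video(n_frames: int, fps: int = 30, clip_seconds: int = 5):
    """Yield (start, end) frame indices for consecutive non-overlapping clips."""
    clip_len = fps * clip_seconds  # 150 frames per clip at 30 fps
    for start in range(0, n_frames - clip_len + 1, clip_len):
        yield (start, start + clip_len)

# A 20-second video at 30 fps yields four five-second clips.
clips = list(segment_video(n_frames=600))
```

Each resulting clip would then be labeled seizure or non-seizure before training.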
Three separate datasets were used to validate the model: newer social media clips, control videos of typical infant behavior, and clinical video-EEG recordings. Videos were standardized to 30 frames per second and 224×224 resolution, with augmentations like rotation and flipping applied. A pretrained Hiera Vision Transformer model, originally built for action recognition, was fine-tuned using low-rank adaptation (LoRA) for binary classification.
The tool's performance was evaluated using standard metrics: area under the curve (AUC), sensitivity, specificity, accuracy, and false alarm rate (FAR), with five-fold cross-validation. Ethical procedures were followed, including privacy protections and consent waivers for retrospective data.
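For readers unfamiliar with these metrics, they all derive from a confusion matrix of clip-level predictions. The counts in the example below are invented toy numbers, not results from the study.

```python
# Generic illustration of the metrics named above, computed from
# true/false positive and negative counts.

def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)           # fraction of seizure clips flagged
    specificity = tn / (tn + fp)           # fraction of normal clips cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    far = fp / (fp + tn)                   # false alarm rate = 1 - specificity
    return sensitivity, specificity, accuracy, far

# Toy example: 89 of 100 seizure clips flagged, 2 false alarms in 200 controls.
sens, spec, acc, far = metrics(tp=89, fp=2, tn=198, fn=11)
```

A low FAR matters most in practice: a screening tool that frequently flags normal infant movements would quickly lose parents' and clinicians' trust.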
The results of the study were impressive. The AI model achieved an AUC of 0.96, with 82% sensitivity and 90% specificity, on the training set. External testing on smartphone videos from 26 infants showed even stronger results (AUC 0.98, 89% sensitivity, 100% specificity), with a FAR of just 0.75%. The model's FAR rose slightly in clinical settings when tested on video-EEG recordings from 21 seizure-free infants, largely due to lower image quality and obstructions like EEG caps. It nevertheless maintained strong performance (AUC 0.98, 80% sensitivity), highlighting the importance of video clarity for real-world use.
This research reveals a promising path for earlier, more accessible diagnosis of IESS. By using social media videos, the AI model addresses the challenges of data scarcity and specialist shortages. It has the potential to accelerate time-to-diagnosis and improve outcomes. Further prospective clinical trials are needed to fully assess its effectiveness in everyday medical settings.
Frequently Asked Questions
What is infantile epileptic spasms syndrome (IESS)?
IESS is a serious neurological disorder affecting infants, characterized by sudden, brief muscle spasms that can be mistaken for everyday movements. Early detection is crucial for effective treatment.
Why is early detection of IESS important?
Early detection of IESS is crucial because it allows for timely intervention, which can improve patient outcomes and reduce the risk of long-term complications.
How does the AI model detect seizures?
The AI model is trained to recognize seizure patterns in videos by analyzing movements and other visual cues. It uses a vision transformer model fine-tuned with low-rank adaptation (LoRA) for binary classification.
What data sources were used to train the AI model?
The AI model was trained on 991 confirmed IES events from 141 YouTube videos, balanced with footage of healthy infants. Newer social media clips, control videos of typical infant behavior, and clinical video-EEG recordings were reserved for validation rather than training.
What are the next steps for this research?
Further prospective clinical trials are needed to fully assess the effectiveness of the AI model in everyday medical settings and to ensure its reliability and accuracy.