<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://meldproject.github.io//feed.xml" rel="self" type="application/atom+xml" /><link href="https://meldproject.github.io//" rel="alternate" type="text/html" /><updated>2026-03-25T15:50:20+00:00</updated><id>https://meldproject.github.io//feed.xml</id><title type="html">MELD Project</title><subtitle>Updates from the Multi-centre Epilepsy Lesion Detection (MELD) Project.</subtitle><entry><title type="html">MELD PostOp</title><link href="https://meldproject.github.io//studies/MELD_postop/" rel="alternate" type="text/html" title="MELD PostOp" /><published>2026-03-25T00:00:00+00:00</published><updated>2026-03-25T00:00:00+00:00</updated><id>https://meldproject.github.io//studies/MELD_postop</id><content type="html" xml:base="https://meldproject.github.io//studies/MELD_postop/"><![CDATA[<h3 id="automated-segmentation-of-post-surgical-resection-cavities-on-mri---a-meld-study">Automated Segmentation of Post-Surgical Resection Cavities on MRI - a MELD study</h3>

<p>1 in 5 epilepsy patients experience seizures caused by a structural abnormality in the brain. For these patients, resection surgery to remove the epileptogenic zone offers one of the best chances of a cure, and complete resection of the epileptogenic zone is important for achieving seizure freedom. 
Quantitative assessment of resection completeness requires:</p>
<ul>
  <li>accurate delineation of the postoperative resection cavity, and</li>
  <li>comparison with the preoperative lesion</li>
</ul>

<p>However, existing delineation approaches are often time-consuming, labour-intensive, and difficult to generalise across datasets and imaging protocols.</p>

<p>To address this challenge, we developed MELD-PostOp, a deep learning tool that automatically segments resection cavities directly from postoperative 3D T1-weighted MRI scans, without requiring additional preprocessing or imaging modalities.</p>

<p>The model was trained on 965 annotated postoperative scans, sourced from the MELD Focal Epilepsies dataset and the open-source EPISURG dataset.</p>

<figure>
<img src="/images/MELD_postop_overview.jpg" alt="MELD-PostOp overview." />
<figcaption>MELD-PostOp overview. The Prototype Cohort (n=285) was used to train the prototype model. The prototype model was applied to an inference cohort. All predictions underwent visual quality control (QC). Failed cases were manually refined using nnInteractive and ITK-SNAP. The masks that passed QC and the manually edited masks were combined with the Prototype Cohort to form the MELD-PostOp Train Cohort (n=965), which was used to train the final MELD-PostOp model. Model performance was evaluated on the Stratified Test Cohort and the Independent Test Cohort. MELD-PostOp was compared with Epic-CHOP and ResectVol.
</figcaption>
</figure>

<h3 id="results">Results</h3>

<p>MELD-PostOp demonstrates fast, reproducible, and generalisable segmentation performance across large and heterogeneous imaging cohorts. MELD-PostOp detected 135/137 resection cavities (98.5%), and 128 segmentations achieved meaningful overlap with the ground-truth segmentation (Dice &gt; 0.5). No statistically significant performance differences were found across sex, age (adult vs paediatric), pathology (HS vs non-HS), surgical lobe (temporal vs extra-temporal), surgical side, MRI field strength, or image isotropy. Runtime was 17 seconds per image, compared with &gt;10 minutes for other tools.</p>
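<p>As a rough illustration of the overlap criterion used above, here is a minimal sketch of the Dice similarity coefficient between two binary masks (assuming NumPy arrays; the toy masks below are hypothetical, not study data):</p>

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: two overlapping 4x4 "cavities" on a 10x10 slice
pred = np.zeros((10, 10)); pred[2:6, 2:6] = 1
truth = np.zeros((10, 10)); truth[3:7, 3:7] = 1
print(dice_coefficient(pred, truth))  # 0.5625, i.e. above the 0.5 overlap threshold
```

<p>A Dice of 1 indicates identical masks and 0 indicates no overlap; the 0.5 cut-off is the "meaningful overlap" threshold quoted in the results.</p>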

<figure>
<img src="/images/MELD_postop_results.jpg" alt="MELD-PostOp results" />
<figcaption> [A] Raw postoperative T1w MRI scans, ground truth manual resection masks, and automated segmentations from the Epic-CHOP, ResectVol and MELD-PostOp models. The numbers displayed beneath each predicted mask indicate the Dice Similarity Coefficient (DSC) and the 95th percentile Hausdorff distance (HD95) relative to the manual mask. [B, C] Box and scatter plots of DSCs (B) and HD95s (C) across models in the combined test cohorts (n=137). MELD-PostOp achieved a significantly higher median DSC and a significantly lower median HD95 than Epic-CHOP and ResectVol (p &lt; 0.001).</figcaption>
</figure>

<h3 id="installation--usage">Installation &amp; Usage</h3>
<p>Instructions for installation and usage can be found here: <a href="https://github.com/MELDProject/MELD-PostOp">MELD-PostOp Github Page</a></p>

<p>Pre-trained model weights can be downloaded here: <a href="https://figshare.com/s/c5a3d4a5604b229eeb5f">MELD-PostOp Figshare Page</a></p>

<h3 id="preprint">Preprint</h3>
<p>For more details, a preprint is available here: <a href="https://www.medrxiv.org/content/10.64898/2026.02.26.26347093v1">MELD-PostOp MedRxiv</a></p>]]></content><author><name>Jieun Seo</name><email>jieun.seo.22@ucl.ac.uk</email></author><category term="studies" /><summary type="html"><![CDATA[Automated Segmentation of Post-Surgical Resection Cavities on MRI]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" /><media:content medium="image" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">AID-HS</title><link href="https://meldproject.github.io//studies/AID-HS/" rel="alternate" type="text/html" title="AID-HS" /><published>2025-12-10T00:00:00+00:00</published><updated>2025-12-10T00:00:00+00:00</updated><id>https://meldproject.github.io//studies/AID-HS</id><content type="html" xml:base="https://meldproject.github.io//studies/AID-HS/"><![CDATA[<h3 id="automated-and-interpretable-detection-of-hippocampal-sclerosis---a-meld-study">Automated and Interpretable Detection of Hippocampal Sclerosis - a MELD study</h3>

<p>Hippocampal Sclerosis (HS) is a form of atrophy affecting the hippocampus that can cause epileptic seizures. When the sclerotic hippocampus is identified and surgically removed, patients can be cured of their seizures. Accurately detecting HS on MRI scans and determining which side of the brain is affected is therefore essential for planning successful surgery.</p>

<p>In work <a href="https://pubmed.ncbi.nlm.nih.gov/39543853/">published in Annals of Neurology in 2024</a>, the Multi-centre Epilepsy Lesion Detection (MELD) Project introduced AID-HS, an automated tool that detects HS, determines the affected hemisphere (lateralisation), and generates interpretable patient reports summarising the results.</p>

<p>AID-HS extracts hippocampal volume and surface-based features (such as thickness, curvature, and gyrification of the hippocampus) using the software HippUnfold. These features are normalised against those of a large cohort of healthy controls, and asymmetries between the left and right hippocampi are calculated. The resulting asymmetry features are then used to train a logistic regression model to detect and lateralise HS. 
AID-HS also produces interpretable reports that show each patient’s hippocampal features relative to normative “growth charts” of the healthy population, visualise asymmetries, and present the model’s predictions.</p>
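<p>The left–right asymmetry step can be illustrated with a minimal sketch. The exact normalisation used by AID-HS is not specified here; the common paired asymmetry index 2(L − R)/(L + R), and the volumes below, are assumptions for illustration only:</p>

```python
import numpy as np

def asymmetry_index(left, right):
    """Paired asymmetry index: positive when the left-side feature is larger.

    Illustrative form 2*(L - R)/(L + R); AID-HS's exact normalisation
    may differ.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    return 2.0 * (left - right) / (left + right)

# Hypothetical hippocampal volumes in mm^3: smaller right hippocampus
ai = asymmetry_index(3200.0, 2400.0)
print(ai)  # ~0.29: left larger than right, consistent with right-sided atrophy
```

<p>Asymmetry values of this kind, computed across the hippocampal features, are the inputs to the logistic regression classifier.</p>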

<figure>
<img src="/images/AID-HS_overview.png" alt="AID-HS development overview." />
<figcaption>AID-HS development overview.</figcaption>
</figure>

<h3 id="results">Results</h3>

<p>The tool was developed using MRI data from 154 patients with HS, 90 patients with focal cortical dysplasia (another epilepsy-associated lesion), and 121 healthy controls from four hospitals worldwide.
AID-HS detected HS with 90.1% sensitivity and 94.3% specificity, and correctly lateralised HS in 97.4% of patients. Similar performance was observed when the tool was tested on an independent multicentre cohort of 275 patients and 161 controls, demonstrating its ability to generalise to data different from that used to train the model.</p>

<h3 id="running-the-pipeline-on-new-patients">Running the pipeline on new patients</h3>

<p>The pipeline generates an individualised patient report that includes:</p>

<ul>
  <li>A visualisation of the hippocampal segmentation and the pial surfaces reconstructed with HippUnfold, with Dice scores assessing how well the segmentation matches the HippUnfold atlas</li>
  <li>Each hippocampal feature plotted against normative trajectories of the healthy population</li>
  <li>The magnitude and direction of asymmetry features relative to abnormality thresholds</li>
  <li>Automated lateralisation scores from the AID-HS classifier, showing the probability of left HS (blue), right HS (pink), or no significant asymmetry (green)</li>
</ul>

<figure>
<img src="/images/AID-HS_patient_report.png" alt="Individual patient reports." />
<figcaption>Examples of AID‐HS reports for 2 patients with MRI‐negative right HS (example 1) and left HS (example 2). (A) Automated hippocampal segmentation and reconstructed hippocampal surfaces using HippUnfold, alongside automated quality control of the segmentation. (B) Individual hippocampal features compared to normative trajectories (with 25th – 75th percentiles in dark green, 5th – 25th and 75th – 95th percentiles in light green, patient's left hippocampus in blue and patient's right hippocampus in pink). (C) Asymmetry scores against left and right abnormality thresholds and automated lateralization scores from the AID‐HS classifier, indicating the probability that hippocampal feature asymmetries are consistent with left or right HS or that there is no asymmetry. AID‐HS, Automated and Interpretable Detection of Hippocampal Sclerosis; CA, cornu Ammonis; HS, hippocampal sclerosis; MRI, magnetic resonance imaging; SRLM, stratum radiatum, lacunosum, and moleculare.</figcaption>

</figure>

<p><strong>AID-HS worldwide use</strong></p>

<p>AID-HS is open-source and available on macOS, Windows, and Linux via <a href="https://github.com/MELDProject/AID-HS">Github</a>. Tutorial videos to help install and use the tool are available on our <a href="https://www.youtube.com/@MELDproject">Youtube channel</a>.</p>

<p><em>Written with the assistance of ChatGPT</em></p>]]></content><author><name>Mathilde Ripart</name><email>m.ripart@ucl.ac.uk</email></author><category term="studies" /><summary type="html"><![CDATA[Automated and Interpretable Detection of Hippocampal Sclerosis]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" /><media:content medium="image" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">MELD Graph based FCD lesion segmentation</title><link href="https://meldproject.github.io//studies/MELD_Graph/" rel="alternate" type="text/html" title="MELD Graph based FCD lesion segmentation" /><published>2025-12-03T00:00:00+00:00</published><updated>2025-12-03T00:00:00+00:00</updated><id>https://meldproject.github.io//studies/MELD_Graph</id><content type="html" xml:base="https://meldproject.github.io//studies/MELD_Graph/"><![CDATA[<h3 id="graph-based-fcd-lesion-segmentation---a-meld-study">Graph based FCD lesion segmentation - a MELD study</h3>

<p>Focal cortical dysplasias (FCDs) are small abnormalities, occurring during brain development, that can cause epileptic seizures. Identifying these lesions on MRI and removing them surgically can often cure epilepsy. However, FCDs are frequently subtle and difficult to detect, and up to half are missed by radiologists.</p>

<p>Previously, our group developed <a href="/studies/MELD_FCD/">MELD FCD</a>, an AI model trained on a large, multicentre MRI dataset of epilepsy patients and controls. The model detected 67% of FCD lesions but typically produced around two false-positive clusters per subject. This is a common challenge for lesion-detection AI models, and it increases the radiologist’s workload, as they need to review more putative lesions.</p>

<p>In work <a href="https://jamanetwork.com/journals/jamaneurology/article-abstract/2830410">published in JAMA Neurology in 2025</a>, the Multi-centre Epilepsy Lesion Detection (MELD) Project introduced a new graph-based AI model with substantially improved accuracy.</p>

<p>To build this model (MELD Graph), we used a graph neural network trained on surface-based cortical features. Unlike the previous multilayer perceptron (MLP) approach, which analysed each cortical vertex independently, the graph neural network takes neighbourhood information into account, making the model more aware of the surrounding tissue.</p>

<p>To allow direct comparison, MELD Graph was trained and evaluated on the same multicentre dataset of 703 patients described in the original publication.</p>

<figure>
<img src="/images/MELD_Graph_overview.jpg" alt="MELD Graph processing pipeline." />
<figcaption>MELD Graph processing pipeline.</figcaption>
</figure>

<h3 id="results">Results</h3>

<p>In the test dataset (n=260 patients), MELD Graph achieved 67% accuracy (70% sensitivity; 60% specificity), compared with 39% accuracy (67% sensitivity; 54% specificity) for the earlier MELD MLP model. Importantly, MELD Graph produced zero false-positive predictions on average. Along with the predicted lesion, MELD Graph also generates interpretable reports describing lesion location, size, salient features, and a confidence score.</p>
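<p>For reference, headline metrics like these can be computed from per-subject outcomes, as in this minimal sketch (the labels below are hypothetical, not study data):</p>

```python
import numpy as np

def detection_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy from binary per-subject labels
    (1 = lesion present / detected, 0 = absent / not detected)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # true positives
    tn = np.sum(~y_true & ~y_pred)  # true negatives
    sensitivity = tp / np.sum(y_true)
    specificity = tn / np.sum(~y_true)
    accuracy = (tp + tn) / y_true.size
    return sensitivity, specificity, accuracy

# Hypothetical: 10 patients with a lesion and 10 lesion-free controls
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 7 + [0] * 3 + [0] * 6 + [1] * 4
sens, spec, acc = detection_metrics(y_true, y_pred)
print(sens, spec, acc)  # 0.7 0.6 0.65
```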

<p>Below are examples showing MELD Graph vs. MELD MLP predictions and the reduction in false positives:</p>

<figure>
<img src="/images/MELD_Graph_predictions.jpg" alt="Example MELD Graph and MELD MLP predictions." />
<figcaption> Reduction in false positive clusters with MELD Graph compared to MELD MLP. (A) Example classifier predictions for four patients using MELD Graph and the baseline multilayer perceptron (MELD MLP). Black line = manual lesion mask. Red = classifier predictions. (B) Box-and-whisker plot showing the mean positive predictive value (PPV) and the confidence interval (CI) in the test patients detected by both MELD MLP and MELD Graph models (N=161). (C) Box-and-whisker plot showing median, interquartile range and outlier numbers of false positive (FPs) clusters predicted on patients and controls in the test dataset using MELD Graph compared to MELD MLP. Gray lines connect identical subjects between the models.</figcaption>
</figure>

<h3 id="running-the-pipeline-on-new-patients">Running the pipeline on new patients</h3>

<p>The pipeline outputs an individualised patient report including predicted lesion location, imaging features, saliency values, and a confidence score. Predicted lesions are also mapped back to the native T1 image for radiological review.</p>

<figure>
<img src="/images/MELD_Graph_patient_report.jpg" alt="Individual patient reports." />
<figcaption>Examples of interpretable patient reports. MELD Graph outputs for two patients from the independent test cohort. Patient 1 is an example of an MRI-positive FCD detected by MELD Graph with a high-confidence prediction (93%). Patient 2 has an FCD that was not identified by 5 expert radiologists but was detected by MELD Graph with low confidence (7%). (A) Classifier predictions (red) and the 20% most salient vertices (orange) visualized on brain surfaces of the lesional hemisphere and on the T1 volume. (B) Z-scored mean feature values within the 20% most salient vertices of predicted lesions. Color represents saliency scores. Features driving the classifier’s prediction are positive (pink). Features inconsistent with the MELD Graph prediction are negative (green). (C) T1 and FLAIR coronal sections with a red box indicating the lesional cortex.</figcaption>

</figure>

<p><strong>MELD Graph worldwide use</strong></p>

<p>MELD Graph is open-source and available on macOS, Windows, and Linux via <a href="https://github.com/MELDProject/meld_graph">Github</a>. Tutorial videos to help install and use the tool are available on our <a href="https://www.youtube.com/@MELDproject">Youtube channel</a>.</p>

<p>As of December 2025 (10 months after publication), it is reported to be in use in more than 100 hospitals worldwide as a research tool. Our work has also been featured in the news, including coverage by the <a href="https://www.bbc.co.uk/news/articles/cvg1xd7l5pvo">BBC</a> and <a href="https://www.dailymotion.com/video/x9fwx0u">Reuters</a>.</p>

<p><em>Written with the assistance of ChatGPT</em></p>]]></content><author><name>Sophie Adler, Konrad Wagstyl and Mathilde Ripart</name><email>MELD.study@gmail.com</email></author><category term="studies" /><summary type="html"><![CDATA[Graph based FCD lesion segmentation]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" /><media:content medium="image" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Focal Epilepsies Study</title><link href="https://meldproject.github.io//studies/FocalEpilepsies/" rel="alternate" type="text/html" title="Focal Epilepsies Study" /><published>2023-02-02T00:00:00+00:00</published><updated>2023-02-02T00:00:00+00:00</updated><id>https://meldproject.github.io//studies/FocalEpilepsies</id><content type="html" xml:base="https://meldproject.github.io//studies/FocalEpilepsies/"><![CDATA[<h3 id="meld-project-a-collaborative-cohort-for-the-analysis-of-patients-with-focal-epilepsies">MELD Project: A collaborative cohort for the analysis of patients with focal epilepsies</h3>

<h3 id="why-are-we-doing-the-meld-focal-epilepsies-project">Why are we doing the MELD Focal Epilepsies project?</h3>
<p>Epilepsy is one of the most common neurological conditions, with a lifetime risk of 1 in 26. 20-30% of patients have drug-resistant epilepsy, in which multiple anti-seizure drugs have failed to control seizures (Picot et al. 2008; Sultana et al. 2021). Patients with uncontrolled epilepsy have an increased risk of seizure-related injuries, cognitive and psychological impacts, and a 20-fold increase in the risk of mortality (Hesdorffer et al. 2011).</p>

<p>In many patients, the seizures are caused by a focal cerebral lesion, and neurosurgical resection of the epileptogenic lesion is considered a safe, effective and cost-efficient treatment, but it is underutilised (Braun and Cross 2018). However, surgery is not always successful, with post-surgical seizure freedom rates estimated between 50 and 70% (Lamberink et al. 2020). Accurate detection of lesions on presurgical MRI and complete neurosurgical resection are important predictors of post-surgical seizure freedom, but lesions can be small, subtle and easily missed.</p>

<h3 id="what-are-the-aims-of-the-project">What are the aims of the project?</h3>
<p>The aim of the project is to improve epilepsy surgery outcomes through: the collation (WP1) and characterisation (WP2) of a large multicentre cohort of clinical and MRI data from patients with focal epilepsy; the development of deep learning algorithms for automated segmentation (WP3) and histological classification (WP4) of MRI lesions; models for predicting post-surgical seizure freedom (WP5); and the identification of covert lesions (WP6).</p>

<figure>
<img src="/images/F2.png" />
</figure>

<h3 id="what-data-are-we-collecting">What data are we collecting?</h3>
<p>We are collating retrospectively acquired volumetric MRI data and clinical information from patients with a range of causes of focal epilepsy. We have protocols to ensure all data shared with the MELD team is anonymous.</p>

<figure>
<img src="/images/FE_data.png" />
</figure>

<h3 id="can-i-join-the-project">Can I join the project?</h3>
<p>We are inviting ANY epilepsy centre to take part in this study.</p>

<p>Contact <a href="mailto:MELD.study@gmail.com">MELD.study@gmail.com</a> for more information.</p>

<p><em>We are extremely grateful to ERUK and the Rosetrees Trust for funding for this project</em></p>]]></content><author><name>Sophie Adler, Konrad Wagstyl and Mathilde Ripart</name><email>MELD.study@gmail.com</email></author><category term="studies" /><summary type="html"><![CDATA[A collaborative cohort for the analysis of patients with focal epilepsies]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" /><media:content medium="image" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Infographic about the MELD Project</title><link href="https://meldproject.github.io//studies/PublicEngagement/" rel="alternate" type="text/html" title="Infographic about the MELD Project" /><published>2023-01-10T00:00:00+00:00</published><updated>2023-01-10T00:00:00+00:00</updated><id>https://meldproject.github.io//studies/PublicEngagement</id><content type="html" xml:base="https://meldproject.github.io//studies/PublicEngagement/"><![CDATA[<h3 id="ai-for-diagnosing-focal-epilepsy-a-collaborative-project-to-co-develop-an-information-sheet-with-patients-and-their-families">AI for Diagnosing Focal Epilepsy: A collaborative project to co-develop an information sheet with patients and their families</h3>

<figure>
<img src="/images/MELD_PROJECT_LEAFLET.jpg" alt="Infographic about the MELD project" />
<figcaption>Infographic about the MELD project.</figcaption>
</figure>

<p>Anonymised data from medical notes and images from MRI scans is being used in epilepsy research. This includes using state-of-the-art artificial intelligence to better understand, diagnose, treat and predict outcomes in epilepsies.</p>

<p><em>How do patients and their families feel about this type of research and their / their child’s medical data being used?</em></p>

<p><em>What is important for patients and their families to know about these types of technologies?</em></p>

<p>In December 2022 - January 2023, Dr Konrad Wagstyl (UCL) and Dr Sophie Adler (UCL), in collaboration with Dr Jonny O’Muircheartaigh (neuroscientist at KCL) and epilepsy charities (Epilepsy Research UK and Young Epilepsy), launched a public engagement project to find this out and work with patients and their families to co-create an information sheet about the MELD Project.</p>

<p>54 patients or parents/guardians of children with drug-resistant epilepsy responded to the online questionnaire about Big Data and AI in epilepsy research. This provided incredibly useful information about the HOPES and FEARS patients and their families have about big data and AI research.</p>

<figure>
<img src="/images/PE_parents.png" alt="88% of patients and parents HOPED AI could provide more accurate diagnoses. 77% of patients and parents FEARED about AI making inaccurate predictions." />
</figure>

<p>We ran a focus group at Young Epilepsy with parents of children with complex epilepsy to find out more information about what was important for parents to know about these types of technologies if they were to be used in their child’s care. This included collaboratively working on a template information sheet and adapting it to focus on information patients and their parents felt was important. We then worked with a fantastic illustrator, Bridget Meyne, to create a visually appealing infographic. Parents of children with epilepsy, researchers, clinicians and epilepsy charities all provided invaluable feedback for the final information sheet.</p>

<p><strong>The final information sheet, AI for Diagnosing Focal Epilepsy, can be downloaded <a href="https://meldproject.github.io//docs/MELD_PROJECT_LEAFLET.pdf">here</a></strong></p>

<p><em>The project was funded by the WCHN Public Engagement Innovation Grant. We would like to thank the volunteers who took part, and Bridget Meyne for transforming scribbles and ideas into beautiful illustrations.</em></p>]]></content><author><name>Sophie Adler and Konrad Wagstyl</name><email>konrad.wagstyl@gmail.com</email></author><category term="studies" /><summary type="html"><![CDATA[Working with patients and their families to create an infographic about the MELD Project.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" /><media:content medium="image" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">MELD Surface-based FCD Classifier</title><link href="https://meldproject.github.io//studies/MELD_FCD/" rel="alternate" type="text/html" title="MELD Surface-based FCD Classifier" /><published>2022-08-06T00:00:00+00:00</published><updated>2022-08-06T00:00:00+00:00</updated><id>https://meldproject.github.io//studies/MELD_FCD</id><content type="html" xml:base="https://meldproject.github.io//studies/MELD_FCD/"><![CDATA[<h3 id="interpretable-surface-based-detection-of-fcds---a-meld-study">Interpretable surface-based detection of FCDs - a MELD study</h3>

<p>Machine learning has the potential to revolutionize the field of diagnostic biomedical imaging, but one outstanding challenge is algorithm interpretability. This is particularly important when it comes to incorporating AI for FCD detection into clinical practice.</p>

<p>FCDs are difficult to visualize but often amenable to surgical resection. We wanted to create a robust machine-learning algorithm that could detect FCDs on heterogeneous structural MRI data from epilepsy surgery centres worldwide. However, crucially we wanted to “open the black box” and ensure that the algorithm was interpretable. Clinicians need to be able to understand why the AI identified a particular area.</p>

<p>In work <a href="https://academic.oup.com/brain/article/145/11/3859/6659752">published in BRAIN</a> in 2022, the Multi-centre Epilepsy Lesion Detection (MELD) Project set out to develop an open-source, interpretable, surface-based machine-learning algorithm to automatically identify FCDs.</p>

<p>We collated a retrospective MRI cohort of 1015 participants, including 618 patients with focal FCD-related epilepsy and 397 controls, from 22 epilepsy centres around the world. Using 33 surface-based features, we trained and cross-validated a neural network on 50% of the total cohort. We then tested the network on the remaining withheld 50% of the cohort, as well as on two independent test sites.</p>

<p>We used multidimensional feature analysis and integrated gradient saliencies to interrogate network performance.</p>

<figure>
<img src="/images/MELD_FCD_method.jpg" alt="MELD processing pipeline." />
<figcaption>MELD processing pipeline.</figcaption>
</figure>

<h3 id="results">Results</h3>
<p>Across the entire withheld test cohort, after including a border zone around lesions to account for uncertainty in the borders of the manually delineated lesion masks, the sensitivity was 67% and the specificity was 54%.
On a restricted “gold-standard” subcohort of seizure-free patients with FCD type IIB who had both T1 and fluid-attenuated inversion recovery (FLAIR) MRI data, the MELD FCD surface-based algorithm had a sensitivity of 85%.</p>

<p>Here are examples of classifier performance:</p>

<figure>
<img src="/images/MELD_classifier_predictions.jpg" alt="Classifier predictions for six patients. Patients 1–4 are examples where the classifier has correctly identified the lesion. In Patient 4 an additional cluster in the left insula is identified. Patient 5 is an example where the classifier detects an area in the border zone. Patient 6 is an example of where the neural network has not identified the lesion. An additional cluster is detected in the right post-central gyrus." />
<figcaption>Classifier predictions for six patients. Patients 1–4 are examples where the classifier has correctly identified the lesion. In Patient 4 an additional cluster in the left insula is identified. Patient 5 is an example where the classifier detects an area in the border zone. Patient 6 is an example of where the neural network has not identified the lesion. An additional cluster is detected in the right post-central gyrus.</figcaption>
</figure>

<h3 id="running-the-pipeline-on-new-patients">Running the pipeline on new patients</h3>

<p>Our pipeline outputs individual patient reports with the location of predicted lesions, along with their imaging features and relative saliency to the classifier. The pipeline also maps the predicted lesions back to the native T1 so that they can be reviewed by a radiologist.</p>

<figure>
<img src="/images/MELD_pt_report.jpg" alt="Individual patient reports." />
<figcaption>Individual patient reports. Here are individual reports for 2 patients. (A) Classifier predictions (dark red) and manual lesion mask (black line) visualized on brain surfaces (B) Z-scored mean feature values within predicted lesions coloured with Integrated Gradients saliency scores. Positive saliency scores indicate feature values driving the classifier’s ‘lesion’ prediction. Negative scores indicate feature values that are inconsistent with the prediction. (C) Lesional cortex highlighted on the patients’ MRI scans.</figcaption>
</figure>

<p><strong>The MELD classifier can be run on MRI data from any 1.5T or 3T scanner on any patient who is over age 3!</strong></p>

<p>By April 2023 (8 months after publication), the MELD team had run 2 workshops to train clinicians and researchers in how to use the classifier, and had issued 53 site codes to 35 epilepsy centres.</p>

<p><strong>Overall, by leveraging the power of machine learning and combining it with “explainable AI”, we provide an open-source algorithm for the detection of focal cortical dysplasias.</strong></p>

<p>There are many more cool analyses and results in the <a href="https://academic.oup.com/brain/article/145/11/3859/6659752">paper</a> - so do check it out!</p>

<p><em>Written with the assistance of ChatGPT</em></p>]]></content><author><name>Sophie Adler, Konrad Wagstyl and Mathilde Ripart</name><email>MELD.study@gmail.com</email></author><category term="studies" /><summary type="html"><![CDATA[Interpretable surface-based detection of FCDs]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" /><media:content medium="image" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Planning SEEG using automated lesion detection</title><link href="https://meldproject.github.io//studies/SEEG-planning/" rel="alternate" type="text/html" title="Planning SEEG using automated lesion detection" /><published>2022-04-04T00:00:00+00:00</published><updated>2022-04-04T00:00:00+00:00</updated><id>https://meldproject.github.io//studies/SEEG-planning</id><content type="html" xml:base="https://meldproject.github.io//studies/SEEG-planning/"><![CDATA[<h3 id="planning-seeg-using-automated-lesion-detection">Planning SEEG using automated lesion detection</h3>

<p>One-third of children with epilepsy are medication-resistant. In children with a focal seizure onset zone (SOZ), stereoelectroencephalography (sEEG) can be used to delineate the SOZ in complex patients. This involves implanting electrodes into the brain to record brain activity. Currently, electrode placement is a clinical decision, but planning sEEG can be challenging and time-consuming. In half of the patients selected for sEEG, the MRI scan looks normal, which makes planning where to implant electrodes difficult.</p>

<p>In this study, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8432161/">published in Epilepsia</a> in 2020, we aimed to evaluate the feasibility and potential benefits of using deep learning on structural magnetic resonance imaging (MRI) to plan the implantation of sEEG electrodes in pediatric patients with drug-resistant epilepsy.</p>

<p>The study trained a neural network classifier to identify lesions in MRI data from 34 patients with known cortical dysplasias and 20 healthy controls.</p>

<figure>
<img src="/images/seeg_method.jpeg" width="50%" height="50%" alt="Framework for automated lesion detection and colocalization with sEEG electrodes." />
<figcaption> Framework for automated lesion detection and colocalization with sEEG electrodes. A, Surface‐based feature extraction, lesion labeling, and training of neural network classifier on MRI‐positive patient cohort. B, Testing of classifier on presurgical MRI of patients undergoing sEEG.</figcaption>
</figure>

<p>The classifier detected lesions with a sensitivity of 74% in patients with known lesions and a specificity of 100% in healthy controls. In 34 patients who underwent sEEG, we then assessed whether the SOZ overlapped with the classifier output.</p>

<figure>
<img src="/images/seeg_case.png" alt="Example case report where there is concordance between ictal contacts and classifier output" />
<figcaption>Example case report where there is concordance between ictal contacts and classifier output.  The case includes a brief clinical overview (left upper), a plot of distance of the stereoelectroencephalography (sEEG) contacts from the predicted lesion (right upper), visualization of the electrode positioning (ictal contacts = red, interictal = yellow, other = black) with automated clusters (red = top cluster, yellow = other clusters, lower panels), and a coronal section of the FLAIR MRI scan with lesion indicated by red arrow. </figcaption>
</figure>

<p><strong>We found that in 62% of the patients with focal SOZs, and 86% of histopathologically confirmed FCDs, the results were concordant!</strong></p>

<p>This suggested that incorporating deep-learning-based MRI analysis could be a useful tool for planning sEEG implantations.</p>

<p>One <strong>limitation</strong> of the study was its retrospective nature, which meant that further research was needed to determine the effectiveness of the automated method in a prospective study.</p>

<p><strong>However, the study provided a framework for using automated lesion detection to plan optimal electrode trajectories, and the results supported the prospective evaluation of this approach in future studies. This work led to the MAST clinical trial!</strong></p>

<p>Focal cortical dysplasia is an important cause of drug-resistant focal epilepsy. These lesions were known to occur anywhere in the cerebral cortex, but their exact distribution, and the impact of lesion location on clinical presentation and post-surgical seizure freedom, were largely unknown.</p>

<p>Through the MELD project, 20 epilepsy surgery centres worldwide contributed individual masks of FCD lesions as well as clinical information to create a cohort of 580 patients with FCD.</p>

<p>In work <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/epi.17130">published in Epilepsia</a> in 2021, we show that <em>although FCDs can occur anywhere in the cortex, there are certain “hot-spots” where FCDs are more common</em>. These include the superior frontal sulcus, superior temporal sulcus, frontal and temporal poles.</p>

<figure>
<img src="/images/lesion_locations_FCD.gif" alt="Distribution of FCDs across the cerebral cortex" />
<figcaption>Distribution of FCDs across the cerebral cortex. Red = higher numbers of lesions. Blue = lower numbers of lesions. </figcaption>
</figure>

<p><strong>This non-uniform distribution of FCDs is IMPORTANT!</strong></p>

<p>This pathology begins in childhood, with 75% of patients having their first seizure before age 12. However, <strong>where the FCD lesion is located is associated with the age at which the patient develops epilepsy</strong>. Patients with lesions in primary sensory cortices have an earlier age of epilepsy onset than those with lesions in higher-order association cortex.</p>

<figure>
<img src="/images/Age_onset.png" alt="Age of epilepsy onset according to lesion location" />
<figcaption>Lesions in primary sensory cortex (visual and motor cortices) are associated with younger age of epilepsy onset than lesions in higher order areas.</figcaption>
</figure>

<p><strong>Lesion size is associated with lesion location.</strong> Overall, lesions in occipital cortex are larger than lesions more anterior in the cortex, e.g. frontal cortex.</p>

<figure>
<img src="/images/Lesion_size.png" width="70%" height="70%" alt="Size of lesion according to lesion location" />
<figcaption>Size of lesion according to lesion location.</figcaption>
</figure>

<p>Overall, 65% of patients in the study were seizure free. However, <strong>only 30-40% of patients with lesions overlapping eloquent cortex (visual, motor, and premotor areas) were seizure free.</strong></p>

<figure>
<img src="/images/Seizure_freedom.png" width="60%" height="60%" alt="Post-surgical seizure freedom according to lesion location." />
<figcaption>Post-surgical seizure freedom according to lesion location.</figcaption>
</figure>

<p><strong>So, FCDs are non-uniformly distributed across the cortex, and the location of a patient's lesion is important: it is associated with the age at which the patient will develop epilepsy, the size of the lesion, and how likely the patient is to be seizure free following epilepsy surgery.</strong></p>

<p>There are many more cool analyses and results in the <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/epi.17130">paper</a> - so do check it out!</p>]]></content><author><name>Sophie Adler and Konrad Wagstyl</name><email>konrad.wagstyl@gmail.com</email></author><category term="studies" /><summary type="html"><![CDATA[Atlas of lesion locations and post-surgical seizure freedom in FCD]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" /><media:content medium="image" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Surface-based automated FCD detection at GOSH</title><link href="https://meldproject.github.io//studies/Automated-detection-GOSH/" rel="alternate" type="text/html" title="Surface-based automated FCD detection at GOSH" /><published>2021-03-03T00:00:00+00:00</published><updated>2021-03-03T00:00:00+00:00</updated><id>https://meldproject.github.io//studies/Automated-detection-GOSH</id><content type="html" xml:base="https://meldproject.github.io//studies/Automated-detection-GOSH/"><![CDATA[<h3 id="automated-detection-of-focal-cortical-dysplasias-at-gosh-using-a-surface-based-approach">Automated Detection of Focal Cortical Dysplasias at GOSH using a surface-based approach</h3>

<p>Focal cortical dysplasia (FCD) is a congenital abnormality of cortical development and a leading cause of surgically remediable drug-resistant epilepsy. MRI has played a major role in the evaluation of patients; yet a significant proportion of lesions remains undetected by conventional image analysis. Machine learning offers a powerful framework for developing automated, individualised clinical tools that may improve the detection of lesions and the prediction of clinically relevant outcomes.</p>

<p>In work <a href="http://www.sciencedirect.com/science/article/pii/S2213158216302674?via%3Dihub">published in Neuroimage: Clinical</a> in 2017, Adler, Wagstyl et al., developed a classifier using surface-based features to identify focal abnormalities of cortical development in a paediatric cohort from Great Ormond Street Hospital. Focal cortical dysplasias in this paediatric cohort were correctly identified in 73% of the children.</p>

<figure>
<img src="/images/Example_classifier_results.png" alt="FCD examples" />
<figcaption>Examples of cortical area detected by the neural network classifier in 5 patients with a radiological diagnosis of FCD. First column: T1-weighted images. Second column: FLAIR images. White circle on T1 and FLAIR images indicates lesion location. Third column: Neural network classifier output (yellow) and manual lesion mask (light blue) viewed on pial surface, for large lesions, or inflated surface, for small lesions buried in sulci.</figcaption>
</figure>

<p>Further studies have since validated this method:
<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5934310/">Jin et al., 2018, in Epilepsia,</a>
<a href="https://www.frontiersin.org/articles/10.3389/fnins.2018.01008/full">Mo et al., 2018, Frontiers in Neuroscience,</a>
and <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/epi.16574">Wagstyl, Adler et al., Epilepsia</a></p>]]></content><author><name>Sophie Adler and Konrad Wagstyl</name><email>konrad.wagstyl@gmail.com</email></author><category term="studies" /><summary type="html"><![CDATA[Automated Detection of Focal Cortical Dysplasias at GOSH using a surface-based approach]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" /><media:content medium="image" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">The Third MELD Project Teleconference Meeting</title><link href="https://meldproject.github.io//blog/Teleconference_3/" rel="alternate" type="text/html" title="The Third MELD Project Teleconference Meeting" /><published>2019-01-23T00:00:00+00:00</published><updated>2019-01-23T00:00:00+00:00</updated><id>https://meldproject.github.io//blog/Teleconference_3</id><content type="html" xml:base="https://meldproject.github.io//blog/Teleconference_3/"><![CDATA[<h1 id="the-third-meld-project-teleconference-meeting">The Third MELD Project Teleconference Meeting</h1>

<p>Hello! :wave: Please find the minutes for the Third MELD Project Teleconference meeting below.</p>

<h4 id="date">Date:</h4>

<p>23rd January 2019, 8am and 5pm UK time</p>

<h4 id="present">Present:</h4>

<p>Sophie Adler-Wagstyl (SAW; Great Ormond Street Institute of Child Health, UK), Konrad Adler-Wagstyl (KAW; University of Cambridge, UK), Z. Irene Wang (ZIW; Cleveland Clinic, USA), Yawu Liu (University of Eastern Finland, Finland), Marcus Likeman (University Hospitals Bristol, UK), Xiaozhen You (Children’s National, USA), Pasquale Striano (Università di Genova, Italy), Gavin Winston (The National Hospital for Neurology and Neurosurgery, UK), John Duncan (The National Hospital for Neurology and Neurosurgery, UK)</p>

<h4 id="1-aes-meeting-and-conferences">1. AES meeting and conferences</h4>

<ul>
  <li>SAW briefly discussed the AES meeting and highlighted the minutes on the MELD website.</li>
  <li>SAW discussed future conferences for potential MELD meetings.
    <ul>
      <li>Currently these might include OHBM 2019 (Rome), IEC 2019 (Bangkok), AES 2019 (Baltimore), ECE 2020 (Geneva)</li>
      <li>ZIW raised ISMRM in Montreal (May 2019). SAW and KAW are unable to attend.</li>
    </ul>
  </li>
</ul>

<h4 id="2-data-protocols-and-scripts">2. Data, Protocols and scripts</h4>

<ul>
  <li>SAW shared that MELD has now reached the original target of 400 patients, with 380 controls :tada:</li>
  <li>This was in time for the original January 2019 deadline.</li>
  <li>SAW proposed allowing new sites to join:
    <ul>
      <li>MELD is an inclusive, open project</li>
      <li>Additional data can be used for validation</li>
      <li>The caveat is that their data may not be included in the initial MELD analyses / publications
        <ul>
          <li>Sites confirmed that they were happy with this solution</li>
        </ul>
      </li>
    </ul>
  </li>
</ul>

<h4 id="3-site-level-lesion-characteristics">3. Site-level lesion characteristics</h4>

<ul>
  <li>KAW presented three figures showing preliminary analyses of the full cohort
    <ul>
      <li>Demographic data for patient and control groups</li>
      <li>Lesion topography</li>
      <li>Group-level structural features differentiate lesional tissue
        <ul>
          <li>Cortical thickness - increased</li>
          <li>‘Blurring’ - decreased</li>
          <li>Intrinsic curvature - increased</li>
          <li>FLAIR intensities sampled from the top of the cortex down to within the white matter
            <ul>
              <li>Decreased in cortex (hypointense)</li>
              <li>Increased in white matter (hyperintense)</li>
            </ul>
          </li>
        </ul>
      </li>
    </ul>
  </li>
</ul>

<h4 id="4-authorship">4. Authorship</h4>

<ul>
  <li>SAW commented that a large number of people have contributed to MELD and this is currently not well documented.</li>
  <li>Sites will be sent a form to ensure all contributors are included on future conference and manuscript submissions.</li>
</ul>

<h4 id="5-open-forum">5. Open forum</h4>

<ul>
  <li>‘Should the sites be asked to share genetic and further clinical data (e.g. seizure frequency, EEG features, etc.)?’
    <ul>
      <li>This was discussed in both teleconference sessions.</li>
      <li>Some sites thought this would be problematic:
        <ul>
          <li>Current ethics/IRB approval did not include these extra data.</li>
          <li>Many sites have already completed their data acquisition</li>
        </ul>
      </li>
      <li>Other sites did not think this would be a problem</li>
    </ul>
  </li>
</ul>

<p>:arrow_forward: The initial analyses will proceed as currently planned. MELD can be used as a platform for future analysis proposals that could include the above.</p>

<ul>
  <li>How is MELD dealing with normal variation in surface-based features, e.g. thick motor cortex?
    <ul>
      <li>KAW explained that the cohort undergoes several normalisation steps, including interhemispheric asymmetry calculations and normalisation by controls. These steps help to account for healthy regional variability.</li>
    </ul>
  </li>
  <li>Suggested analyses:
    <ul>
      <li>Is topographic pattern consistent across all patients/histologies?</li>
      <li>Is there a difference in outcomes between those operated as children vs adults?</li>
    </ul>
  </li>
</ul>

<p>:arrow_forward: These analyses will be carried out.</p>
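<p>For readers curious about the normalisation steps KAW described (interhemispheric asymmetry plus normalisation by controls), the sketch below gives a minimal, hypothetical illustration. It is not the MELD pipeline code: the function name, array shapes, and the particular asymmetry formula are all assumptions made for the example.</p>

```python
import numpy as np

def normalise_features(patient_lh, patient_rh, controls_lh, controls_rh):
    """Illustrative sketch (not MELD pipeline code) of normalising a
    per-vertex surface feature (e.g. cortical thickness).

    patient_lh / patient_rh: (n_vertices,) arrays for one patient,
    with hemispheres registered so vertices correspond across sides.
    controls_lh / controls_rh: (n_controls, n_vertices) arrays.
    """
    # 1. Interhemispheric asymmetry index: contrast each vertex with
    #    its homologue on the opposite hemisphere, so bilateral
    #    "normal" variation (e.g. thick motor cortex) cancels out.
    asym = (patient_lh - patient_rh) / ((patient_lh + patient_rh) / 2)

    # 2. Normalise by controls: z-score the patient's asymmetry against
    #    the distribution of the same index in healthy controls,
    #    accounting for healthy regional variability.
    ctrl_asym = (controls_lh - controls_rh) / ((controls_lh + controls_rh) / 2)
    z = (asym - ctrl_asym.mean(axis=0)) / ctrl_asym.std(axis=0)
    return z
```

<p>After these steps, a vertex with a large |z| is unusually asymmetric relative to controls, rather than simply lying in a region that is naturally thick or thin.</p>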

<ul>
  <li>The pattern of FLAIR features is inconsistent across the 6 depths.
    <ul>
      <li>SAW explained that within the cortex FLAIR/T2 is relatively decreased, whereas within the white matter, the transmantle sign is an increase in FLAIR/T2 intensity.</li>
    </ul>
  </li>
  <li>Can/should sites update their patients’ histology reports if these become available?
    <ul>
      <li>Yes please! It is very straightforward to incorporate updated demographics files for sites if such changes occur. Please correct the csv file and send this to us.</li>
    </ul>
  </li>
</ul>

<h4 id="6-going-forward">6. Going forward</h4>

<ul>
  <li>The next teleconference will take place in 2 months’ time and will be scheduled for the same two time points (i.e., 8am and 5pm UK time)</li>
</ul>

<p><strong><em>Looking forward to the next meeting!</em></strong> :sunny:</p>]]></content><author><name>Konrad Adler-Wagstyl</name><email>konrad.wagstyl@gmail.com</email></author><category term="blog" /><summary type="html"><![CDATA[Minutes]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" /><media:content medium="image" url="https://meldproject.github.io//%7B%22feature%22=%3Enil%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>