Tech Stack for Deep Learning Explainability in Medical Imaging
Published:
Author: Yusuf Brima
Comprehensive Overview of the Tool Stack
Here is an overview of the tools and frameworks used across the workflow, from data preprocessing to model development and deployment, along with learning resources to help you get familiar with each tool.
1. Data Preprocessing
This involves handling raw MRI scans in DICOM/NIfTI format and preparing them for model training.
Tools Used:
- DICOM/NIfTI Processing: pydicom (DICOM) and nibabel (NIfTI) for reading and converting raw scans.
- Custom Preprocessing in InteractiveVis – the team's in-house tool ships with its own MRI preprocessing routines.
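A typical preprocessing step is intensity normalization plus cropping to a fixed input shape. The sketch below is illustrative only: in practice the volume would be loaded with nibabel (`nib.load(...).get_fdata()`), but here a synthetic array stands in so the example runs standalone; the target shape is an assumption, not the project's actual input size.

```python
import numpy as np

# In practice, loading would use nibabel:
#   import nibabel as nib
#   volume = nib.load("scan.nii.gz").get_fdata()
# A synthetic volume stands in here so the sketch is self-contained.
volume = np.random.default_rng(0).normal(100.0, 20.0, (64, 64, 64))

def preprocess(vol, target_shape=(32, 32, 32)):
    """Min-max normalize intensities to [0, 1] and center-crop to target_shape."""
    vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)
    crop = tuple(
        slice((s - t) // 2, (s - t) // 2 + t)
        for s, t in zip(vol.shape, target_shape)
    )
    return vol[crop]

x = preprocess(volume)
print(x.shape)
```

The same normalize-then-crop logic applies regardless of whether the source file was DICOM or NIfTI, which is why loading and preprocessing are kept as separate steps.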
Where to Learn:
- pydicom Documentation
- nibabel Documentation
2. Model Development & Interpretation
The main deep learning framework is TensorFlow 2.15, though PyTorch alternatives were also explored. The focus is on layer-wise relevance propagation (LRP) for model explainability.
Tools Used:
- TensorFlow 2.15 & iNNvestigate (for LRP-based explainability)
- iNNvestigate – A library of explainability methods for neural networks, tied to TensorFlow/Keras.
- AUCMEDI (for data generators)
- AUCMEDI – An open-source package for fast setup of medical image classification pipelines with state-of-the-art methods, either through an intuitive high-level Python API or as an AutoML deployment via Docker/CLI.
- PyTorch (alternative, but not yet fully integrated)
- Captum – PyTorch library for interpretability (Integrated Gradients, LRP, etc.), though it didn’t match iNNvestigate’s results.
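To make the LRP idea concrete, here is a minimal numpy sketch of the LRP-ε rule on a hypothetical two-layer ReLU network. The weights and input are illustrative only; in the actual workflow the analyzer would come from iNNvestigate (e.g. `innvestigate.create_analyzer("lrp.epsilon", model)`).

```python
import numpy as np

# Hypothetical two-layer ReLU network (weights are illustrative only).
W1 = np.linspace(-1.0, 1.0, 24).reshape(4, 6)   # input -> hidden
W2 = np.linspace(-1.0, 1.0, 12).reshape(6, 2)   # hidden -> logits

def lrp_epsilon(x, eps=1e-6):
    """Attribute the winning logit back to the inputs via the LRP-epsilon rule."""
    a1 = np.maximum(0.0, x @ W1)     # forward pass: hidden activations
    out = a1 @ W2                    # logits
    R = np.zeros_like(out)
    R[out.argmax()] = out.max()      # start relevance at the top logit
    # Redistribute relevance layer by layer, proportional to each unit's
    # contribution; eps stabilizes the division for small pre-activations.
    s2 = R / (out + eps * np.sign(out))
    R1 = a1 * (W2 @ s2)
    z1 = x @ W1
    s1 = R1 / (z1 + eps * np.sign(z1))
    R0 = x * (W1 @ s1)
    return R0, out.max()

x = np.array([0.5, -1.0, 0.25, 0.75])
relevance, logit = lrp_epsilon(x)
# Relevance is (approximately) conserved: it sums back to the explained logit.
print(relevance, float(relevance.sum()), float(logit))
```

The conservation property shown in the last line (per-input relevances summing back to the explained output) is the key invariant that distinguishes LRP from simple gradient saliency.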
Where to Learn:
- TensorFlow 2.15 Documentation
- iNNvestigate Documentation
- AUCMEDI GitHub
- Captum for PyTorch Interpretability
3. Visualization & Interactive Exploration
For visualizing model outputs and making insights interactive.
Tools Used:
- Bokeh (for the app UI, which still runs on the legacy TensorFlow 1.15)
- Bokeh – Interactive visualization library for web applications.
- Plotly (for OntoVis, ontology-based visualization)
- Plotly – Python library for interactive visualizations.
- PowerPoint (for summarizing results and explanations)
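For example, an LRP relevance map for one slice can be shown as a Plotly heatmap. Since Plotly figures are plain dicts under the hood, the sketch below builds the figure spec without importing Plotly (rendering would use `plotly.io.show(fig)`); the relevance data here is synthetic.

```python
import numpy as np

# Hypothetical 2-D relevance map (e.g. one axial MRI slice of LRP output).
relevance = np.random.default_rng(2).normal(size=(8, 8))

# Plotly figures are plain dicts under the hood, so the spec can be built
# without importing plotly; rendering would then be plotly.io.show(fig).
fig = {
    "data": [{
        "type": "heatmap",
        "z": relevance.tolist(),
        "colorscale": "RdBu",
        "zmid": 0,  # center the diverging colormap at zero relevance
    }],
    "layout": {"title": {"text": "LRP relevance (axial slice)"}},
}
print(fig["data"][0]["type"])
```

A diverging colormap centered at zero matters here: positive and negative relevance carry different meanings (evidence for vs. against the predicted class), so they should be visually distinct.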
Where to Learn:
- Bokeh Documentation
- Plotly Documentation
4. Deployment & Inference
The deployment workflow includes running inference on trained models with compatibility constraints between TensorFlow versions.
Challenges & Solutions:
- Versioning issues:
- TF2.15 models need to be manually translated to TF1.15 for inference in InteractiveVis.
- Inference Pipeline:
- Uses InteractiveVis, the new internal tool for handling MRI scans.
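One pragmatic way to hand a model across incompatible TensorFlow versions is to sidestep the SavedModel format entirely and transfer raw weight arrays. The sketch below is an assumption about how such a translation could look, not the project's actual procedure; the layer names are hypothetical.

```python
import numpy as np
import os
import tempfile

# Hypothetical weight hand-off: instead of converting model formats,
# export the TF2.15 layer weights as raw arrays and rebuild the graph
# in TF1.15. (In TF2 the arrays would come from model.get_weights();
# here they are synthetic so the sketch runs standalone.)
weights = {
    "conv1_kernel": np.ones((3, 3, 1, 8), dtype=np.float32),
    "conv1_bias": np.zeros(8, dtype=np.float32),
}

path = os.path.join(tempfile.mkdtemp(), "weights.npz")
np.savez(path, **weights)

# On the TF1.15 side, the arrays are loaded back and assigned to the
# re-declared variables (e.g. via model.set_weights or assign ops).
restored = dict(np.load(path))
print(all(np.array_equal(weights[k], restored[k]) for k in weights))
```

The catch is that the TF1.15 graph must be re-declared with exactly matching layer shapes and ordering, which is why the translation step is manual rather than automated.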
Where to Learn:
- TensorFlow Migration Guide (TF1 ↔ TF2 compatibility)
5. Other Resources
- Martin’s GitHub: https://github.com/martindyrba
- Experimental repo (Drafting phase): https://github.com/martindyrba/Experimental
- Videos of previous talks:
Summary of Key Areas to Strengthen
- Medical Imaging Preprocessing (DICOM/NIfTI with pydicom & nibabel)
- Model Development in TensorFlow 2.15 (with iNNvestigate & AUCMEDI)
- Model Interpretability (LRP, Captum for PyTorch)
- Visualization & UI (Bokeh, Plotly)
- Handling TF1.15/TF2.15 Compatibility Issues