Instructors: Prof. Dr. Nassir Navab, Dr. Shadi Albarqouni; Magda Paschali, Ashkan Khakzar
Announcements
- 17-07-2019: We would like to encourage you to send us a motivation e-mail with the subject "DLMA_Application" to dlma@mailnavab.informatik.tu-muenchen.de by the 24th of July 2019.
We will only evaluate e-mails that follow this template:
- Name:
- Master's program:
- Current semester:
- Related courses (if passed, mention the grade):
- Short motivation (max. 3 sentences; it should include related projects/publications/competitions/GitHub repositories):
- 05-07-2019: Preliminary meeting: Thursday, 18.07.2019 (13:00-14:00) in CAMP Seminar Room, 03.13.010.
- 29-06-2019: Website is up!
Introduction
- Deep Learning is growing tremendously in Computer Vision, and in Medical Imaging as well. Highly impactful journals in the medical imaging community, e.g. IEEE Transactions on Medical Imaging, have recently published special editions on Deep Learning [1]. The seminar will propose a list of recent scientific articles related to the main current research topics in deep learning for medical applications, together with some interesting papers from other communities.
Registration
- Interested students should attend the introductory meeting to enroll in the course.
- Students can only register themselves through the TUM Matching Platform if the maximum number of participants has not been reached (please pay attention to the deadlines).
- Maximum number of participants: 20.
Requirements
In this Master Seminar (formerly Hauptseminar), each student is asked to send three preferences from the paper list and is then assigned one paper. In order to successfully complete the seminar, participants have to fulfill the following requirements:
- Presentation: The selected paper is presented to the other participants (20 minutes presentation, 10 minutes questions). Use the CAMP templates for PowerPoint (camp-tum-jhu-slides.zip) or LaTeX (CAMP-latex-template).
- Blog Post: A blog post of 1000-2000 words (excluding references) should be submitted before the deadline.
- Attendance: Participants have to participate actively in all seminar sessions.
Students are required to attend every seminar presentation held during this course. Each presentation is followed by a discussion, and everyone is encouraged to participate actively. The blog post must include all references used and must be written completely in your own words; copying and pasting will not be tolerated. Both the blog post and the presentation have to be done in English.
You need to upload your presentation and blog post here. More details will be provided before the beginning of the semester.
Submission deadline: You have to submit both the presentation and the blog post within two weeks after your presentation session.
Schedule
Date | Session: Topic | Slides | Students |
---|---|---|---|
18.07.2019 (13-14) | Preliminary Meeting | Slides | |
Online | Paper Assignment | | |
24.10.2019 | No Class! | | |
31.10.2019 | Intro. to our DLMA Seminar | Guidelines | |
07.11.2019 | Presentation Session 1: Supervised Learning | | Ismail, Olefir |
14.11.2019 | Presentation Session 2: Self/Semi/Weakly Supervised Learning | | Benito, Burak, Richter |
21.11.2019 | Presentation Session 3: Interpretable ML | | Yupeng, Mirac, Acosta |
28.11.2019 | Presentation Session 4: Interpretable ML | | Abdelhamid, Berger, Elsharnoby |
05.12.2019 | Presentation Session 5: Misc. Topics: Domain Adaptation - Uncertainty | | Clement, Panarit |
12.12.2019 | Presentation Session 6: Misc. Topics: Meta Learning - Graph Convolutions | | Hongjia, Evren, Nasser |
19.12.2019 | Presentation Session 7: Spatio-Temporal Learning | | Benetti, Fok |
List of Topics and Material
The list of papers:
Topic | No | Title | Conference/Journal | Tutor | Student (Last name) | Link |
---|---|---|---|---|---|---|
Supervised Learning | 1 | Cardiac Phase Detection in Echocardiograms with Densely Gated Recurrent Neural Networks and Global Extrema Loss | TMI | Maria | Ismail | |
 | 2 | Fully Convolutional Architectures for Multiclass Segmentation in Chest Radiographs | TMI | Ashkan | Olefir | |
 | 3 | | CVPR | | | |
Self/Semi/Weakly Supervised Learning | 4 | Collaborative Learning of Semi-Supervised Segmentation and Classification for Medical Images | CVPR | Roger | Benito | |
 | 5 | Self-supervised learning for medical image analysis using image context restoration | MedIA | Magda | Burak | |
 | 6 | FickleNet: Weakly and Semi-Supervised Semantic Image Segmentation Using Stochastic Inference | CVPR | Tariq | Richter | |
Interpretable ML (Session 1) | 7 | Explainable cardiac pathology classification on cine MRI with motion characterization by semi-supervised learning of apparent flow | MedIA | Tariq | Yupeng | |
 | 8 | Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks | ICLR | SeongTae | Mirac | |
 | 9 | Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) | ICML | Mahdi | Acosta | |
Interpretable ML (Session 2) | 10 | Disentangled representation learning in cardiac image analysis | MedIA | Magda | Abdelhamid | |
 | 11 | | ICML | Ashkan | | |
 | 12 | This Looks Like That: Deep Learning for Interpretable Image Recognition | NeurIPS | SeongTae | Berger | |
 | 13 | Are Disentangled Representations Helpful for Abstract Visual Reasoning? | NeurIPS | Shadi | Elsharnoby | |
Misc. Topics: Domain Adaptation - Uncertainty | 14 | Unsupervised domain adaptation for medical imaging segmentation with self-ensembling | NeuroImage | Roger | Clement | |
 | 15 | Transfusion: Understanding Transfer Learning for Medical Imaging | NeurIPS | Shadi | Panarit | |
 | 16 | | CVPR | Shadi | | |
 | 17 | | NeurIPS | Shadi | | |
Misc. Topics: Meta Learning - Graph Convolutions | 18 | Learning to Learn How to Learn: Self-Adaptive Visual Navigation Using Meta-Learning | CVPR | Azade | Hongjia | |
 | 19 | Automatic multi-organ segmentation on abdominal CT with dense V-networks | TMI | | Evren | |
 | 20 | Exploiting Edge Features in Graph Neural Networks | CVPR | Hendrik | Nasser | |
Spatio-Temporal Learning | 21 | Prediction of Disease Progression in Multiple Sclerosis Patients using Deep Learning Analysis of MRI Data | MIDL | Ashkan | Benetti | |
 | 22 | Predicting Alzheimer's disease progression using multi-modal deep learning approach | Nature | Gerome | Fok | |
MICCAI: Medical Image Computing and Computer Assisted Intervention
CVPR: Conference on Computer Vision and Pattern Recognition
ICLR: International Conference on Learning Representations
TMI: IEEE Transactions on Medical Imaging
JBHI: IEEE Journal of Biomedical and Health Informatics
MedIA: Medical Image Analysis (Elsevier)
TPAMI: IEEE Transactions on Pattern Analysis and Machine Intelligence
BMVC: British Machine Vision Conference
MIDL: Medical Imaging with Deep Learning
NeurIPS: Neural Information Processing Systems
Literature and Helpful Links
Many scientific publications can be found online.
The following list may help you find further information on your particular topic:
- Microsoft Academic Search
- Google Scholar
- CiteSeer
- CiteULike
- Collection of Computer Science Bibliographies
Some publishers:
- ScienceDirect (Elsevier Journals)
- IEEE Journals
- ACM Digital Library
Libraries (online and offline):
- http://rzblx1.uni-regensburg.de/ezeit/ (Elektronische Zeitschriften Bibliothek, the Electronic Journals Library)
- Verbundkatalog des Bibliotheksverbundes Bayern (BVB) (union catalogue of the Bavarian Library Network)
- Computer ORG
- http://www.ub.tum.de/ (TUM Library)
- To get access onto the electronic library, see http://www.ub.tum.de/medien/ejournals/readme.html
- "proxy.biblio.tu-muenchen.de" mit Port 8080 (nur fuer http). Damit klappen zumindest portal.acm.org und computer.org meistens
- Various proceedings of conferences in our AR-Lab, 03.13.036 (These proceedings are not for lending!)
Some further hints for working with references:
- JabRef is a Java program for comfortably working with BibTeX literature databases. Handy feature: if you know the PubMed ID of an article, JabRef can import its data from there (via "Web Search/Medline").
- Mendeley is a cross-platform program for organising your references.
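If you have not worked with BibTeX before, the sketch below shows what a minimal entry in such a database might look like; the file name, citation key, and all field values are placeholders, not real bibliographic data, so replace them with the details of your assigned paper (JabRef and Mendeley can generate such entries automatically).

```bibtex
% references.bib -- hypothetical file name; the key and field values below are
% placeholders to illustrate the entry structure, not a real reference.
@article{author2019example,
  author  = {Lastname, Firstname and Coauthor, Firstname},
  title   = {Title of the Assigned Paper},
  journal = {IEEE Transactions on Medical Imaging},
  year    = {2019},
  volume  = {38},
  number  = {1},
  pages   = {1--10}
}
```

In a LaTeX document (e.g. the CAMP-latex-template mentioned above), such an entry can then be cited with \cite{author2019example} after including the database via \bibliography{references}.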
If you find useful resources that are not already listed here, please tell us, so we can add them for others. Thanks.