9-13 September 2019
Europe/Berlin timezone

The Dresden Deep Learning Hackathon ( #d3hack2019 ) brings together machine learning experts and scientific practitioners. Teams of 2-4 scientists can apply for the hackathon with a scientific problem they want to solve with machine learning. Upon approval, they will be assisted by one or two machine learning experts for five consecutive days! This effort is meant to give your team a head start and potentially produce an end-to-end machine learning solution for your science. Teams are encouraged to publish a scientific paper about their hackathon efforts at dedicated conferences or in established journals after the hackathon, ideally jointly with their mentors. A win-win situation for all parties involved.

The scope of scientific domains that can apply is not limited. That said, our mentors mostly have a background in 2D or 3D image data, so we will try to match applications to that expertise as closely as possible. We are still in the process of confirming mentors (we have expressions of interest from about 5 more than listed below). We will also consider a limited number of applications using classical machine learning (MLPs, SVMs, random forests, ...).
If you are unclear whether your topic fits the hackathon, please reach out to us.

Most importantly, any team without a readily available data set for training will be removed from the candidate list. In other words, if you are interested in applying machine learning to your data, you should not use the hackathon to annotate your data.

The workshop admission fee is €300 per participant to cover room rent and catering. We are still looking for sponsors, so there is a non-negligible probability that the fee will be reduced.

The call for applications closes on June 30, 2019, at 12pm AoE! We especially invite women and under-represented minorities to apply! After that date, a review board of mentors and organizers will judge the applications and send out confirmations to the applying teams by mid-July at the latest. The registration procedure for participants will be circulated then. Applications are welcome from members of any tier of the academic profession (student, TA, apprentice, PhD candidate/predoc, postdoc, PI, specialist, RSE, ...).

For members of non-academic institutions: We cannot accept applications from non-academic institutions or industry. If you want to participate with a project as a company, the project needs to be embedded in a scientific group and the majority of team members need to be employed by a scientific institution. In addition, the results of the hackathon are expected to be published, so be prepared to disclose your results and (if possible) the data and code which produced them.

During the workshop, we offer powerful compute resources at the TU Dresden as well as in the AWS cloud. Participants are therefore only required to bring their laptops. If you have security concerns regarding your data, you can also apply and work on your own hardware at your home institution while being mentored in Dresden. Contact us for details and recipes.



Zellescher Weg 18 01069 Dresden Germany

Our confirmed Mentors so far:

  • Walter de Back (TU Dresden, Universitätsklinikum Dresden)

    Walter de Back has an MSc in Artificial Intelligence from Utrecht University (NL) and worked as a Junior Fellow at the Collegium Budapest Institute for Advanced Study. He studied pattern formation in tissues using multi-scale agent-based simulations, for which he obtained a PhD in Computational Biology from TU Dresden. Currently, Walter works as a postdoc data scientist at the Faculty of Medicine (TU Dresden), where he uses deep learning for biomedical image analysis. Recent projects include cell segmentation in live cell microscopy, dental age estimation from panoramic radiographs, and tumor tissue classification based on mass spectrometry imaging data.
  • Marc Busse (Carl Zeiss AG)

    Marc Busse received his PhD and diploma from the Arnold Sommerfeld Center for Theoretical Physics in Munich, where he studied the transition from quantum to classical physics. He is currently a data scientist at Zeiss Digital Innovation Partners, where he builds image analysis pipelines for Zeiss microscopy customers using APEER. In doing so, he regularly uses deep learning techniques such as object detection and segmentation. Before joining Zeiss, Marc worked for over eight years for financial institutions such as Munich Re, where he learned to extract information from data by means of statistics and machine learning. Marc is also a qualified actuary and a lecturer for actuarial data science at the Deutsche Aktuar Akademie (DAA), where he teaches machine learning and computer science classes.

  • Amir Farbin (CERN, University of Texas Arlington)

    Amir Farbin received his BS in physics from the Massachusetts Institute of Technology (MIT) and his PhD in physics from the University of Maryland, College Park measuring subtle differences between matter and anti-matter (time-dependent CP Violation in rare decays) on the BaBar experiment at Stanford Linear Accelerator Center (SLAC). He has been working on the ATLAS experiment at CERN's Large Hadron Collider since 2004 in a variety of areas, including Hadronic Calorimetry, Trigger, Supersymmetry Searches, and Software. He also served as the deputy computing coordinator for the DUNE experiment at Fermilab. He is currently an Associate Professor of Physics at the University of Texas Arlington and convener of the ATLAS Machine Learning Forum. In the past few years, he has been focusing on applications of Deep Learning to High Energy Physics, lecturing on this topic at a variety of summer schools. He is currently helping develop a Data Science undergraduate degree program at his home institution and teaching Data Science courses. And he is the co-founder of a Deep Learning startup.  

  • Oliver Guhr (HTW Dresden)

    Oliver received a bachelor's degree in computer science and business studies in 2014 (HfT Leipzig) and a master's degree in computer science in 2018 (HTW Dresden). From 2007 to 2018 he worked as a software engineer in the information and communication technology sector. In 2018 he became a research fellow at the HTW Dresden in the department of artificial intelligence. His research focuses on Spoken Dialogue Systems, Machine Learning, and Natural Language Processing. He also teaches the Natural Language Processing part of the Deep Learning course at HTW Dresden.

  • Jenia Jitsev (Juelich Supercomputing Center, Helmholtz Research Center Juelich)

    Jenia leads the Cross-Sectional Team Deep Learning (CST-DL) at the Jülich Supercomputing Center (JSC) - a research group focused on large-scale continual unsupervised and reinforcement learning, built within the framework of the Helmholtz Artificial Intelligence Cooperation Unit (HAICU). His work takes place at the intersection of computational neuroscience and machine learning, focusing on plasticity and learning in deep artificial and biological neural networks. From a high performance computing (HPC) perspective, his interest is in scaling up learning in deep neural networks across multiple GPUs or other accelerators and enabling robust maintenance of continual learning systems running on HPC facilities over long periods (months to years) of time. Before joining JSC, Jenia studied computer science and psychology at the University of Bonn and the University of Bochum, and obtained his PhD in Computer Science from the University of Frankfurt working on models of synaptic plasticity and unsupervised learning in the visual cortex. Later, he was with the Max Planck Institute for Neurological Research in Cologne and at the Institute for Neuroscience and Medicine at Forschungszentrum Jülich, working on reinforcement learning and models of reward-based learning in cortico-basal ganglia circuits in the brain. The long-term goal of his research is to enable large-scale continual, multi-task, active self-supervised learning that is capable of growing generic models from incoming streams of data, extracting knowledge and skills transferable across different domains.

  • Jeffrey Kelling (Helmholtz-Zentrum Dresden-Rossendorf)

    Jeffrey Kelling obtained his diploma in statistical physics and his Ph.D. in physics on massively parallel lattice Monte-Carlo simulations on GPUs. He is a scientist in the computational science group at the Helmholtz-Zentrum Dresden-Rossendorf, concerned with high performance computing and deep learning applications in science as well as giving courses to teach related skills to fellow scientists.

  • Jonny Hancox (Nvidia)

    Jonny is a Deep Learning Senior Solutions Architect at NVIDIA and works in the Health & Life Sciences team in EMEA. His work is focused on enabling biologists, researchers and clinicians to get the most value from their data using the latest hardware and software tools. Jonny's current focus areas are computational pathology, radiology and genomics. Before joining NVIDIA in 2018, Jonny spent four years working in a similar role at Intel, based at Imperial College London so that the team could work closely with the medical and computer science groups there. Before that, Jonny was Technical Director for a software company creating automated data capture applications for the UK National Health Service and public sector. Although originally trained as a product design engineer, most of Jonny's career has been spent writing code - something he strives to maintain.

  • Nico Hoffmann (Helmholtz-Zentrum Dresden-Rossendorf)

    Nico Hoffmann earned his PhD (Dr. rer. nat.) in December 2016 from Technische Universität Dresden (Dresden, Germany) in medical image analysis. He developed statistical machine learning methods that compensate for motion artefacts, as well as advanced semiparametric regression models and neural networks to recognize evoked neuronal activity of the human brain. Additionally, he and his students developed novel approaches for multimodal 2D-3D image registration and fusion. Nico visited the Laboratory of Mathematics in Imaging at Harvard University from 2018 to 2019. During that time, he advanced the reconstruction of the human brain's nerve fibre bundles using recurrent convolutional neural networks. Recently, Nico joined the Computational Radiation Physics group of Helmholtz-Zentrum Dresden-Rossendorf. His work mainly focuses on the design of deep neural networks that approximate forward simulations as well as parameter estimation (inverse problems) of complex physical systems.

  • Thomas Neumann (freelance R&D software developer & contractor) 

    Thomas Neumann specializes in 3D scan data processing, where he employs techniques from machine learning, nonrigid registration, statistical modeling, visualization and nonlinear optimization. He also teaches about convolutional neural networks at HTW Dresden. There, he was previously a postdoc working on the reconstruction and analysis of human motion in the junior research group "TISRA". He defended his PhD in 2016 at the Institute for Computer Graphics at TU Braunschweig on "Reconstruction, Analysis, and Editing of dynamically deforming 3D-Surfaces".

  • Mangal Prakash (Max Planck Institute of Molecular Cell Biology and Genetics)

    Mangal Prakash received his BTech degree in Electrical Engineering from NIT Durgapur, India, and his MS degree in Electrical Engineering from the University of Minnesota, USA. He is currently pursuing a PhD in Computer Science at MPI-CBG/CSBD in the lab of Dr. Florian Jug, working on developing computer vision techniques and algorithms for cell segmentation and tracking. His research interests include machine learning and computer vision.

  • Nico Scherf (Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Institute for Medical Informatics and Biometry Dresden)

    Nico Scherf is a senior research fellow at the Department of Neurophysics at the MPI-CBS and a group leader at the Institute for Medical Informatics and Biometry (IMB) at TU Dresden. He has a diploma in Informatics (Image Analysis / Artificial Intelligence) from the University of Leipzig and a PhD in Medical Biometry and Bioinformatics from the TU Dresden. His research focuses on advanced methods to extract, quantify and visualise the multi-scale processes underlying structure formation (morphogenesis) in biological systems from multi-dimensional biomedical image data. He uses deep learning approaches for computational imaging (image translation) and object segmentation in microscopy and MRI data. He focuses on deep autoencoders for manifold learning, for weakly supervised learning and mapping of complex biological structures.

  • Uwe Schmidt (Max Planck Institute of Molecular Cell Biology and Genetics)

    Uwe Schmidt received the MSc and PhD degrees in computer science from TU Darmstadt, Germany. He has been a visiting graduate student at the University of British Columbia in Vancouver, Canada. Uwe is currently a postdoc at MPI-CBG in Dresden, Germany. His research interests include machine learning and computer vision.

  • Steffen Seitz (TU Dresden, Sonotec GmbH)

    Steffen obtained a diploma in electrical engineering from Technische Universität Dresden (TUD) in April 2016. Since then he has been working towards his PhD at the intersection of machine learning and predictive maintenance at the fundamentals of electronics chair at TUD. He develops unsupervised machine learning algorithms, such as RNN-based autoencoders for representation learning, to disentangle wear in machinery elements for industrial applications. He is also a co-founder of the Machine Learning Community Dresden (MLC), which connects regional scientists working on artificial intelligence topics.

  • Sebastian Starke (Helmholtz-Zentrum Dresden-Rossendorf)

    Sebastian received his bachelor's degree in mathematics in 2013 and his master's degree in statistics in 2015 from the Otto-von-Guericke University in Magdeburg. Afterwards he worked as an algorithm engineer in the field of speech recognition before joining the computational science group at HZDR in October 2016. At the moment he is working together with OncoRay scientists to apply deep learning methods to CT images of cancer patients to improve personalized treatment.

  • Kira Vinogradova (Max Planck Institute of Molecular Cell Biology and Genetics)

    Kira obtained her bachelor’s degree in Applied Mathematics and Physics from the Moscow Institute of Physics and Technology. She has been a summer intern at IST Austria and the University of Heidelberg, and has also worked part-time at the Kurchatov Institute and at Samsung Research Institute Russia. Currently, Kira is working on the interpretation of convolutional neural networks in the group of Prof. Gene Myers as a PhD student. Her research mainly focuses on explainable AI, image classification and segmentation.

  • Martin Weigert (Max Planck Institute of Molecular Cell Biology and Genetics)

    Martin Weigert holds a Diploma in Physics from Technische Universität Dresden. He is currently wrapping up his PhD in the group of Gene Myers at MPI-CBG in Dresden, where he investigates computational methods for advanced fluorescence microscopy. Among his interests are computational optics, physical simulations and visualizations, and machine learning methods for image reconstruction.
