In this session, I am going to present and discuss results we obtained at Deutsche Telekom by employing one of the most sophisticated Automated Machine Learning (AutoML) tools currently available for on-premise use. Having benchmarked the tool on several application domains against human Data Scientists as well as other AutoML tools, I will put our findings into perspective with the Data Mining life cycle and show where AutoML tools actually provide substantial support – but also where they fall short of expectations and high hopes. Finally, I will conclude with an outlook on the role of Data Scientists and the future relevance of Automated Machine Learning.
For successful and interactive participation, you will need: a laptop, a Docker installation (incl. Docker Compose), a Git client, and an open mind 🙂
This talk presents an overview of techniques that can make "black box" machine learning models transparent and demonstrates how they can be applied to Credit Scoring. We use the DALEX set of tools to compare a traditional scoring approach with state-of-the-art Machine Learning models, and assess both approaches in terms of interpretability and predictive power. Our results show that a comparable degree of interpretability can be achieved while machine learning techniques retain their ability to improve predictive power.
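To illustrate the core idea of model-agnostic interpretability, here is a minimal sketch that compares an interpretable scorecard-like model with a black-box model using one shared explanation method. It uses scikit-learn's permutation importance as a stand-in for the DALEX toolkit discussed in the talk, and synthetic data in place of real credit data; both substitutions are assumptions for illustration only.

```python
# Sketch: explain a simple model and a black-box model with the SAME
# model-agnostic method, so their feature attributions are comparable.
# Uses scikit-learn's permutation importance (a stand-in for DALEX)
# and synthetic data (a stand-in for real credit-scoring data).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic regression (scorecard-like)": LogisticRegression(max_iter=1000),
    "gradient boosting (black box)": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # Predictive power: AUC on held-out data.
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    # Interpretability: permutation importance works for any fitted model,
    # so both approaches can be inspected with the same lens.
    imp = permutation_importance(model, X_te, y_te, n_repeats=5, random_state=0)
    print(f"{name}: AUC={auc:.3f}, most important feature index={imp.importances_mean.argmax()}")
```

The point of the sketch is the shared explanation method: once both models are explained the same way, the interpretability/accuracy trade-off can be assessed side by side.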
The goal of this talk is to provide an intuitive understanding of how the building blocks of the Transformer architecture work, using use-case scenarios, and to explain their advantages and disadvantages compared to RNNs.
One popular method for quickly finding objects in images is YOLO ("you only look once"). After a brief introduction to image recognition with neural networks, we will put it into practice. We have prepared code snippets for you that make getting started with YOLO as easy as possible. We will give an introduction to using these snippets, and then the floor will be yours.
During the session, we would like you to form small groups, brainstorm interesting use cases, and then start prototyping them. To fuel your imagination, we are bringing a unique data source.
The minimum requirement for participating in the session is a laptop and access to Google Colab (i.e. a Google account). If you like, you can also try to set up the project dependencies on your own laptop, ideally a Linux machine with Python 3.x, an environment manager (e.g. Anaconda/Miniconda), and Git installed.
We try to answer these questions, provide real-world examples, and share our key learnings from leveraging data science in logistics at scale.