Planning and organising last-mile delivery poses many challenges. Traditional planning based
purely on mathematical optimization often misses important real-life aspects and thus does not
satisfy relevant operational requirements. Experienced couriers have tacit knowledge about the
delivery area and its customers, enabling them to choose more efficient routes than the originally
planned ones. This in turn renders predictions of arrival times very imprecise, because those
predictions can only be based on the planned routes. This tacit knowledge is almost impossible to
collect and maintain, let alone to incorporate into optimization and prediction algorithms. Thus,
we at Deutsche Post DHL developed a novel, more holistic approach. We implicitly learn this tacit
knowledge from historical tours and combine it with optimization algorithms to plan routes that an
experienced courier would choose. Based on these routes, delivery times are predicted with machine
learning models trained on a large number of past delivery events. In this talk we will present
details of our algorithm, which combines machine learning, statistics, and optimization in a novel
way. Furthermore, we show how it impacted last-mile delivery planning at Deutsche Post after its
rollout across Germany.
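The abstract itself contains no code; purely as a rough illustration of the prediction side, the
sketch below trains a regression model on past delivery events. The file name and feature columns
are invented for the example and are not Deutsche Post DHL's actual data model.

    # Minimal sketch, not the production system: predict per-stop delivery times
    # from historical events. File name and columns are assumptions.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    events = pd.read_csv("historical_delivery_events.csv")   # hypothetical export
    features = ["stop_sequence", "distance_from_prev_stop_m",
                "hour_of_day", "weekday", "parcel_count"]     # assumed columns
    X_train, X_test, y_train, y_test = train_test_split(
        events[features], events["seconds_to_deliver"], test_size=0.2)

    model = GradientBoostingRegressor()
    model.fit(X_train, y_train)
    print("mean absolute error (s):", abs(model.predict(X_test) - y_test).mean())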
Where: Basecamp Bonn
10:40 – 11:10
Networking // Coffee Break #1
11:10 – 11:55
Talk 2: Power-charge your Search with Siamese Graph Neural Networks
Vectorization algorithms like TF-IDF enable powerful similarity
comparisons, searches, and recommendations on texts. However, relations between documents are
often latent factors that this approach cannot capture. Using a practical example, I will showcase
the use of Siamese Graph Convolutional Networks and invite you to take a peek at what makes such
networks worth implementing.
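For context, the kind of TF-IDF similarity the talk starts from can be reproduced in a few lines;
the toy documents below are illustrative only.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "Graph neural networks learn from relations between nodes.",
        "TF-IDF turns documents into sparse term-weight vectors.",
        "Relations between documents can be modelled as a graph.",
    ]
    vectors = TfidfVectorizer().fit_transform(docs)   # one sparse vector per document
    print(cosine_similarity(vectors))                 # pairwise similarity matrix

Relations between documents (links, citations, shared authors) appear nowhere in these vectors,
which is exactly the gap the graph-based approach addresses.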
Where: Basecamp Bonn
12:05 – 12:50
Talk 3: Democratizing Data And Fostering Robust Machine Learning Systems with Data Mesh and
MLOps
We at REWE digital have successfully managed the transition from a monolithic legacy system to a
distributed, cloud-native software architecture. However, this transformation has brought new
challenges, such as distributed data sources, which slowed down our machine learning teams because
the data they required was harder to access. To cope, we have begun to adopt the philosophy of the
Data Mesh, whereby teams treat their data as a product and publish it in analytical databases
(e.g., Google BigQuery) for consumption by other teams. Moving towards a Data Mesh architecture
eventually enabled us to design, explore, and develop machine learning systems faster. Yet the
development of such machine learning systems introduces further engineering challenges,
particularly due to uncertainties concerning the data (e.g., changing data) and the model (e.g.,
model behavior is not deterministic and may change over time). Thus, we have also adopted MLOps
practices to deliver robust and reliable machine learning systems. In this presentation, we will
explore the adoption of the Data Mesh using Google Dataflow and BigQuery, the implementation of
MLOps practices with Google Vertex AI, and a case study from our last-mile delivery domain.
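As a minimal sketch of the consumption side of such a data product (project, dataset, and table
names below are invented, and the call assumes Google Cloud credentials are configured):

    from google.cloud import bigquery

    client = bigquery.Client(project="example-project")    # hypothetical project id
    query = """
        SELECT order_id, delivery_window, region
        FROM `example-project.logistics_data_product.deliveries`
        WHERE delivery_date = CURRENT_DATE()
    """
    rows = client.query(query).result()   # runs the query and waits for completion
    df = rows.to_dataframe()              # requires the optional pandas/db-dtypes extras
    print(df.head())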
Where: Basecamp Bonn
12:50 – 14:00
Lunch
14:00 – 14:45
Talk 4: Knowledge Discovery in Complex Terms and Conditions
Contract documents and policy wordings in the insurance context are sometimes more than 50 pages
long and contain hundreds of complex clauses. I will describe several current projects around
knowledge discovery in such documents, from digitization and metadata enrichment to AI-assisted
semantic comparison. Choosing the right technologies is important, but regular exchange with the
business departments is just as indispensable.
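One possible building block for such an AI-assisted semantic comparison, shown purely as an
illustration (the clauses and the choice of a public multilingual embedding model are assumptions,
not the speaker's actual setup):

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # example model
    clause_a = "Der Versicherungsschutz erstreckt sich auf Schäden durch Leitungswasser."
    clause_b = "Versichert sind Schäden, die durch austretendes Leitungswasser entstehen."
    embeddings = model.encode([clause_a, clause_b])
    print("semantic similarity:", util.cos_sim(embeddings[0], embeddings[1]).item())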
[Participation is limited to 20 people]
More and more applications analyse videos with the help of computer vision models. From simple
detection of objects and anomalies to the assessment of complex movements, these models offer
great economic potential. In this hack session we want to get hands-on together with you and apply
various image-analysis methods and models, using your webcam, a few everyday objects, and your own
body to control a computer game. Along the way you will familiarise yourself with the basics of
computer vision models in a playful manner. All you need is a laptop with a camera, an installed
Python 3 environment, and an IDE of your choice. No prior knowledge is required to take part,
although a basic grasp of Python certainly does not hurt.
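A minimal webcam starting point in the spirit of the session (simple frame differencing with
OpenCV, not the session's actual game code):

    import cv2

    cap = cv2.VideoCapture(0)                    # default webcam
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            motion = cv2.absdiff(gray, prev)     # crude motion signal between frames
            cv2.imshow("motion", motion)
        prev = gray
        if cv2.waitKey(1) & 0xFF == ord("q"):    # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()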
3 options:
Prio 1: Failures from mobile-network data – fail drops | a case with an extremely large amount of
data | analysis of a compressed pipeline, diversification, clustering & image data | key question:
where are the problems in the network?
Very exciting with respect to data science methods and architecture
Prio 2: Predictive maintenance in DT's fixed-line network (fibre or copper cables, different types
of damage and fault cases)
Prio 3: Use of KNIME (data citizens, upscaling, where would KNIME provide an advantage; also
welcome as a BarCamp session)
Where: Basecamp Bonn
15:40 – 16:00
Networking // Coffee Break #2
16:00 – 16:45
Talk 6: Extraction of Entities from Incoming Documents
Deep Learning is nowadays the standard tool for classification tasks and is used not only for
differentiating cats and dogs but also for applications in industry and private life (e.g.
insurance document input management, autonomous driving, molecule folding, …).
For tasks beyond classification, more layers of information are required: named entities.
Named entities in insurance documents are usually IBANs, addresses, customer numbers, specific
dates, amounts, etc. Successful extraction of named entities enables, for instance, more precise
classification and automated document processing, such as distinguishing between the company's
address, the customer's address, and the address of the local company outlet.
In this presentation, we discuss some promising approaches we developed for and inside ERGO to
extract those named entities.
Furthermore, we elaborate on the challenges (not only at primary insurers) of generating a labeled
data set, training scalable models, and assessing the corresponding model performance.
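To illustrate the mechanics of named-entity extraction (not ERGO's internal models), here is a
sketch using a public Hugging Face pipeline; a generic NER model like this one tags persons,
organisations, and locations and would still need domain-specific training for IBANs or customer
numbers:

    from transformers import pipeline

    ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")
    text = ("Please transfer the amount to our account and send the confirmation "
            "to ACME Insurance, Hauptstrasse 1, Bonn, attention of Ms. Maria Schmidt.")
    for entity in ner(text):
        print(entity["entity_group"], entity["word"], round(float(entity["score"]), 2))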
Where: Basecamp Bonn
17:00 – 17:45
Keynote: So, you learned Python – Now you’re a Data Scientist, right?
For 30 years the field of Data Science has been undergoing changes in its definitions, tasks, and
self-image. Where do we stand now? How do Data Scientists see themselves after they have left
university and entered industry? Not a structured but a knowledgeable view of the current
situation, with insights and outlooks on what Data Science is, might be, and can achieve.
In this presentation, I will showcase the application of generative AI
in automating financial reporting at BASF Coatings. The focus will be on a specific use case
where we successfully automated a traditional Controlling workflow by utilizing OpenAI’s GPT
large language model. I will highlight how intelligent design plays a crucial role in guiding
the model output, resulting in reliable, efficient, and secure performance. Attendees will
gain valuable insights into the advantages of automation in the workplace and how AI can
optimize operations.
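Purely as an illustration of the kind of LLM call involved (the prompt, figures, and model choice
below are invented; the actual BASF Coatings workflow is not described in this abstract):

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    figures = {"revenue_eur_m": 412.3, "plan_eur_m": 398.0, "delta_pct": 3.6}  # made-up numbers
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model choice
        messages=[
            {"role": "system", "content": "You write concise financial variance commentary."},
            {"role": "user", "content": f"Summarise this monthly result versus plan: {figures}"},
        ],
    )
    print(response.choices[0].message.content)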
11:10 – 11:55
Talk 2: One Code for Every Case: Automotive Supplier Plock Runs Quality Assurance on Micro Parts
Measuring 2×2 Millimetres
By Timo Klerx, Founder and Data Scientist, paiqo GmbH
This talk addresses the question of how moral requirements can be taken into account in the agile
Scrum process model. This is made possible by enriching and incorporating elements and methods
from UX design and discourse ethics, which help the project stakeholders to define and implement
moral requirements from a user perspective. The presented approach is illustrated with use cases
from a real development project, the implementation of an AI-based learning diary for teenagers.