This article addresses the question of how moral requirements can be taken into account in the agile Scrum process model. This becomes possible by enriching Scrum with elements and methods from UX design and discourse ethics, which help project stakeholders define and implement moral requirements from a user perspective. The presented approach is illustrated with use cases from a real development project: the implementation of an AI-based learning diary for teenagers.
In this presentation, I will showcase the application of generative AI in automating financial reporting at BASF Coatings. The focus will be on a specific use case in which we successfully automated a traditional Controlling workflow using OpenAI’s GPT large language model. I will highlight how intelligent design plays a crucial role in guiding the model output, resulting in reliable, efficient, and secure performance. Attendees will gain valuable insights into the advantages of automation in the workplace and how AI can optimize operations.
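As a rough illustration of guiding a large language model toward structured, auditable output, here is a minimal sketch only; the actual BASF Coatings workflow, model configuration, prompts, and report fields are not part of this abstract and are assumed here for illustration.

```python
# Minimal sketch: asking an OpenAI model to turn raw controlling figures
# into a fixed JSON structure. Model name, prompt, and report fields are
# illustrative assumptions, not the actual BASF Coatings setup.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_figures = "Cost center 4711: plan 1.20 M EUR, actual 1.35 M EUR, prior year 1.10 M EUR."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a controlling assistant. Answer only with JSON "
                    "containing the keys: cost_center, plan_eur, actual_eur, "
                    "deviation_eur, commentary."},
        {"role": "user", "content": raw_figures},
    ],
    response_format={"type": "json_object"},  # constrain output to valid JSON
    temperature=0,                            # keep the report deterministic
)

report = json.loads(response.choices[0].message.content)
print(report["deviation_eur"], report["commentary"])
```

Constraining the output format and fixing the temperature are two examples of the kind of design choices that make such a workflow dependable enough for reporting purposes.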
Deep learning is nowadays the standard tool for classification tasks; it is used not only to tell cats from dogs but also in industrial and everyday applications (e.g., insurance document input management, autonomous driving, molecule folding, …).
For tasks beyond classification, an additional layer of information is required: named entities.
Typical named entities in insurance documents are IBANs, addresses, customer numbers, specific dates and amounts, etc. Successfully extracting them enables more precise classification and automated document processing, for instance distinguishing between the company's address, the customer's address, and the address of a local company outlet.
In this presentation, we discuss some promising approaches we developed for and within ERGO to extract these named entities.
Further, we elaborate on the challenges that arise (not only at primary insurers) when generating a labeled data set and training scalable models, and we discuss the corresponding model performance.
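For readers unfamiliar with named-entity extraction, a minimal sketch of the general technique follows, using an off-the-shelf Hugging Face pipeline and an invented example sentence; ERGO's in-house models and training data are not described here.

```python
# Minimal sketch of named-entity extraction with an off-the-shelf model.
# The model choice and the example text are illustrative assumptions; they
# do not represent ERGO's in-house approach described in the talk.
from transformers import pipeline

ner = pipeline(
    "ner",
    model="dslim/bert-base-NER",    # generic English NER model (PER/ORG/LOC/MISC)
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

text = "John Doe from Example Insurance in Berlin reported a claim on 12 May."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 2))
```

Domain-specific entities such as IBANs or customer numbers typically require custom-trained models or additional rules, which is part of what the labeled-data challenge mentioned above refers to.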
3 options:
Prio 1: Failures from mobile network data – fail drops | A case with an extremely large amount of data | Analysis of a compressed pipeline, diversification, clustering & image data | Key question: where are the problems in the network?
Very interesting with regard to data science methods and architecture
Prio 2: Predictive maintenance in DT's fixed-line network (fiber-optic or copper cable, various damage and problem cases)
Prio 3: Use of Knime (data citizens, upscaling, where would Knime provide an advantage; could also be a BarCamp session)
Contracts and terms-and-conditions documents in the insurance context are sometimes more than 50 pages long and contain hundreds of complex clauses. I will describe several current projects around knowledge discovery in such documents, from digitization and metadata enrichment to AI-supported semantic comparison. Here, not only is choosing the right technologies important, but regular exchange with the business departments is also indispensable.
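As a rough sketch of what AI-supported semantic comparison of clauses can look like, the following uses sentence embeddings and cosine similarity; the model and the two example clauses are purely illustrative and not taken from the projects mentioned above.

```python
# Minimal sketch: comparing two contract clauses by embedding similarity.
# Model choice and example clauses are illustrative assumptions only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # handles German and English

clause_a = "The policyholder must report any damage within seven days."
clause_b = "Claims have to be notified to the insurer no later than one week after the damage occurred."

embeddings = model.encode([clause_a, clause_b], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

print(f"Semantic similarity: {similarity:.2f}")  # high values suggest equivalent clauses
```

In practice, such similarity scores are only one building block; clause segmentation and the enriched metadata mentioned above determine which pairs are worth comparing at all.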
We at REWE digital have successfully managed the transition from a monolithic legacy system to a distributed, cloud-native software architecture. However, this transformation has brought new challenges, such as distributed data sources, which slowed down our machine learning teams because the data they required was harder to access. To cope, we have begun to adopt the philosophy of the Data Mesh, whereby teams treat their data as a product and publish it in analytical databases (e.g., Google BigQuery) for consumption by other teams. Moving towards a Data Mesh architecture has enabled us to design, explore, and develop machine learning systems faster. Yet the development of such machine learning systems introduces other inherent engineering challenges, particularly due to uncertainties concerning the data (e.g., changing data) and concerning the model (e.g., model behavior is not deterministic and may change over time). We have therefore also adopted MLOps practices to deliver robust and reliable machine learning systems. In this presentation, we will explore the adoption of the Data Mesh using Google DataFlow and BigQuery and the implementation of MLOps practices with Google Vertex AI, and we will demonstrate a related case study from our last-mile delivery domain.
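To make the data-as-a-product idea concrete, here is a minimal sketch of how a consuming team might read another team's published data product from BigQuery; the project, dataset, and table names are hypothetical placeholders and not REWE digital's actual schema.

```python
# Minimal sketch: consuming a data product published in BigQuery.
# Project, dataset, and table names are hypothetical placeholders;
# they do not correspond to REWE digital's actual data products.
from google.cloud import bigquery

client = bigquery.Client(project="analytics-consumer-project")

query = """
    SELECT delivery_id, promised_at, delivered_at
    FROM `data-product-project.last_mile.deliveries`
    WHERE DATE(delivered_at) = CURRENT_DATE()
"""

# The producing team owns and documents the table; consumers only query it.
rows = client.query(query).result()
for row in rows:
    print(row.delivery_id, row.delivered_at)
```

The appeal of this setup is that the consuming team never touches the producer's operational systems: the published table is the contract, which is what shortened our path from idea to working machine learning prototype.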