Manufacturing companies face the challenge of managing vast amounts of unstructured data generated by sources such as social media, customer feedback, product reviews, and supplier data. Text mining, a branch of data mining and natural language processing, provides a means of extracting valuable insights from unstructured data, enabling manufacturing companies to make informed decisions and improve their processes. Despite these potential benefits, many manufacturing companies struggle to implement text-mining use cases for a variety of reasons. The project VoBAKI (IGF project no. 22009 N) therefore aims to enable manufacturing companies to identify and implement text-mining use cases in their production and decision-making processes. The paper presents an analysis of text-mining use cases in manufacturing companies based on Mayring's content analysis and case study research. The study explores how text-mining technology can be used effectively to improve production processes and decision-making in manufacturing companies.
With the development of publicly accessible broker systems over the last decade, the complexity of data-driven ecosystems is expected to become manageable for self-managed digitalisation. Having identified event-driven IT architectures as a suitable solution to the architectural requirements of Industry 4.0, the producing industry is now offered a relevant alternative to prominent third-party ecosystems. Although the technical components are readily available, the realisation of an event-driven IT architecture in production is often hindered by a lack of reference projects and, hence, uncertainty about its success and risks. The research institute FIR and the IT specialist synyx are therefore developing an event-driven IT architecture in the producing factory of the Center Smart Logistics, which is designed to be a multi-agent testbed for members of the cluster. Drawing on experience gained in industrial projects, a target IT architecture was conceptualised that proposes a solution for a self-managed data ecosystem based on open-source technologies. Through the iterative integration of factory-relevant Industry 4.0 use cases, the target architecture is continuously realised and validated. The paper presents the developed solution for a self-managed event-driven IT architecture and discusses the implications of the design decisions made. Furthermore, the progress of two use cases, an IT-OT integration and a smart-product demonstrator for the research project BlueSAM, is presented to highlight the iterative technical implementability and the merits enabled by the architecture.
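The decoupling that an event-driven architecture provides can be sketched with a minimal in-memory publish/subscribe broker. This is an illustrative sketch only: the project relies on open-source broker technology, and the topic name and payload used here are assumptions, not details from the paper.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class Broker:
    """Minimal in-memory stand-in for an event broker (message bus)."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Producers and consumers never reference each other directly:
        # the broker routes events by topic, keeping IT and OT
        # components loosely coupled.
        for handler in self._subscribers[topic]:
            handler(event)

# Illustrative wiring: an OT-level sensor publishes a status event,
# and two independent IT services consume it without knowing the source.
broker = Broker()
received = []
broker.subscribe("machine.status", lambda e: received.append(("mes", e)))
broker.subscribe("machine.status", lambda e: received.append(("dashboard", e)))
broker.publish("machine.status", {"machine": "press-01", "state": "running"})
```

Because new consumers only subscribe to topics, additional use cases can be integrated iteratively without modifying existing producers, which is the property the testbed exploits.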
Assets of integrated production systems, especially in heavy industry, face high requirements in terms of reliability and availability. In the case of a component breakdown, the operating firm is confronted with high costs due to downtime and loss of production. Modern maintenance concepts combined with advanced technologies can help to improve plant availability and reduce the downtime costs caused by unplanned breakdowns. Against this background, the research institutes FIR and IMR of RWTH Aachen University, Germany, are collaborating in the research project “SiZu”. The project deals with the integration of condition monitoring systems and real-time simulation to assess the condition of components and to support failure-cause analysis.
Driven by increasingly complex value-creation networks, event-based systems are used more and more for decision support. One example of such a system category is supply chain event management, which aims to enable the best possible reaction to critical exceptional events on the basis of event data. Its central element is the event, which forms the information basis for mapping and matching the process flows in event-based systems. However, since data quality is insufficient in numerous application cases and the identification of incorrect data in supply chain event management has so far received little attention in the literature, this paper deals with the theoretical derivation of the data attributes necessary for identifying incorrect event data. In particular, the error types that require complex identification strategies are considered. Accordingly, the relevant existing error types of event data are specified as subtypes in this paper. Subsequently, the information required for identification and the information actually available are compared by means of a GAP analysis. From this gap, the necessary data attributes can then be derived. Finally, an approach is presented that enables the generation of a complete data set, which serves as a basis for recognising and filtering out erroneous events, as opposed to standard and exception events.
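The distinction between standard, exception, and erroneous events can be illustrated with a minimal sketch. The attribute names and the deviation rule below are hypothetical placeholders; the paper derives the actual attribute set via the GAP analysis, not from this example.

```python
# Hypothetical required attributes for an event record. In the paper,
# the actual attribute set is derived from a GAP analysis of the
# information required for identification vs. the information available.
REQUIRED_ATTRIBUTES = {"event_id", "timestamp", "location", "process_step"}

def classify_event(event: dict) -> str:
    """Classify an event record as 'erroneous', 'exception', or 'standard'."""
    # An event missing required attributes cannot be mapped onto the
    # process flow and is treated as erroneous (to be filtered out).
    if not REQUIRED_ATTRIBUTES <= event.keys():
        return "erroneous"
    # Illustrative rule: a deviation flag marks a critical exceptional
    # event that should trigger a reaction in the event-based system.
    if event.get("deviation", False):
        return "exception"
    return "standard"

events = [
    {"event_id": 1, "timestamp": "2023-05-01T08:00",
     "location": "WH1", "process_step": "goods_receipt"},
    {"event_id": 2, "timestamp": "2023-05-01T09:00",
     "location": "WH1", "process_step": "picking", "deviation": True},
    {"event_id": 3, "timestamp": "2023-05-01T10:00"},  # incomplete record
]
labels = [classify_event(e) for e in events]
```

Only events whose required attributes are complete can be matched against the modelled process flow; incomplete records are filtered out before exception handling, which mirrors the role of the derived data attributes in the approach.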
The complexity and volatility of companies’ environments increase the relevance of preparing for disruptions. Resilience enables companies to deal with disruptions, reduce their impact, and ensure competitiveness. Especially in procurement, disruptions can cause major challenges, while resilience contributes to ensuring material availability. Even though past disruptions have posed various challenges and companies have recognized the need to increase resilience, resilience is often not designed systematically. One major challenge is the sheer number of potential measures for increasing resilience. The systematic design of resilience therefore requires a detailed understanding of domain-specific measures, including their contribution to different resilience components and their interdependencies. This paper proposes a systematic approach for configuring resilience in procurement that enables the evaluation and selection of resilience measures. Based on a resilience framework, a resilience configurator is developed. The configurator builds on resilience potentials that have been characterized and clustered. Overarching approaches to designing resilience and indicators for evaluating resilience are presented. Moreover, a procedure is proposed to ensure practical applicability. To evaluate the results, two case studies are conducted. The results enable companies to systematically design their resilience in procurement.
Technology-based service systems will enable tool and die making at the high-wage location Germany to generate sustainable competitive advantages in the future. However, this requires not only integrating the technology base in the form of transponder and sensor technology into the tool; it is also necessary to develop corresponding new business models for these service systems. In addition, it must be ensured that these business models harmonise with the technology at the operational level and that the acquired data are integrated accordingly into the order-processing workflows. This article presents potential new business models for tool and die making and outlines an approach for the operational integration of the required information into the business processes.
This article builds on the work of a research project. The research project 'TecPro – Geschäftsmodelle für technologieunterstützte, produktionsnahe Dienstleistungen des Werkzeug- und Formenbaus' (business models for technology-supported, production-related services in tool and die making) is funded by the German Federal Ministry of Education and Research (BMBF) within the framework concept "Forschung für die Produktion von morgen" (research for tomorrow's production; grant number 02PG1095) and supervised by the project management agency Forschungszentrum Karlsruhe, Division Production and Manufacturing Technologies (PTKA-PFT).
This article presents the current activities in the research project "SiZu – Integration von Echtzeitsimulation und Zustandsüberwachung zur Bauteilprognose und Fehleranalyse für die Instandhaltung" (integration of real-time simulation and condition monitoring for component prognosis and failure analysis in maintenance). The aim of the project is to merge the previously separate functionalities of condition monitoring and real-time simulation into a single analysis tool (Condition Analyser) for maintenance, thereby extending condition monitoring systems with the ability to use historical plant data and real-time simulation. In addition to a detailed description of the targeted research results and the resulting potential benefits for maintenance, the approach developed to achieve these goals is presented and discussed.
This article presents the results of a study in the paper industry. The study reveals a clear correlation between good results in the effectiveness and efficiency of reliability management on the one hand and company success on the other. Company success, in the sense of a high return on sales, cannot be attributed to a single decisive factor, since revenue is determined by a multitude of influences. However, the analyses and interviews conducted within the study indicate that operational asset management is indeed a major success factor, and that reliability therefore pays off in the process industry. Furthermore, the study shows how the methods and practices of maintenance and production affect the reliability of plants and the efficiency of their operation.
One of the major tasks of operations managers is to boost uptime while simultaneously keeping to budget. To meet this challenge, they are discovering reliability-based management as a strategic factor for improving performance. But which parameters are the key to “reliability excellence” and drive a company’s performance? What are the relevant levers to pull in reliability-based management?
To answer these questions, McKinsey & Company partnered with Aachen University to launch a global reliability survey in the process industries. The objective of the initiative is to provide a statistically grounded picture of the key factors that drive maintenance and reliability excellence. Furthermore, benchmarks and best practices concerning overall operational performance are identified. The study follows a questionnaire-based approach that addresses all relevant departments within a company, complemented by best-practice analyses.
This paper presents results of the survey. The results demonstrate that reliability pays off. Some previously unproven beliefs were confirmed (e.g. that good reliability performance results in low spare-part inventories), but surprises such as a correlation between safety and performance were also identified. The analysis further shows that structural differences such as company size or geography do not influence reliability performance.
The (macro)economic environment of manufacturing companies is currently shaped more than ever by unforeseeable and far-reaching changes. In the future, German industry will have to master this dynamic on its own. Partially disadvantageous location factors must be compensated in order to secure production in Germany in the long term. Adaptability and real-time capability in processes and structures are the central enablers for mastering the product-production system.