Due to shorter product life cycles and the increasing internationalization of competition, companies are confronted with growing complexity in supply chain management. Event-based systems are used to reduce this complexity and to support employees' decisions. Such event-based systems include tracking & tracing systems on the one hand and supply chain event management on the other. Tracking & tracing systems only provide monitoring and deviation-reporting functions, whereas supply chain event management systems additionally offer simulation, control, and measurement. The central element connecting these systems is the event. It forms the information basis for mapping and matching the process sequences in the event-based systems. The events received from the supply chain partner form the basis for all downstream steps and must therefore contain correct data. Since data quality is insufficient in numerous use cases and incorrect data in supply chain event management has not been considered in the literature, this paper deals with the description and typification of incorrect event data. Based on a systematic literature review, typical sources of errors in the acquisition and transmission of event data are discussed. The results are then applied to event data so that a typification of incorrect event types becomes possible. The results help to significantly improve event-based systems for use in practice by preventing incorrect reactions through the detection of incorrect event data.
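The detection of incorrect event data described above can be illustrated with a minimal sketch. The field names, error labels, and plausibility checks below are hypothetical and are not the typification derived in the paper:

```python
from datetime import datetime, timezone

# Hypothetical required fields of an incoming supply-chain event record
# (illustrative only; not the taxonomy developed in the paper).
REQUIRED_FIELDS = {"event_id", "timestamp", "location", "object_id"}

def classify_event_errors(event: dict) -> list[str]:
    """Return a list of error labels for a single event record.

    The timestamp, if present, is expected to be a timezone-aware datetime.
    """
    errors = []
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        errors.append("missing_fields:" + ",".join(sorted(missing)))
    ts = event.get("timestamp")
    if ts is not None and ts > datetime.now(timezone.utc):
        errors.append("implausible_timestamp")  # event dated in the future
    if event.get("quantity", 0) < 0:
        errors.append("negative_quantity")      # quantities must be non-negative
    return errors
```

Such a check would run before an event enters downstream simulation or control steps, so that an incorrect event triggers review instead of an incorrect reaction.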
Companies operate in an increasingly volatile environment in which developments such as shorter product lifecycles, the demand for customized products, and globalization increase the complexity and interconnectivity of supply chains. Recent events such as Brexit, the COVID-19 pandemic, and the blockade of the Suez Canal have caused major disruptions in supply chains, demonstrating that many companies are insufficiently prepared for them. As disruptions in supply chains are expected to occur even more frequently in the future, the need for sufficient preparation increases. Increasing resilience provides one way of dealing with disruptions. Resilience can be understood as the ability of a system to cope with disruptions and to ensure the competitiveness of a company. In particular, it enables preparation for unexpected disruptions. The level of resilience is significantly influenced by actions initiated prior to a disruption. Although companies recognize the need to increase their resilience, they do not implement it systematically. One major challenge is the multidimensionality and complexity of the resilience construct. To design resilience systematically, an understanding of its components is required. However, a common understanding of the constituent parts of resilience is currently lacking. This paper therefore proposes a general framework for structuring resilience by decomposing the multidimensional concept into its individual components. The framework contributes to an understanding of the interrelationships between the individual components and identifies resilience principles as target directions for the design of resilience. It thus sets the basis for a qualitative assessment of resilience and enables the analysis of resilience-building measures in terms of their impact on resilience.
Moreover, an approach for applying the framework to different contexts is presented and then used to detail the framework for the context of procurement.
The environment in which companies operate is increasingly volatile and complex, which results in increased exposure to disruptions. Past disruptions have especially affected procurement. Companies therefore need to prepare for disruptions. In procurement, preparedness for disruptions is significantly influenced by the design of the procurement strategy. However, a high number of purchased articles and a variety of influencing factors lead to high complexity in procurement. The systematic design of the procurement strategy should therefore take the criticality of the purchased articles into account. This makes it possible to focus on the purchased articles that have a high impact on disruption preparedness. Existing approaches to designing the procurement strategy in uncertain environments either lack practical applicability and objective evaluation or focus on the criticality of raw materials rather than of purchased articles. Therefore, a data-based approach for the systematic design of the procurement strategy in the context of the Internet of Production has been proposed. One central aspect of this approach is the identification of success-critical purchased articles. This paper thus proposes a framework for characterizing purchased articles with regard to supply risks by combining two systematic analyses. First, a systematic literature review is performed to answer the question of which factors can be used to describe the supply risks of purchased articles. The results are analyzed with regard to the sources and impacts of risks and thus contribute to a structured characterization of supply risks. Second, existing criticality assessment approaches for raw materials are analyzed to identify categories and indicators that describe purchased articles. The results of both reviews provide the basis for linking product characteristics with supply risks and for assessing product criticality, which will be integrated into an app prototype.
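An indicator-based criticality assessment of purchased articles could be sketched as a weighted score. The indicator names and weights below are hypothetical placeholders, not the categories identified in the paper's reviews:

```python
# Hypothetical supply-risk indicators per purchased article,
# each assumed to be normalized to the range [0, 1].
WEIGHTS = {
    "supplier_concentration": 0.4,  # share of volume from the largest supplier
    "lead_time_volatility": 0.3,    # variability of replenishment lead times
    "substitutability": 0.3,        # 1.0 = no substitute article available
}

def criticality_score(indicators: dict[str, float]) -> float:
    """Weighted sum of normalized risk indicators for one purchased article."""
    return sum(WEIGHTS[name] * indicators.get(name, 0.0) for name in WEIGHTS)

def rank_articles(articles: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Rank purchased articles by descending criticality score."""
    scored = [(article, criticality_score(ind)) for article, ind in articles.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Ranking the article portfolio this way lets procurement focus strategy design on the small set of articles with the highest impact on disruption preparedness.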
In the new expert report of the Forschungsbeirat Industrie 4.0, FIR e. V. at RWTH Aachen and the Industrie 4.0 Maturity Center examine the status quo and the current challenges faced by German industry in the use and commercial exploitation of industrial data. Options for action for companies, associations, policymakers, and academia show how the degree of utilization of the data base can be increased and how monetization potentials can be exploited. The focus is on manufacturing companies.
Generation of a Data Model for Quotation Costing of Make-to-Order Manufacturers from Case Studies
(2022)
For contract or make-to-order manufacturers, quotation costing is a complex process that is mainly performed based on experience. Due to the high diversity of the product range of these mostly small and medium-sized enterprises (SMEs) and the poor data situation at the time of quotation preparation, the quality of the calculation is subject to strong variations and uncertainties. The gap between the initial quotation costing and the actual costs incurred (pre- and post-calculation) is crucial to the existence of SMEs. Digitalization in general can help companies to gain a better understanding of processes and to generate data. Improving these processes requires an understanding of which data matter for that specific process. Accurate quotation costing for customized products is time-consuming and resource-intensive, as there is no overview of the data to be used within the process. This paper therefore derives a data model for supporting quotation costing in the company, based on literature-based costing procedures and recorded case studies for quotation and calculation. Based on the results, SMEs gain a first overview of the data needed for quotation costing in order to optimize their calculation process.
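A data model of the kind derived in the paper could be sketched with a few entities. The entity and field names below, as well as the surcharge logic, are hypothetical illustrations, not the model from the case studies:

```python
from dataclasses import dataclass, field

# Hypothetical entities for a quotation-costing data model;
# field names and rates are illustrative assumptions.

@dataclass
class CostItem:
    description: str
    quantity: float
    unit_cost: float  # estimated at the time of quotation preparation

    @property
    def total(self) -> float:
        return self.quantity * self.unit_cost

@dataclass
class Quotation:
    customer: str
    items: list[CostItem] = field(default_factory=list)
    overhead_rate: float = 0.15  # surcharge on direct costs
    margin_rate: float = 0.10    # target profit margin

    def quoted_price(self) -> float:
        """Direct costs plus overhead surcharge plus margin."""
        direct = sum(item.total for item in self.items)
        return direct * (1 + self.overhead_rate) * (1 + self.margin_rate)
```

Recording quotations in such a structure would also make the gap between pre-calculation (`quoted_price`) and post-calculation (actual costs) measurable per order.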
To be able to respond to rising customer requirements and a changing business environment, companies must increase their agility and responsiveness, especially in production processes. To do so, the effects of possible changes in the business environment on a company's own business and production processes must be examined and understood. Process understanding alone is not enough, however: data from different sources are needed to track events in process and supply chains, to characterize material unambiguously, and to supply a company's existing algorithms or models with input data. Data availability therefore plays an important role on the path to adaptive production. This paper explains the importance of data availability and presents a concept for a data platform for secure, cross-company data exchange.
Artificial intelligence (AI) has reached market maturity as a technology in recent years. A multitude of user-friendly products and services simplifies the use of AI in everyday life and in companies. The challenge that users face, particularly in a business context, is not the technical feasibility of an AI application but its organizationally and legally compliant design. Increasing dynamism in legislation is accompanied by a societal interest in control over, and transparency of, the data collected for AI models. The discussion of data sovereignty in everyday business and private life is moving ever more into the center of public attention.
Data-driven AI applications thus sit in a field of tension between the potential offered by collecting and sharing data across company boundaries and the challenge of preserving the data sovereignty of the people involved. This study aims, first, to clarify the effects of data sovereignty and the associated current and upcoming regulations on AI use cases. To that end, experts from the fields of law, AI research, and organizational research were interviewed. Second, the study highlights the potential and best practices of AI use cases with cross-company data exchange. For this purpose, case studies were conducted in companies that have already successfully integrated data exchange into their business models in order to operate and improve their AI applications.
Data-driven transparency in end-to-end operations in real time is seen as a key benefit of the fourth industrial revolution. In the context of a factory, it enables fast and precise diagnosis and correction of deviations and thus contributes to the idea of an agile enterprise. Since a factory is a complex socio-technical system, multiple technical, organizational, and cultural capabilities need to be established and aligned. Recent studies call the underlying broad accessibility of data and corresponding analytics tools "data democratization". In this study, we examine the status quo of the relevant capabilities for data democratization in the manufacturing industry and outline the way forward. The insights are based on 259 studies on the digital maturity of factories from multiple industries and regions of the world, using the acatech Industrie 4.0 Maturity Index as a framework; for this work, a subset of the data was selected. As a result, the examined factories show a lack of capabilities across all dimensions of the framework (IT systems, resources, organizational structure, culture). We therefore conclude that the outlined implementation approach needs to comprise the technical backbone for a data pipeline as well as capability building and an organizational transformation.
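Aggregating such maturity assessments across factories could be sketched as follows. The dimension names follow the framework as cited in the text, but the score scale and the gap threshold are hypothetical:

```python
from collections import defaultdict

# The four capability dimensions named in the text; the integer score
# scale and the gap threshold below are hypothetical assumptions.
DIMENSIONS = ("IT systems", "resources", "organizational structure", "culture")

def gap_counts(assessments: list[dict[str, int]], threshold: int = 3) -> dict[str, int]:
    """Count, per dimension, how many factories score below the threshold."""
    gaps: dict[str, int] = defaultdict(int)
    for scores in assessments:
        for dim in DIMENSIONS:
            if scores.get(dim, 0) < threshold:
                gaps[dim] += 1
    return dict(gaps)
```

A tally of this kind makes visible whether capability gaps cluster in one dimension or, as the study finds, spread across all of them.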