Pricing is one of the most important, yet underestimated, tools for enhancing a company's profitability. Value-based pricing in particular has high potential to achieve greater satisfaction because it balances the needs of providers and customers. Even though it is a well-known pricing model and promises higher satisfaction, many companies struggle to implement it. The manufacturing industry in particular is characterized by cost-plus pricing and competition-based pricing. For digital products, however, these pricing strategies are insufficient. This paper therefore aims to explore the design fields for value-based pricing of digital products in the manufacturing industry. To achieve this, the basics of digital products and value-based pricing are explored. Furthermore, an expert workshop is conducted that follows a framework for value-based pricing consisting of four consecutive steps (analysis, price strategy, pricing, and market launch) to capture the design fields. The paper concludes with limitations as well as practical and research implications.
Reinforced by the pandemic and shaped by digitalization, today's professional working environment is in a state of transformation. Working remotely has become a vital component of the regular routines of many professions. The design of remote work environments presents challenges to organizations of all sizes. By providing a classification, this paper offers a comprehensive understanding of the design fields that must be considered to establish lasting remote work concepts in organizations. A hierarchical classification with four dimensions (human, technology, organization, and culture), seven design elements, and twenty design parameters indicates to organizations the design fields that need to be examined. To satisfy both theoretical foundation and practical application, the design elements are derived through a systematic literature review covering key areas of interest for remote work. Additionally, they are verified and complemented by dedicated case study research to incorporate practice-oriented design parameters.
Gap Analysis for CO2 Accounting Tool by Integrating Enterprise Resource Planning System Information
(2023)
Detailed carbon accounting is the foundation for reducing CO2 emissions in manufacturing companies. However, existing accounting approaches are primarily based on manual data preparation, although manufacturing companies already have a variety of IT systems and the resulting data available. A gap analysis carried out on the basis of the GHG Protocol and a reference ERP system shows how much of the information required for CO2 accounting can be integrated from an ERP system. The ERP system can cover 20 % of the required information. Information availability can be increased to 49 % through additionally identified modifications of the ERP system. Integrating the CO2 accounting tool with other systems in the IT landscape, e.g. an Energy Information System, enables a further increase.
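At its core, the coverage figures above amount to a set-intersection calculation over information requirements. The following Python sketch illustrates that idea; the item names, coverage sets, and resulting percentages are purely illustrative and are not the paper's actual requirements catalogue:

```python
# Illustrative gap-analysis coverage calculation; all item names are hypothetical.

def coverage(required: set[str], available: set[str]) -> float:
    """Share of required information items covered by a data source."""
    return len(required & available) / len(required)

# Example information requirements in the style of a GHG-Protocol-based inventory
required = {"fuel_consumption", "purchased_electricity", "material_inputs",
            "transport_distances", "waste_volumes", "supplier_emission_factors",
            "machine_runtimes", "energy_prices", "production_quantities",
            "refrigerant_losses"}

erp_baseline = {"material_inputs", "production_quantities"}            # covered out of the box
erp_modified = erp_baseline | {"machine_runtimes", "transport_distances",
                               "purchased_electricity"}                # after ERP modifications

print(f"ERP baseline coverage: {coverage(required, erp_baseline):.0%}")
print(f"ERP after modification: {coverage(required, erp_modified):.0%}")
```

With these example sets the baseline covers 20 % and the modified system 50 % of the requirements; the paper's own figures (20 % and 49 %) are derived from its full GHG-Protocol-based catalogue.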
Manufacturing companies face the challenge of managing vast amounts of unstructured data generated by sources such as social media, customer feedback, product reviews, and supplier data. Text mining, a branch of data mining and natural language processing, provides a way to extract valuable insights from unstructured data, enabling manufacturing companies to make informed decisions and improve their processes. Despite these potential benefits, many manufacturing companies struggle to implement text mining use cases for various reasons. The project VoBAKI (IGF-Project No.: 22009 N) therefore aims to enable manufacturing companies to identify and implement text mining use cases in their processes and decision-making. The paper presents an analysis of text mining use cases in manufacturing companies using Mayring's content analysis and case study research. The study explores how text mining can be used effectively to improve production processes and decision-making in manufacturing companies.
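As a minimal illustration of the kind of extraction step such use cases build on, the following sketch counts the most frequent content words in unstructured feedback texts. The stopword list and example records are hypothetical, and real text-mining pipelines involve far more preprocessing:

```python
# Minimal text-mining sketch: frequency-based keyword extraction from raw strings.

import re
from collections import Counter

# Tiny illustrative stopword list; a real pipeline would use a full one.
STOPWORDS = {"the", "a", "is", "to", "and", "of", "on", "it", "this", "was"}

def keyword_frequencies(documents: list[str], top_n: int = 3) -> list[tuple[str, int]]:
    """Tokenize unstructured text and count the most frequent content words."""
    tokens = []
    for doc in documents:
        tokens += [t for t in re.findall(r"[a-z]+", doc.lower()) if t not in STOPWORDS]
    return Counter(tokens).most_common(top_n)

feedback = [
    "The delivery was late and the packaging was damaged.",
    "Late delivery again, but product quality is good.",
    "Good product, fast delivery this time.",
]
print(keyword_frequencies(feedback))  # most frequent terms across all records
```

Even this crude frequency view already surfaces a recurring topic ("delivery") that a decision-maker could act on.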
With the development of publicly accessible broker systems over the last decade, the complexity of data-driven ecosystems is expected to become manageable for self-managed digitalisation. Having identified event-driven IT architectures as a suitable answer to the architectural requirements of Industry 4.0, the producing industry is now offered a relevant alternative to prominent third-party ecosystems. Although the technical components are readily available, the realisation of an event-driven IT architecture in production is often hindered by a lack of reference projects and, hence, uncertainty about its success and risks. The research institute FIR and the IT expert synyx are therefore developing an event-driven IT architecture in the Center Smart Logistics' producing factory, which is designed to be a multi-agent testbed for members of the cluster. Drawing on experience gained in industrial projects, a target IT architecture was conceptualised that proposes a solution for a self-managed data ecosystem based on open-source technologies. Through the iterative integration of factory-relevant Industry 4.0 use cases, the target architecture is continuously realised and validated. The paper presents the developed solution for a self-managed event-driven IT architecture and discusses the implications of the decisions made. Furthermore, the progress of two use cases, namely an IT-OT integration and a smart product demonstrator for the research project BlueSAM, is presented to highlight the iterative technical implementability and the merits enabled by the architecture.
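The decoupling idea at the heart of an event-driven architecture can be sketched in a few lines. The toy in-process broker below is a deliberate simplification: a production setup like the one described would rely on an external message broker, and all topic and field names here are hypothetical:

```python
# Toy in-process publish/subscribe broker illustrating producer/consumer decoupling.

from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal topic-based broker: producers and consumers never know each other."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], Any]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], Any]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
received: list[dict] = []
bus.subscribe("machine.temperature", received.append)   # e.g. an IT-side consumer
bus.publish("machine.temperature", {"machine": "M1", "value_c": 71.5})  # e.g. an OT-side producer
print(received)
```

The consumer reacts to the event without any direct coupling to the producer, which is what makes such architectures attractive for iteratively adding Industry 4.0 use cases.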
Assets of integrated production systems, especially in heavy industry, face high requirements in terms of reliability and availability. In the case of a component breakdown, the operating firm is confronted with high costs due to downtime and loss of production. Modern maintenance concepts in combination with advanced technologies can help to improve plant availability and reduce the downtime costs caused by unplanned breakdowns. Against this background, the research institutes FIR and IMR of RWTH Aachen University, Germany, are collaborating within the research project “SiZu”. The project deals with the integration of a condition monitoring system and real-time simulation to assess the condition of components and to support failure cause analysis.
Driven by increasingly complex value creation networks, more and more event-based systems are being used for decision support. One example of a category of event-based systems is supply chain event management. Its aim is to enable the best possible reaction to critical exceptional events on the basis of event data. The central element is the event, which represents the information basis for mapping and matching the process flows in event-based systems. However, since data quality is insufficient in numerous application cases and the identification of incorrect data in supply chain event management has received little attention in the literature, this paper deals with the theoretical derivation of the data attributes necessary for identifying incorrect event data. In particular, the error types that require complex identification strategies are considered. Accordingly, the relevant existing error types of event data are specified into subtypes in this paper. Subsequently, the information required for identification and the information actually available are compared by means of a gap analysis. From this gap, the necessary data attributes can then be derived. Finally, an approach is presented that enables the generation of the complete data set. This serves as a basis for recognizing and filtering out erroneous events, as opposed to standard and exception events.
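A minimal sketch of the attribute-based identification idea: an event lacking a required attribute, or carrying an empty value, is flagged as erroneous and filtered out. The attribute set and example events below are hypothetical; the actual attributes are what the paper derives:

```python
# Hypothetical required-attribute check for supply chain events.

REQUIRED_ATTRIBUTES = {"event_id", "timestamp", "location", "process_step"}

def is_erroneous(event: dict) -> bool:
    """True if any required attribute is missing or empty."""
    return any(not event.get(attr) for attr in REQUIRED_ATTRIBUTES)

events = [
    {"event_id": "E1", "timestamp": "2023-05-01T08:00",
     "location": "DC-North", "process_step": "goods_receipt"},
    {"event_id": "E2", "timestamp": "",                      # empty timestamp -> erroneous
     "location": "DC-North", "process_step": "picking"},
]
valid = [e for e in events if not is_erroneous(e)]
print([e["event_id"] for e in valid])
```

Only events carrying the complete attribute set pass the filter and remain available for mapping and matching the process flows.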
The complexity and volatility of companies’ environment increase the relevance of preparing for disruptions. Resilience enables companies to deal with disruptions, reduce their impact, and ensure competitiveness. Especially in procurement, disruptions can cause major challenges, while resilience contributes to ensuring material availability. Even though past disruptions have posed various challenges and companies have recognized the need to increase resilience, resilience is often not designed systematically. One major challenge is the number of potential measures for increasing resilience. The systematic design of resilience thus requires a detailed understanding of domain-specific measures, including their contribution to different resilience components and their interdependencies. This paper proposes a systematic approach for configuring resilience in procurement that enables the evaluation and selection of resilience measures. Based on a resilience framework, a resilience configurator is developed. The basis of the configurator is a set of resilience potentials that have been characterized and clustered. Overarching approaches to designing resilience and indicators for evaluating resilience are presented. Moreover, a procedure is proposed to ensure practical applicability. To evaluate the results, two case studies are conducted. The results enable companies to systematically design their resilience in procurement.
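The evaluation-and-selection step of such a configurator can be illustrated as a weighted scoring of measures against resilience components. All measure names, components, and weights below are hypothetical stand-ins, not the paper's actual configurator:

```python
# Illustrative measure ranking by weighted contribution to resilience components.

MEASURES = {
    "dual sourcing":       {"robustness": 3, "agility": 1, "redundancy": 3},
    "safety stock":        {"robustness": 2, "agility": 0, "redundancy": 3},
    "supplier monitoring": {"robustness": 1, "agility": 3, "redundancy": 0},
}
WEIGHTS = {"robustness": 0.5, "agility": 0.3, "redundancy": 0.2}  # hypothetical priorities

def score(contributions: dict[str, int]) -> float:
    """Weighted sum of a measure's contributions to the resilience components."""
    return sum(WEIGHTS[c] * v for c, v in contributions.items())

ranked = sorted(MEASURES, key=lambda m: score(MEASURES[m]), reverse=True)
print(ranked)  # measures ordered by weighted resilience contribution
```

Changing the weights shifts the ranking, which is exactly why an explicit configurator beats an ad-hoc choice among the many candidate measures.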
One of the major tasks of operations managers is to boost uptime while staying within budget. To meet this challenge, they are discovering reliability-based management as a strategic factor for improving performance. But which parameters are the key to “reliability excellence” and drive a company’s performance? What are the relevant levers to pull in reliability-based management?
To answer these questions, McKinsey & Company partnered with Aachen University to launch a global reliability survey in the process industries. The objective of the initiative is to provide a statistically sound picture of the key factors that drive maintenance and reliability excellence. Furthermore, benchmarks and best practices concerning overall operational performance are identified. The study is based on a questionnaire that addresses all relevant departments within a company, complemented by best-practice analyses.
This paper presents the results of the survey. The results demonstrate that reliability pays off. Some unproven beliefs have been confirmed (e.g. good reliability performance results in low spare parts inventory), but surprises, such as a correlation between safety and performance, were also identified. The analysis also shows that structural differences such as company size or geography do not influence reliability performance.
Holistic PLM Model
(2010)
Product Lifecycle Management (PLM) is a widely discussed topic concerning both increasing the efficiency of product development in terms of time to market and adequately customizing products to the different needs of customers worldwide. Historically, PLM has focused on the early phases of the product’s lifecycle, namely the product development phase. The roots of PLM thus lie in supporting the information logistics of product data: consistent data sets should be available to all stakeholders in the different departments at all times. Due to increasing product complexity, PLM has to be extended in the temporal dimension (not limited to the product development phase) and the systemic dimension (not limited to the information logistics aspect). In this paper, the authors derive a holistic framework for Product Lifecycle Management by analysing existing integrated management approaches. The framework consists of four dimensions: PLM strategy, PLM process, product structure, and PLM IT architecture. The sustainability and benefits of the framework are demonstrated by applying it to the communication service provider (CSP) industry.