Digital Ethics (Entity Specific)

People, machines, data, and processes are becoming increasingly interlinked, with technological advances transforming our society and posing new ethical challenges. Our digital ethics activities define how we responsibly handle data, algorithms and artificial intelligence (AI).

Our material impact related to digital ethics (SBM-3)

ESRS SBM-3 – Digital ethics

Identifier: E-PI-02
Material impacts, risks and opportunities: Actual positive impact
Time horizon: Medium-term
Value chain step: Upstream; own operations; downstream

Description – Responsible handling of digital technologies:
The field of digital ethics comprises the ethical issues and impacts relating to digital technologies and the usage of data alongside digital applications and services. As digitalization progresses, companies are introducing ever more digital tools and platforms. Therefore, it is essential to ensure that these technologies are handled in an ethically responsible way – especially with respect to data protection, AI, algorithmic bias, and the implementation of applications in sensitive areas. In the context of technological innovations, compliance with digital ethics principles plays a decisive role in winning and retaining stakeholder trust. We take digital ethics aspects into account in our business activities to a greater extent than is legally stipulated, thereby contributing to the responsible development and use of digital technologies. This has a positive effect on society.

Our policy related to digital ethics (MDR-P)

ESRS MDR-P – Code of Digital Ethics

Connection to material impacts, risks and/or opportunities: Entity-PI-02
Material sustainability matter: Digital ethics

Key contents:
The policy serves as a set of guidelines for our digital business models, as an instrument for analyzing ethical issues and as a basis for practical recommendations by the Digital Ethics Advisory Panel of Merck KGaA, Darmstadt, Germany (DEAP). It is based on five central principles: justice, autonomy, beneficence, non-maleficence, and transparency.
These principles provide a clear structure for assessing ethical issues. Moreover, they support our business sectors and individual employees in overcoming challenges in the field of digital technologies for which no statutory or other regulations yet exist. The policy helps us assess the ethical risks of existing activities while also enabling us to ethically assess relevant aspects of new digital solutions. To this end, we use a principle-at-risk analysis (PaRA) based on the policy.
We regularly perform internal reviews of data and AI technologies, services, applications, and partnerships in this area, in close collaboration with and advised by the DEAP. The policy is reviewed regularly and updated as necessary.

Scope: The policy applies Group-wide to all employees who work in the fields of data science, AI, and other digital specialist areas.

Accountability: Executive Board, Managing Director or Site Manager.

Third-party standards/initiatives: The policy is based on the EU AI Act, various scientific articles, and other third-party guidelines on the use of AI.

Consideration of stakeholder interests: We developed and reviewed the policy with the involvement of internal stakeholders and external experts.

Availability: The policy is available internally on the intranet and publicly on our website.

Digital Ethics Advisory Panel

The DEAP plays a key role in assessing ethical issues relating to data and AI in our company. As an independent advisory panel, it supports us in identifying and addressing complex ethical challenges. Its work is based on the Code of Digital Ethics. The panel consists of external international experts from science and industry with specialist knowledge in the fields of digital ethics, law, big data technologies, digital health, medicine, and data governance. In addition, we involve bioethics experts as well as representatives of patient organizations as needed. All employees who work with data and AI can bring their questions and challenges to the DEAP at any time. The panel meets online on a quarterly basis and gathers in person at least once a year.

In 2025, the panel dealt with, among other matters, the automatic recording and transcription of virtual meetings. It identified ethical risks such as a lack of transparency regarding the purpose of the recordings, access to them, and their storage duration. As a result of the panel's discussion, a new Group-wide rule was introduced stipulating that recordings are automatically deleted after four weeks.

Digital ethics check

Using an analysis mechanism – the Digital Ethics Check of Merck KGaA, Darmstadt, Germany (MDEC) – we aim to identify ethical risks relating to our projects and products in the individual business units independently and at an early stage. All relevant phases of a project or product life cycle are systematically taken into account. The semi-automated MDEC is based on the Code of Digital Ethics: it reviews specific aspects of a project for ethical risks using a scoring system and suggests possible mitigation actions. The calculated risk value allows us to draw conclusions for product development. The MDEC can be performed without prior ethical knowledge. Upon request, our Digital Ethics Team supports the respective business unit in analyzing the risk value and conducting a more in-depth assessment of the ethical risks. Complex ethical issues are submitted to the DEAP to obtain recommendations for risk mitigation.

Since January 2024, every new project in the Life Science business sector has been analyzed in accordance with our scoring system. In fiscal 2025, we also developed an MDEC demo app that demonstrates the risk assessment process at Life Science and familiarizes employees with the topic. Additionally, we expanded the MDEC to projects in Human Resources as well as in the Digital Health franchise of the Healthcare business sector; in 2026, we plan to introduce it in further Healthcare franchises, with the aim of gradually expanding the MDEC to the entire company. At the same time, we are making our methods for identifying ethical risks accessible to the general public through scientific publications and are providing opportunities for academic dialogue.
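The principle-based scoring described above can be illustrated with a minimal sketch. The five principle names come from the Code of Digital Ethics; everything else – the answer scale, weighting, thresholds, and mitigation texts – is hypothetical and not the actual MDEC:

```python
# Illustrative sketch of a principle-based risk scoring check.
# Principle names are taken from the Code of Digital Ethics; the
# scale, thresholds, and mitigation suggestions are hypothetical.

PRINCIPLES = ["justice", "autonomy", "beneficence", "non-maleficence", "transparency"]

# Hypothetical mitigation suggestions per principle.
MITIGATIONS = {
    "justice": "Review training data for algorithmic bias.",
    "autonomy": "Add opt-in consent and human override options.",
    "beneficence": "Document the intended benefit for affected users.",
    "non-maleficence": "Run a harm analysis for sensitive use cases.",
    "transparency": "Disclose purpose, data access, and retention periods.",
}

def assess(answers: dict[str, int]) -> dict:
    """Score a project from per-principle risk answers (0 = no risk, 3 = high).

    Returns a normalized risk value, a coarse risk level, and suggested
    mitigations for every principle scored above a (hypothetical) threshold.
    """
    missing = [p for p in PRINCIPLES if p not in answers]
    if missing:
        raise ValueError(f"missing answers for: {missing}")
    # Normalize the summed answers to a 0..1 risk value.
    score = sum(answers[p] for p in PRINCIPLES) / (3 * len(PRINCIPLES))
    level = "low" if score < 0.34 else "medium" if score < 0.67 else "high"
    # Suggest a mitigation wherever a principle is clearly at risk.
    actions = [MITIGATIONS[p] for p in PRINCIPLES if answers[p] >= 2]
    return {"risk_value": round(score, 2), "risk_level": level, "actions": actions}
```

For example, a project with high transparency risk and moderate harm risk would receive a medium overall risk value together with mitigation suggestions for those two principles; business units could act on the suggestions or, in complex cases, escalate to an advisory panel.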

We aim to introduce the MDEC throughout the company and thus identify ethical risks in all AI projects at an early stage. In 2026, we want to devise a specific MDEC version for research and clinical development in the Healthcare business sector alongside a general variant for all other units. We also want to define metrics for monitoring the progress of the MDEC and establish a governance process for this monitoring. This creates a foundation on which we can continuously evaluate the acceptance and effectiveness of the MDEC and adapt the analysis mechanism where necessary. Beyond these ambitions, we have not set any targets related to digital ethics.
