
15 April 2025

Artificial Intelligence in Enterprise Content Management: Opportunities and Responsibility

By Jana Blankenhagen


Jana Blankenhagen

Chief Human Resources Officer (CHRO)

As Chief Human Resources Officer (CHRO) at OPTIMAL SYSTEMS, I have experienced first-hand in recent years how artificial intelligence (AI) is changing the world of work and enterprise content management (ECM) in particular. Our solutions offer companies various opportunities to make better use of unstructured data and information from the thousands of documents and content they have to manage and to optimize their processes. At the same time, I see it as our task to ensure that the technology can be used responsibly—always with an eye on the people behind every decision. Because: "A company can purchase the most expensive and modern technology, but a person still has to operate it."

This guiding principle shapes my daily work. The best technology can only fulfill its potential if people understand it, question it critically and use it responsibly. This is exactly what we need to focus on: harmonizing technological progress and human expertise.

"Computers are idiots."

With this blunt statement, Peter Drucker emphasized the central role of people in dealing with technology. Taken on its own, I wouldn't agree with the quote. At the same time, it underlines that technology remains ineffective without human guidance and critical thinking.

Many experts follow this approach and have shaped it in different ways. The best known is probably the work of Frithjof Bergmann, the founder of the "New Work" concept. In his book "New Work, New Culture" (2004), he emphasized that technology should not replace people, but rather support them. He saw technology as a tool that could make repetitive and standardized tasks easier or even relieve people of them in order to generate more capacity for more meaningful work.

Ursula Franklin, a physicist and philosopher, argued in her book "The Real World of Technology" (1999) that technologies that put people at the center and expand their capabilities should be preferred. She was critical of technology that turns people into mere users.

Sherry Turkle, Professor of the Social Studies of Science and Technology at MIT, examined the impact of technology on human interactions in her book "Reclaiming Conversation: The Power of Talk in a Digital Age" (2015). She pointed out that while digital technologies connect us, they can also affect our ability to communicate deeply and meaningfully.

Between vision and reality: How principles meet complex reality

In my view, the European Union's AI Regulation (AI Act) does justice to some of the ideas put forward by Drucker, Bergmann, Franklin and Turkle. This is because it formulates important and clear requirements for AI systems, including transparency, non-discrimination, security and human control. At the same time, it is already clear that implementing these principles in practice is proving to be extremely complex and expensive:

  • Transparency vs. authorization systems: Transparency requires AI decisions to be comprehensible and verifiable. But how can an employee evaluate the results of an AI if data protection regulations such as the Federal Data Protection Act prevent them from accessing certain data? AI can analyze data comprehensively, while humans remain restricted by legal limits. This discrepancy undermines the idea of complete transparency.
  • Non-discrimination: Bias in data and algorithms is a well-known problem. Companies would have to invest extensive resources in analyzing and cleansing data in order to avoid discrimination. But how many companies can afford to eliminate every potential bias?
  • Human control: The AI Act demands that people can question and correct AI results. But how often do we have time to check AI decisions in our hectic working lives? Efficiency and control are often an irreconcilable contradiction.

Meeting these requirements entails enormous costs and resources—from the development of transparent algorithms to compliance with strict data protection regulations. Here is a simple ECM scenario from personnel management:

One of the most exciting developments that we are driving forward at OPTIMAL SYSTEMS is the set of AI-based features in our ECM solutions, which combine and directly process different pieces of information from data and documents as seamlessly as possible. At first glance, this starts with very simple processes: when a document or e-mail is captured, our system automatically recognizes which digital personnel file it belongs in and what type of document it is (e.g., an appraisal or a termination notice), reads out the content, records the metadata and, in the case of a departure, automatically starts a corresponding offboarding workflow. A single "click" would almost be enough to complete the capturing process.

This example alone contains several complex system functions that an AI has to access, analyze and process. This includes an intelligent search, content analysis, classification, data transfer, secure authorization system, versioning and processing history as well as an automated workflow with intelligent user management and interfaces to the systems from which the document originally comes (e-mail system, scanner, file import).
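To make the capture scenario above concrete, here is a minimal, purely illustrative sketch in Python. All names (`classify`, `capture`, `CaptureResult`) are hypothetical and not part of any OPTIMAL SYSTEMS product; the simple keyword rule merely stands in for the actual AI classification step:

```python
from dataclasses import dataclass, field

# Document types that should trigger the offboarding workflow (assumption).
OFFBOARDING_TYPES = {"termination notice"}

@dataclass
class CaptureResult:
    doc_type: str
    personnel_file: str
    metadata: dict
    workflows_started: list = field(default_factory=list)

def classify(text: str) -> str:
    # Stand-in for the AI classification step: a crude keyword rule.
    lowered = text.lower()
    if "termination" in lowered or "notice period" in lowered:
        return "termination notice"
    if "appraisal" in lowered or "performance review" in lowered:
        return "appraisal"
    return "other"

def capture(text: str, employee_id: str) -> CaptureResult:
    # 1. Classify the incoming document.
    doc_type = classify(text)
    # 2. File it in the matching digital personnel file and record metadata.
    result = CaptureResult(
        doc_type=doc_type,
        personnel_file=f"personnel-file/{employee_id}",
        metadata={"employee_id": employee_id, "doc_type": doc_type},
    )
    # 3. Start the offboarding workflow automatically for departures.
    if doc_type in OFFBOARDING_TYPES:
        result.workflows_started.append("offboarding")
    return result
```

Even this toy version makes the human-oversight question tangible: the keyword rule will misclassify edge cases, which is exactly why a person must be able to review and correct the result before a workflow such as offboarding takes effect.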

Focus on employees: Can we keep the balance?

From an HR perspective at least, employees will be even more central in the future. The requirements of the AI Act present them with new challenges: They should critically scrutinize technological results and integrate them into decisions. But the reality is that many employees are not sufficiently trained for this—or simply don't have the time.

The discrepancy between the legal requirements and the actual skills and capacities of employees is becoming increasingly apparent. How can companies ensure that employees acquire the necessary skills when cost pressure and efficiency gains are a priority? The question remains:

Will the AI Act lead to people taking responsibility or handing it over to technology entirely for fear of making mistakes?

Critical thinking: Urgent need for action for companies and HR managers

The integration of artificial intelligence (AI) into our working world is no longer an issue of the future—it is happening right now. Just like us, companies in all sectors are facing the challenge of preparing their employees to work efficiently and responsibly with AI systems.

As a software manufacturer, we share this responsibility to develop AI tools that comply with the basic principles of the AI Act and thus enable safe, transparent and human-centered use. In my view, critical thinking is becoming a key skill that can make the difference between success and failure in the short term.

Why critical thinking is essential

I often find the results of AI very impressive, from the speed with which data is analyzed to the precision with which patterns are recognized. But we must not be deceived by this apparent perfection: no system is infallible, and no technology can fully capture the complexity of human decisions.

Recognize the limits of technology: AI is based on algorithms that are developed by humans and trained with data, which in turn is not free of errors, bias or gaps. This means that the results of AI can always be subject to uncertainties. Employees must therefore learn to understand the limitations of the technology and question whether the results presented are actually reliable and applicable in the specific context.

Critical thinking as a key competence: The ability to critically review AI results is not only a safeguard against errors, but also a key factor in building trust in the technology. Employees must not only question whether a result is "technically correct", but also whether it corresponds to the company's values, objectives and legal requirements. Without this critical review, companies risk making decisions that could cause long-term damage.

Supporting tool, not a replacement: AI should never be seen as a substitute for human judgment. It is a tool that provides information and supports processes, but the responsibility for decisions remains with people. Employees must be encouraged to use their intuition, experience and ethical values to put AI results in the right context.

Promotion of critical thinking: Companies cannot take critical thinking for granted, but must actively promote it. This can be done through targeted training, workshops and actively practiced interdisciplinary collaboration that show employees how to deal with uncertainties and effectively evaluate AI results. An open error culture and regular discussions about the challenges and limitations of AI are also essential in order to develop a critical but constructive attitude towards technology.

Why it takes more than technology: Critical thinking is not only a skill, but also an attitude. It takes courage to question established processes or seemingly perfect results. Companies need to create a culture that rewards this attitude instead of sanctioning it, and that values employees who are willing to question the status quo. They are the key to using the potential of AI responsibly.

This makes it clear that critical thinking should not be seen as an individual skill. It can only be effective in a cluster with other competencies. In my opinion, this includes at least three other specific skills:

Basic technological knowledge: A basic understanding of data sources, structure and quality as well as basic knowledge of AI algorithms and how AI models are trained. This applies to all employees in all departments that process data.

Interdisciplinary cooperation: In the future, teams from Research & Development, Professional Services and Sales should work together more closely on an interdisciplinary basis and involve experts from IT, Legal and HR at an early stage in order to develop solutions together. This unites all human-technology perspectives: the former are experts in technology, algorithms and professional requirements, while the latter are experts in human behavior and the relevant influencing factors.

Communication and reflection skills: It is important for communicating complex issues, clearly formulating insights, actively listening and incorporating other perspectives and resolving conflicts.

Are these skills alone sufficient, or can you think of other key competencies that will be needed for success?

To the best of our knowledge and belief

At OPTIMAL SYSTEMS, we are aware of this challenge. Not only because there will be a clear legal framework for the use of AI, but because we know that it is already crucial to act today. Technological progress is advancing faster than guidelines and policies can take effect. That is why we are now focusing on creating a culture that is characterized by:

  • Customer orientation with empathy: "Our work is not an end in itself. We respond to the needs of our customers."
  • Innovation: "We question the tried and tested."
  • Personal responsibility: "We are guided by competence and reason to weigh up opportunities and risks."

Our employees should not just be passive users, but actively involved in shaping, understanding and critically scrutinizing technologies. By fostering open and honest communication, along with our "With expertise and aid" principle, we create an environment where questions are encouraged, and diverse perspectives are valued. This is essential in order to develop AI-based software solutions on the one hand and to use AI systems effectively and deal responsibly with their weaknesses on the other.

But will that be enough? Can we really meet the requirements of the AI Act? Are we ensuring that a human being will fill the remaining technological gaps? Will we ever experience the ideal state in reality?

My conclusion: responsibility, values and opportunities in harmony

In reality, we are already experiencing the integration of artificial intelligence on a daily basis, both professionally and privately, and with it the need to act responsibly. Solutions such as AI-based enterprise content management systems show how AI can help to optimize processes, process information faster, make better use of it and make more informed decisions. But this technology does not replace our human intuition, experience and judgment.

To the best of our knowledge and belief, our values and principles are decisive for us in developing and implementing AI-based ECM solutions. They help us to shape the collaboration between people and technology in such a way that our own employees, as well as the employees of our customers and partners, can work independently and at the same time develop an awareness of the impact of their decisions.

Critical thinking, a willingness to collaborate across disciplines as equals, a sense of responsibility, and a focus on human values remain essential. Yet, in light of the demands of the AI Act, I find myself questioning more than ever whether these principles alone are still enough.

AI is a powerful tool, but it's the people who make the difference, because: "A company can purchase the most expensive and modern technology, but a person still has to operate it."

Thanks to Nikola Milanovic, Chief Technology Officer, for the interesting discussion on the future of our products, which motivated me to write this article.

Do you have any further questions?