

19 December 2025
I often hear questions and complaints: how come there is no AI in Star Trek at all? How come the ship is not propelling itself, with just the captain giving the orders? Or why is the captain even needed on the bridge? I beg to differ. Star Trek is full of AI. And I do not mean only Lt. Data (aka the android). Geordi La Forge, for one, has been vibe coding since the TV years of the 1990s. He “talks” to the computer constantly, programming it without writing a single line of code. Not that I understand what he is saying. But look for yourself at this classic example from “Booby Trap”, episode 6 of TNG season 3:
And of course, I now simply must include this even more classic example of Scotty trying to talk to the computer in the movie Star Trek IV: The Voyage Home:
Now, this makes me think further. Why are there no software engineers in Star Trek? There are plenty of mechanical engineers though. They run around the ship repairing things, breaking them and repairing them again. But not a single software engineer. All killed by a vibe coding monster perhaps?
Fat chance. I don’t believe for a second that vibe coding or any other AI monster is going to kill software engineers anytime soon or ever. I am what one considers an “old” person today (professionally, mind you!), pushing fifty. I learned assembler. I can even use it. I wrote a paint program in it. It took me months, but boy did it render those circles, rectangles and cubes fast!
And when the programming language C appeared, it was prophesied that everyone would code, because it was so easy to use compared to assembler (imagine someone saying pointer arithmetic is easy today), and that software engineers would disappear before the job even became a real profession. Nothing of the sort happened. If anything, the number of software engineers has grown.
And then came OOP with C++ and Java and C# and… software engineers miraculously survived.
And then came vibe coding. I did use it intensively (Replit), and yes, it looks great and powerful, and yes again, I am aware of the famous sentence “640K ought to be enough for anybody”.
But I still claim that vibe coding, in its present form, cannot and will never replace a software engineer. Why? It will reach a certain level of maturity and usability and then evolve to another abstraction level, because it is nothing more (or less) than a code generator (worst case) or a new programming language, very similar to a natural language (best case). As I said, I used Replit. And then I tried to (think about how to) maintain the generated code. Good luck with that.
So, what is going to happen instead is that, yet again, the number of software professionals is actually going to rise, not fall. And why is that? Well, apart from the maintenance issue, because, as Marc Andreessen famously wrote in his article: software is eating the world.
And because this is the exact pattern we observed every time there was a (r)evolutionary leap in the way we represent machine instructions for the computer processor. What will consequently happen is that there will be even more software, it will be even more complex, we will be building it into even more things, and in the end we will actually require even more software engineers to figure out the whole mess.
Back to Star Trek. One very nice thing about Star Trek is that it is an (almost) perfect utopia. There is no money. There is plenty of war though. And everyone, or at least the good guys, are living with a single purpose: to improve themselves. Great. Back to reality. Everything is (unfortunately) measured in money and profit. To generate both, or so it is said, you need to provide a value, which customers are ready to exchange for their money. So, until the Star Trek utopia arrives, the question remains how to exchange the marvelous AI potential for money?
Honestly: I don’t know. What I do know is that apart from the obvious usage of generative AI to generate things (texts, code, images, videos…) it is not very clear to anyone how to generate value in any vertical market with a generic recipe. What I can do instead is try and give you real-life examples that seem to work very well. So, here are the two use cases I built, which customers find attractive enough that they are ready to pay for them.
I work in document management, which means that my team is building products for the archiving and management of digital documents. In essence, we are usually storing a very large quantity of digital information – documents, such as Office files, images, videos, e-mails and many more, as well as their metadata – in a single system. As I wrote some time ago, this plethora of information can be used as “petrol” to fuel different AI “engines.” But this statement is somehow both obvious and obsolete today. And nobody wants to drive petrol-propelled cars anymore. So let us investigate business cases instead.
When archiving a lot of documents, you need to tell the storage system what the documents are, e.g., which class they belong to. Are they invoices, contracts, protocols, offers or maybe something entirely different? What are their metadata and where do they belong in the system (archiving systems are often structured in complex filing patterns)? And before you even do that, you need to distinguish between individual documents if, for example, they are being scanned or imported in a single big stream (file, audio, or video). Essentially, you need to figure out where one document ends and the next one begins.
How is this done today? Mostly manually, sometimes also semi-automated with complex rules. Enter AI.
The problem of
a) recognizing what a given document is,
b) describing it with metadata,
c) finding the appropriate filing location in an archiving structure (e.g., target folder) and
d) differentiating or splitting documents in an input stream
are all instances of well-known problems in machine learning, namely document classification and entity recognition/extraction.
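To make the idea concrete, here is a minimal sketch of such input management in Python. The keyword counter and the regexes are trivial stand-ins for real machine learning models, and all names (`classify_document`, `extract_metadata`, `CLASS_KEYWORDS`) are illustrative, not part of any actual product.

```python
# Hypothetical sketch of automated input management for an archive.
# A real system would use trained ML models; keyword counting and
# regexes stand in for them here.

import re

CLASS_KEYWORDS = {
    "invoice": ["invoice", "amount due", "vat"],
    "contract": ["contract", "hereby agree", "party"],
    "protocol": ["minutes", "attendees", "agenda"],
}

def classify_document(text: str) -> str:
    """Pick the class whose keywords appear most often (stand-in for ML)."""
    scores = {
        cls: sum(text.lower().count(kw) for kw in kws)
        for cls, kws in CLASS_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def extract_metadata(text: str) -> dict:
    """Very rough metadata extraction: an ISO date and a document number."""
    date = re.search(r"\d{4}-\d{2}-\d{2}", text)
    number = re.search(r"(?:No\.|#)\s*([\w-]+)", text)
    return {
        "date": date.group(0) if date else None,
        "number": number.group(1) if number else None,
    }

doc = "Invoice #A-17, date 2025-12-19. Amount due: 100 EUR plus VAT."
print(classify_document(doc))   # invoice
print(extract_metadata(doc))    # {'date': '2025-12-19', 'number': 'A-17'}
```

A real classifier would of course generalize far beyond fixed keyword lists, but the interface stays the same: text in, class and metadata out.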
And not surprisingly, when you integrate such machine learning tools into an archiving system, thus automating input management, the customers go crazy. Why do they go crazy? First, you speed up the import of documents, second, you make fewer errors while doing so, and third, you reduce manual effort, freeing people to do more meaningful things (like vibe coding perhaps?).
If you want to learn more about how exactly we do this with a product we develop and call enaio® kairos, look no further.
Very important: people are sensitive to sharing their data with any AI. So, whatever you do, do not forget to a) address this issue and b) if possible, offer feasible options of AI running on premises. More about that in the second use case.
The second use case stars Lt. Geordi La Forge (again). What I did not tell you is that for the first business case you actually do not even need LLMs. This means that you can run your AI solution economically, without having to offer your customer a portable nuclear reactor in the small print. But today the motto goes: no LLM, no AI.
So how do you use LLMs in a meaningful way? LLMs are great. I love them. They learn on publicly available data, so basically you can ask them many things. You can ask them how to bake a strawberry cheesecake, or to write a love letter to your secret crush in Pushkin or Taylor Swift style (depending on the age of your secret crush, I expect?); they can even “solve” differential equations (but so could Mathematica 20 years ago).
What a general purpose LLM, however, cannot do, is answer the following questions: “How many contracts do I need to extend this week? Where can I find the latest version of the SOP AA-23 for business trips and how much can I spend on taxis on a single trip? With which customer did I make the most revenue in the past quarter? Could you please compare these thirty contract versions and tell me the main differences?”
LLMs such as ChatGPT or Gemini cannot answer these questions simply because they don’t have this information. They were not trained on this dataset, because it is not publicly available. Unless you do keep this kind of information available on the public internet. Which you don’t. Right. Right? So, you need to provide this information to the LLM somehow.
Enter RAG. RAG stands for retrieval augmented generation, which is a fancy way of saying: give the LLM a sneak peek into your business data. This is done in a technically spectacular-sounding but very straightforward way of
a) repackaging all your business data as chunks in a vector database,
b) for each LLM prompt, first consulting a vector database to give you the top N hits corresponding to your prompt,
c) enriching the prompt by embedding relevant chunks from the vector database and providing this to the LLM, and finally
d) letting the LLM do its magic.
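The four steps above can be sketched in a few lines of Python. Everything here is a stand-in: the “embedding” is a plain bag-of-words vector, the “vector database” is an in-memory list, and the final LLM call is left out; a real setup would use an embedding model and a proper vector store.

```python
# Minimal RAG sketch. Bag-of-words vectors and an in-memory list stand in
# for a real embedding model and vector database; the LLM call is omitted.

from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# a) repackage business data as chunks in the "vector database"
chunks = [
    "SOP AA-23: taxi costs on a business trip are capped at 50 EUR.",
    "Contract C-11 with ACME expires on 2026-01-31.",
    "Q3 revenue report: top customer was ACME with 1.2M EUR.",
]
index = [(embed(c), c) for c in chunks]

def retrieve(prompt: str, n: int = 2) -> list[str]:
    # b) consult the vector database for the top-N hits
    q = embed(prompt)
    ranked = sorted(index, key=lambda e: cosine(q, e[0]), reverse=True)
    return [c for _, c in ranked[:n]]

def build_prompt(question: str) -> str:
    # c) enrich the prompt with the relevant chunks
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

# d) the enriched prompt would now be handed to the LLM for its magic
print(build_prompt("How much can I spend on taxis on a business trip?"))
```

The point of the sketch is the shape of the pipeline, not the scoring: swap `embed` for a real embedding model and `index` for a vector store, and the rest stays the same.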
Splendid. Now put all this information, every single bit of it, from, for example, your document management software (which stores all employees’ contracts, among other things), into a vector database, and ask “Could you please give me the salary of my boss?”… and you will get it! Works as intended. Yikes! What very few people understand is that doing this means drilling a huge access control hole into your organization. You simply cannot do this. What you need to do instead is somehow carry the access control of each and every source system over into the RAG solution. And this is not easily done.
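One possible shape of the fix, as a hedged sketch: keep each chunk’s access control list from the source system next to the chunk, and filter on the asking user before any ranking, so restricted content never reaches the prompt at all. The group lookup and all names here are hypothetical.

```python
# Sketch of carrying access control into RAG retrieval. Each chunk keeps
# the ACL of its source document; retrieval filters on the asking user
# *before* anything can reach the LLM. All names are illustrative.

chunks = [
    {"text": "Salary of the CEO: 1M EUR.", "acl": {"hr", "board"}},
    {"text": "Cafeteria menu for Friday: goulash.", "acl": {"everyone"}},
]

def user_groups(user: str) -> set[str]:
    # Stand-in for groups resolved from the source system (e.g. a directory).
    return {"dev", "everyone"} if user == "nikola" else {"hr", "everyone"}

def retrieve(user: str, query: str) -> list[str]:
    groups = user_groups(user)
    # Only chunks the user may read are candidates for ranking at all.
    allowed = [c["text"] for c in chunks if c["acl"] & groups]
    # (ranking by vector similarity omitted in this sketch)
    return allowed

print(retrieve("nikola", "salary of my boss"))
# ['Cafeteria menu for Friday: goulash.']
```

Filtering before retrieval (rather than asking the LLM to please not reveal secrets) is the design choice that matters: the model cannot leak what it never sees.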
Another thing: data in a document management system lives. It can be added, deleted, or modified. And this needs to be reflected in a vector database. To make the solution complete, you need (unfortunately) to add a sort of indexing service to the solution, which will re-index the vector database with each change in the underlying system. You do want your AI to give you the most current answer, don’t you?
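A minimal sketch of such an indexing service, assuming the source system emits change events carrying a document id and an action; the vector store is a plain dict standing in for a real vector database, and the “embedding” is just the raw text.

```python
# Sketch of an indexing service keeping a vector database in sync with
# the document management system. A real service would subscribe to
# change events and re-embed modified content; a dict stands in here.

vector_db: dict[str, str] = {}   # doc_id -> "embedding" (raw text stands in)

def on_change(event: dict) -> None:
    doc_id, action = event["doc_id"], event["action"]
    if action == "deleted":
        vector_db.pop(doc_id, None)        # stale vectors must go too
    else:                                  # "added" or "modified"
        vector_db[doc_id] = event["text"]  # re-embed and upsert

for e in [
    {"doc_id": "c-11", "action": "added", "text": "Contract v1"},
    {"doc_id": "c-11", "action": "modified", "text": "Contract v2"},
    {"doc_id": "old-1", "action": "added", "text": "Obsolete memo"},
    {"doc_id": "old-1", "action": "deleted"},
]:
    on_change(e)

print(vector_db)   # {'c-11': 'Contract v2'}
```

The deletion branch is the part people forget: without it, your assistant happily keeps answering from documents that no longer exist.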
If you want to learn more about how we did all this when we built our AI assistant enaio® lumee and integrated it with our document management software, simply ask me or look here.
Don’t forget about data privacy and security. Exposing your business data in a vector database may be assessed as a risk. Make sure that the vector database is separated from the generator (LLM) and can be run on premises or at least in a private cloud. That way you will minimize exposure and risk.
Is this where the future ends? Faster than light engines around the corner? Probably not. What will happen next are, in my opinion, two things.
Firstly, people will understand that AI, the way we use it today, is outrageously expensive. We cannot in all honesty start building nuclear reactors to power computing centers that allow us to build great memes. So, I expect that smaller, so-called domain specific LLMs, which can even run on local infrastructure (thank you, Moore’s law), will become predominant in the business context (not B2C!), substituting the very expensive usage of general purpose LLMs such as ChatGPT and Gemini.
For example, if I integrate Gemini into my contract management application used with document management software, I would be using a fraction of Gemini’s neurons to compare two contracts or check the validity of a contract. However, if I train a smaller LLM in a specific domain, I don’t need a large model with trillions of parameters and a power plant’s worth of energy to do the job.
Secondly, prompting will evolve from querying ("Please find some information for me!") to action. So, I expect that we will see prompts such as: “Please increase Nikola’s salary to one million euros per year, effective immediately”. In response, the LLM will create a sequence of actions in a document management system:
1. finding my personal file,
2. finding my contract,
3. creating a new version of the contract by selecting the adequate template,
4. filling the new salary and date fields,
5. saving the new version and
6. finally starting a workflow for someone sane to approve this.
At which point this person will probably counter said prompt with: "Please fire Nikola." This is what some people call agentic AI.
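The pattern can be sketched as follows: the LLM’s only output is a plan of tool calls, which the host application executes against the document management system. The plan here is hard-coded to show its shape, and all tool names are hypothetical.

```python
# Sketch of the agentic pattern: the LLM emits a plan of tool calls and
# the host application executes them. In a real system the plan would
# come from the model; here it is hard-coded. All tool names are made up.

def plan_salary_change(employee: str, new_salary: int) -> list[tuple]:
    return [
        ("find_personnel_file", employee),
        ("find_contract", employee),
        ("create_contract_version", "salary_change_template"),
        ("fill_fields", {"salary": new_salary, "effective": "immediately"}),
        ("save_version",),
        ("start_approval_workflow", "someone sane"),
    ]

executed = []

def execute(plan: list[tuple]) -> None:
    for step in plan:
        # A real agent would dispatch each call to the DMS API here,
        # and abort if the approval workflow rejects the change.
        executed.append(step[0])

execute(plan_salary_change("Nikola", 1_000_000))
print(executed[-1])   # start_approval_workflow
```

Note that the dangerous part is deliberately last: the plan ends in a human approval workflow, not in the change itself.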
So, this is the end, I hope you enjoyed the read. If you wish to give me some feedback, if you agree or disagree, feel free to contact me and I will be happy to discuss these topics with you, share my experiences of the past years with integrating AI into document management and show you how it really works (or doesn’t work). Take care and of course, live long and prosper.
Do you have any further questions? Your opportunity for first contact!