Vibing with the unpopular kids
TL;DR: AI coding agents reinforce and drift towards mainstream technologies because they perform best where training data and convention are densest. So, before using an AI coding agent in an unfamiliar ecosystem, get to know that ecosystem yourself. If you cannot tell whether the agent’s approach is idiomatic, it will confidently build the wrong thing faster than you can debug it.
Have you ever tried using one of the GPTs with a (subjectively) unpopular technology? I normally move between Python, web technology and C++, and until recently I had never really needed an AI coding agent for anything outside that mainstream. But we use XWiki at work, and I wanted to create a new macro that takes a wiki video, subtitle and chapter attachments, and displays them nicely on the page with clickable timestamp links. Not too complicated, you'd think - maybe 1–2 hours with Claude or Codex. In reality, it turned out to be a rather painful all-day experiment.
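To give a sense of how small the core logic really is: the "clickable timestamp links" part boils down to converting chapter stamps like "01:02:03" into seconds for a video seek URL. This is an illustrative, standalone sketch - the class and method names are hypothetical and not part of any XWiki API:

```java
// Illustrative helper only: converts chapter timestamps into seconds,
// the kind of tiny logic the macro needs for clickable seek links.
// Class/method names are hypothetical, not from XWiki.
public class Timestamps {

    /** Parse "HH:MM:SS" or "MM:SS" into total seconds. */
    public static int toSeconds(String stamp) {
        String[] parts = stamp.trim().split(":");
        int seconds = 0;
        for (String part : parts) {
            // Each colon-separated field shifts the running total by one
            // base-60 place: h -> h*60+m -> (h*60+m)*60+s.
            seconds = seconds * 60 + Integer.parseInt(part);
        }
        return seconds;
    }

    public static void main(String[] args) {
        System.out.println(toSeconds("01:02:03")); // 3723
        System.out.println(toSeconds("02:30"));    // 150
    }
}
```

The hard part of the project was never this kind of plain Java, as the rest of the story shows - it was everything around it.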
I haven't used Java for about 18 years. To be honest, I don't miss it. Sometimes I dream of writing OpenGL apps or applets - for those who remember them - in Java. XWiki is written in Java. I also have no knowledge about the internals of XWiki. I just use it, and honestly, I don't want to become an "XWiki internals person". This was supposed to be a small production-side improvement, not a new specialization. Choose your battles!
XWiki has a few "XWiki-native" technologies and conventions layered on top of Java: Velocity templates, Groovy scripting, its own XWiki syntax, XObjects/XClasses and XARs. Lots of X-es! Without the GPTs, I would never have considered taking this on as a side project, let alone with the goal of developing it beyond the prototype stage and deploying it in production. Overall, I was at a big disadvantage compared to my Python/web projects, where I have strong opinions about what constitutes good code. Here, I decided I didn't want to look at the code at all, and I couldn't really guide Claude on the best way to set up, code and structure things either. But, yea, let's stay positive - we'll figure it out together as we go along!
My plan was to throw a few example macros at Claude Code, tell it where the XWiki dev docs are - very good ones in retrospect - explain what I wanted, and it would just do the job. However, it turns out that even though Claude may outsmart me in plain Java quite a bit, it has little clue about XWiki internals, the aforementioned technologies, or developer best practices. It also did not seem too keen on looking any of that up on the internet, even when prompted.
My first mistake was not setting up the development environment properly. I wanted to use XWiki in Docker and iterate quickly. But I did not understand the extension deployment model. Claude worked quite hard: uploading the extension via API, driving the UI through Playwright MCP, or injecting it directly into the Docker container with docker cp. All of these sounded plausible. None of them were the right workflow. It turns out that you should create a local Maven repository and install and update the macro from there. In other words: the model did not need more Java knowledge. It needed XWiki ecosystem knowledge. So did I. I should have just read the documentation.
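For the record, the workflow I eventually landed on looks roughly like this: build the extension with `mvn install` so the artifact ends up in the local Maven repository (`~/.m2/repository`), then register that repository in `xwiki.properties` so the Extension Manager can install and upgrade the macro from it. The exact `id:type:url` syntax below is how I understand the XWiki docs - verify it there, and note that setting this property may override the default repository list:

```properties
# xwiki.properties: register the local Maven repository so the
# Extension Manager can install/upgrade the macro from it.
# (Syntax as I understand the XWiki docs; the path is an example.)
extension.repositories=local:maven:file:///root/.m2/repository
```

After each `mvn install`, the new version shows up in the Extension Manager for a normal install or upgrade - no API uploads, no Playwright, no `docker cp`.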
I had taken the time to create an extensive PRD for the task, so I expected it to be implemented correctly the first time, or at least after a few rounds of prompting - something I am used to with many trivial tasks, and even some not-so-trivial ones. But no. XWiki threw quite a few unexpected obstacles in our way, many of which I still do not understand. I just pasted the errors back and prayed that internet search or model intelligence would provide a solution. The problem was not one single big blocker. It was death by ecosystem details.
At one point I was so stuck on a problem with Claude that I basically started from the beginning. But this time I wanted to - excuse the language - get shit done. So I turned to "Get Shit Done" (GSD), a lightweight and powerful meta-prompting, context engineering and spec-driven development system. Obviously this was not the trivial task I thought it would be, so it just needed MORE PROMPTING and MORE TOKENS!
This helped, and I think this was mostly due to the dedicated research phase that GSD does before implementation. Even though this phase is optional, you should never skip it when working on tasks in what I would describe as an "unpopular" fringe domain, such as XWiki macros.
After all four phases, I had a working macro. I can't tell if it's good or secure code, but it does the job.
I expected Claude to be better at "figuring out the missing knowledge", but it still prefers to guess rather than check. While this certainly can be improved with better instructions or model changes, it means vibing code with software that is relatively uncommon is hard.
It may also mean that new programming paradigms, frameworks and idioms will be harder for AI tools to adopt until enough public examples exist. Humans can adopt a new framework over time by building taste, reading docs, joining communities and making mistakes. AI coding agents mostly follow the density of examples, documentation, Stack Overflow history, GitHub repos and convention. That creates a kind of ecosystem gravity.
Maybe this makes it less likely that we will see future projects like React, which turned web dev upside down back in the day, emerge in quite the same way. Or at least, it may make the adoption curve weirder. The mainstream gets even easier. The fringe gets comparatively harder.
Vibe coding is not yet for everyone, and it is definitely not equally effective everywhere. You need ecosystem understanding. In mainstream ecosystems, coding agents benefit from millions of examples and well-trodden conventions. In fringe ecosystems, they often lose the plot unless you provide the map.
That does not make AI coding useless. It changes what expertise means. The valuable skill is no longer only writing code line by line. It is knowing the ecosystem, being competent in your profession, recognizing bad architecture, forcing the agent to research before it implements, and having enough taste to reject plausible nonsense.
Just as having a camera does not make someone a photographer, a coding agent will not make someone a programmer. Professional photographers do not have an easy time in the market; the same will happen to programmers, and we will need to change our mindset since we have been in high demand for a long time.
Ultimately, it will come down to competence, taste, experience. Photographers with "the eye" will still be in demand. So will programmers with "the eye".