I write documentation for a living. Although my output is writing, my job is observing, listening and understanding. I can only write well because I have an intimate understanding of my readers' problems, anxieties and confusion. This determines what I write about, and how I write about it. This sort of curation can only come from a thinking, feeling human being.
I revise my local public transit guide every time I experience a foreign public transit system. I improve my writing by walking in my readers' shoes and experiencing their confusion. Empathy is the engine that powers my work.
Most of my information is carefully collected from a network of people I have a good relationship with, and from a large and trusting audience. It took me years to build the infrastructure to surface useful information. AI can only report what someone could be bothered to write down, but I actually go out in the real world and ask questions.
I have built tools to collect people's experience at the immigration office. I have had many conversations with lawyers and other experts. I have interviewed hundreds of my readers. I have put a lot of information on the internet for the first time. AI writing is only as good as the data it feeds on. I hunt for my own data.
People who think that AI can do these things have an almost insulting understanding of the jobs they are trying to replace.
The problem is that so many things have been monopolized or oligopolized by equally mediocre actors that quality ultimately no longer matters, because it's not like people have any options.
You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer, however, has an immediate and quantifiable effect on the budget.
Apply the same logic to software (have you seen how bad tech is lately?) or basically any vertical with a nontrivial barrier to entry, where someone can't just say "this sucks and I'm gonna build a better one in a weekend".
You are right. We are seeing a transition from the user as a customer to the user as a resource. It's almost like a cartel of shitty treatment.
I don't work for the public transit company; I introduce immigrants to Berlin's public transit. To answer the broader question, good documentation is one of the many little things that affect how you feel about a company. The BVG clearly cares about that, because their marketing department is famously competent. Good documentation also means that fewer people will queue at their service centre and waste an employee's time. Documentation is the cheaper form of customer service.
Besides, how people feel about the public transit company does matter, because their funding is partly a political question. No one will come to defend a much-hated, customer-hostile service.
Counterpoint - I think it’s going to become much easier for hobbyists and motivated small companies to make bigger projects. I expect to see more OSS, more competition, and eventually better quality-per-price (probably even better absolute quality at the “$0 / sell your data” tier).
Sure, the megacorps may start rotting from the inside out, but we already see a retrenchment to smaller private communities, and if more of the benefits of the big platforms trickle down, why wouldn’t that continue?
Nicbou, do you see AI as increasing your personal output? If it lets enthusiastic individuals get more leverage on good causes then I still have hope.
When it became cheaper to publish text did the quality go up?
When it became cheaper to make games did the quality go up?
When it became cheaper to mass produce X (sneakers, tshirts, anything really) did the quality go up?
It's a world made of an abundance of trash. The volume of low-quality production saturates the market and drowns out whatever high-quality things still remain. In such a world you're just better off reallocating your resources from production quality toward the shouting match of marketing, and trying to win by finding ways to be more visible than the others (SEO hacking and similar shenanigans).
When you drive the cost of doing something down to zero, you also effectively destroy the economy based around that thing. Take online publishing: basically nobody can make a living focusing on publishing news or articles; alternative revenue streams (ads) are needed. Same for games.
> When it became cheaper to … did the quality go up?
No, but the availability (more people can afford it) and diversity (different needs are met) increased. I would say that's a positive. Some of the expensive "legacy" things still exist and people pay for it (e.g. newspapers / professional journalism).
Of course low quality stuff increased by a lot and you're right, that leads to problems.
Well yeah, more people can afford shitty things that end up in the landfill two weeks later. To me this is the essence of "consumerism".
Rather than thinking in terms of making things cheaper for people to afford, we should think about how to produce wealthier people who can afford better than the cheapest of cheap crap.
But in the context of software, the landfill argument doesn't fit exactly well. Sure, someone can argue that storage on, say, GitHub might take more drives, but the scale is far cheaper than a landfill filled with physical things.
> Rather than thinking in terms of making things cheaper for people to afford, we should think about how to produce wealthier people who can afford better than the cheapest of cheap crap.
This problem runs deep and is systemic. I am genuinely not sure how one can do it, because what exactly would that wealth derive from? The growth of stock markets, which people call bubbles? The US debt, which has ballooned in recent years basically to fuel the consumerism spree itself? I am not sure.
If you were to make people wealthier, they might still buy the cheapest of cheap crap, just at 10x the volume in many cases (or at least that's what I've observed in the US, given how many people buy and sell very simple SaaS tools).
Re software and landfill: true to some extent, but there are still ramifications, as you pointed out: electricity demand and the hardware infrastructure to support it. Also, in the '80s, when the computer games market crashed, they literally dumped game cartridges in a hole in the desert!
Maybe my opinion is biased and I'm in a comfortable position to pass judgment, but I'd like to believe that more people would be more ethical and conscious about their material needs if things had more value and better quality, and if, instead of focusing on price as the primary value proposition, people could actually afford something other than the cheapest of things.
Wouldn't the economy also be in much better shape if more people could buy things such as handmade shoes or suits?
> Re software and landfill: true to some extent, but there are still ramifications, as you pointed out: electricity demand and the hardware infrastructure to support it. Also, in the '80s, when the computer games market crashed, they literally dumped game cartridges in a hole in the desert!
I hear ya, but I wonder how that reflects on open source software; say, the GP's case of something created by an LLM. Yes, I know it can have bugs, but it's free of cost, and you can own it, modify it (the source code is available), and run it on your own hardware.
There really isn't much of a difference in terms of hardware/electricity just because of these open source projects.
But there probably is some for the LLMs themselves, so it's a little tricky. Still, I feel like open source projects, and running far with ideas, get incentivized.
At least I feel it's one of the more acceptable uses of LLMs so far. It's better because you are open sourcing it for others to run. If someone doesn't want to use it, that's their freedom, but you built it for yourself, running with an idea that couldn't have existed if you didn't know the implementation details, or that would have taken months or years for zero gain, when now you can do it in less time.
It makes it significantly easier to see which ideas would be beneficial. And if AI is so worrying: if an idea is good and can be tested, it can always be rewritten or heavily documented by a human. In fact, there are even job posts for "slop janitors" on LinkedIn, lol.
> Wouldn't the economy also be in much better shape if more people could buy things such as handmade shoes or suits?
Yes, but it's also far from happening; it would require a real shake-up of everything, and it's just a dream right now. I agree with ya, but it's not gonna happen, or at least it's not something one person can change. Trust me, I tried.
This requires system-wide change that one person is very unlikely to bring about, but I wish you the best in your endeavour.
But what I can do, on a more individual level, is create open source projects via LLMs when there's a concept I don't know, and then open source them for the general public. If even one or two people find them useful, it's all good, and I am always experimenting.
When it became cheaper to publish text, for example with the invention of the printing press, the quality of what the average person had in his possession went up: you went from very few having hand-copied texts to Erasmus describing himself running into some border guard reading one of his books (in Latin). The absolute quality of texts published might have decreased a bit, but the quality per capita of what individuals owned went up.
When it became cheaper to mass produce sneakers, tshirts, and anything, the quality of the individual product probably did go down, but more people around the world were able to afford the product, which raised the standard of living for people in the aggregate. Now, if these products were absolute trash, life wouldn't make much sense, but there's a friction point in there between high quality and trash, where things are acceptable and affordable to the many. Making things cheaper isn't a net negative for human progress: hitting that friction point of acceptable affordability helps spread progress more democratically and raise the standard of living.
The question at hand is whether AI can more affordably produce acceptable technical writing, or if it's trash. My own experiences with AI make me think that it won't produce acceptable results, because you never know when AI is lying: catching those errors requires someone who might as well just write the documentation. But, if it could produce truthful technical writing affordably, that would not be a bad thing for humanity.
>When it became cheaper to x did the quality go up?
...yes?
It introduces a lower barrier to entry, so more low-quality things are also created, but it also increases the quality of the higher-tier. It's important to note that in FOSS, we (or at least... I) don't generally care who wrote the code, as long as it compiles and isn't malicious. This overlaps with the original discussion... If I were paying to read your posts, I'd expect them to be hand-written. If I'm paying for software, it better not be AI slop. If you're offering me something for free, I'm not really in a position to complain about the quality.
It's undeniable that, especially in software, cheaper costs and a lower barrier to get started will bring more great FOSS software. This is like one of the pillars of FOSS, right? That's how we got LetsEncrypt, OpenDNS, etc. It will also 100% bring more slop. Both can be true at the same time.
I'd say that those high-quality things that still exist do so despite the higher volume of junk, and they mostly exist because of other reasons and unique circumstances (individual pride, craftsmanship, people doing things as a hobby or without financial constraints, etc.).
In a landscape where the market is mostly filled with junk, any commercial product is essentially losing money by spending anything on "quality".
> but it also increases the quality of the higher-tier
I truly don't see this happening anymore. Maybe it did before?
If there's real competition, maybe this does happen. We don't have it and it'll never last in capitalism since one or a few companies will always win at some point.
If you're a higher-tier X, cheaper processes mean you'll just enjoy bigger profit margins and eventually decide to start the enshittification phase; you're a monopoly/oligopoly, so why not?
As for FOSS, well, we'll have more crappy AI generated apps that are full of vulnerabilities and will become unmaintainable. We already have hordes of garbage "contributions" to FOSS generated by these AI systems worsening the lives of maintainers.
Is that really higher quality? I reckon it's only higher quantity, with more potential to lower the quality of even higher-tier software.
I think for 'technical' writing, there is going to be some end-state crash.
What happens when the engineers who are left can't figure something out, and they start opening up manuals, and those are also all wrong and trash? The whole world grinds to a halt because nobody knows anything.
When was the last time that speed of development was the limiting factor? 15-20 years ago?
Nowadays the problem is that both technical and legal means are used to prevent adversarial interoperability. It doesn't matter if you (or AI) can write software faster if said software is unable to interface with the thing everyone else uses.
> Documentation is the cheaper form of customer service.
Thank you so much for saying this. Trying to convince anyone of the importance of documentation feels like an uphill battle. Glad to see that I'm not completely crazy.
> We are seeing a transition from the user as a customer to the user as a resource.
I'd argue that this started 30 years ago when automated phone trees started replacing the first line of workers and making users figure out how to navigate where they needed to in order to get the service they needed.
I can't remember if chat bots or "knowledge bases" came first, but that was the next step in the "figure it out yourself" attitude corporations adopted (under the guise of empowering users to "self help").
Then we started letting corporations use the "we're just too big to actually have humans deal with things" excuse (eg online moderation, or paid services with basically no support).
And all these companies look at each other to see who can lower the bar next and jump on the bandwagon.
It's one of my "favorite" rants, I guess.
The way I see this next era going is that it's basically going to become exclusively the users' responsibility to figure out how to talk to the bots to solve any issue they have.
> You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer however has an immediate and quantifiable effect on the budget.
Exactly. If the AI-made documentation is only 50% of the quality but can be produced for 10% of the price, well, we all know what the "smart" business move is.
"well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it."
First, I understand what you're saying and generally agree with it, in the sense that that is how the organization will "experience" it.
However, the answer to "will it lead to a noticeable drop in revenue" is actually yes. The problem is that it won't lead to a traceable drop in revenue. You may see the numbers go down. But the numbers don't come with labels why. You may go out and ask users why they are using your service less, but people are generally very terrible at explaining why they do anything, and few of them will be able to tell you "your documentation is just terrible and everything confuses me". They'll tell you a variety of cognitively available stories, like the place is dirty or crowded or loud or the vending machines are always broken, but they're terrible at identifying the real root causes.
This sort of thing is why not only is everything enshittifying, but even as the entire world enshittifies, everybody's metrics are going up up up. It takes leadership willing to go against the numbers a bit to say, yes, we will be better off in the long term if we provide quality documentation, yes, we will be better off in the long term if we use screws that don't rust after six months, yes, we will be better off in the long term if we don't take the cheapest bidder every single time for every single thing in our product but put a bit of extra money in the right place. Otherwise you just get enshittification-by-numbers until you eventually go under and get outcompeted and can't figure out why because all your numbers just kept going up.
That's one way to frame it. Another is: sometimes people are stuck in a situation where every option that comes to mind has repulsive consequences.
As always, some consequences are deemed more immediate, and others will seem more remote. And the incentives are often quite at odds between short-term and long-term expectations.
>this sucks and I'm gonna build a better one in a weekend
Hey, this is me looking at the world this morning. Bear with me, the bright new harmonious world should be there on Monday. ;)
Coding is like writing documentation for the computer to read. It is common to say that you should write documentation any idiot can understand, and compared to people, computers really are idiots that do exactly as you say with a complete lack of common sense. Computers understand nothing, so all the understanding has to come from the programmer, which is his actual job.
Just because LLMs can produce grammatically correct sentences doesn't mean they can write proper documentation. In the same way, just because they are able to produce code that compiles doesn't mean they can write the program the user needs.
I like to think of coding as gathering knowledge about some problem domain. Everything a team learns about the problem becomes encoded in the changes to the program source. The program is only a manifestation of human minds. Now, if programmers are largely replaced with LLMs, the team is no longer gathering the knowledge; there is no intelligent entity whose understanding of the problem increases with time, who can help drive future changes and make good business decisions.
Well said. I try to capture and express this same sentiment to others through the following expression:
“Technology needs soul”
I suppose this can be generalized to “__ needs soul”. Eg. Technical writing needs soul, User interfaces need soul, etc. We are seriously discounting the value we receive from embedding a level of humanity into the things we choose (or are forced) to experience.
Your ability to articulate yourself cleanly comes across in this post in a way that I feel AI is always reaching for and never quite achieves.
I completely agree that the ambition of AI proponents to replace workers is insulting. You hit the nail on the head by pointing out that we simply don't write everything down. And the more common-sense or well-known something is, the less likely it is to be written down, yet the more likely it might be needed by an AI to align itself properly.
The hard part is the slow, human work of noticing confusion, earning trust, asking the right follow-up questions, and realizing that what users say they need and what they actually struggle with are often different things.
See also: librarians, archivists, historians, film critics, doctors, lawyers, docents. The déformation professionnelle of our industry is to see the world in terms of information storage, processing, and retrieval. For these fields and many others, this is like confusing a nailgun for a roofer. It misses the essence of the work.
I like the cut o' your jib. The local public transit guide you write, is that for work or for your own knowledge base? I'm curious how you're organizing this while keeping the human touch.
I'm exploring ways to organize my Obsidian vault such that it can be shared with friends, but not the whole Internet (and its bots). I'm extracting value out of the curation I've done, but I'd like to share it with others.
Why shouldn't AI be able to sufficiently model all of this in the not-so-far future? Why shouldn't it, or at least the system that feeds it, have sufficient access to new data and sensors to collect information on its own?
Not from a moral perspective, of course, but as a technical possibility. And the Overton window has already shifted so far that the moral aspect might align soon, too.
IMO there is an entirely different problem, one that's just about never going to go away on its own, but that could easily be solved right now. And whichever AI company does so first instantly wipes out all competition:
Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
> Why shouldn't AI be able to sufficiently model all of this
I call it the banana bread problem.
To curate a list of the best cafés in your city, someone must eventually go out and try a few of them. A human being with taste honed by years of sensory experiences will have to order a coffee, sit down, appreciate the vibe, and taste the banana bread.
At some point, you need someone to go out in the world and feel things. A machine that cannot feel will never be a good curator of human experiences.
> Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
That's not sufficient, at least from the likes of OpenAI, because, realistically, that's a liability that would go away in bankruptcy. Companies aren't going to want to depend on it. People _might_ take, say, _Microsoft_ up on that, but Microsoft wouldn't offer it.
As a counterpoint, the very worst "documentation" (scare quotes intended) I've ever seen was when I worked at IBM. We were all required to participate in a corporate training about IBM's Watson coding assistant. (We weren't allowed to use external AIs in our work.)
As an exercise, one of my colleagues asked the coding assistant to write documentation for a Python source file I'd written for the QA team. This code implemented a concept of a "test suite", which was a CSV file listing a collection of "test sets". Each test set was a CSV file listing any number of individual tests.
The code was straightforward, easy to read and well-commented. There was an outer loop to read each line of the test suite and get the filename of a test set, and an inner loop to read each line of the test set and run the test.
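The shape was roughly this (a from-memory sketch, not the actual IBM code; the names are invented for illustration):

    import csv

    def run_test(test_row: list[str]) -> None:
        """Execute a single test described by one CSV row (details elided)."""
        ...

    def run_suite(suite_path: str) -> None:
        """Run every test set listed in a test suite CSV."""
        with open(suite_path, newline="") as suite_file:
            # Outer loop: each row of the suite names a test set file.
            for suite_row in csv.reader(suite_file):
                test_set_path = suite_row[0]
                with open(test_set_path, newline="") as set_file:
                    # Inner loop: each row of the test set is one test to run.
                    for test_row in csv.reader(set_file):
                        run_test(test_row)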
The coding assistant hallucinated away the nested loop and just described the outer loop as going through a test suite and running each test.
There were a number of small helper functions with docstrings and comments and type hints. (We type hinted everything and used mypy and other tools to enforce this.)
The assistant wrote its own "documentation" for each of these functions in this form:
"The 'foo' function takes a 'bar' parameter as input and returns a 'baz'"
Dude, anyone reading the code could have told you that!
All of this "documentation" was lumped together in a massive wall of text at the top of the source file. So:
When you're reading the docs, you're not reading the code.
When you're reading the code, you're not reading the docs.
Even worse, whenever someone updates the actual code and its internal documentation, they are unlikely to update the generated "documentation". So it started out bad and would get worse over time.
Note that this Python source file didn't implement an API where an external user might want a concise summary of each API function. It was an internal module where anyone working on it would go to the actual code to understand it.
The map is not the territory! Documentation is a helpful, curated simplification of the real thing. What to include and what to leave out depends on the audience.
But if you treat "write documentation" as a box-ticking exercise, a line that needs to turn green on your compliance report, then it can just be whatever.
Spot on! I think LLMs can help greatly in quickly putting that knowledge in writing, including reviewing written materials for hidden prerequisite assumptions that readers might not be aware of. They can also help newer hires write more clearly. LLMs are clearly useful for increasing productivity, but managers who think they are even close to ready to replace large sections of practically any workforce are delusional.
I don't write for a living, but I do consider communication / communicating a hobby of sorts. My observations - that perhaps you can confirm or refute - are:
- Most people don't communicate as thoroughly and completely - written and verbal - as they think they do. Very often there is what I call "assumptive communication". That is, the sender's ambiguity is resolved by the receiver making assumptions about what was REALLY meant. Often, filling in the blanks is easy to do - it's done all the time - but not always. The resolution doesn't change the fact that there was ambiguity at the root.
Next time you're communicating, listen carefully. Make note of how often the other person sends something that could be interpreted differently, how often you assume by using the default of "what they likely meant was..."
- That said, AI might not replace people like you. Or me? But it's an improvement for the majority of people. AI isn't perfect, hardly. But most people don't have the skills and/or willingness to communicate at a level AI can simulate. Improved communication is not easy. People generally want ease and comfort. AI is their answer. They believe you are replaceable because it replaces them, and they assume they're good communicators. Classic Dunning-Kruger.
p.s. One of my fave comms' heuristics is from Frank Luntz*:
"It's not what you say, it's what they hear." (<< edit was changing to "say" from "said".)
One of the keys to improved comms is to embrace that clarity and completeness are the sole responsibility of the sender, not the receiver. Some people don't want to hear that, and be accountable, especially when assumptive communication is a viable shortcut.
* Note: I'm not a fan of his politics, and perhaps he's not The Source of this heuristic, but read it first in his "Words That Work". The first chapter of "WTW" is evergreen comms gold.
LLMs are good at writing long pages of meaningless words. If you have a number of pages to turn in with your writing assignment and you've only written 3 sentences they will help you produce a low quality result that will pass the requirements.
Low-quality is relative. LLMs' low-quality is most people's above-average. The fact that the copy - either way - is likely to go through some sort of copy-by-committee process makes the case for LLMs even stronger (i.e., why waste your time?). Not always, but quite often.
"you are likely overestimating your own contributions at work"
Based on what? Your own zero-evidence speculation? How is this anything other than arrogant punting? For sure we know that the point was something other than how fast the author reads compared to an AI, so what are we left with here?
>you are likely overestimating your own contributions at work
That's the fallacy anyone will be pushed toward as soon as their individual worth is judged within an intrinsically collective endeavor.
People on the lowest incomes, who would not be able to integrate into society without direct social funds, will be seen as parasites by some who are wealthier, just as the ultra-rich will be considered parasites by less wealthy people.
As a writer, you know this makes it seem emotional rather than factual?
Anyway, I agree with what you are saying. I run a scientific blog that gets 250k-1M users per year, and AI has been terrible for article writing. I use AI for brainstorming and for title ideas (and it ends up being inspiration rather than copy-paste).
Funnily enough, in all of your comment, the only word I objected to was the one right before "insulting": "almost". Thinking that LLMs can replace humans outright expresses hubris and disdain in a way that I find particularly aggravating.
The kind of documentation no one reads, that is just here to please some manager, or meet some compliance requirement. These are, unfortunately, the most common kind I see, by volume. Usually, they are named something like QQF-FFT-44388-IssueD.doc and they are completely outdated with regard to the thing they document despite having seen several revisions, as evidenced by the inconsistent style.
Common features are:
- A glossary that describes terms that don't need describing, such as CPU or RAM, but not the ambiguous and domain-specific terms, of which there are many
- References to documents you don't have access to
- UML diagrams, not matching the code of course
- Signatures by people who left the project long ago and are nowhere to be seen
- A bunch of screenshots, all with different UIs taken at different stages of development, which would be of great value to archeologists
- Wildly inconsistent formatting, some people realize that Word has styles and can generate a table of contents, others don't, and few care
Of course, no one reads them, besides maybe a depressive QA manager.
The best tech writers I have worked with don’t merely document the product. They act as stand-ins for actual users and will flag all sorts of usability problems. They are invaluable. The best also know how to start with almost no engineering docs and to extract what they need from 1-1 sit down interviews with engineering SMEs. I don’t see AI doing either of those things well.
AI may never be able to replace the best tech writers, or even pretty good tech writers.
But today's AI might do better than the average tech writer. AI might be able to generate reasonably usable, if mediocre, technical documentation based on a halfheartedly updated wiki and the README files and comments scattered in the developers' code base. A lot of projects don't just have poor technical documentation, they have no technical documentation.
> They act as stand-ins for actual users and will flag all sorts of usability problems.
I think everyone on the team should get involved in this kind of feedback because raw first impressions on new content (which you can only experience once, and will be somewhat similar to impatient new users) is super valuable.
I remember as a dev flagging some tech marketing copy aimed at non-devs as confusing and being told by a manager not to give any more feedback like that because I wasn't in marketing... If your own team that's familiar with your product is a little confused, you can probably x10 that confusion for outside users, and multiply that again if a dev is confused by tech content aimed at non-devs.
I find it really common as well that you get non-tech people writing about tech topics for marketing and landing pages, and because they only have a surface-level understanding of the tech, the text becomes really vague, with little meaning.
And you'll get lots of devs and other people on the team agreeing in secret that, e.g., the product homepage content isn't great, but they're scared to say anything because they feel they have to stay inside their bubble and there isn't a culture of sharing feedback like that.
It's also true that most tech writers are bad. And companies aren't going to spend >$200k/year on a tech writer until they hit tens of millions in revenue. So AI fills the gap.
As a horror story: our docs team didn't understand that having correct installation links should be one of their top priorities. Obviously, if a potential customer can't install the product, they'll assume it's bs and try to find an alternative. That's so much more important than, e.g., grammar in the middle of some guide.
I take your point, but a good PM will have been inside the decision-making process and carry embedded assumptions about how things should work, so they'll miss things. An outside eye - whether it's QA, user-testing, (as here) the technical writer, or even asking someone from a different team to take an informal look - is an essential part of designing anything to be used by humans.
Realistically, PMs' incentives are often aligned elsewhere.
But even if a PM cares about UX, they are often not in a good position to spot problems with designs and flows they are closely involved in and intimately familiar with.
Having someone else with a special perspective can be very useful, even if their job provides other beneficial functions, too. Using this "resource" is the job of the PM.
In my experience, great tech writers quietly function as a kind of usability radar. They're often the first people to notice that a workflow is confusing.
> I don’t see AI doing either of those things well.
I think I agree, at least in the current state of AI, but can't quite put my finger on what exactly it's missing. I did have some limited success with getting Claude Code to go through tutorials (actually implementing each step as they go), and then having it iterate on the tutorial, but it's definitely not at the level of a human tech writer.
Would you be willing to take a stab at the competencies that a future AI agent would require to be excellent at this (or possibly never achieve)? I mean, TFA talks about "empathy" and emotions and feeling the pain, but I can't help feel that this wording is a bit too magical to be useful.
I don’t know that it can be well-defined. It might be asking something akin to “What makes something human?” For usability, one needs a sense of what defines “user pain” and what defines “reasonableness.” No product is perfect. They all have usability problems at some level. The best usability experts, and tech writers who do this well, have an intuition for user priorities and an ability to identify and differentiate large usability problems from small ones.
Thinking about this some more now, I can imagine a future in which we'll see more and more software for which AI agents are the main users.
For tech documentation, I suppose that AI agents would mainly benefit from Skills files managed as part of the tool's repo, and I absolutely do imagine future AI agents being set up (e.g. as part of their AGENTS.md) to propose PRs to these Skills as they use the tools. And I'm wondering whether AI agents might end up with different usability concerns and pain-points from those that we have.
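To make the mechanism concrete, an AGENTS.md could carry an instruction along these lines (entirely hypothetical; paths and wording invented):

    # AGENTS.md excerpt (hypothetical)
    When a step in a Skill file under docs/skills/ fails or is ambiguous:
    1. Record the exact command you ran and the error you hit.
    2. Propose a PR updating the relevant Skill file with the correction.
    3. Flag any step that assumes state not created by an earlier step.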
A good tech writer knows why something matters in context: who is using this under time pressure, what they're afraid of breaking, what happens if they get it wrong.
Current AI writing is slightly incoherent. It's subtle, but the high level flow/direction of the writing meanders so things will sometimes seem a bit non-sequitur or contradictory.
It has no sense of truth or value. You need to check what it wrote and you need to tell it what’s important to a human. It’ll give you the average, but misses the insight.
> but can't quite put my finger on what exactly it's missing.
We have to ask AI questions for it to do things. We have to probe it. A human knows things and will probe others, unprompted. It's why we are actually intelligent and the LLM is a word guesser.
Yeah. AI might replace tech writers (just like it might replace anyone), but it won't be a GOOD replacement. The companies with the best docs will absolutely still have tech writers, just with some AI assistance.
Tech writing seems especially vulnerable to people not really understanding the job (and then devaluing it, because "everybody can write" - which, no, if you'll excuse the slight self-promotion but it saves me repeating myself https://deborahwrites.com/blog/nobody-can-write/)
In my experience, tech writers often contribute to UX and testing (they're often the first user, and thus bug reporter). They're the ones who are going to notice when your API naming conventions are out of whack. They're also the ones writing the quickstart with sales & marketing impact. And then, yes, they're the ones bringing a deep understanding of structure and clarity.
I've tried AI for writing docs. It can be helpful at points, but my goodness I would not want to let anything an AI wrote out the door without heavy editing.
The best tech writers I've known have been more like anthropologists, bridging communication between product management, engineers, and users. With this perspective they often give feedback that makes the product better.
AI can help with synthesis once those insights exist, but it doesn't naturally occupy that liminal space between groups, or sense the cultural and organizational gaps
None of the ten or so staff tech writers I have worked closely with over the years have honestly been great. This has been disappointing.
Always had to contract external people to get stuff done really well. One was a bored CS university professor, another was a CTO in a struggling tiny startup who needed cash.
However, the writing is on the wall: AI will completely replace technical writers.
The technology is improving rapidly, and even now, with proper context, AI can write technical documentation extremely well. It can include clear examples (and only a very small number of technical writers know how to do that properly), and it can also anticipate and explain potential errors.
And here I am in 2026, and one of my goals for this year is to learn to write better, communicate more fluently, and convey my ideas in a more attractive way.
I do not think that these skills are so easily replaced; certainly the machine can do a lot, but if you acquire those skills yourself you shape your brain in a way that is definitely useful to you in many other aspects of life.
In my humble opinion, this is what we will be losing from people: the upscaling of skills will be lost for sure, and that human upscaling is the real loss.
It is such a challenge! As English is not my first language, I have to do some mental gymnastics to really convey my thoughts. "On Writing Well" is on my reading list; it is supposed to help.
The failure mode isn't just hallucinations, it's the absence of judgment: what not to document, what to warn about, what's still unstable, what users will actually misunderstand.
Two years ago, I asked ChatGPT to rewrite my resume. It looked fantastic at first sight; then, one week later, I re-read it and felt ashamed to have sent it to some prospective employers. It was full of cringe-inducing babble.
You see, for an LLM there are no hierarchies other than what it observed in its training, and even then, applying them in a different context may be tricky. It can describe hierarchies and relationships by mimicry, but it doesn't actually have a model of them.
Just an example: it may be able to generate text that recognizes that a PhD is a step above a Master's degree, but sometimes it won't be able to translate this fact (as opposed to the description of this fact) into the subtle differences in attention and emphasis we use in our written text to reflect those real-world hierarchies of value. It can repeat the fact to you, can even kind of generalize it, but it won't take a decision based on it.
It can, even more so now, produce a very close simulation of this, because the relative importance of things will have been semantically captured, and it is very good at capturing those subtle semantic relationships. But, in linguistic terms, it absolutely sucks at pragmatics.
An example: let's say that in one of your experiences you improved a model that detected malignancy in a certain kind of tumor image, improving its false negative rate to something like 0.001%, and then in the same experience you casually mention that you once tied the CEO's toddler's tennis shoes. Given your prompt to write a resume according to the usual resume-enhancement formulas, there's a big chance it will emphasize the irrelevant lace-tying activity in a ridiculously pompous manner, making it hierarchically equivalent to your model kung-fu accomplishments.
So in the end, you end up with some bizarre stuff that looks like:
"Tied our CEO's toddler tennis shoes, enabling her to raise 20M with minimal equity dilution in our Series B round"
Is it expected that LLMs will continue to improve over time? All the recent articles like this one just seem to describe this technology's faults as fixed and permanent. Basically saying "turn around and go no further". Honestly asking because their arguments seem to be dependent on improvement never happening and never overcoming any faults. It feels shortsighted.
I suspect a lot of folks are asking ChatGPT to summarize it…
I can’t imagine just letting an LLM write an app, server, or documentation package, wholesale and unsupervised, but have found them to be extremely helpful in editing and writing portions of a whole.
The one thing that could be a light in the darkness, is that publishers have already fired all their editors (nothing to do with AI), and the writing out there shows it. This means there’s the possibility that AI could bring back editing.
as a writer, i have found AI editing tools to be woefully unhelpful. they tend to focus on specific usage guidelines (think Strunk & White) and have little to offer for other, far more important aspects of writing.
i wrote a 5 page essay in November. the AI editor had sixty-something recommendations, and i accepted exactly one of them. it was a suggestion to hyphenate the adjectival phrase "25-year-old". i doubt that it had any measurable impact on the effectiveness of the essay.
thing is, i know all the elements of style. i know proper grammar and accepted orthographic conventions. i have read and followed many different style guides. i could best any English teacher at that game. when i violate the principles (and i do it often), i do so deliberately and intentionally. i spent a lot of time going through suggestions that would only genericize my writing. it was a huge waste of my time.
i asked a friend to read it and got some very excellent suggestions: remove a digressive paragraph, rephrase a few things for persuasive effect, and clarify a sentence. i took all of these suggestions, and the essay was markedly improved. i'm skeptical that an LLM will ever have such a grasp of the emotional and persuasive strength of a text to make recommendations like that.
That makes a lot of sense, but right now, the editing seems to be completely absent, and, I suspect, most writers aren’t at your level (I am sure that I’m not).
While I agree with the article, reducing the number of technical writers in the belief that their absence can be compensated for by AI is just the most recent step in a continuous process of degradation of technical documentation that has characterized the last three decades.
During the nineties of the last century I was still naive enough to believe that the great improvements in technology, i.e. the widespread availability of powerful word processors and of the Internet for extremely cheap distribution, would lead to an improvement in the quality of technical documentation and to easy access to it for everybody.
The reverse has happened: the quality of technical documentation has become worse and worse, with very rare exceptions, and access to much of what remains has become very restricted, either by requiring NDAs or by demanding very high prices (e.g. big annual fees for membership in some industry standards organization).
A likely explanation for the worse and worse technical documentation is a reduction in the number of professional technical writers.
It is very obvious that the current management of most big companies does not understand at all the value of competent technical writers and of good product documentation; not only for their customers and potential customers, but also for their internal R&D teams or customer support teams.
I have worked for several decades at many companies, very big and very small, on several continents, but unfortunately only at one of them was the importance of technical documentation well understood by the management, so the hardware and software developers had an adequate amount of time for writing documentation planned into their product development schedules. Despite the fact that the project schedules at that company appeared to allocate much more time for "non-productive tasks" like documentation than at other places, in reality it was there that the R&D projects were completed the fastest and with the least delay over the initially estimated completion time; one important factor was that every developer understood very well what must be done in the future, what had already been done, and why.
A lot of this applies to programming as well. And pretty much everything people are using GenAI for.
If you want to see how well you understand your program or system, try to write about it and teach someone how it works. Nature will show you how sloppy your thinking is.
I have not fired a technical writer, but writing documentation that understands and maintains users' focus is hard even with an LLM. I am trying to write documentation for my startup and it is harder than I expected, even with an LLM.
Kudos to all the technical writers who made my job as a software engineer easier.
If the business can no longer justify 5 engineers, then they might only have 1.
I've always said that we won't need fewer software developers with AI. It's just that each company will require fewer developers but there will be more companies.
IE:
2022: 100 companies employ 10,000 engineers
2026: 1000 companies employ 10,000 engineers
The net result is the same for employment. But because AI makes things that much more efficient, many businesses that weren't financially viable when they needed 100 engineers might become viable with 10 engineers + AI.
The person you're replying to is obviously and explicitly aware that that is another scenario, and the whole point of their comment was to argue against it and explain why they think something else is more likely. Merely restating the thing they were already arguing against adds nothing to the discussion.
Not really a contradiction, since the entire point of jobs and the economy at all is to serve the specific needs of humanity and not to maximize paper clip production. If we should be learning anything from the modern era it's something that should have always been obvious: the Luddites were not the bad guys. The truth is you've fallen for centuries old propaganda. Hopefully someday you'll evolve into someone who doesn't carry water for paperclip maximizers.
Zero labor cost should see the number of engineers trend towards infinity. The earlier comment suggested the opposite — that it would fall to just 1000 engineers. That would indicate that the cost of labor has skyrocketed.
What difference does that make? If the cost of an engineer is zero, they can work on all kinds of nonsensical things that will never be used/consumed. It doesn't really matter as it doesn't cost anything.
Five engineers could be turned into maybe two, but probably not less.
It's the 'bus factor' at play. If you still want human approvals on pull requests, then if one of those engineers goes on vacation or leaves the company, you're stuck with one engineer for a while.
If both leave then you're screwed.
If you're a small startup, then sure there are no rules and it's the wild west. One dev can run the world.
This was true even before LLMs. Development has always scaled very poorly with team size. A team of 20 heads is like at most twice as productive as a team of 5, and a team of 5 is marginally more productive than a team of 3.
Peak productivity has always been somewhere between 1-3 people, though if any one of those people can't or won't continue working for one reason or another, it's generally game over for the project. So you hire more.
This is why small software startups time and time again manage to run circles around organizations with much larger budgets. A 10-person game studio like Team Cherry can release smash hit after smash hit, while Ubisoft, with 170,000% of the personnel count, visibly flounders. Imagine doing that in hardware, like if you could just grab some buddies and start a business successfully competing with TSMC out of your garage. That's clearly not possible. But in software, it actually is.
The tech writer backlog is probably worse, because writing good documentation requires extensive experience with the software you're writing documentation about and there are four types of documentation you need to produce.
Yes. I have been building software and acting as tech lead for close to 30 years.
I am not even quite sure I know how to manage a team of more than two programmers right now. Opus 4.5, in the hands of someone who knows what they are doing, can develop software almost as fast as I can write specs and review code. And it's just plain better at writing code than 60% of my graduating class was back in the day. I have banned at least one person from ever writing a commit message or pull request again, because Claude will explain it better.
Now, most people don't know to squeeze that much productivity out of it, most corporate procurement would take 9 months to buy a bucket if it was raining money outside, and it's possible to turn your code into unmaintainable slop at warp speed. And Claude is better at writing code than it is at almost anything else, so the rest of y'all are safe for a while.
But if you think that tech writers, or translators, or software developers are the only people who are going to get hit by waves of downsizing, then you're not paying attention.
Even if the underlying AI tech stalls out hard and permanently in 2026, there's a wave of change coming, and we are not ready. Nothing in our society, economy or politics is ready to deal with what's coming. And that scares me a bit these days.
"And it's just plain better at writing code than 60% of my graduating class was back in the day".
Only because it has access to a vast amount of sample code to draw on and recombine parts of. Have you ever considered emerging technologies, like new languages or frameworks that may be much better suited to your area, but that are new, so there is no codebase for the LLM to draw from?
I'm starting to think about a risk of technological stagnation in many areas.
Nice read after the earlier post saying fire all your tech writers. Good post.
One thing to add is that the LLM doesn't know what it can't see. It just amplifies what is there. Assumed knowledge is quite common with developers and their own code. Or the more common "it works on my machine" because something is set outside of the code environment.
Sadly other fields are experiencing the same issue of someone outside their field saying AI can straight up replace them.
I'm on the engineering side. We are in the same boat.
Writers become more productive = fewer writers needed. Not zero, but fewer.
That's the current step. Now, if Cursor's promise of completely automating multi-week system work comes true, all the internal docs become AI-driven.
So the only exception is external docs. But… if all software is written by machine, there are no readers.
This is obviously a vector, not the current state :( Very dark and gloomy.
"Productivity gains are real when you understand that augmentation is better than replacing humans..." Isn't this where the job losses happen? For example, previously you needed 5 tech writers but now you only need 4 to do the same work. Hopefully it just means that the 5th person finds more work to do, but it isn't clear to me that Jevons paradox kicks in for all cases.
I agree with the core concern, but I think the right model is smaller, not zero. One or two strong technical writers using AI as a leverage tool can easily outperform a large writing team or pure AI output. The value is still in judgment, context, and asking the right questions. AI just accelerates the mechanics.
I think using AI for tech documentation is great for people who don't really give a shit about their tech documentation. If you were going to half-ass it anyway, you can save a lot of money half-assing it with AI.
First, we've fallen into a nomenclature trap, as so-called "AI" has nothing to do with "intelligence." Even its creators admit this, hence the name "AGI," since the appropriate acronym has already been used.
But when we use the "AI" acronym, our brains still register the "intelligence" attribute and tend to perceive LLMs as more powerful than they actually are.
Current models are like trained parrots that can draw colored blocks and insert them into the appropriate slots. Sure, much faster and with incomparably more data. But they're still parrots.
This story and the discussions remind me of reports and articles about the first computers. People were so impressed by the speed of their mathematical calculations that they called them "electronic brains" and considered, even feared, "robot intelligence."
Now we're so impressed by the speed of pattern matching that we call it "artificial intelligence," and here we are again.
It’s not so much that AI is replacing “tech writers”; with all due respect to the individuals in those roles, it was never a good title to identify as.
Technical writing is part of the job of software engineering. Just like “tester” or “DBA”, it was always going to go the way of the dodo.
If you’re a technical writer, now’s the time to reinvent yourself.
The specialisations will always exist. A good software engineer can't replace a good tester, DBA, or writer. There are specific extra skills necessary for those roles. We may not need those full skills in every environment (most companies will be just fine without a DBA), but they sure are not going away globally.
You're going to get some text out of a typical engineer, but the writing quality, flow, and fit for the given purpose is not going to come close to someone who does it every day.
> Technical writing is part of the job of software engineering.
Where I work we have professional technical writers and the quality vs your typical SW engineer is night and day. Maybe you got lucky with the rare SW engineer that can technical write.
I remember the days when every large concern employed technical writers and didn't expect us programmers and engineers to write for the end users. But that stopped decades ago in most places at least as far as in house applications are concerned, long before AI could be used as an excuse for firing technical writers.
I will share my experience; hopefully it answers some questions for tech writers.
I was a terrible writer, but we had to write good docs and make it easy for our customers to integrate with our products. So I prepared the context for our tech writers, and they created nice documentation pages.
The cycle was (reasonably one week, depending on the tech writers' workload):
1. prepare context
2. create a ticket for the tech writers, wait until they respond
3. discuss messaging over the call
4. couple days later I get first draft
5. iterate on draft, then finally publish it
Today it's different:
1. I prepare all the context and style guide, then feed them into LLM.
1.1. context is extracted directly from code by coding agents
2. I proofread it, and in 97% of cases accept it, because it follows the style guide and mostly transforms my context correctly into customer-consumable content
3. Done. Less than 20 minutes
Tech writers were doing an amazing job, of course, but I can get 90-95% of the quality in 1% of the time spent on that work.
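Mechanically, the new loop looks something like this. A sketch only: extract_context and llm are hypothetical stand-ins for whatever coding agent and model API you use; the structure is the point, not the names.

    from pathlib import Path

    def extract_context(repo: Path) -> str:
        """Hypothetical stand-in: in practice a coding agent walks the
        repo and summarizes the API surface. Stubbed here."""
        return "\n".join(p.name for p in repo.glob("**/*.py"))

    def llm(prompt: str) -> str:
        """Hypothetical stand-in for your model API of choice."""
        raise NotImplementedError

    def build_doc_draft(repo: Path, style_guide: str) -> str:
        # 1. Context is extracted directly from code (by a coding agent).
        context = extract_context(repo)
        # 2. Feed context + style guide to the LLM.
        prompt = (
            f"Style guide:\n{style_guide}\n\n"
            f"Code context:\n{context}\n\n"
            "Write customer-facing integration docs."
        )
        # 3. A human proofreads the draft before publishing.
        return llm(prompt)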
Your docs are probably read many more times than they are written. It might be cheaper and quicker to produce them at 90% quality, but surely the important metric is how much time it saves or costs your readers?
Someone has to turn off their brain completely and just follow the instructions as-is. Then log the locations where the documentation wasn't clear enough or assumed some knowledge that wasn't given in the docs.
Meh. A bit too touchy-feely for my taste, and not much in the way of good arguments. Some of the things touched on in the article are either extreme romanticisations of the craft or rather naive takes (docs are product truth? Really?!?! That hasn't been the case in ages, even with docs for multi-billion dollar solutions written by highly paid, grass-fed, you-won't-believe-they're-not-human writers!)...
The parts about hallucinations and processes are also a bit dated. We're either at, or very close to, the point where "agentic" stuff works in a "GAN" kind of way - "produce docs" -> read docs and try to reproduce -> resolve conflicts -> loop back - which will "solve" both hallucinations and processes, at least at the quality of human-written docs. My bet is actually better in some places. Bitter lesson and all that. (At least for the 80% of projects where current human-written docs are horrendous. YMMV. Artisan projects not included.)
What I do agree with is that you'll still want someone to hold accountable. But that's just normal business. This has been the case for integrators / 3rd party providers since forever. Every project requiring 3rd party people still had internal folks that were held accountable when things didn't work out. But, you probably won't need 10 people writing docs. You can hold accountable the few that remain.
I love AI and use it daily, but I still run into hallucinations, even with CoT/thinking models. I don't think hallucinations are as bad as people make them out to be, but I've been using AI since GPT-3, so I'm hyper-aware.
Yea, I think people underestimate this. Yesterday I was writing an Obsidian plugin using the latest and most powerful Gemini model, and I wanted it to use the new keychain in Obsidian to retrieve values for my plugin. Despite reading the docs first at my request, it still used a non-existent method (retrieveSecret) to get the individual secret value. When it ran into an error, instead of checking its assumptions, it assumed that the method wasn't defined in the interface, so it wrote an obsidian.shim.ts file that defined a retrieveSecret interface. The plugin compiled but obviously failed, because no implementation of that method exists. When it understood it was supposed to use getSecret instead, it ended up updating the shim instead of getting rid of it entirely. Add that up over thousands of sessions/changes (like the one Cursor has shared on letting the agent run until it generated 3M LOC for a browser) and it's likely that codebases will be polluted with tiny papercuts stemming from LLM hallucinations.
With every job replaced by AI, the best people will be doing a better job than the AI, and it'll be very frustrating to be replaced by people who can't tell the difference.
Why should I hire a dedicated writer if I have people with a better understanding of the system? Also worth noting that, as in any profession, most writers are... mediocre. Especially when you hire someone on contract. I've had mostly bad experiences with them in the past. They happily charge $1000 for a few pages of garbage that is not even LLM-quality. No creativity, just pumping out words.
I can chip in like $20 to pay some "good writer" that "observes, listens and understands" to write documentation on something, and compare it with an LLM-made one.
"Write a manual for air travel for someone who never flew. Cover topics like buying a ticket, preparing for travel, getting to airport, doing things in the airport, etc"
Let's compare!
> Why should I hire a dedicated writer if I have people with a better understanding of the system?
Many engineers are terrible at documentation, not just because they find it boring or cannot put it into words (that's the part an LLM could actually help with) but because they cannot tell what to document, what is unneeded detail, how best to address the target audience (or what is the profile of the target audience to begin with; something you can tell an LLM but which it cannot find on its own), etc, etc. The Fine Article goes into these nuances; it's the whole point of it.
> "Write a manual for air travel for someone who never flew. Cover topics like buying a ticket, preparing for travel, getting to airport, doing things in the airport, etc"
Air travel is a well-known thing, surely different from your bespoke product.
are you talking about the hashes (##, ###) etc in the subheadings? I think that's an intentional design thing, a bit of a nod to the back row, if you will.
There's another HN thread specifically asking people for links to their personal websites. I suspect an accidental typing-in-the-wrong-reply-box issue.
I don't think I've ever seen documentation from tech writers that was worth reading: if a tech writer can read code and understand it, why are they making half or less of what they would as an engineer? The post complains about AI making things up in subtle ways, but I've seen exactly the same thing happen with tech writers hired to document code: they documented what they thought should happen instead of what actually happened.
There are plenty of people who can read code who don't work as devs. You could ask the same about testers, ops, sysadmins, technical support, some of the more technical product managers etc. These roles all have value, and there are people who enjoy them.
Worth noting that the blog post isn't just about documenting code. There's a LOT more to tech writing than just that niche. I still remember the guy whose job was writing user manuals for large ship controls, as a particularly interesting example of where the profession can take you.
> they documented what they thought should happen instead of what actually happened.
The other way around. For example the Python C documentation is full of errors and omissions where engineers described what they thought should happen. There is a documentation project that describes what actually happens (look in the index for "Documentation Lacunae"): https://pythonextensionpatterns.readthedocs.io/en/latest/ind...
Yeah, but almost everyone wants money. You can see this by looking at what projects have the best documentation: they're all things like the man-pages project where the contributors aren't doing it as a job when they could be working a more profitable profession instead.
While I do appreciate man pages, I don't think they are something I would consider to be "the best documentation". Many of the authors of them are engineers, by the way.
A tech writer isn't a class of person. "Tech writer" is a role or assignment. You can be an engineer working as a tech writer.
Also, the primary task of a tech writer isn't to document code. They're supposed to write tutorials, user guides, how to guides, explanations, manuals, books, etc.
I'm currently in the middle of restructuring our website. 95% of the work is being done by Codex: content writing, design work, implementation work, etc. It's still a lot of work for me, because I am critical about things like wording/phrasing and about not hallucinating things we don't actually do. But it's editorial work, not writing work or programming work, and it's doing a pretty great job. Having a static website with a site generator means I can make lots of changes quickly via agentic coding.
My advice to tech writers would be to get really good at directing and orchestrating AI tools to do the heavy lifting of producing documentation. If you are stuck using content management systems or word processors, consider adopting a more code-centric workflow; the AI tools can work with those a lot better. You can't afford to be doing things manually that an AI does faster and better. Your value is in making sure the right documentation gets written and produced correctly, and in correcting things that need correcting/perfecting. It's not in doing everything manually; you need to cherry-pick where your skills still add value.
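One concrete payoff of the code-centric route: the docs can be checked in CI like code. A trivial sketch (an illustration, not a complete toolchain) that scans Markdown pages for relative links pointing at files that no longer exist:

    import re
    from pathlib import Path

    LINK = re.compile(r"\[[^\]]*\]\(([^)#]+)")  # [text](target) -> capture target

    def broken_links(docs_root: str) -> list[str]:
        problems = []
        for page in Path(docs_root).rglob("*.md"):
            for target in LINK.findall(page.read_text()):
                if target.startswith(("http://", "https://", "mailto:")):
                    continue  # only check local, relative links here
                if not (page.parent / target).exists():
                    problems.append(f"{page}: {target}")
        return problems

    if __name__ == "__main__":
        for problem in broken_links("docs"):
            print(problem)

Run something like this on every commit and stale links never reach readers.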
Another bit of insight is that a lot of technical documentation now has AIs as the main consumer. A friend of mine who runs a small SaaS has been complaining that nobody actually reads his documentation (which is pretty decent); they rely on LLMs to read it for them. The more documentation you have, the less people will read all of it. Or any of it.
But you still need documentation. It's easier than ever to produce it. The quality standards for that documentation are high and increasing. There are very few excuses for not having great documentation.
When it became cheaper to make games, did the quality go up?
When it became cheaper to mass-produce X (sneakers, t-shirts, anything really), did the quality go up?
It's a world made of an abundance of trash. The volume of low-quality production saturates the market and drowns out whatever high-quality things remain. In such a world you're better off reallocating your resources from production quality toward the shouting match of marketing, and trying to win by finding ways to be more visible than the others (SEO hacking and similar shenanigans).
When you drive down the cost of doing something to zero, you also effectively destroy the economy based around that thing. Take online print: basically nobody can make a living focusing on publishing news or articles; alternative revenue streams (ads) are needed. Same for games.
No, but the availability (more people can afford it) and diversity (different needs are met) increased. I would say that's a positive. Some of the expensive "legacy" things still exist and people pay for them (e.g. newspapers / professional journalism).
Of course low quality stuff increased by a lot and you're right, that leads to problems.
Rather than thinking in terms of making things cheaper for people to afford, we should think about how to produce wealthier people who can afford better than the cheapest of cheap crap.
> Rather than thinking in terms of making things cheaper for people to afford, we should think about how to produce wealthier people who can afford better than the cheapest of cheap crap.
This problem runs deep and is systemic. I am genuinely not sure how one can do it, because the basis of that wealth would derive from what, exactly? The growth of stock markets that people call bubbles, or the ballooning US debt that basically fuels the consumerism spree itself? I am not sure.
If you were to make people wealthier, they might still buy the cheapest of cheap crap, just at 10x the magnitude in many cases (or at least that's what I've observed in the US, given how many people buy and sell very simple SaaS tools).
Maybe my opinion is just biased and I'm in a comfortable position to pass judgment, but I'd like to believe that more people would be more ethical and conscious about their material needs if things had more value and better quality, and if, instead of focusing on price as the primary value proposition, people could actually afford to buy something other than the cheapest of things.
Wouldn't the economy also be in much better shape if more people could buy things such as handmade shoes or suits?
I hear ya, but I wonder how that reflects on open-source software created by an LLM, which was the GP's point. Yes, I know it can have bugs, but it's free of cost, you own it, you can modify it because the source is available, and you can run it on your own hardware.
There really isn't much of a difference in terms of hardware/electricity costs just because of these open-source projects.
There probably is some for LLMs, so it's a little tricky, but I feel like open-source projects and running far with ideas get incentivized.
At least I feel it's one of the more acceptable uses of LLMs so far. It's better because you are open-sourcing it for others to run. If someone doesn't want to use it, that's their freedom, but you built it for yourself, running with an idea that couldn't have existed if you didn't know the implementation details, or that would have taken months or years for zero gain, when now you can do it in far less time.
It makes it significantly easier to see which ideas are beneficial, and if AI is so worrying: a good idea that can be tested can always be rewritten or documented heavily by a human. In fact, there are even job posts for "slop janitors" on LinkedIn, lol.
> Wouldn't the economy also be in much better shape if more people could buy things such as handmade shoes or suits?
Yes, but it's also far from happening; it would require a real shake-up of everything, and it's just a dream right now. I agree with ya, but it's not gonna happen, or at least it's not something one person can change. Trust me, I tried.
This requires system-wide change that one person is very unlikely to bring about, but I wish you the best in your endeavour.
But what I can do, at a more individual level, is create open-source projects via LLMs when there's a concept I don't know, and then open-source them for the general public. If even one or two people find them useful, it's all good, and I am always experimenting.
When it became cheaper to mass-produce sneakers, t-shirts, and everything else, the quality of the individual product probably did go down, but more people around the world were able to afford the product, which raised the standard of living in the aggregate. Now, if these products were absolute trash, life wouldn't make much sense, but there's a friction point in there between high quality and trash, where things are acceptable and affordable to the many. Making things cheaper isn't a net negative for human progress: hitting that friction point of acceptable affordability helps spread progress more democratically and raise the standard of living.
The question at hand is whether AI can more affordably produce acceptable technical writing, or if it's trash. My own experiences with AI make me think that it won't produce acceptable results, because you never know when AI is lying: catching those errors requires someone who might as well just write the documentation. But, if it could produce truthful technical writing affordably, that would not be a bad thing for humanity.
It introduces a lower barrier to entry, so more low-quality things are created, but it also increases the quality of the higher tier. It's important to note that in FOSS, we (or at least... I) don't generally care who wrote the code, as long as it compiles and isn't malicious. This overlaps with the original discussion... If I was paying you to read your posts, I'd expect them to be hand-written. If I'm paying for software, it had better not be AI slop. If you're offering me something for free, I'm not really in a position to complain about the quality.
It's undeniable that, especially in software, cheaper costs and a lower barrier to get started will bring more great FOSS software. This is like one of the pillars of FOSS, right? That's how we got LetsEncrypt, OpenDNS, etc. It will also 100% bring more slop. Both can be true at the same time.
In a landscape where the market is mostly filled with junk, any commercial product that spends anything on "quality" is essentially losing money.
I truly don't see this happening anymore. Maybe it did before?
If there's real competition, maybe this does happen. We don't have it and it'll never last in capitalism since one or a few companies will always win at some point.
If you're a higher-tier X, cheaper processes mean you'll just enjoy bigger profit margins and eventually decide to start the enshittification phase, since you're a monopoly/oligopoly, so why not?
As for FOSS, well, we'll have more crappy AI generated apps that are full of vulnerabilities and will become unmaintainable. We already have hordes of garbage "contributions" to FOSS generated by these AI systems worsening the lives of maintainers.
Is that really higher quality? I reckon it's only higher quantity, with more potential to lower the quality of even higher-tier software.
What happens when the engineers who are left can't figure something out, so they open the manuals, and those are all wrong and trash too? Then the whole world grinds to a halt because nobody knows anything.
Nowadays the problem is that both technical and legal means are used to prevent adversarial interoperability. It doesn't matter if you (or AI) can write software faster if said software is unable to interface with the thing everyone else uses.
Thank you so much for saying this. Trying to convince anyone of the importance of documentation feels like an uphill battle. Glad to see that I'm not completely crazy.
I'd argue that this started 30 years ago when automated phone trees started replacing the first line of workers and making users figure out how to navigate where they needed to in order to get the service they needed.
I can't remember if chat bots or "knowledge bases" came first, but that was the next step in the "figure it out yourself" attitude corporations adopted (under the guise of empowering users to "self help").
Then we started letting corporations use the "we're just too big to actually have humans deal with things" excuse (eg online moderation, or paid services with basically no support).
And all these companies look at each other to see who can lower the bar next and jump on the bandwagon.
It's one of my "favorite" rants, I guess.
The way I see this next era going is that it's basically going to become exclusively the users' responsibility to figure out how to talk to the bots to solve any issue they have.
Thank you. I love it when someone poetically captures a feeling I’ve been having so succinctly.
It’s almost like they’re a professional writer…
I have exactly 1 guess but am waiting to say it.
Which means I replied to a bot.
I am officially retiring from social media.
Exactly. If the AI-made documentation is only 50% of the quality but can be produced for 10% of the price, well, we all know what the "smart" business move is.
AI-made documentation has 0% of the quality.
As the OP pointed out, AI can only document things that somebody already wrote down. That's no documentation at all.
First, I understand what you're saying and generally agree with it, in the sense that that is how the organization will "experience" it.
However, the answer to "will it lead to a noticeable drop in revenue" is actually yes. The problem is that it won't lead to a traceable drop in revenue. You may see the numbers go down. But the numbers don't come with labels why. You may go out and ask users why they are using your service less, but people are generally very terrible at explaining why they do anything, and few of them will be able to tell you "your documentation is just terrible and everything confuses me". They'll tell you a variety of cognitively available stories, like the place is dirty or crowded or loud or the vending machines are always broken, but they're terrible at identifying the real root causes.
This sort of thing is why not only is everything enshittifying, but even as the entire world enshittifies, everybody's metrics are going up up up. It takes leadership willing to go against the numbers a bit to say, yes, we will be better off in the long term if we provide quality documentation, yes, we will be better off in the long term if we use screws that don't rust after six months, yes, we will be better off in the long term if we don't take the cheapest bidder every single time for every single thing in our product but put a bit of extra money in the right place. Otherwise you just get enshittification-by-numbers until you eventually go under and get outcompeted and can't figure out why because all your numbers just kept going up.
That's one way to frame it. Another is that sometimes people are stuck in a situation where all the options that come to mind have repulsive consequences.
As always, some consequences are deemed more immediate and others seem more remote, and the incentives for short-term and long-term expectations are often quite at odds.
>this sucks and I'm gonna build a better one in a weekend
Hey, this is me looking at the world this morning. Bear with me, the bright new harmonious world should be there on Monday. ;)
Coding is like writing documentation for the computer to read. It is common to say that you should write documentation any idiot can understand, and compared to people, computers really are idiots that do exactly as you say with a complete lack of common sense. Computers understand nothing, so all the understanding has to come from the programmer, which is his actual job.
Just because LLMs can produce grammatically correct sentences doesn't mean they can write proper documentation. In the same way, just because they are able to produce code that compiles doesn't mean they can write the program the user needs.
“Technology needs soul”
I suppose this can be generalized to “__ needs soul”. Eg. Technical writing needs soul, User interfaces need soul, etc. We are seriously discounting the value we receive from embedding a level of humanity into the things we choose (or are forced) to experience.
I completely agree that the ambition of AI proponents to replace workers is insulting. You hit the nail on the head in pointing out that we simply don't write everything down. And the more common-sense or well-known something is, the less likely it is to be written down, yet the more likely it is to be needed by an AI to align itself properly.
Nicely written (which, I guess, is sort of the point).
I'm exploring ways to organize my Obsidian vault so that it can be shared with friends, but not with the whole Internet (and its bots). I'm extracting value out of the curation I've done, but I'd like to share it with others.
Not from a moral perspective, of course, but as a technical possibility. And the Overton window has already shifted so far that the moral aspect might align soon, too.
IMO there is an entirely different problem, one that's not going to go away just about ever, but that could be solved easily right now. And whatever AI company does it first instantly wipes out all competition:
Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
You know, just like the human it'd replace.
I call it the banana bread problem.
To curate a list of the best cafés in your city, someone must eventually go out and try a few of them. A human being with taste honed by years of sensory experiences will have to order a coffee, sit down, appreciate the vibe, and taste the banana bread.
At some point, you need someone to go out in the world and feel things. A machine that cannot feel will never be a good curator of human experiences.
That's not sufficient, at least from the likes of OpenAI, because, realistically, that's a liability that would go away in bankruptcy. Companies aren't going to want to depend on it. People _might_ take, say, _Microsoft_ up on that, but Microsoft wouldn't offer it.
See Duolingo :)
You may enjoy this story about her work:
https://www.folklore.org/Inside_Macintosh.html
As a counterpoint, the very worst "documentation" (scare quotes intended) I've ever seen was when I worked at IBM. We were all required to participate in a corporate training about IBM's Watson coding assistant. (We weren't allowed to use external AIs in our work.)
As an exercise, one of my colleagues asked the coding assistant to write documentation for a Python source file I'd written for the QA team. This code implemented a concept of a "test suite", which was a CSV file listing a collection of "test sets". Each test set was a CSV file listing any number of individual tests.
The code was straightforward, easy to read and well-commented. There was an outer loop to read each line of the test suite and get the filename of a test set, and an inner loop to read each line of the test set and run the test.
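The shape of it was roughly this (a from-memory sketch; the names and CSV columns are illustrative, not the actual code):

    # A test suite CSV lists test-set files; each test-set CSV lists tests.
    import csv

    def run_test(row: dict[str, str]) -> None:
        print(f"running {row['name']}")  # stand-in for the real test runner

    def run_suite(suite_path: str) -> None:
        with open(suite_path, newline="") as suite:
            for suite_row in csv.DictReader(suite):            # outer loop: one test set per line
                with open(suite_row["test_set"], newline="") as test_set:
                    for test_row in csv.DictReader(test_set):  # inner loop: one test per line
                        run_test(test_row)

    run_suite("test_suite.csv")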
The coding assistant hallucinated away the nested loop and just described the outer loop as going through a test suite and running each test.
There were a number of small helper functions with docstrings and comments and type hints. (We type hinted everything and used mypy and other tools to enforce this.)
The assistant wrote its own "documentation" for each of these functions in this form:
"The 'foo' function takes a 'bar' parameter as input and returns a 'baz'"
Dude, anyone reading the code could have told you that!
All of this "documentation" was lumped together in a massive wall of text at the top of the source file. So:
When you're reading the docs, you're not reading the code.
When you're reading the code, you're not reading the docs.
Even worse, whenever someone updates the actual code and its internal documentation, they are unlikely to update the generated "documentation". So it started out bad and would get worse over time.
Note that this Python source file didn't implement an API where an external user might want a concise summary of each API function. It was an internal module where anyone working on it would go to the actual code to understand it.
But if you treat "write documentation" as a box-ticking exercise, a line that needs to turn green on your compliance report, then it can just be whatever.
Nonetheless, I make my living from that work. If you are correct, there's a fair bit of money on the table for you.
- Most people don't communicate as thoroughly and completely, in writing and verbally, as they think they do. Very often there is what I call "assumptive communication": the sender's ambiguity is resolved by the receiver making assumptions about what was REALLY meant. Often, filling in the blanks is easy to do, as it's done all the time, but not always. The resolution doesn't change the fact that there was ambiguity at the root.
Next time you're communicating, listen carefully. Note how often the other person says something that could be interpreted differently, and how often you resolve it by defaulting to "what they likely meant was..."
- That said, AI might not replace people like you. Or me? But it's an improvement for the majority of people. AI isn't perfect, hardly. But most people don't have the skills and/or willingness to communicate at the level an AI can simulate. Improved communication is not easy. People generally want ease and comfort. AI is their answer. They believe you are replaceable because it replaces them, and they assume they're good communicators. Classic Dunning-Kruger.
p.s. One of my fave comms' heuristics is from Frank Luntz*:
"It's not what you say, it's what they hear." (<< edit was changing to "say" from "said".)
One of the keys to improved comms is to embrace that clarity and completeness are the sole responsibility of the sender, not the receiver. Some people don't want to hear that, and be accountable, especially when assumptive communication is a viable shortcut.
* Note: I'm not a fan of his politics, and perhaps he's not The Source of this heuristic, but read it first in his "Words That Work". The first chapter of "WTW" is evergreen comms gold.
Based on what? Your own zero-evidence speculation? How is this anything other than arrogant punting? For sure we know that the point was something other than how fast the author reads compared to an AI, so what are we left with here?
That's the logical fallacy anyone will be pushed toward as soon as their individual worth in an intrinsically collective endeavor is judged.
People with the lowest incomes, who would not be able to integrate into society without direct social funds, will be seen as parasites by some who are wealthier, just as the ultra-rich will be considered parasites by less wealthy people.
As a writer, you know this makes it seem emotional rather than factual?
Anyway, I agree with what you are saying. I run a scientific blog that gets 250k-1M users per year, and AI has been terrible for article writing. I use AI for brainstorming and for title ideas (which ends up being inspiration rather than copy-paste).
It becomes: this person is fearful for their job and used feelings to justify their belief.
Good human written docs > AI written docs > no docs > bad human written docs
The kind of documentation no one reads, that is just there to please some manager or meet some compliance requirement. These are, unfortunately, the most common kind I see, by volume. Usually they are named something like QQF-FFT-44388-IssueD.doc, and they are completely outdated with regard to the thing they document despite having seen several revisions, as evidenced by the inconsistent style.
Common features are:
- A glossary that describes terms that don't need describing, such as CPU or RAM, but not the ambiguous and domain-specific terms, of which there are many
- References to documents you don't have access to
- UML diagrams, not matching the code of course
- Signatures by people who left the project long ago and are nowhere to be seen
- A bunch of screenshots, all with different UIs taken at different stages of development, that would be of great value to archeologists
- Wildly inconsistent formatting: some people realize that Word has styles and can generate a table of contents, others don't, and few care
Of course, no one reads them, besides maybe a depressive QA manager.
But today's AI might do better than the average tech writer. AI might be able to generate reasonably usable, if mediocre, technical documentation based on a halfheartedly updated wiki and the README files and comments scattered in the developers' code base. A lot of projects don't just have poor technical documentation, they have no technical documentation.
I think everyone on the team should get involved in this kind of feedback, because raw first impressions of new content (which you can only experience once, and which will be somewhat similar to an impatient new user's) are super valuable.
I remember as a dev flagging some tech marketing copy aimed at non-devs as confusing and being told by a manager not to give any more feedback like that because I wasn't in marketing... If your own team that's familiar with your product is a little confused, you can probably x10 that confusion for outside users, and multiply that again if a dev is confused by tech content aimed at non-devs.
I find it really common as well that you get non-tech people writing about tech topics for marketing and landing pages, and because they only have a surface-level understanding of the tech, the text becomes really vague with little meaning.
And you'll get lots of devs and other people on the team agreeing in secret that, e.g., the product homepage content isn't great, but they're scared to say anything because they feel they have to stay inside their bubble and there isn't a culture of sharing feedback like that.
Also true that most tech writers are bad. And companies aren't going to spend >$200k/year on a tech writer until they hit tens of millions in revenue. So AI fills the gap.
As a horror story: our docs team didn't understand that having correct installation links should be one of their top priorities. Obviously, if a potential customer can't install the product, they'll assume it's BS and try to find an alternative. It's so much more important than, say, grammar in the middle of some guide.
True, but it raises another question: what were your Product Managers doing in the first place if a tech writer is the one finding out about usability problems?
But even if a PM cares about UX, they are often not in a good position to spot problems with designs and flows they are closely involved in and intimately familiar with.
Having someone else with a special perspective can be very useful, even if their job provides other beneficial functions, too. Using this "resource" is the job of the PM.
I think I agree, at least in the current state of AI, but can't quite put my finger on what exactly it's missing. I did have some limited success with getting Claude Code to go through tutorials (actually implementing each step as they go), and then having it iterate on the tutorial, but it's definitely not at the level of a human tech writer.
Would you be willing to take a stab at the competencies that a future AI agent would require to be excellent at this (or might never achieve)? I mean, TFA talks about "empathy" and emotions and feeling the pain, but I can't help feeling that this wording is a bit too magical to be useful.
For tech documentation, I suppose that AI agents would mainly benefit from Skills files managed as part of the tool's repo, and I absolutely do imagine future AI agents being set up (e.g. as part of their AGENTS.md) to propose PRs to these Skills as they use the tools. And I'm wondering whether AI agents might end up with different usability concerns and pain-points from those that we have.
We have to ask AI questions for it to do things. We have to probe it. A human knows things and will probe others, unprompted. It's why we are actually intelligent and the LLM is a word guesser.
Tech writing seems especially vulnerable to people not really understanding the job (and then devaluing it, because "everybody can write" - which, no, if you'll excuse the slight self-promotion but it saves me repeating myself https://deborahwrites.com/blog/nobody-can-write/)
In my experience, tech writers often contribute to UX and testing (they're often the first user, and thus bug reporter). They're the ones who are going to notice when your API naming conventions are out of whack. They're also the ones writing the quickstart with sales & marketing impact. And then, yes, they're the ones bringing a deep understanding of structure and clarity.
I've tried AI for writing docs. It can be helpful at points, but my goodness I would not want to let anything an AI wrote out the door without heavy editing.
See my other comment - I'm afraid quality only matters if there is healthy competition which isn't the case for many verticals: https://news.ycombinator.com/item?id=46631038
[insert Pawn Stars meme]: "GOOD docs? Sorry, best I can do is 'slightly better than useless.'"
Always had to contract external people to get stuff done really well. One was a bored CS university professor, another was a CTO in a struggling tiny startup who needed cash.
The technology is improving rapidly, and even now, with proper context, AI can write technical documentation extremely well. It can include clear examples (and only a very small number of technical writers know how to do that properly), and it can also anticipate and explain potential errors.
I do not think that these skills are so easily replaced; certainly the machine can do a lot, but if you acquire those skills yourself you shape your brain in a way that is definitely useful to you in many other aspects of life.
In my humble opinion we will be losing that from people: the building of skills will be lost for sure, but the human growth is the real loss.
Yep, and reading you will feel less boring.
The uniform style of LLMs gets old fast and I wouldn't be surprised if it were a fundamental flaw due to how they work.
And it's not even certain that the speed gains from using LLMs make up for the skill loss in the long term.
<list of emoji-labeled bold headers of numbered lists in format <<bolded category> - description>>
Is there anything else I can help you with?
I'll take imperfect ESL writing or imperfect writing in my native language over LLM soup any day.
They have AI finding reasons to reject totally valid requests.
They are arguing in court that this is a software bug and that they should not be liable.
That will be the standard excuse. I hope it does not work.
Two years ago, I asked ChatGPT to rewrite my resume. It looked fantastic at first sight; then, one week later, I re-read it and felt ashamed to have sent it to some prospective employers. It was full of cringe-inducing babble.
You see, for an LLM there are no hierarchies other than what it observed in its training, and even then, applying them in a different context may be tricky. It can describe hierarchies and relationships by mimicry, but it doesn't actually have a model of them.
Just an example: it may be able to generate text that recognizes that a PhD is a step above a Master's degree, but sometimes it won't be able to translate this fact (rather than the description of this fact) into the subtle differences in attention and emphasis we apply in our written text to reflect real-world hierarchies of value. It can repeat the fact to you, can even sort of generalize it, but it won't take a decision based on it.
It can, even more so now, produce a very close simulation of this, because the relative importance of things has been semantically captured, and it is very good at capturing those subtle semantic relationships. But, in linguistic terms, it absolutely sucks at pragmatics.
An example: let's say in one of your experiences you improved a model that detected malignancy in a certain kind of tumor image, improving its false-negative rate to something like 0.001%, and in the same experience you casually mention that you tied the CEO's toddler's tennis shoes once. Given your prompt to write a resume according to the usual resume-enhancement formulas, there's a big chance it will emphasize the irrelevant lace-tying activity in a ridiculously pompous manner, making it hierarchically equivalent to your model kung-fu accomplishments.
So in the end, you end up with some bizarre stuff that looks like:
"Tied our CEO's toddler tennis shoes, enabling her to raise 20M with minimal equity dilution in our Series B round"
After all, if he didn't feel foolish for it, he wouldn't've held it in his memory, and thus wouldn't've shared it with us.
Who among us hasn't written an angry email, re-(re-)read it, smugly hit send, slept on it, then regretted the sending?
By whom?
Your expectations aren't the same everybody has.
I suspect a lot of folks are asking ChatGPT to summarize it…
I can’t imagine just letting an LLM write an app, server, or documentation package, wholesale and unsupervised, but have found them to be extremely helpful in editing and writing portions of a whole.
The one thing that could be a light in the darkness is that publishers have already fired all their editors (nothing to do with AI), and the writing out there shows it. This means there's the possibility that AI could bring back editing.
i wrote a 5 page essay in November. the AI editor had sixty-something recommendations, and i accepted exactly one of them. it was a suggestion to hyphenate the adjectival phrase "25-year-old". i doubt that it had any measurable impact on the effectiveness of the essay.
thing is, i know all the elements of style. i know proper grammar and accepted orthographic conventions. i have read and followed many different style guides. i could best any English teacher at that game. when i violate the principles (and i do it often), i do so deliberately and intentionally. i spent a lot of time going through suggestions that would only genericize my writing. it was a huge waste of my time.
i asked a friend to read it and got some very excellent suggestions: remove a digressive paragraph, rephrase a few things for persuasive effect, and clarify a sentence. i took all of these suggestions, and the essay was markedly improved. i'm skeptical that an LLM will ever have such a grasp of the emotional and persuasive strength of a text to make recommendations like that.
That makes a lot of sense, but right now, the editing seems to be completely absent, and, I suspect, most writers aren’t at your level (I am sure that I’m not).
It may be better than nothing.
During the nineties of the last century I was still naive enough to believe that the great improvements in technology, i.e. the widespread availability of powerful word processors and of the Internet for extremely cheap distribution, would lead to an improvement in the quality of technical documentation and to easy access to it for everybody.
The reverse has happened: the quality of technical documentation has become worse and worse, with very rare exceptions, and access to much of what remains has become very restricted, either by requiring NDAs or by commanding very high prices (e.g. big annual fees for membership in some industry standards organization).
A likely explanation for the worse and worse technical documentation is a reduction in the number of professional technical writers.
It is very obvious that the current management of most big companies does not understand at all the value of competent technical writers and of good product documentation; not only for their customers and potential customers, but also for their internal R&D teams or customer support teams.
I have worked for several decades at many companies, very big and very small, on several continents, but unfortunately only at one of them was the importance of technical documentation well understood by management, so the hardware and software developers had an adequate amount of time for writing documentation planned into their product development schedules. Although the project schedules at that company appeared to allocate much more time to "non-productive tasks" like documentation than at other places, it was in fact there that R&D projects were completed fastest and with the least delay over the initially estimated completion time, one important factor being that every developer understood very well what must be done in the future, what had already been done, and why.
If you want to see how well you understand your program or system, try to write about it and teach someone how it works. Nature will show you how sloppy your thinking is.
Kudos to all the technical writers who made my job as a software engineer easier.
It's obviously not AI-generated, but I'm speaking more to the tonality of the latest GPT. It's now extremely hard to tell the difference.
If the business can no longer justify 5 engineers, then they might only have 1.
I've always said that we won't need fewer software developers with AI. It's just that each company will require fewer developers but there will be more companies.
I.e.:
2022: 100 companies employ 10,000 engineers
2026: 1,000 companies employ 10,000 engineers
The net result is the same for employment. But because AI makes everything that much more efficient, many businesses that weren't financially viable when they needed 100 engineers might become viable with 10 engineers + AI.
Do you not see the logic?
Five engineers could be turned into maybe two, but probably not less.
It's the 'bus factor' at play. If you still want human approvals on pull requests, then if one of those engineers goes on vacation or leaves the company, you're stuck with one engineer for a while.
If both leave then you're screwed.
If you're a small startup, then sure there are no rules and it's the wild west. One dev can run the world.
Peak productivity has always been somewhere between 1-3 people, though if any one of those people can't or won't continue working for one reason or another, it's generally game over for the project. So you hire more.
This is why small software startups time and time again manage to run circles around organizations with much larger budgets. A 10-person game studio like Team Cherry can release smash hit after smash hit, while Ubisoft, with 170,000% of the personnel count, visibly flounders. Imagine doing that in hardware, as if you could just grab some buddies and start a business successfully competing with TSMC out of your garage. That's clearly not possible. But in software, it actually is.
Is the tech writers backlog also seemingly infinite like every tech backlog I've ever seen?
I am not even quite sure I know how to manage a team of more than two programmers right now. Opus 4.5, in the hands of someone who knows what they are doing, can develop software almost as fast as I can write specs and review code. And it's just plain better at writing code than 60% of my graduating class was back in the day. I have banned at least one person from ever writing a commit message or pull request again, because Claude will explain it better.
Now, most people don't know to squeeze that much productivity out of it, most corporate procurement would take 9 months to buy a bucket if it was raining money outside, and it's possible to turn your code into unmaintainable slop at warp speed. And Claude is better at writing code than it is at almost anything else, so the rest of y'all are safe for a while.
But if you think that tech writers, or translators, or software developers are the only people who are going to get hit by waves of downsizing, then you're not paying attention.
Even if the underlying AI tech stalls out hard and permanently in 2026, there's a wave of change coming, and we are not ready. Nothing in our society, economy or politics is ready to deal with what's coming. And that scares me a bit these days.
Only because it has access to a vast amount of sample code to draw on and recombine. Have you ever considered emerging technologies, like new languages or frameworks that may be much better suited to your area but are new, so there is no codebase for the LLM to draw from?
I'm starting to think about a risk of technological stagnation in many areas.
One thing to add is that the LLM doesn't know what it can't see. It just amplifies what is there. Assumed knowledge is quite common with developers and their own code. Or the more common "it works on my machine" because something is set outside of the code environment.
Sadly other fields are experiencing the same issue of someone outside their field saying AI can straight up replace them.
What post was that?
Writers become more productive = fewer writers needed. Not zero, but fewer.
That's the current step. Now, if Cursor's promise of completely automating multi-week system work comes true, all the internal docs become AI-driven.
So the only exception is external docs. But... if all software is written by machines, there are no readers.
This is obviously a trajectory, not the current state :( Very dark and gloomy.
But when we use the "AI" acronym, our brains still latch onto the "intelligence" part and tend to perceive LLMs as more powerful than they actually are.
Current models are like trained parrots that can draw colored blocks and insert them into the appropriate slots. Sure, much faster and with incomparably more data. But they're still parrots.
This story and the discussions remind me of reports and articles about the first computers. People were so impressed by the speed of their mathematical calculations that they called them "electronic brains" and considered, even feared, "robot intelligence."
Now we're so impressed by the speed of pattern matching that we call them "artificial intelligence," and here we are again.
People boast about the gains with LLMs all the damn time, and I'm sceptical of it all unless I see their inputs.
I think this is going to be a defining theme this year.
Why?
Because the legal catastrophe that will follow will entertain me so very very much.
But most people aren't that great at their jobs.
AI can’t generate insights far beyond what it’s trained on.
Their writing will be a different moat.
What if the next version of the AI model gets trained on their work?
Hopefully they used AI to write this.