Pylimitics

"Simplicity" rearranged


GPT

Originally published April 2023

There’s a lot of, um, chat going around about ChatGPT, the well-known large-language-model generative chatbot. I’m holding off on calling it an “Artificial Intelligence,” but I will go as far as calling it an AEM — Artificial English Major. 

ChatGPT and its pals really are a lot like the typical caricature of an English major: they’re very good at writing, not very skilled at math, and not good at things like project management. Remember, HEMs (Human English Majors), I said it was a caricature. 

One thing that’s going on for people in particular kinds of jobs is a creeping anxiety about being replaced by software. Mind you, I’m not worried at all; I asked ChatGPT whether I could still have the money if it took over my job, and it said sure, as an AI it had no use for money. But in the field of enterprise-employed technical writers, which in a general sense includes me, there’s a lot of worry. So I’ve dreamed up some guesses about what might happen in the near term, focusing on people in and around those jobs; more than one of these scenarios could come true, or none of them.

Image by Jon Katz https://www.bedlamfarm.com

Scenario: Enhanced Contractors 

One scenario for LLM-assisted tech writing could be a contractor agency that invests in its OWN large language model, trains it on, say, a particular vendor’s documentation, and provides that vendor with writers who have access to the model. The investment is small enough that even an individual could afford it, although it would make more sense as a shared resource. What you’d need:

  • Computation. A high-end GPU can cost as much as $2500. A GPU is the most efficient hardware for neural-network workloads, so the savings in time would be worth it. I’m assuming they’d already have the computer itself. Or you could just use a service like Amazon Web Services for roughly a couple hundred dollars per month, depending on how much you use it.
  • Storage. For Amazon this would be somewhere between $50 and $200 per month.
  • Initial software cost. Can be as little as nothing if you use an open-source project.
  • Initial training. This is mostly processing and people-time, unless you’re using an instance of a commercial large language model. Hard to estimate, but not huge: $10,000 or so for a big set of public documentation, depending on whether your model needs “preprocessing,” which is reformatting the content so the model can understand it. (There’s a sketch of what training might look like just after this list.)
  • Administration probably isn’t a full-time job for this software assistant. You need somebody to set it up, and somebody to train it, but once it’s running it won’t need much in the way of tweaking. It might eventually need retraining on newer products, which would be a bigger project, similar to initial training.
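
To make the “initial training” bullet a bit more concrete, here is a minimal sketch of what fine-tuning an open-source model on a documentation corpus might look like, using the Hugging Face libraries. Everything here is illustrative: the model name, the file paths, and the assumption that the docs have already been preprocessed into plain-text topic files.

    # Minimal sketch: fine-tune an open-source causal language model on a
    # documentation corpus. Model name and paths are placeholders.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "EleutherAI/gpt-neo-1.3B"     # any open-source causal LM
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-style models lack a pad token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # "Preprocessing" already done: the vendor's docs live as plain-text
    # topic files under docs_corpus/.
    dataset = load_dataset("text", data_files={"train": "docs_corpus/*.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="docs-model",
                               per_device_train_batch_size=2,
                               num_train_epochs=1),
        train_dataset=tokenized["train"],
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()  # this is where the GPU (or the AWS bill) earns its keep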

Then you’d have a resource for the tech writers you employ (or just for yourself) to turn out new documentation in whatever form a vendor wants, more quickly and efficiently than the vendor’s own staff could. Unless, of course, the vendor invests in their own large language model too, which is another scenario.

Scenario: ChatGPTechComms

A documentation department in a large (or large-ish) company could commission and maintain a language model that’s trained on the product documentation the department produces. The model would be used by tech writers to initiate new topics, help draft them, improve consistency in the writing, potentially apply structured tagging (if there’s any call for that), and link topics. A more traditional search engine might be better than a language model for finding links, by the way. Tech writers, or “prompt engineers” (I bet they’re going to be similar jobs), would test and evaluate the output, thus continually training the model. Notice that I never mentioned “books”; I think a corpus of individual topics will be more useful, and the additional construct of “a book” won’t be wanted. 
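
As a rough illustration (a guess, not a recipe), here’s how a writer might coax a first draft out of a department model like that, assuming it was fine-tuned along the lines of the sketch earlier. The prompt wording and the local model path are invented for the example.

    # Minimal sketch: ask the department's fine-tuned model for a first draft.
    # "docs-model" is the hypothetical local model from the earlier sketch.
    from transformers import pipeline

    drafting = pipeline("text-generation", model="docs-model")

    prompt = ("Write a task topic titled 'Configuring single sign-on' "
              "in our house style: short intro, numbered steps, one note.\n\n")
    draft = drafting(prompt, max_new_tokens=300)[0]["generated_text"]
    print(draft)  # the writer edits this; the edits become training signal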

This scenario will be more attractive to companies where the subject matter is confidential and the enterprise does not want to risk having the language model’s training data and database owned by an outside vendor.

Scenario: The executives’ paradise

In this scenario, the executive level gets to eliminate the vast majority of programmers, writers, and so on, and assign the tasks solely to language models. This is what a lot of people are worrying about. But I think you can relax (a little), because for any reasonably complex enterprise, this is not something language models can do on their own. They can generate content and programming code, but not from scratch. They need to have prompts to respond to and a giant database of existing text or code or rules to draw from. 

However, a single individual using a language model (possibly custom-trained) could generate, say, working apps, written content, and/or graphics or audio as… well, a sort of “enhanced sole practitioner.” The copyright and social implications of this remain to be seen. I believe there are rules (maybe imposed by publishers?) about disclosing language model assistance with written material, and at least one language model vendor theoretically requires (possibly just “recommends”) that apps developed with model assistance be labeled. 

Other than that, there are a lot of open questions. Will consumers want to know that a piece of art or music was partially LLM-generated? Will they care? Will a musician capable of singing without Auto-Tune command a higher royalty than one who can’t? Will a “NO LLM” label appear that has a similar effect to “Vegan” or “Organic” or “No GMO” labels? Will these things be regulated in some way, and how? It all remains to be seen, but Skynet is not coming for us, at least not this month.

Scenario: Personal Augmentation

In this scenario, technical writers train and maintain their own language model instances. This is reasonably likely to arrive as a personal, app-like service, priced attractively. Each tech writer would train the model as they wish, both on their own writing and on outside material with similar technical content and a similar style. Writers might go as far as including their personal model instance on their resumes, for both single-employer roles and contractual engagements.

While some tech writers might choose to host and train local models, the bulk of the administration and technical complexity will likely be handled by vendors, in the same sense that one can keep one’s own playlists on a hosted music service. In fact, Amazon just opened up a service like this called “Bedrock.” (Hey, wasn’t that the town where the Flintstones lived?)
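
For the curious, here’s a guess at what talking to a hosted model through something like Bedrock might look like from Python, using the boto3 client. I haven’t verified this against the service itself; the client name, model ID, and request format are assumptions based on Anthropic’s published API, and the real thing may well differ.

    # A guess, not gospel: invoke a hosted model via Amazon Bedrock with boto3.
    # Model ID and request body format are assumptions for illustration.
    import json
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    body = json.dumps({
        "prompt": "\n\nHuman: Rewrite this topic in my usual style: ...\n\nAssistant:",
        "max_tokens_to_sample": 500,
    })
    response = client.invoke_model(modelId="anthropic.claude-v2", body=body)
    print(json.loads(response["body"].read())["completion"])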

Scenario: The Enhanced Audience

I’ve been talking about how a language model could be useful to a producer, but consumers (particularly corporate customers) can have them too. Let’s say your company buys a subscription to a security software service and it’s your job to set it up. When you open the software, a secondary window gives you step-by-step instructions, tailored in some ways to your company network — which it has already scanned. It makes best-practice recommendations in the form of textual instructions, or narrates them out loud if you choose. 

The software your company bought might not have what we think of as a “graphic UI,” either. It might simply be a grid or matrix of choices, and the secondary window (the assistant) brings to the fore each menu or modal in turn. The software itself is not organized according to workflows; that organization is imposed by the assistant software. There are already any number of enterprise software products that are hard to use even when they do have a graphic interface, because adding an interface to very complex software with hundreds or thousands of options is very difficult and expensive. 

Anyway, when a software company’s audience is assisted in this way, it probably becomes the job of tech writers (“prompt engineers,” remember) to construct workflows and interactions for users. 
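
If that’s the job, the “source files” might look less like manuals and more like workflow definitions. Here’s one guess at the shape of such a thing, in Python, with every name invented for illustration.

    # Hypothetical: a writer-authored workflow the assistant walks through.
    # Each step names the product option to surface and the instruction the
    # assistant shows (or narrates) to the user. All names are invented.
    from dataclasses import dataclass

    @dataclass
    class Step:
        option: str       # which menu or modal the assistant brings forward
        instruction: str  # the text shown or spoken to the user

    onboarding = [
        Step("network-scan", "First, let the service scan your network."),
        Step("policy-defaults", "Accept the recommended baseline policies."),
        Step("alert-routing", "Choose where security alerts should be sent."),
    ]

    for step in onboarding:
        print(f"{step.option}: {step.instruction}")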



About Me

I’m Pete Harbeson, a writer located near Boston, Massachusetts. In addition to writing my own content, I’ve learned to translate for my loquacious and opinionated pup Chocolate. I shouldn’t be surprised, but she mostly speaks in doggerel. You can find her contributions tagged with Chocolatiana.