How I use GPT-3 to save five minutes or less a day

by John Biggs

I create a lot of content. Way too much. I blast it out, forget about it, and then make some more. I’m used to it and I’ve been doing it for decades.

Like most content slingers, I find ChatGPT to be simultaneously exciting and repellent. It’s exciting because it produces content that is eminently readable. Repellent because it blasts out so much junk that we will soon be inundated with absolute garbage.

To that end, I thought I’d share how I currently use it to (slightly) improve my writing process. I see ChatGPT and GPT-3 as a great, if outdated, search engine. It helps the way a human research intern could: by giving me a few readable paragraphs on a particular topic in seconds.

I wrote my own ChatGPT interface (let us know if you would like access) that lets me ask fairly simple questions and summarize articles for blog posts. For example, say I wanted to write about the Byzantine Generals Problem. I could do a Google search, which returns a top-line description that I could easily repurpose.

Or I could use ChatGPT. Here’s what I got:

The Byzantine Generals Problem is a computer science problem that deals with the issue of reaching consensus among distributed systems in the presence of malicious actors. It is a classic problem in distributed computing, and is used to illustrate the importance of fault-tolerant algorithms in a network. The problem is based on a hypothetical scenario in which several generals of the Byzantine army are camped outside an enemy city, each commanding a portion of the army. The generals must decide whether to attack or retreat, but they cannot communicate directly with one another. The problem is to find a way for the generals to reach a consensus, despite the presence of malicious actors who may be sending false information.
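Under the hood, my interface is just a thin wrapper around OpenAI’s completions API. Here’s a minimal sketch of the idea in Python, assuming the pre-1.0 openai library and an API key in your environment; the model name, prompt, and parameters are illustrative, not my actual setup:

import os
import openai

# A hypothetical wrapper around the (pre-1.0) OpenAI completions API.
openai.api_key = os.environ["OPENAI_API_KEY"]

def ask(question: str) -> str:
    """Send a simple question and return the model's text."""
    response = openai.Completion.create(
        model="text-davinci-003",  # the GPT-3 model of the era
        prompt=question,
        max_tokens=256,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

print(ask("Explain the Byzantine Generals Problem in one paragraph."))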

First, let’s assume this content is polluted. I don’t trust this information. It might not be accurate and it’s definitely not plagiarism-free. That said, individual lines are probably not lifted wholesale from other posts. But I don’t like to play with fire.

So I’ll rewrite the paragraph like this:

The Byzantine Generals Problem is a computer science problem that deals with the issue of reaching a consensus among distributed systems without having to trust the actors on a network. We use the problem to illustrate the importance of fault-tolerant algorithms in a network. Why is it called the Byzantine Generals Problem? Because the original problem, proposed by computer scientists Leslie Lamport, Robert Shostak, and Marshall Pease, describes several generals in the Byzantine army camped outside an enemy city. The generals must decide whether to attack or retreat, but they cannot communicate directly with one another. The problem is to find a way for the generals to reach a consensus, despite the presence of malicious actors who may be sending false information.

I’ve added some color, some outside links, and some additional data. I took the robotic output and turned it into something human.
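If I wanted to push the post further, I might even add a toy example of my own to show why the problem is hard. Here’s a hypothetical sketch (mine, not ChatGPT’s): one traitorous commander sending contradictory orders is enough to split an army that simply obeys, which is exactly the failure fault-tolerant protocols exist to prevent.

# Toy illustration of the Byzantine Generals Problem (hypothetical,
# not from the model's output). The commanding general is a traitor
# who sends contradictory orders; lieutenants who simply obey end up
# split, so consensus fails without a fault-tolerant protocol.

LIEUTENANTS = ["Alice", "Bob", "Carol", "Dave"]

def traitorous_commander(lieutenant: str) -> str:
    """The traitor tells half the army to attack and half to retreat."""
    return "attack" if LIEUTENANTS.index(lieutenant) % 2 == 0 else "retreat"

def naive_decision(lieutenant: str) -> str:
    """Without a consensus protocol, each lieutenant just obeys."""
    return traitorous_commander(lieutenant)

decisions = {name: naive_decision(name) for name in LIEUTENANTS}
print(decisions)
# {'Alice': 'attack', 'Bob': 'retreat', 'Carol': 'attack', 'Dave': 'retreat'}
# Half the army attacks alone and is defeated: consensus has failed.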

Now, by all means, feel free to use ChatGPT as it stands. The language it creates is pretty and useful. That said, it’s also always highly suspect. As a writer, I don’t like anyone putting words in my mouth. I want to understand a topic before I write about it. In this case, I understand the Byzantine Generals Problem, and having a few paragraphs I can use in a post about it is very helpful.

We are currently in the Centaur stage of AI. Like the horsey body of a strapping mythological demigod, the AI carries us farther than we could go on our human legs. When the AI becomes truly “sentient,” the Centaur model might fall away, but until then, use it as a tool that helps you get things done a little faster and more efficiently. Depending on it for everything, however, is folly, just as it’s folly to expect a Centaur to wear pants. In the end, we should be wearing the pants while the AI saves us a few seconds of writing time on things we know and understand implicitly.

Trusting it for anything else is dangerous.

Do you want to learn more about GPT-3? Read our previous posts on the topic.

And if you prefer an actual human to write your next article or press release, contact us.
