
ChatGPT: The Future of AI Is Here

Artificial intelligence (AI) has progressed in fits and starts for 70 years. It’s one of those technologies, like commercial fusion power, that’s always 20 years away. Now we may actually be on the cusp of an AI revolution. But it’s not the one you’re expecting.

We’ve become accustomed to machine learning (ML), where a neural network is trained on a large number of samples until it can recognize items on its own. Google and Apple use ML to identify objects in your pictures. Search for “mountain” or “dog” in your pictures, and your phone will find them, not because you’ve tagged your pictures, but because Photos has been trained to recognize images containing those items. Text-to-image systems like Stable Diffusion are trained on millions of pictures and can generate an image based on a text description, like “a penguin eating a donut on a tropical beach.” While useful, and certainly thought-provoking in how it exploits and comments on creativity, this is not what anyone thinks of as human-level cognition.

A penguin eating a donut on a beach, courtesy of Stable Diffusion
Image by Stable Diffusion Playground

On the other extreme, Artificial General Intelligence (AGI) is the name given to a fully sentient computer system of at least human-level intelligence. This is the stuff of science fiction movies, from HAL in 2001: A Space Odyssey to the virtual assistant Samantha (voiced by Scarlett Johansson) in Her. To be clear, AGI doesn’t exist, and no one knows how to build an AGI yet.

Let’s Have a Chat

But something new and game-changing just shipped: ChatGPT. It comes from the OpenAI research and development company, which has been working on it for years. ChatGPT is currently free—you can create an account and talk to it. It’s built on version 3.5 of the deep-learning system GPT (Generative Pre-trained Transformer, glad you asked), which produces text that reads as though a human wrote it.

OpenAI’s technology, while complex, is based on the simple idea of pattern matching and extension. If you feed the pattern-matching engine every work of Shakespeare, you can then ask it to write more Shakespeare, and it will generate new text in the style of Shakespeare. (Much easier than rounding up an infinite number of monkeys and typewriters.) If you feed GPT everything available on the Internet, you can give it a prompt about anything, and it will generate something interesting. That’s more or less what OpenAI has done with ChatGPT in chatbot form, and it’s tremendously impressive.
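
To make “pattern matching and extension” concrete, here’s a toy sketch in Python: a bigram model that records which words follow which in its training text, then generates more of the same. To be clear, this is an illustration of the underlying idea only; GPT is a transformer neural network trained on billions of words, not a word-frequency table.

    import random
    from collections import defaultdict

    # Toy illustration of "predict the next word from what came before."
    # GPT does this with a huge neural network; a bigram table shows the idea.
    corpus = "to be or not to be that is the question".split()

    # Record which words were observed following each word.
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    def generate(start, length=8):
        """Extend `start` by repeatedly sampling an observed next word."""
        words = [start]
        for _ in range(length):
            choices = following.get(words[-1])
            if not choices:        # no observed continuation: stop
                break
            words.append(random.choice(choices))
        return " ".join(words)

    print(generate("to"))   # e.g. "to be or not to be that is the"

Feed it all of Shakespeare instead of one sentence, and it generates vaguely Shakespearean word salad; scale the same idea up enormously, and you get something like GPT.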

Ben Thompson of Stratechery asked ChatGPT, “Did Thomas Hobbes believe in separation of powers?” and received this reply:

Yes, Thomas Hobbes believed in the concept of separation of powers. In his book “Leviathan,” Hobbes argued that a strong, centralized government is necessary to maintain order and prevent conflict in society. However, he also believed that a single ruler or group should not have complete control over all aspects of government. Instead, he advocated for the separation of powers into different branches, with each branch having specific responsibilities and powers.

This is fascinating. It’s concise, well-written English.

It’s also wrong. As Thompson points out, John Locke believed in separation of powers, Thomas Hobbes did not. They’re often discussed together, so ChatGPT probably ingested many essays contrasting Locke, Hobbes, and separation of powers and associated the general concepts without attributing the details correctly.

It would be a mistake to think that ChatGPT “knows” who Locke and Hobbes are and associates a set of beliefs with each man. That would imply that it maintains a database of knowledge, and OpenAI could correct the database to associate the right views with each man. ChatGPT doesn’t work like that. It’s a statistical model of what the likely next word or sentence is, based on what came before. It does have a feedback mechanism, but it’s designed more to train the model to go in a different direction in response to a question than to correct a particular wrong fact.

In a podcast, Benedict Evans suggested thinking of ChatGPT as a new kind of Internet search engine. When you ask Google search a question, it returns links to the Web pages most likely to contain relevant information. If you ask ChatGPT a question, it summarizes everything it’s read about that on the Internet.

However, where a search engine has multiple ways of ranking the quality of the pages it returns, ChatGPT reflects whatever it finds in its training material, warts and all, presumably with a bias toward the most common groupings of words and sentences. Since that largely comes from the Internet, and thus from human beings, it contains all the ugly things humans do. OpenAI has tried to keep ChatGPT from reflecting that bigotry. It won’t take the bait if you ask it obviously racist questions.

Implications of Summarizing the Internet

The Internet contains a lot of material on programming. You can ask ChatGPT to write programs in Python or JavaScript, and it will likely get them right because there’s such a wealth of training material online. However, if you ask ChatGPT to write a Python program that ranks the best characteristics for scientists by race and sex, it will return one that ranks white men first. Ouch.

ChatGPT writing Python functions

This result shows that ChatGPT is complex enough that there’s no simple way to say “don’t be evil.” Again, it has no database of knowledge in which OpenAI could label certain ideas as “bad” and tell ChatGPT to avoid them. It’s a stochastic prediction model that just picks the next words based on statistical training.

Another interesting trick people have discovered is asking ChatGPT to generate or run computer programs. A simple example is asking ChatGPT to simulate a Unix shell. You type in a shell command like ls, and ChatGPT responds exactly as a Unix shell would. (OpenAI has since tweaked ChatGPT not to respond to Unix commands.)

It’s easy to think that since ChatGPT is actually a computer program, it’s simply running this command for you, like a real Unix shell. This is wrong. It’s going through millions of pieces of training data showing how Unix shells respond, and it’s returning its best guess at the correct text. It has no understanding that a Unix shell is a computer program, while Shakespeare was a person.

Similarly, Thompson asked ChatGPT what 4839 + 3948 – 45 is. It said 8732, and when Adam Engst tried while editing this article, it answered 8632. Both answers are wrong—it should be 8742. Again, ChatGPT may be a computer program, but it isn’t doing any arithmetic. It’s looking through its huge text model for the most likely next words, and its training data was both wrong and inconsistent. But at least it showed its work!

ChatGPT failing to do math correctly
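
For the record, the arithmetic itself is trivial for software that actually computes; a one-line program in any language gets it right, which underscores that ChatGPT is predicting plausible text, not calculating:

    # Checking the sum ChatGPT flubbed by actually computing it.
    print(4839 + 3948 - 45)   # -> 8742 (not 8732 or 8632)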

This is why even though ChatGPT can generate computer code, I wouldn’t use it in a real program. Its answers are not necessarily correct; they’re just based on Internet training data. It’s likely to return errors both subtle and blatant. Unlike human language, computer programs need to be 100% correct. That’s why Stack Overflow banned ChatGPT-generated code.

White-Collar AIs

While AI is unlikely to put programmers out of work anytime soon, it is coming for many other professions. ChatGPT-inspired systems will undoubtedly take over from today’s weak support chatbots and start replacing human customer-support representatives. The writing is on the wall for higher-end jobs too. Research assistants of all kinds may be replaced by programs that can summarize the current state of knowledge on any subject, at least what’s available on the Internet. “Content farm” websites already use the likes of GPT to auto-generate text—when will it be good enough for mid-tier sites writing about sports, movies, celebrities, and any other topic where speed, quantity, and low cost are more important than a human journalist’s quality, accuracy, and voice? Will lawyers lose simple bread-and-butter contract work to AIs? (Answer: yes.)

ChatGPT-generated rental contract

As blogger Kevin Drum noted:

A world full of lawyers and professors and journalists who are able to calmly accept the prospect of millions of unemployed truck drivers will probably be a wee bit more upset at the prospect of millions of unemployed lawyers, professors, and journalists.

ChatGPT really is the leading edge of a massive wave of AI that’s about to wash over society. You can see in the many dialogues posted online that it’s pretty good at answering questions and being pleasantly conversational.

Evolution of Chatbots and Societal Implications

There’s still plenty of room to improve, however. Currently, ChatGPT doesn’t have much “state”—that is, it doesn’t really remember what you’re talking about from question to question. If you were to ask it, “When was Super Bowl 50?” it may reply, “2016.” If you then ask, “Who won?” it would have to retain state information from the last question to realize you’re asking who won that particular Super Bowl.

ChatGPT comment on state

Retaining state is easy for humans but hard for computers. That’s especially true if you bring up a conversation from a few days ago that involves multiple people and places, and you refer to “him” and “her” and “there” rather than actual names, as humans tend to do. If you’re planning a trip to Paris next week, and you ask your spouse, “Have they started packing?” (in reference to your kids), your spouse will know what you mean, whereas a computer won’t. But this shortcoming will likely be addressed soon. Our AIs will have a persistent memory of the people and events in our lives.
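
As an aside, one common way to fake state with a stateless model is for the application to resend the whole conversation with each new question. Here’s a minimal, hypothetical sketch of that approach; ask_model is an imagined stand-in for a call to the language model, not a real API:

    # Hypothetical sketch: the model is stateless; the apparent memory
    # lives in the transcript the application rebuilds on every turn.
    history = []

    def ask_model(prompt):
        # Stub standing in for a real language-model call.
        return "(the model's answer)"

    def chat(question):
        history.append(f"User: {question}")
        prompt = "\n".join(history) + "\nAssistant:"
        answer = ask_model(prompt)    # the model sees the whole transcript
        history.append(f"Assistant: {answer}")
        return answer

    chat("When was Super Bowl 50?")   # the model sees only this question
    chat("Who won?")                  # now it also sees the earlier turn, so
                                      # "Who won?" can be resolved in context

The catch is that the transcript grows with every turn, which is one reason this kind of memory tends to be limited.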

The next step will be giving ChatGPT a voice, integrating it with Siri, Alexa, or Google Assistant, so we can just talk to it. The state of the art in computer-generated voices is already good and will continue to improve until it sounds sufficiently like a person that it’s not immediately obvious you’re talking to a computer. Celebrity voices might even become popular, so you could have Google Assistant sound like Scarlett Johansson for a nominal fee. (Google already has celebrity voices in its Waze GPS navigation app.)

Once there’s a voice interface, people will start having long, private conversations with their AIs. They will develop an emotional relationship (which, to be fair, people have been doing since the original ELIZA chatbot debuted in 1966). No matter how often computer scientists tell people that an AI is not intelligent, that it’s just a statistical language model, people will ascribe feelings, desires, and sentience to it.

There will be good and bad uses for this technology. Older people and shut-ins will have someone to talk to and keep them company. Those with autism might find it an untiring conversational companion, something we’ve already seen with Siri. Small children could develop an unhealthy co-dependence on their friendly AI, talking to it every night as they go to bed, unable to understand why mommy and daddy won’t let AI chat with them at the dinner table.

Say hello to your new best friend. The privacy implications alone are enough to give George Orwell nightmares.

Computer scientists and philosophers have pondered for years whether it’s possible to create a conscious computer program. We may get programs that almost everyone thinks are intelligent and talks to as if they’re intelligent, even though programmers can show there’s nothing intelligent going on inside them. It’s just complex pattern-matching. The computer scientist Edsger Dijkstra said, “The question of whether machines can think is about as relevant as the question of whether submarines can swim.”

The societal implications of everyone having an electronic best friend, who they can talk to privately, whenever they want, about whatever they want, as long as they want, are hard to predict. Who knows what this will do to our ability to communicate with each other? But if you think parents complain that kids spend too much time texting with one another and playing video games with (human) friends online, you ain’t seen nothing yet.


Comments About ChatGPT: The Future of AI Is Here

Notable Replies

  1. It is definitely eerie. David makes a great point about how one can be fooled into presuming intelligence, when in reality it is just really good at searching and forming English syntax out of the results. But it states those results so clearly and “confidently” that it triggers your innate human prejudice to consider such statements trustworthy.

    Perhaps those here who know a bit about electricity can comment on the following interaction (ChatGPT is replying to my question). To a novice (me), it sure sounds definitive:

    Can you calculate the voltage and amperage of a phone charger if you know the characteristics of the phone’s battery, and how quickly it is charging?

    It is not possible to calculate the voltage and amperage of a phone charger based solely on the characteristics of the phone’s battery and the rate at which it is charging. The voltage and amperage of a phone charger are determined by the design of the charger itself and are not affected by the characteristics of the phone’s battery or the charging rate. In general, the voltage and amperage of a phone charger will be marked on the charger itself or on the packaging it came in. If you do not have this information, you can try using a multimeter to measure the voltage and amperage of the charger. However, it’s important to use a multimeter correctly to avoid damaging the charger or the phone. It’s best to consult a qualified electrician or technician if you are unsure how to use a multimeter.

  2. Jeff, ChatGPT is very good at sounding authoritative, even when it’s completely wrong.

    Also, your question isn’t a good question for a Large Language Model (LLM) like ChatGPT. Your question has a correct answer, which depends on knowledge and physics and battery chemistry. ChatGPT doesn’t know any of those things. It’s just a statistical engine that returns the most likely words and phrases, based on its training data.

  3. It answers the question you asked, but that’s probably not the question you intended to ask, and it seems to go off on a tangent.

    For instance, the values printed on a charger represent its maximum output, which you might never reach when charging your phone, because the actual rate will be the minimum of what the charger can supply and what the phone will draw.

    Measuring the rate of charge might tell you how much power is being delivered at that moment, but it may well vary over the course of a charge, since lithium charging circuits tend to charge quickly at first and slow down as the battery approaches full. (A rough worked example of what you can and can’t infer appears after these comments.)

    I don’t know why the bot mentioned a multimeter. Sticking a meter on the outputs of a charger isn’t likely to provide anything useful. It might let you see the voltage, but the current will end up being a function of the meter’s internal resistance. And a modern charger (e.g. USB PD) may not deliver its maximum voltage without receiving a command from the attached device.

    You could connect a meter in-line between the charger and your phone. There are products that do just that, but that’s not connecting a multimeter - that’s using a purpose-built charger-meter. You would need to hack up a cable in order to connect a multimeter. Not something I’d recommend most people try.

  4. Great article. I’ve had a lot of fun talking to ChatGPT, and asking it to write poetry about various people in my family who are well-known enough to have a body of work on the web. It’s quite eerie!

    I do believe we can (and should) attribute the word “intelligence” here, though I would add that I distinguish intelligence from consciousness, consciousness from sentience, and sentience from being truly self-aware.

  5. David B, I agree distinguishing intelligence from consciousness will be important. ChatGPT might be intelligent, depending upon your definition. I don’t see any sign that it’s conscious.

  6. You mean once it has (presumably) done a search on what it has determined the subject of the query is, right? In my case it clearly was operating within the relatively narrow domain of phone chargers and electricity. Once it had information it determined was germane to my question, it then formed a response with the most likely words and phrases, right?

  7. I’m not quite ready to apply “intelligence” as a quality to systems like ChatGPT. It has zero knowledge of what it is saying, and in fact does not even know what it just said to you. Its design and training were single-purpose: to emulate natural conversation. It will even state this point-blank if you ask it (I have).

    It’s very good at emulating conversation, if you take each question-and-response independently from all others, though it frequently repeats the same thing in different ways without adding anything. It actually reads a lot like mansplaining. Attempting to continue a topic usually results in more repetition.

    But it’s just an emulation. It isn’t “communicating” in the way real creatures do—the words literally mean nothing to it. If they can pair this kind of natural-language processing with expert systems that actually evaluate knowledge, then I might be willing to extend “intelligence” to cover it. But right now, it’s not there.

  8. This is fascinating. It’s concise, well-written English. It’s also wrong.

    Research assistants of all kinds may be replaced by programs that can summarize the current state of knowledge on any subject, at least what’s available on the Internet.

    The former quite clearly states why the latter should not happen.

    Perhaps it’s sufficient for some chatbot at Comcast trying to sell people on an upgrade, but in my department here on campus there is no way I would try to save a few $ on researching a certain field by relying on such “AI” instead of for example on a qualified GSRA who knows their limits (or is at least willing to learn them). Especially when that AI, as @das so aptly puts it, “is very good at sounding authoritative, even when it’s completely wrong.”

    But I guess it does befit our time that coming across as confident is valued as highly as (if not more highly than) actually being correct. Post-factual age indeed. :wink:

  9. @Marquelle: My argument is based on the idea that there is basic “intelligence” in a wide array of animals — and many biologists even attribute intelligence to bugs and plants. Just because they don’t “think” like humans doesn’t mean there’s no intelligence. Animals, plants, and even AI systems allow for input, filtering, processing, and decision making based on a wide variety of factors.

    https://www.ambius.com/blog/are-plants-intelligent/

  10. This is the perfect definition of ChatGPT:

    "ChatGPT is a bullshitter. The essence of bullshit is unconcern with truth. It’s not a liar because to be a liar, you must know the truth and intend to mislead. ChatGPT is indifferent to the truth.”

    (Disclaimer: I am an AI professor at the university…).
    And the following conversation is hilarious (four ordered snapshots):

  11. Thank you Enrico for your apt evaluation of a hype. I just played with ChatGPT and came to the same conclusion.

    — nef

  12. "ChatGPT is a bullshitter. The essence of bullshit is unconcern with truth. It’s not a liar because to be a liar, you must know the truth and intend to mislead. ChatGPT is indifferent to the truth.”

    Thank you Enrico for your apt evaluation of a hype. I just played with ChatGPT and came to the same conclusion.

    It’s a pleasure to find you in this group!
    —e.

  13. Enrico, that conversation is hilarious. But I think it shows a common misperception. ChatGPT doesn’t “know” facts, it doesn’t “lie.” It doesn’t have intentionality. It’s just an algorithm that’s been trained on most of the publicly available internet, and based on your prompt, tries to pick the next most common words or phrases. But people do tend to anthropomorphize it.

  14. But people do tend to anthropomorphize it.

    Yes, they do.

    But there is another problem that justifies Enrico’s label “bullshit”. Namely: many reports presenting ChatGPT – probably overwhelmed by its authoritative linguistic competence – are enthusiastic and see all kinds of real-world applications without realising its fundamental shortcomings.

  15. Just like ELIZA. And the psychiatrists it is trying to emulate.

  16. I’m optimistic, despite myself. I think these AIs (or whatever you call them) can be used, even related to, but not trusted in the same way we’d trust other people or authorities. And indeed, even those other authorities are rightly subject to scepticism, when that’s warranted, e.g. because they are (consciously or unconsciously) partisan or self-interested. It’ll all work itself out in the end. :)

    It’s true, though. Some of the responses I got from it were very, very creepy indeed. It may just be a language model, but it challenges our own notions of intelligence and sentience on some level of abstraction, as well it should.

    See also:

  17. :+1:
    The apparently sophisticated linguistic competence in uttering bullshit has been called, rightly so, in this context:

    hallucination

    /həˌluːsɪˈneɪʃ(ə)n/

    noun

    1. an experience involving the apparent perception of something not present.

    My (and Norbert’s) academic background in formal linguistics, semantics, and logics, of course emphasises the total semantic incompetence of ChatGPT.

    ChatGPT has also been called a “stochastic parrot”, exactly to point out its inability to understand the meaning of the conversation, combined with its ability to replicate human language with apparently good proficiency.
    Note that it is not at all trivial to design and implement a “stochastic parrot”, namely a tool that, like a parrot, is able to produce credible conversations with great linguistic competence while uttering bullshit. I’m not sure for which purpose, theoretical or practical, this can be useful.
    —e.

  18. Humans can create AI to perform tasks that humans already can do well. We are marvelous painters, darn good mathematicians and can play poker really well. I’ll be impressed when AI can do a task that humans are poor at, like driving cars on busy roads.

  19. From what the experts are saying about ChatGPT, it is hard to understand the praise that this technology is receiving in the popular press. If, as I’ve read, it simply scours the web for matching patterns and doesn’t really know (or learn) anything, I don’t get why people are so impressed with it. As for creating “art”, that’s a joke. Riffing on the work of others may be amusing, but it’s not art.

  20. So are many of my students.

  21. Ken

    Artificial Intelligence is a really bad term, I suspect used more for marketing. These programs do not have anything near intelligence. Their main characteristic is that they can learn. They gain information and also, more importantly, they learn to respond. They have a very large number of questions directed at them, and are optimised to return the responses that a person would make, by tuning their underlying algorithm. In the future we will have better data structures and algorithms and they will become closer to appearing intelligent.

  22. AI, especially when used by the marketers is often heavy on the A, and light on the I.

    But by the same token, I’d say you can criticize use of the term Machine Learning. When we humans learn, that includes our ability to gradually interpolate and extrapolate so we can extend what we learned (or rather what we remember) to new cases.

    In the ML we have used in our research here (high-energy physics), one thing we have consistently been able to demonstrate is that while with a lot of effort ML can be used to efficiently interpolate, it is persistently bad at extrapolating. ML is a strong toolset and has lots of benefits for us, but it is frankly just really bad at extrapolation. There are smart tricks talented people can play and with that kind of effort, you can get around certain problems that arise with “vanilla” ML, but I still would consider it more “remembering” or “recognizing” than actual true “learning”.

  23. No doubt about that. “Intelligence” implies an understanding of the subject, not just being able to consistently produce the expected answers.

    There are some research groups working on actual AI (Here’s a book describing one group’s work from 2008, which I’ve found particularly fascinating), but I have yet to see any such group develop something robust enough for commercial applications.

    “Machine learning” may be a better term, in that current neural-net approaches involve “training” the software by repeatedly presenting it with data and the answers and having it “learn” from that data so it can generate correct results when presented with data it wasn’t trained on.

    But that’s not intelligence. That’s pattern recognition. Very useful, but an “AI” that (for example) can identify different objects in pictures still has no clue what those objects actually are or how they are used, so it fails miserably with things that don’t look like expectations (e.g. a chair shaped like a hand) whereas humans have no such problem.

  24. I asked ChatGPT several times to write a poem in the style of certain poets, giving it a rather dark, depressing theme, and ALWAYS the poem ends with an uplifting couplet along the lines of “there is always light at the end of the tunnel”. I like that, though it is probably a result of the way it was trained, which raises the question: is it biased by the trainers?

  25. AI has been “the future” for close to 50 years now (I remember it being talked about in the mid-70s when I was first getting into the business). The future so far ain’t what it was cracked up to be.

    I fear that we’ll know that AI has arrived when we have Skynet and Terminator, it takes over the adult entertainment business like it’s done for the Internet, or it’s being used by scammers.

  26. Maybe I don’t think enough of my abilities (though I do think more of them than the general populace), but this hack would be VERY simple and wouldn’t destroy the charger, its cable, or the phone. You could use a multimeter to measure the voltage or the current (both, if you have two multimeters; I have three), IF you can easily make electrical connections to both the charger’s plug and the phone’s socket without shorting out the connectors at either end. For the phone, it would be best to sacrifice an old charging cable that’s no longer needed (or is broken at the end away from the phone); since most chargers now have a USB-A or USB-C socket, you’d sacrifice another cable for that end. If you have a spare charger-to-phone cable, or buy a cheap one, you’ll have both connectors you need. The wires inside these cables are most often color-coded, so that’s not a problem, but the wires are very fine, and that is a problem. The internal resistance of the ammeter function should not cause any trouble.

  27. How little employee evaluators know. This was my problem in advancing at work throughout my career: I knew my limits and didn’t try to pretend I KNEW more than I really did. I think confidence was valued MUCH more highly than being correct, or than knowing your abilities and how to advance them. One boss I had said that if you haven’t made any mistakes, you haven’t done anything. How many people thought Galileo was wrong? He was imprisoned for being right, but against the grain. I know many internationally respected “famous” physicists, and they are all among the most humble, versus those at conferences who try to show off or project how smart they are; I often found the latter were not. But I slept well at night knowing how I would approach my problems the next day. I hate to think how those who prove the Peter Principle correct sleep at night.

  28. Within the article it says: “There’s still plenty of room to improve, however. Currently, ChatGPT doesn’t have much “state”—that is, it doesn’t really remember what you’re talking about from question to question.” I just opened an account, the first feature it says:

    Capabilities
    Remembers what user said earlier in the conversation.

  29. Give it a try and I think you’ll see what @das means—it remembers a little, sometimes. I couldn’t find where I read this, but I thought I saw that it has something like 3000-4000 characters of state, whatever that means.

  30. As you certainly know (but others here may not), you measure voltage across two points (e.g. between the charger’s power line and its ground). You measure current inline with a single line.

    But in either case, all the wires you need access to are safely hidden inside the charger, in the phone, or in the cable. To take measurements the (relatively) easy way, you would need to remove the insulation from a charging cable. Then expose the conductors for the ground and power line. You can connect a meter across these two to measure voltage.

    To measure current, you would need to cut one of the lines, then attach each of the cut ends to your meter.

    This is what I mean by “hack up a cable”.

    Now, you could also fashion your own break-out cable with a few USB connectors of the appropriate type, a short cable, and a small circuit board with test points where you connect your meter. But once you’re going through that much work, you’re probably better off just buying a USB power meter dongle - they’re very inexpensive.

    I wouldn’t try that. Although the pins on a USB type-A connector are pretty big, the pins on a type-C or a Lightning connector are very small. It would not be easy to make a reliable connection just by sticking probes in a port. You would want a break-out cable/dongle to provide test points large enough to easily contact (or attach micro-clips to).

    I would define that as “hacking up a cable”. :slight_smile:

  31. I’ve been using multi-meters since the 60s. Back when they had a needle.

    And I have NEVER used the ammeter function on any of them. Well except in some EE labs in the 70s.

    Just a data point.

  32. Actually, I have just the thing; I don’t remember where I got it, but I think it was either Amazon or eBay. It’s a voltage supply that plugs into a USB-A port and has a USB output socket and screw terminals. It can operate in constant-voltage or constant-current mode, and the display can show either current or voltage. The voltage can be adjusted from something like 1 V to 9 V or so, and I think it will supply 3 A max, also adjustable.

  33. What I don’t understand about this potential use, is where will something like GPT get the source material to ‘write’ the celebrity/film/sports news? Until people have written the original articles, it won’t have appropriate training material, so surely people will always be ‘faster’?

    I also wonder if we’re on the cusp of information being more restricted in some way. It’s easy to imagine news sites not wanting to share their articles with training models which are then used to take readers away, or lawyers not wanting to make new forms of contracts available.

  34. Linus Tech Tips decided to test ChatGPT by asking it for instructions for how to build a gaming PC, and by doing only what it says. A good test, revealing the good, the bad and the ugly:

  35. It will just be wrong. Which is just fine if you’re creating the useless click-bait articles we all see when web surfing.

  36. You have a good point that a program like ChatGPT won’t be able to write the first article about a sports match, or an election, or a celebrity divorce. But it will be able to write the tenth, or the hundredth. Often an article like that has a small amount of new information, woven together with a lot of boilerplate background. There are already thousands of secondary websites where people are paid minimum wage (or less) to sling together barely accurate articles about some controversy at the World Cup. The main goal is to have some content to sell advertising against.
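
Following up on the charger discussion above, here’s a rough worked example, with made-up but plausible numbers, of what you can and can’t infer. From a battery’s capacity and its charge time you can estimate the average power delivered (ignoring conversion losses), but nothing about the battery or the charging rate reveals the charger’s rated voltage and amperage:

    # Illustrative numbers only (not from the discussion above): a typical
    # phone battery and charge time, used to show what you CAN estimate
    # (average power delivered) versus what you can't (the charger's rating).
    battery_wh = 15.0      # assumed battery capacity, in watt-hours
    charge_hours = 2.0     # assumed time for a full 0-100% charge

    avg_power_w = battery_wh / charge_hours   # average power into the battery
    print(f"Average charging power: {avg_power_w:.1f} W")   # -> 7.5 W

    # A USB PD charger negotiates one of several fixed voltages (5, 9, 15,
    # or 20 V). An average of 7.5 W doesn't tell you which profile was used,
    # let alone the charger's maximum rating: 9 V at under 1 A would suffice,
    # but so would a 20 W or 65 W charger running well below its limit.
    for volts in (5.0, 9.0):
        print(f"At {volts:.0f} V, average current would be about "
              f"{avg_power_w / volts:.2f} A")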
