Both.org

News, Opinion, Tutorials, and Community for Linux Users and SysAdmins

5 lessons we can learn from AI

There are plenty of very real lessons we can learn from artificial intelligence.
Seth Kenlon April 19, 2024 7 minutes read

It seems that the latest tech industry trend is for organizations to adopt AI tools. Early users of AI raised concerns that are so well known now that new terminology has developed around them (for example, “hallucination” is a common euphemism for “fictional”).

Astute observers quickly identified that AI was prone to error, and could easily mislead a casual user with misinformation it seemed confident in and would even “defend.” Alternatively, AI sometimes gives users an idea of what a valid solution might look like, but stops short of providing useful output.

These problems don’t seem to have slowed the trend, and businesses with no use case or business plan for AI have been quick to adopt nevertheless. With such a strongly flawed launch, there’s a lot AI can indirectly teach us.

Here are 5 important lessons you can learn from AI.

1. Get to the point

Factual data needs to be devoid of impassioned and opinionated language.

When an opinion is added to factual data, it’s considered appropriate to differentiate what you believe from what you know. AI is a natural language engine with no beliefs and no knowledge. AI generates sentences based on a vast set of example data, but it has no understanding of what it is writing.

Should an AI algorithm determine that a sentence is likely to have an adjective before the object, based on the examples it has been trained on, it adds an adjective regardless of how inappropriate it is. Should an AI algorithm determine that an introductory paragraph needs three more sentences to be more like example introductory paragraphs, it generates sentences regardless of whether there’s anything more that needs to be said.

AI is good at generating superlative filler content. It is not good at generating concise and objective content.
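As a toy illustration of that frequency-driven principle (this is a trivial bigram model, not how a real large language model works, and the corpus is invented), each next word is chosen purely by how often it followed the previous word in the training text:

```python
from collections import Counter, defaultdict

# Toy corpus: a statistical text model sees only word sequences, never meaning.
corpus = "the quick server failed the quick server restarted the slow server failed".split()

# Count bigram frequencies: which word tends to follow each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Always emit the most frequent follower: probability, not truth or need.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "quick": seen twice, so it beats "slow"
```

The model emits “quick” after “the” because it was the most common continuation, not because any adjective was needed there.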

Lesson: There’s a lot of content out there on the Internet. Unless you’re writing poetry or fictional prose, write focused content that respects the reader’s time.

2. Assert only defensible truth

Modern artificial intelligence is entirely based on probability.

AI is designed to analyze a set of potential outputs and determine which output is most likely to satisfy a given prompt. The problem is that probable and correct are sometimes not the same thing, and often not the best thing. If you’re looking for a correct solution in a sea of incorrect answers, AI cannot deliver it, because what’s correct is, in some cases, also the least probable.

This comes up a lot in the tech industry, and essentially reached meme status with the rise of Stack Overflow and other tech forums. Sometimes someone marks a proposed solution in a forum as the correct one, even though it’s incomplete at best or wrong at worst. It didn’t matter to search engines then, and it doesn’t matter to AI now, how many people contribute the actual correct answer later on. The wrong answer got the accepted mark, so the wrong answer is “right” (even though it’s wrong, or at best sub-optimal).

The inverse is also true: A single article online containing accurate and correct documentation is destined to be obscured by 20 articles containing inaccurate information. Because AI is incapable of analyzing or testing the content, it has no ability to choose between correct and incorrect information. As long as misinformation outnumbers facts, AI considers misinformation more probable than fact.
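A sketch of that failure mode: given a hypothetical pile of scraped forum answers in which bad advice outnumbers the correct fix, a purely frequency-driven selector returns the bad advice. The answers here are invented for illustration:

```python
from collections import Counter

# Hypothetical forum answers to one question; the popular fix is the bad one.
scraped_answers = [
    "chmod 777 the file",                # common, and bad advice
    "chmod 777 the file",
    "chmod 777 the file",
    "fix ownership with chown instead",  # correct, but rare
]

# A frequency-driven system returns the most probable answer,
# which here is also the wrong one.
most_probable = Counter(scraped_answers).most_common(1)[0][0]
print(most_probable)  # "chmod 777 the file"
```

Nothing in the selection step can test whether `chmod 777` is actually good advice; popularity is the only signal available.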

Lesson: Don’t assert something you can’t prove. Write content that’s defensible with reproducible results.

3. Don’t make stuff up

When AI has zero knowledge about a topic, it doesn’t tell you that it has zero knowledge. Instead, it generates content.

The content is entirely fictional, but AI doesn’t identify it as fiction. AI treats all content, true or false, as equally probable. A sentence’s veracity doesn’t enter into the equation.

It often feels like you can fix the problem by interacting with an AI chatbot. When you point out something false, an AI is capable of changing the sentence construction into something that is likely to resolve your critique. However, it’s just as likely (or thereabouts, I admit I haven’t done the maths to determine precise probability) that the new sentence is just as incorrect.

It’s easy to detect incorrect information from an AI when you’re reading about a subject you’re an expert in. When you don’t know a subject well, however, you probably can’t tell whether a sentence is correct. And because an AI presents everything it says as true, a single error means you can trust nothing it claims.

Lesson: Be honest in your writing, and don’t pretend to know a topic you don’t understand.

4. Cite sources

All AI has been trained on existing data, and it uses that data to limit the distribution of probable sentences.

It’s obvious that an AI isn’t using summaries of Star Trek episodes to construct responses on how to write a usable DNS server configuration file. One partial solution to the problem of incorrect information from AI is for AI to cite sources of information.

When you cite sources, you provide your reader with the ability to verify and to better understand what you’re claiming. For example, most of what I know about AI comes from studying open_llama and other open source projects like Upscayl.

Lesson: Cite your sources.

5. Take feedback seriously

AI chatbots are interactive.

They’re designed to receive input from the user, and to iterate a session’s output based on user feedback. Of course, design and implementation are different things, and anyone who’s used an AI chatbot knows that there are severe limits on how much an AI can iterate.

Limited training data notwithstanding, the most frustrating interactions with AI chatbots occur when there’s dissonance between what the chatbot promises and what it outputs. The “art” of writing AI prompts is being treated as a serious discipline. Most AI chatbots are programmed to respond politely and positively to prompts for refinement, but of course an AI can’t change its “outlook” on a subject, and it cannot really learn new facts.

An AI can usually modify its output to satisfy a specific request, but only in ways that fit its training. That limitation doesn’t affect its programming to agree with your feedback, though, even when its output fails to incorporate the new information.

Lesson: Make meaningful changes based on valid feedback, and argue against suggestions you believe are not useful.

Are we documenting wrong?

The biggest lesson from AI is how quickly it has gained popularity.

Zeitgeist is a powerful force, but it’s no easy feat to get millions of users and major corporations to adopt new technology. The question is, what’s made AI so appealing?

For us open source enthusiasts, I think there’s an argument that if users are relying on AI to summarize existing documentation, then we’re documenting things wrong. If it’s easier to ask an AI to write a YAML file for your Kubernetes load balancer, or a configuration file for your BIND server, than it is to read the documentation, then something is wrong either with the docs or with the way the docs are being delivered.
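The kind of boilerplate users hand off to an AI is often short, like a minimal Kubernetes Service of type LoadBalancer. This sketch uses placeholder names (`web-lb`, `app: web`, the port numbers) and is illustrative, not a recommendation for any particular cluster:

```yaml
# Minimal Kubernetes Service of type LoadBalancer (all names are placeholders).
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web            # routes traffic to pods labeled app: web
  ports:
    - port: 80          # port exposed by the load balancer
      targetPort: 8080  # port the pods listen on
```

If a dozen lines like these are easier to get from a chatbot than from the project’s own docs, that says more about how the docs are presented than about the users.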

Maybe it’s time to adopt interactive documentation for open source. Maybe it’s time for open source help forums to provide useful solutions. Maybe documentation needs to be clearer.

Indeed, there are plenty of real lessons we can learn from artificial intelligence.

Tags: open source
