Both.org

News, Opinion, Tutorials, and Community for Linux Users and SysAdmins

How to debug a web server error with ‘wget’

The wget command line tool can provide detailed information about what’s happening on a web server.
Jim Hall April 10, 2024 5 minutes read

If you manage a website, you know that sometimes things can get a little messed up. You might remove some stale content and replace it with a redirect to other web pages. Or you might update a website to require a username and password to access certain pages. After making enough changes, you might discover that your website has stopped working the way you want it to. For example, you might see an error like this in your browser to indicate that the website is redirecting too many times:

screenshot of the Firefox web browser, showing an error of too many redirects

One way to debug this is by using the wget command line tool. While you might already know how to use wget to fetch individual web pages or files from websites, you can also use wget as a handy tool to keep in your system administrator “toolkit” to help troubleshoot web server errors.

For example, when I need to debug a website, I rely on the -S option to show all server responses. When using wget for debugging, I also use the -O option to save the output to a file in case I need to review it later. Here is how I use these options with wget to debug a website issue.

A problem website

Let’s say we have updated a website to add a new feature to edit content. But when we access the website, we get the message from earlier in this article about too many redirects.

The first step to debugging the problem is to understand what is happening behind the scenes. The web browser doesn’t provide any details about where it has been redirected to, so we need to use wget -S to get more detail about how the client is being bounced around the website:

$ wget -S http://10.0.0.11/edit/
--2024-04-05 11:30:57--  http://10.0.0.11/edit/
Connecting to 10.0.0.11:80... connected.
HTTP request sent, awaiting response... 
  HTTP/1.1 302 Found
  Date: Fri, 05 Apr 2024 16:30:57 GMT
  Server: Apache/2.4.58 (Fedora Linux)
  X-Powered-By: PHP/8.2.17
  Location: /login
  Content-Length: 0
  Keep-Alive: timeout=5, max=100
  Connection: Keep-Alive
  Content-Type: text/html; charset=UTF-8
Location: /login [following]
--2024-04-05 11:30:57--  http://10.0.0.11/login
Reusing existing connection to 10.0.0.11:80.
HTTP request sent, awaiting response... 
  HTTP/1.1 301 Moved Permanently
  Date: Fri, 05 Apr 2024 16:30:57 GMT
  Server: Apache/2.4.58 (Fedora Linux)
  Location: http://10.0.0.11/login/
  Content-Length: 231
  Keep-Alive: timeout=5, max=99
  Connection: Keep-Alive
  Content-Type: text/html; charset=iso-8859-1
Location: http://10.0.0.11/login/ [following]
--2024-04-05 11:30:57--  http://10.0.0.11/login/
Reusing existing connection to 10.0.0.11:80.
HTTP request sent, awaiting response... 
[..]
Location: /edit [following]
--2024-04-05 11:30:58--  http://10.0.0.11/edit
Reusing existing connection to 10.0.0.11:80.
HTTP request sent, awaiting response... 
  HTTP/1.1 301 Moved Permanently
  Date: Fri, 05 Apr 2024 16:30:58 GMT
  Server: Apache/2.4.58 (Fedora Linux)
  Location: http://10.0.0.11/edit/
  Content-Length: 230
  Keep-Alive: timeout=5, max=81
  Connection: Keep-Alive
  Content-Type: text/html; charset=iso-8859-1
Location: http://10.0.0.11/edit/ [following]
--2024-04-05 11:30:58--  http://10.0.0.11/edit/
Reusing existing connection to 10.0.0.11:80.
HTTP request sent, awaiting response... 
  HTTP/1.1 302 Found
  Date: Fri, 05 Apr 2024 16:30:58 GMT
  Server: Apache/2.4.58 (Fedora Linux)
  X-Powered-By: PHP/8.2.17
  Location: /login
  Content-Length: 0
  Keep-Alive: timeout=5, max=80
  Connection: Keep-Alive
  Content-Type: text/html; charset=UTF-8
Location: /login [following]
20 redirections exceeded.

I’ve omitted most of the output from this command, but you can already see that accessing /edit redirects to /login, which then immediately redirects back to /edit, and so on. This is an endless loop, and wget will automatically exit after 20 iterations.

By seeing these details, the webmaster can determine that the login system is not working properly, and focus their attention on the /login page instead of trying to debug a (working) /edit page.
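Because wget -S prints the server responses to standard error, you can also capture the whole exchange in a log file and pull out just the redirect chain with grep. Here is a minimal sketch; the log excerpt is abbreviated from the output above and recreated inline so the commands run without a live server:

```shell
# Normally you would capture the log from a real request with:
#   wget -S http://10.0.0.11/edit/ 2> redirect.log
# Recreate an abbreviated version of that log here for illustration:
cat > redirect.log <<'EOF'
HTTP request sent, awaiting response...
  HTTP/1.1 302 Found
Location: /login [following]
HTTP request sent, awaiting response...
  HTTP/1.1 301 Moved Permanently
Location: http://10.0.0.11/login/ [following]
20 redirections exceeded.
EOF

# List each hop in the redirect chain:
grep '\[following\]' redirect.log

# Count the hops:
grep -c '\[following\]' redirect.log
```

Scanning the chain this way makes a loop obvious at a glance, even in a long log.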

Saving output

In this case, the website redirects too many times, so it never produces a valid web page. But if a web page prints an error that you need to save for later, you can dump a copy of the page to a file.

Let’s say you manage a website with an RSS feed, and the feed has stopped working. The feed displays an error code that you’ll need in order to debug the problem, so the first step is to get a copy of the RSS feed. Use the -O option to save the output to a file:

$ wget -O t.rss http://10.0.0.11/rss/
--2024-04-05 11:44:09--  http://10.0.0.11/rss/
Connecting to 10.0.0.11:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘t.rss’

t.rss                 [ <=>          ]     273  --.-KB/s    in 0s      

2024-04-05 11:44:09 (26.2 MB/s) - ‘t.rss’ saved [273]

This saves a copy of the feed from http://10.0.0.11/rss/ to the temporary file named t.rss:

$ cat t.rss 
<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
<channel>
  <title>Latest News</title>
  <description>A summary of the latest news from XYZ.</description>
  <link>http://example.com/news/</link>

  *ERROR: Cannot read news items (ERR 8675309)
</channel>
</rss>

With this information, we can see that the RSS feed failed with error code 8675309, which is likely an important error number on the system that produces the news feed. As the website administrator, you would then have the information you need to track down and fix the error.
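Once the feed is saved to a file, you can pull the error code out with standard text tools rather than scanning the XML by eye. A small sketch; the saved file is recreated inline here so it runs without the server:

```shell
# Recreate the saved feed (normally produced by: wget -O t.rss http://10.0.0.11/rss/)
cat > t.rss <<'EOF'
<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
<channel>
  <title>Latest News</title>
  *ERROR: Cannot read news items (ERR 8675309)
</channel>
</rss>
EOF

# Extract just the error code:
grep -o 'ERR [0-9]*' t.rss
# prints: ERR 8675309
```

This is handy when the feed is large or when you want to watch for the error across many saved copies.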

Other useful options to ‘wget’

The wget command has lots of other useful options. Use man wget to read the online manual and learn which other options might be useful to you. For example, the -o option (lowercase) saves wget’s own messages to a file rather than printing them to the screen. And the -T option sets a timeout, after which wget stops waiting on the web server.
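If you use options like these often, you can also set them once in your ~/.wgetrc file instead of typing them on every command line. A sketch, with arbitrary example values:

```
# Example ~/.wgetrc entries
# Stop waiting on a slow server after 10 seconds (same as -T 10)
timeout = 10
# Write wget's messages to a file instead of the screen (same as -o)
logfile = wget.log
# Lower the redirect limit from the default of 20
max_redirect = 5
```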

Tags: command line, Problem Determination
