Another Crawler problem with Sitemap


Solution: Make sure curl can be used by your WordPress instance
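Since the fix was curl availability, a quick server-side check (a sketch, assuming shell access; the exact package name, e.g. php-curl, varies by distribution and PHP build) is:

```shell
#!/bin/sh
# Check whether the curl extension is enabled for the PHP CLI.
# Note: the web/FPM SAPI can load a different php.ini, so if this passes
# but the crawler still fails, check the pool's configuration too.
if php -m 2>/dev/null | grep -qi '^curl$'; then
    echo "php-curl: enabled"
else
    echo "php-curl: missing (or PHP CLI not found)"
fi
```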

My crawler stopped working at some point. I hadn't noticed, but during a migration I validated everything and spotted that it no longer worked.

It’s throwing the classic “No valid sitemap parsed for crawler.”

However, the given sitemap URL is working just fine, as it has been for a few years. I have tested it in my browser, in a PHP script (see below), and with curl on the server itself.

I’m using the Google Sitemap Generator plugin, which normally works (and is listed as a supported sitemap).

The “test script” provided in other topics works just fine. It fetches the sitemap (domain removed) and var_dumps it as expected.

<?php
// Load WordPress so wp_remote_get() is available.
require( './wp-load.php' );

// domain.com is a placeholder for the real site.
$response = wp_remote_get( 'https://domain.com/sitemap.xml' );

echo '<pre>';
var_dump( $response );
echo '</pre>';

I’ve checked the last 10 tickets related to the same error message, but I haven’t found anything useful. With “Debug” turned on, I don’t get anything interesting related to the “crawler” process.

Pressing “Refresh Crawler Map” in “Map” also throws: “No valid sitemap parsed for crawler.”

Info:
Tested on both CentOS and Ubuntu servers, with CyberPanel
Site is behind Cloudflare DNS + Cache


 
