Ok, let me explain what is happening here, why it is harmless, and what the solution is (even though none is required). I recently came across this on our own X3 websites, so I did some digging a while ago.
Let's look at what Google Search Console is reporting:
https://www.flamepix.com/content/5.faq
The above is entirely CORRECT by Google of course. The URL /content/5.faq is NOT a website page and should NOT be indexed. When accessed, it correctly returns a "404 page not found" error. The URL is actually a folder, and it contains the data that creates the page www.flamepix.com/faq/ and the images on that page.
Ok, but why is Google listing these pages in my search console under crawl errors?
That is a good question, but first of all, let me be 100% crystal clear here: X3 does NOT link to these 404 URLs under any circumstance. I believe Google is finding these folders from IMAGE files located in the folders, and simply thinking "ok, if there is an image /content/5.faq/image.jpg, there must be a URL /content/5.faq/". That URL does exist of course, but the folder is not a page, something Google will learn when it tries to access it.
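To illustrate the inference (this is just my guess at the crawler's behavior, and the snippet below is a hypothetical sketch, not anything Google or X3 actually runs), deriving the parent-folder URL from an image URL is trivial:

from urllib.parse import urlsplit, urlunsplit
import posixpath

def inferred_parent_url(image_url: str) -> str:
    # Take an image URL the crawler has already discovered and derive
    # the parent-folder URL it might (wrongly) assume is a page.
    parts = urlsplit(image_url)
    folder = posixpath.dirname(parts.path)  # "/content/5.faq"
    return urlunsplit((parts.scheme, parts.netloc, folder + "/", "", ""))

print(inferred_parent_url("https://www.flamepix.com/content/5.faq/image.jpg"))
# https://www.flamepix.com/content/5.faq/  <- a folder, not a page, hence the 404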
Furthermore, I would like to emphasize what Google says when you CLICK any specific crawl error like this:
Google Search Console wrote: Generally, 404s don't harm your site's performance in search, but you can use them to help improve the user experience.
The point is that this does not mean there is anything wrong with your website, or that it will hurt your performance. Google Search Console will seemingly try to crawl ANYTHING it might consider a URL, and sometimes reports those URLs as crawl errors regardless. That doesn't mean you have to fix anything. In this case, "you can use them to help improve the user experience" ... Ok, but keep in mind that no users are actually viewing those pages ... So thanks for reporting, Google, but this is not a problem.
X3.25.0
Since I found this annoying myself, we have solved it in X3.25.0 (released yesterday, available from Panel > Tools > X3 updates) by adding a rule to robots.txt.
X3.25.0 Release wrote: Robots.txt
For some reason, Googlebot attempts to crawl the parent folders of images (for example “/content/folder/image.jpg”), falsely assuming there is a PAGE at “/content/folder/”. X3 content folders are NOT pages, so it will only find a “403 Forbidden”. We have added a rule Disallow: /content*/$ in robots.txt to prevent Googlebot from crawling content folders, without blocking access to files.
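For reference, the relevant part of robots.txt would look something like this (a minimal sketch: the Disallow rule is the one X3.25.0 adds, while the User-agent: * line and the example paths are my own illustration; the * and $ wildcards are Googlebot extensions, where $ anchors the match to the end of the URL):

User-agent: *
Disallow: /content*/$

# Blocked, because the URL ends with "/" and matches the pattern:
#   /content/5.faq/
# Not blocked, because file URLs do not end with "/":
#   /content/5.faq/image.jpg

So Googlebot stops requesting the folder URLs, while the images inside those folders remain crawlable and indexable as before.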