Not very technical in this area, but I received this in my Page Indexing report from Google Search Console, and after some light research it could possibly be an error with the robots.txt file on my site?
I typed site:site.com/robots.txt to see if it pulled anything up, and I received this, but from other posts it seems the file is doing what it's supposed to do:
# START YOAST BLOCK
# ---------------------------
User-agent: *
Disallow:
Sitemap: https://website.com/sitemap_index.xml
# ---------------------------
# END YOAST BLOCK
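
From what I can tell that's the stock "allow everything" file Yoast writes. If anyone wants to sanity-check theirs the same way, here's a quick sketch using only Python's standard library urllib.robotparser — website.com is just the placeholder domain from the block above, and this assumes the live file sits at the usual /robots.txt path:

# Sketch: confirm the robots.txt above doesn't block Google from crawling.
# website.com is a placeholder - swap in your real domain.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://website.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

# An empty "Disallow:" under "User-agent: *" means nothing is blocked,
# so both of these should print True if robots.txt isn't the problem.
print(rp.can_fetch("Googlebot", "https://website.com/"))
print(rp.can_fetch("Googlebot", "https://website.com/some-page/"))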
The website is a Showit site with a WordPress blog. Any ideas on what would cause so many of my main pages to not be indexed? Any advice, articles, or pointing in the right direction would be greatly appreciated.

Click on “Discovered – currently not indexed” and “Crawled – currently not indexed” to drill down, and it’ll tell you why Google hasn’t indexed those pages yet. It could just be a time thing and you need to wait longer. How old is the site / pages?
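
If you'd rather check pages in bulk than click through the UI one URL at a time, Google's URL Inspection API (part of the Search Console API) returns the same coverage info programmatically. Rough Python sketch below — the endpoint is real, but EXAMPLE_TOKEN and the URLs are placeholders, and you'd need an OAuth 2.0 token authorized for your verified property (auth setup not shown):

# Sketch: ask the URL Inspection API why a page is/isn't indexed.
import requests

ACCESS_TOKEN = "EXAMPLE_TOKEN"          # placeholder - real OAuth token needed
SITE_URL = "https://website.com/"       # the property exactly as verified in GSC
PAGE_URL = "https://website.com/about"  # hypothetical page to inspect

resp = requests.post(
    "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"inspectionUrl": PAGE_URL, "siteUrl": SITE_URL},
    timeout=30,
)
resp.raise_for_status()
status = resp.json()["inspectionResult"]["indexStatusResult"]
# coverageState mirrors the GSC UI labels, e.g.
# "Discovered - currently not indexed" or "Crawled - currently not indexed"
print(status.get("coverageState"), "| robots.txt:", status.get("robotsTxtState"))

Looping that over the URLs from your sitemap_index.xml would tell you pretty quickly whether it's a robots/blocking issue or just Google taking its time.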