One of the things 10up lists as a best engineering practice is this:
Do not use posts_per_page => -1.
This is a performance hazard. What if we have 100,000 posts? This could crash the site. If you are writing a widget, for example, and just want to grab all of a custom post type, determine a reasonable upper limit for your situation.
This is a valid point, but I found myself stymied over how to work around it in a case where I knew I needed to check every post in a custom post type. Worse, that post type was growing by 10 to 20 posts every week. In my case, the reasonable upper limit was unknown and unpredictable, but an effectively unbounded query would also be bad.
Another of 10up's recommendations is not to run more queries than needed. I was already using no_found_rows => true to skip counting the total rows, since that count is really only needed for pagination. I also explicitly set the post type I'm scanning, which again limits how many items can possibly match. And yes, I have update_post_term_cache set to false, since in most cases the term cache isn't needed either.
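Put together, those arguments look roughly like this. This is a minimal sketch, not my exact code; the post type name and the limit of 100 are placeholders:

```php
// Sketch of a trimmed-down query. 'custom_post_type' and the limit are placeholders.
$query = new WP_Query( array(
	'post_type'              => 'custom_post_type', // only scan the one type
	'posts_per_page'         => 100,                // a bounded limit, never -1
	'no_found_rows'          => true,               // skip the total-rows count; no pagination here
	'update_post_term_cache' => false,              // skip priming the term cache
) );
```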
But what could I do about the actual query: counting how many posts of type A had a post meta value that matched a post of type B? To make it worse, a single post could have multiple values stored in that post meta. In retrospect, it's a case where converting those values into a custom taxonomy might have been wiser.
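That lookup might be sketched with a meta_query. The meta key, post type names, and the ID variable here are all invented for illustration; the original post doesn't name them:

```php
// Hypothetical sketch: find posts of type A whose 'related_post' meta
// matches a given post of type B. All names here are placeholders.
$related = new WP_Query( array(
	'post_type'              => 'type_a',
	'posts_per_page'         => 100,   // bounded, not -1
	'no_found_rows'          => true,
	'update_post_term_cache' => false,
	'fields'                 => 'ids', // only IDs are needed for a count
	'meta_query'             => array(
		array(
			'key'   => 'related_post',
			'value' => $type_b_post_id,
		),
	),
) );
$count = count( $related->posts );
```

With a custom taxonomy instead, this would become a simple tax_query, which is generally faster than matching against meta values.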
What I decided to do was limit the number of posts queried based on how many posts were in the post type.
posts_per_page => wp_count_posts( 'custom_post_type' )->publish;
I'm not especially worried about reaching 100,000 posts, and I have database caching installed to mitigate the load. More importantly, I now have a real upper limit, which feels far less insane than the -1 value. Since I had to generate the count anyway for displaying statistics, I stored it in a variable and used it in both places.
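The combined approach can be sketched as follows, with the count computed once and reused. Again, the post type name is a placeholder and the statistics line is only illustrative:

```php
// Count published posts once, then reuse the number for both the
// query cap and the statistics display. 'custom_post_type' is a placeholder.
$post_count = wp_count_posts( 'custom_post_type' )->publish;

$query = new WP_Query( array(
	'post_type'              => 'custom_post_type',
	'posts_per_page'         => $post_count, // bounded by the actual published count
	'no_found_rows'          => true,
	'update_post_term_cache' => false,
) );

// Reuse the same variable for display, avoiding a second wp_count_posts() call.
printf( 'Scanning %d posts.', $post_count );
```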