There are limits eventually - servers would run out of disk space and that sort of thing. However, we're not really very close to those limits. There are also some softer limits - as the dataset gets bigger, less of it can be cached in RAM at any given time, which can have a performance impact. That's also why large wikis like English Wikipedia get their own DB server, while smaller wikis share one.
Currently the largest category is on Commons, with 34 million entries.
The way databases work is that you have the data, and then a bunch of "indexes" of the data. An index is just a sorted list of all the data based on some criteria. The categorylinks table has a bunch of indexes, but the important one for viewing a category page is the one on (cl_to, cl_type, cl_sortkey, cl_from), which basically sorts all the entries by the name of the category, then the page type (media, subcategory or "normal" page), then the sortkey/name of the page in the category. So if you want to display the first 200 normal page entries of a category, the DB just finds where that category begins in the index (taking O(log N) time), and then reads 200 entries in order starting from that point. Since it's already in sorted order, the database can look at just the first 200 entries and stop, instead of looking through the whole category. If you're interested in the nitty-gritty details of how this works, see w:B-tree.
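Here's a minimal sketch of that idea, using SQLite as a stand-in for MediaWiki's database. The table and index shapes follow the description above, but the category name and data are made up for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE categorylinks (
    cl_from    INTEGER,   -- id of the page in the category
    cl_to      TEXT,      -- name of the category
    cl_type    TEXT,      -- 'page', 'subcat' or 'file'
    cl_sortkey TEXT       -- what the category page sorts by
);
-- The important index: sorted by category, then type, then sortkey.
CREATE INDEX cl_main ON categorylinks (cl_to, cl_type, cl_sortkey, cl_from);
""")

# Fill the category with some fake pages.
db.executemany(
    "INSERT INTO categorylinks VALUES (?, ?, ?, ?)",
    [(i, "Living_people", "page", f"Person_{i:07d}") for i in range(10_000)],
)

# The first 200 "normal" pages of the category: the database seeks to the
# start of ('Living_people', 'page') in the index (O(log N)), then reads
# 200 rows in sorted order and stops. It never touches the rest of the
# category.
first_200 = db.execute("""
    SELECT cl_from, cl_sortkey
    FROM categorylinks
    WHERE cl_to = 'Living_people' AND cl_type = 'page'
    ORDER BY cl_sortkey
    LIMIT 200
""").fetchall()
print(len(first_200), "rows read")
```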
However, for DPL, where multiple categories are specified, you can't really do it like that, since you have to find the pages that are in all of the specified categories. The best case is still relatively cheap, but the worst case might involve looking through every entry in a category.
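To illustrate the kind of query that requires (not DPL's actual code, just the shape of the problem), here's the same toy categorylinks table again, with the intersection written as a self-join. The database can range-scan the index for one category, but for each row it finds it still has to check the other category; in the worst case that means walking one category end to end:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE categorylinks (
    cl_from    INTEGER,
    cl_to      TEXT,
    cl_type    TEXT,
    cl_sortkey TEXT
);
CREATE INDEX cl_main ON categorylinks (cl_to, cl_type, cl_sortkey, cl_from);
""")

# A couple of fake pages; page 1 is in both categories, page 2 in only one.
db.executemany("INSERT INTO categorylinks VALUES (?, ?, ?, ?)", [
    (1, "Category_A", "page", "Page_1"),
    (1, "Category_B", "page", "Page_1"),
    (2, "Category_A", "page", "Page_2"),
])

# Pages that are in *both* categories.
intersection = """
SELECT c1.cl_from
FROM categorylinks AS c1
JOIN categorylinks AS c2 ON c2.cl_from = c1.cl_from
WHERE c1.cl_to = ? AND c2.cl_to = ?
ORDER BY c1.cl_sortkey
LIMIT 200
"""
for (page_id,) in db.execute(intersection, ("Category_A", "Category_B")):
    print(page_id)   # prints 1
```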
The situation on Russian Wikinews was that they mass-imported a lot of pages (like half a million) from a freely-licensed Russian news source. All of these pages had an infobox on them with a DPL. The increasing size of the categories made these DPLs slow. Additionally, importing all these articles quickly meant they all had to be rendered at the same time. Most of MediaWiki assumes database queries are very quick, and that most of the rendering time is spent in the CPU doing non-database stuff. That wasn't really true in this situation; as a result the DB started to get backed up, requests to it piled up and overwhelmed it, and everything got even slower.
Anyways, after that was dealt with, a few months later Russian Wikinews did another similar import. There may have been language barrier issues, I don't know, but at the time it seemed like they did it very incautiously, running the import as fast as possible without any consideration of the possible risks. This caused the same DB problem, but because the DB was so slowed down, normal requests to projects using that DB server also hung. As a result all the PHP servers were stuck waiting on the DB to respond, web requests started to pile up, and things spiralled even further out of control, triggering downtime not just for sites using that DB server, but for all Wikimedia websites. When Russian Wikinews was told they had to stop, they got very angry, called all the devs incompetent, and wrote a "news" article about how WMF devs are screwing over ruwikinews. This didn't really endear them to the developers involved, who were not too happy to be dealing with a second outage caused by the same people doing the same thing.

The end result was that DPL was removed from ruwikinews (after all, it was only designed for small wikis, and ruwikinews was no longer small after the mass import). Additionally, some changes were made to DPL that were hoped to improve performance: recently made DPL queries are now cached for a short time, since a big part of the problem was running the same DPL query over and over for a template used on many pages; there was an attempt to use [[PoolCounter]] to limit concurrency, but that wasn't enabled due to a bug we couldn't figure out; and a timeout was added to DPL queries.
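The caching mitigation is conceptually simple: keep the result of a query for a short time, keyed by the query's parameters, so that the same DPL appearing in a template used on thousands of pages only hits the database once per window. Here's a hedged sketch of that idea - this is not MediaWiki's actual implementation, and the TTL value and the run_dpl_query callable are made up for illustration:

```python
import hashlib
import json
import time

CACHE_TTL = 600  # seconds; an assumed value, not the real configuration
_cache: dict[str, tuple[float, list]] = {}

def cached_dpl_query(params: dict, run_dpl_query) -> list:
    """Return a recently cached result for `params` if fresh, else run the query."""
    key = hashlib.sha1(json.dumps(params, sort_keys=True).encode()).hexdigest()
    now = time.monotonic()
    hit = _cache.get(key)
    if hit is not None and now - hit[0] < CACHE_TTL:
        return hit[1]            # same query ran recently: reuse its result
    result = run_dpl_query(params)   # the expensive database work
    _cache[key] = (now, result)
    return result
```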
The nitty-gritty technical details of the Russian Wikinews situation are at wikitech:Incidents/2021-07-26_ruwikinews_DynamicPageList.
Anyhow, the moral of the story is: if you do something that takes down Wikimedia websites, be really careful before doing it a second time, and don't get angry when someone tells you to stop.