If you're using PostgreSQL and need to track performance anomalies, you probably know the log_min_duration_statement option and maybe even the auto_explain contrib module, which let you react to queries that are slower than a defined limit. That's a great thing, but there's a catch - how do you set the threshold, especially when there are queries with very different run times?
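As a rough sketch, a typical setup might look like this in postgresql.conf (the 200 ms value is just an example threshold, and auto_explain requires loading the library, e.g. via shared_preload_libraries):

```
# log any statement slower than 200 ms
log_min_duration_statement = 200ms

# load auto_explain and log plans of slow queries too
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = 200ms
```

Both settings use the same kind of fixed threshold, which is exactly where the problem below comes in.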
If you set it too low, you'll get huge log files with a lot of queries. If you set it too high, you'll probably miss slowdowns of the very short queries. What if you set the limit to 200 ms and a query that used to run in 50 ms suddenly slows down to 150 ms? The performance effectively dropped to a third, yet nothing got logged ...