When trying to predict something, you often need progressively more processing power as you increase the precision of your predictions. There's a point, however, at which the effort of perfecting the prediction runs into a hard truth: either your model or your initial reading of the system's state limits your prediction's accuracy. Past this point there's no reason to invest more effort, because any extra precision in your answer is smaller than its margin of error. This is the fuzz point. It is the point of diminishing returns.
It can show up in interesting places. My favorite is the classic, intractable argument over aesthetics. If you just work at it a little more, it will look better. This isn't always the case. Consider your ability to predict what people find aesthetically pleasing. Consider any data you have on the topic and how much error it's likely to contain. Consider the amount of time you've spent arguing about whether the arrow should be green or blue. You've passed the fuzz point.
Aesthetics aren't unique. A special case of the fuzz point shows up when prioritizing bug fixes and features in software.
How accurately can you predict how long something is going to take to fix?
How accurate can you be in predicting how important a feature is to implement?
How long are you going to argue about it?
The motto: past the fuzz point, flipping a coin is actually cheaper in the long run.
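The coin-flip claim is just an expected-value argument. A sketch with made-up numbers (the option values and debate costs below are assumptions for illustration, not figures from any real project):

```python
# Two options of nearly equal value; the gap between them is smaller
# than the cost of the time spent arguing. All units are made up.
value_option_a = 100.0          # estimated payoff of option A
value_option_b = 95.0           # estimated payoff of option B
cost_per_hour_of_debate = 50.0  # what the meeting costs per hour
hours_of_debate = 2.0

# Deliberate: you (hopefully) pick the better option, but pay for the debate.
deliberate_payoff = max(value_option_a, value_option_b) \
    - cost_per_hour_of_debate * hours_of_debate

# Coin flip: expected payoff is the average of the two options, no debate cost.
coin_flip_payoff = (value_option_a + value_option_b) / 2

print(deliberate_payoff)  # 0.0
print(coin_flip_payoff)   # 97.5
```

Once the value gap between the options drops below the cost of deciding, the coin wins in expectation, and it wins by more the longer the argument runs.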
The fuzz point comes very early for small bug fixes. So early, in fact, that you incur several different sorts of penalties.
In many shops all bug reports must be prioritized before it's decided whether they are worth doing. For bug fixes that take under four hours, weird things start to happen.
The cost of figuring out how severe the bug is becomes significant relative to the fix itself.
The cost of tracking down the cause of the bug tends to matter much more than the fix.
The cost of the bureaucracy around fixing the problem becomes very important.
The cost of merely context switching away from the bug for long enough for it to be prioritized becomes important.
The difficulty of measuring the relative importance of all these things increases.
Small bug fixing is fuzz point land. If a bug takes a short amount of time to fix, there's no point in prioritizing it: the time you've spent just trying to figure out the true severity and the cause dominates. If the fix is quick, don't prioritize it; do it now, on the main branch, and deal with the risk portion of the bug fix separately. (Essentially, review the severity and risk of each bug and its fix, and decide whether they must be back-ported to the old branch. Also decide whether it's worth running the fix by QA. The answer is almost certainly yes.)
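The penalties above are easy to put rough numbers on. The hours below are invented for illustration, but the shape of the result is the point: for a small fix, the overhead of triage can easily outweigh the fix itself.

```python
# Made-up overhead figures for a small bug run through full triage.
fix_hours = 1.0                  # the fix itself
severity_triage_hours = 0.5      # figuring out how severe it is
cause_investigation_hours = 1.0  # finding the cause before you can even triage
bureaucracy_hours = 0.5          # forms, queues, meetings
context_switch_hours = 0.5       # parking the bug and picking it back up

overhead = (severity_triage_hours + cause_investigation_hours
            + bureaucracy_hours + context_switch_hours)

print(overhead / fix_hours)  # 2.5 -- the overhead is 2.5x the fix itself
```

With numbers like these, "just fix it now" is the cheap option even if a few of the fixes turn out, in hindsight, not to have been worth doing.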
If you do this, however, you will notice that your feature development stops. This isn't good. The way around it is to allocate a fixed amount of resources to the task and prioritize bug fixing in its entirety against the addition of new features.
If you must prioritize fixes, then poll the list of bugs looking for important ones. Don't force everything through the bureaucracy before anyone can get a time budget for it.
If you implement this, make sure you clearly state how long to spend in the various stages of bug tracking before giving up (how much time ascertaining the severity, how much time investigating at each level of severity, and how much time implementing the fix). This is a heuristic, but it works fairly well, because bug fixes show up in timesheets, so you can see violations.
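One way to make those budgets explicit is a simple per-severity table plus a check against logged hours. The severity levels, stage names, and hour values here are all assumptions I've made up to illustrate the shape of the heuristic:

```python
# Hypothetical time budgets, in hours, per stage of handling a bug:
# (assess severity, investigate cause, implement fix).
TIME_BUDGETS = {
    "critical": (0.5, 8.0, 40.0),
    "major":    (0.5, 4.0, 8.0),
    "minor":    (0.25, 1.0, 2.0),
}

# Stage indices into the budget tuples above.
ASSESS, INVESTIGATE, FIX = 0, 1, 2

def over_budget(severity: str, stage: int, hours_spent: float) -> bool:
    """True when the hours logged on a stage exceed its budget -- the
    signal to stop, escalate, or just flip the coin and move on."""
    return hours_spent > TIME_BUDGETS[severity][stage]

# Two hours spent investigating a minor bug budgeted at one hour: give up.
print(over_budget("minor", INVESTIGATE, 2.0))  # True
```

Because the hours come from timesheets, the check needs no new data collection; a violation is just a timesheet line that exceeds its row in the table.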
Must be going. I've been spending too long on this issue.