Given a problem, we can:
1. Give efficient methods to find exact solutions. When this is possible, all's well. But alas, this is not possible all the time.
2. Give inefficient methods to find exact solutions (brute force), and bemoan that P != NP (mostly).
3. Propose intuitively appealing heuristics that are quick and give "mostly good" results in practice, and hope that the malicious worst-case input which could tear the heuristic apart never comes about.
4. Prove theoretical approximation guarantees about these heuristics, so that even in the worst case the heuristic won't go bad beyond some known bound.

Personally, I am fascinated by #4. Most current large-scale problems rely on #3, which hopefully will get elevated to #4 in a few years' time, and theoreticians will tell the practitioners: "Yeah, your heuristic will never give a solution that is off from the optimal by more than 200%."

As a budding researcher, is it better to build simulations, concrete systems, models, etc., and show empirically that certain heuristics work most of the time? Or is it better to take some small heuristic and prove, beyond doubt, its effectiveness under all circumstances?
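To make #4 concrete, here is a sketch (in Python, my choice for illustration) of perhaps the best-known textbook example: the greedy 2-approximation for minimum vertex cover. Taking both endpoints of every edge in a greedily-built maximal matching gives a cover at most twice the optimal size, because any valid cover must contain at least one endpoint of each matched edge.

```python
def vertex_cover_2approx(edges):
    """Greedy heuristic with a provable worst-case guarantee:
    the returned cover is at most 2x the size of the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # Uncovered edge: take both endpoints (a maximal-matching step).
            cover.add(u)
            cover.add(v)
    return cover

# Example: the path graph 0-1-2-3. The optimal cover is {1, 2} (size 2);
# the heuristic may return all four vertices, hitting the 2x bound exactly.
edges = [(0, 1), (1, 2), (2, 3)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)
```

The guarantee is exactly the kind of statement #4 asks for: no matter how malicious the input graph, this heuristic is never off from the optimal by more than 100%.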
...