Do not get me wrong - I think the MythBusters TV series was, in general, fantastic for raising awareness of the utility (and fun) of science. But sometimes they get something so wrong that it upsets me. I have complained about some of these errors before. This time, however, the mistake was not with a specific myth as much as with the scientific approach in general.
MythBusters repeatedly refers to the scientific process they follow on the show. In particular, when performing tests they usually point out that they need a larger sample size to get a meaningful result, since a single test will hardly yield useful information; as they put it: "If it is not repeatable, it is not science" (the small simulation further down illustrates why this matters). So how is it, then, that in the episode Unfinished Business, where one of the myths they tested was whether a computer game simulating a real-life skill can improve that skill in real life, they fail completely in this regard? Specifically:
Considering these issues, a better way to test this myth would have been to:
As an example, I am pretty sure that a chess training program can significantly increase your real-life skill, far more so than something like a golf game could.
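Just to put some numbers on that intuition (entirely made up, purely for illustration): suppose different game genres transfer to the corresponding real-life skill to very different degrees. Any single overall average across them hides exactly those differences.

```python
# Hypothetical per-genre effects and sample shares -- none of these numbers are
# real, they only show how one overall average hides large per-genre differences.
genres = {
    # genre: (assumed real-life skill gain, assumed share of the sample)
    "chess trainer":  (0.80, 0.10),
    "golf simulator": (0.05, 0.25),
    "driving game":   (0.30, 0.25),
    "shooter":        (0.10, 0.40),
}

# Weighted overall average across all genres tested.
overall = sum(gain * share for gain, share in genres.values())
print(f"overall average gain: {overall:.2f}")
for name, (gain, _share) in genres.items():
    print(f"  {name:14}: {gain:.2f}")
```

In this made-up example the overall number lands around 0.2, which tells you next to nothing about whether the chess trainer works brilliantly or the golf simulator does nothing at all.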
In the end, I think the myth was too general to test. In science one tries to get to the essence of a postulate and test that. This myth is akin to asking for proof that birds fly faster than mammals can run. The question can be answered, but only statistically, and the answer is only accurate if your sample covers most known mammals and birds. Likewise, only by considering most computer game genres and methods of interaction, across a larger sample of people, can one draw even a statistical average.
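And to show why the "larger sample of people" part matters just as much, here is a minimal simulation sketch. The effect size, the noise and the group sizes are all hypothetical assumptions, not figures from the episode; the only point is how unreliable one or two testers are compared with even a modestly sized group.

```python
# Minimal sketch (hypothetical numbers, not from the episode): how often does a
# real but modest training effect even show up as a positive result, for
# different numbers of test subjects?
import random
import statistics

random.seed(42)

TRUE_GAIN = 0.5      # assumed average skill improvement from training (arbitrary units)
NOISE = 1.0          # assumed person-to-person variability
EXPERIMENTS = 10_000  # number of simulated experiments per group size


def measured_gain(n):
    """Average measured improvement across a group of n trained subjects."""
    return statistics.mean(random.gauss(TRUE_GAIN, NOISE) for _ in range(n))


for n in (1, 5, 30):
    positive = sum(measured_gain(n) > 0 for _ in range(EXPERIMENTS)) / EXPERIMENTS
    print(f"{n:2d} subject(s): positive average gain in {positive:.0%} of runs")
```

Even with a genuinely positive effect baked in, a single subject points the right way only somewhat more often than a coin flip, while a group of thirty gets it right essentially every time - which is exactly why "if it is not repeatable, it is not science" should also mean "test it on more than a couple of people".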