I immediately wondered, "How do you pick your studies?" and in no time the task seemed as daunting as any experimental study. Hillocks reviewed about 500 studies and chose only 60. Wow. Even if five to ten studies can be worthwhile for a meta-analysis, a meta-analysis's literature review puts our measly annotated bib to shame.

It's interesting that the effect size for a study, the impact on the original study's dependent variable, now becomes the criterion variable. The actual methods used in the original studies disappear once the decision to include them has been made. The criteria for selecting studies seem rather arbitrary, and avoiding bias in that selection might be the biggest challenge in meta-analysis. Once the criteria (the favored methods) for choosing studies are established, that bias is compounded by the subsequent search for homogeneity.

Although I can't argue with the logic behind homogeneity (correctly performed studies should yield similar results), I can't help but feel sorry for studies designed with new ways to test similar variables. Meta-analysis seems to dismiss any research that goes against the grain. Only meta-analyses can compete with meta-analyses, and the results of a meta-analysis may be given too much clout.
Maybe I'm just paranoid. If the results usually say one thing, let's see how strongly they say it using meta-analysis!
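For the curious, the mechanics behind "how strongly they say it" can be sketched in a few lines. The effect sizes and variances below are made-up numbers purely for illustration; the calculation itself is the standard fixed-effect (inverse-variance) pooling, with Cochran's Q as the usual homogeneity check that the post is wary of.

```python
# Hypothetical effect sizes (standardized mean differences) and their
# variances from a handful of primary studies -- illustrative numbers only.
effects = [0.40, 0.55, 0.32, 0.61, 0.47]
variances = [0.02, 0.03, 0.025, 0.04, 0.015]

# Fixed-effect weights: the inverse of each study's variance, so more
# precise studies count for more.
weights = [1 / v for v in variances]

# Weighted mean effect size across the included studies.
mean_effect = sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# Cochran's Q: a large Q relative to its degrees of freedom (k - 1)
# suggests the studies are NOT homogeneous -- their effects differ by
# more than sampling error alone would allow.
q = sum(w * (d - mean_effect) ** 2 for w, d in zip(weights, effects))
df = len(effects) - 1

print(f"weighted mean effect size: {mean_effect:.3f}")
print(f"Cochran's Q: {q:.3f} on {df} df")
```

Note how the pooling step makes the post's complaint concrete: once a study is reduced to one number and one weight, everything distinctive about its design has already vanished.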