My favorite statistician in the behavioral sciences, Rand R. Wilcox, begins chapter 1.1 of his book on hypothesis testing with "To begin, distributions are never normal." I love this because so many behavioral stats books barely touch on this and act as if the central limit theorem were some sacrosanct given upon which to build all of behavioral science, when in fact even distributions that look approximately normal can make the common tests fail.
He gives (in one of his books) the example of sexual promiscuity, which is just tangential enough to this topic to relate a bit, without me having to deal with a subject I tend to become...upset about very, very quickly.
There was a study in the 90s of sexual promiscuity, in which ten billion or two dozen undergrads (I have no idea, but it was a largish sample, probably ~100) were asked how many sexual partners they wanted. The sample size was at the very least large enough to be taken as representative of college males (i.e., it was large enough to "assume" a normal distribution). They found that the average for males was over 50. Why? Mainly because one male said he wanted several thousand sexual partners. Also (and I believe this was the point of the example), even removing this outlier wasn't good enough, because while the sample was nominally big enough to approximate a normal distribution, the actual responses were badly skewed: almost all were 1 or 2, but then there were several that were a fair amount higher and several that were around 100. Here the median (something I have never seen used in any behavioral statistics book, if memory serves) was a better basis for a statistical test than the mean (which is what t tests, ANOVAs, and all the most commonly taught and used parametric tests use).
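To see why the mean misleads here while the median doesn't, here is a minimal sketch in Python with NumPy. The data are made up purely to mimic the shape described above (mostly 1s and 2s, a few larger answers, one extreme response); the exact numbers are my assumption, not the study's.

```python
import numpy as np

# Hypothetical data sketching the shape described above: most respondents
# answer 1 or 2, a handful answer much higher, and one extreme response.
responses = np.array([1] * 60 + [2] * 25 + [10, 20, 50, 100, 100] + [6000])

print("mean:  ", responses.mean())      # pulled far above any typical answer
print("median:", np.median(responses))  # stays with the bulk of responses

# Even after dropping the single extreme value, the mean is still
# inflated by the right tail, while the median barely moves.
trimmed = responses[responses < 6000]
print("mean without outlier:  ", trimmed.mean())
print("median without outlier:", np.median(trimmed))
```

With made-up numbers like these the mean lands around 70 and the median at 1, and dropping the one extreme respondent still leaves the mean several times higher than anything a typical respondent actually said.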
Most sample size estimates assume an approximately normal distribution, but that alone doesn't tell you much unless you at least test in what way the data approximate a normal distribution, because Tukey showed a long time ago that small departures from normality can seriously distort even the tests he himself developed. The problem was that back then your standard computer had less power than many a high school kid's calculator (a TI-83, or 84, or whatever the top Texas Instruments calculator is today). Now, a lot of ways to test enormous amounts of data are readily available through R, MATLAB, SAS, and other software packages. However, as most colleges still use SPSS and still preach the gospel of normal distributions, we could have fully functional quantum nanobiosystems computers and it wouldn't matter.
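For anyone curious what those readily available alternatives look like, here is a minimal sketch in Python with NumPy/SciPy. The mixture proportions and sample size are my own assumptions, chosen only to echo Tukey's contaminated-normal point, not taken from Wilcox.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A "contaminated normal" sample in the spirit of Tukey's point:
# 90% of values from a standard normal, 10% from a much wider normal.
# A histogram still looks roughly bell-shaped, but the heavy tails
# inflate the variance that mean-based tests lean on.
n = 50
contaminated = np.where(rng.random(n) < 0.9,
                        rng.normal(0, 1, n),
                        rng.normal(0, 10, n))

print("mean:            ", contaminated.mean())
print("20% trimmed mean:", stats.trim_mean(contaminated, 0.2))  # robust location
print("median:          ", np.median(contaminated))
```

Robust alternatives like trimmed means, which Wilcox advocates heavily, are a few lines in any of these packages; the barrier is what gets taught, not the software.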