sarahseburn
What is the difference between mean and sample mean?

A researcher is studying the percentage scores of 30 people who have written an aptitude test. She sees 15 scores of 70%, 9 scores of 76%, and 6 scores of 81%. Find the sample mean and the sample standard deviation of these scores.
The mean (strictly, the population mean) is computed from the scores of everyone who took the test. The sample mean is the same calculation applied only to the subset of thirty people in the sample.
so... what's the difference in how I would calculate the mean? wouldn't I just add all of them up and divide by 30?
Yes, the sample mean is the mean of the data points in the sample being studied. In your example, the population would be ALL people who wrote the aptitude test (say, 500 people), but whoever conducted the study selected 30 out of the 500 who wrote it. This smaller set of data is your sample. We do this because it is usually very difficult to obtain the mean of the whole population (which might simply be called "the mean"), since there is too much data. Finding the mean of your sample means finding the mean for the 30 people only (called the "sample mean"). In this example you only have a sample, so you can only calculate a mean for that sample. You calculate the sample mean exactly as you'd expect for a mean; it's the same formula, just with a subtly different meaning.

The "sample standard deviation", however, is a bit different from the "standard deviation". It uses a similar formula, but instead of dividing by n, you divide by n - 1. Long story short, you can "reason" that we divide by n - 1 instead of n because the spread measured within a sample tends to underestimate the population standard deviation, so dividing by n - 1 makes your answer a little larger, hopefully matching the population standard deviation better. The full answer is more complicated and requires a discussion of degrees of freedom, which would take too long here.
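To make the population-versus-sample distinction concrete, here's a small Python sketch. The population of 500 scores is invented for illustration (the "500 people" above was just an example number, not real data): we draw a sample of 30 from it and compare the two means.

```python
import random
import statistics

random.seed(1)  # make this sketch reproducible

# Hypothetical population: 500 aptitude-test scores centred around 74%
# (purely illustrative numbers, not real data).
population = [random.gauss(74, 5) for _ in range(500)]

# The (population) mean uses every score -- usually impractical to obtain.
population_mean = statistics.mean(population)

# The sample mean uses only the 30 scores the researcher actually has.
sample = random.sample(population, 30)
sample_mean = statistics.mean(sample)

# The two are typically close, but not equal.
print(population_mean, sample_mean)
```

Same formula both times (add everything up, divide by the count); the only difference is which set of numbers you feed it.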
So, for your example, the sample mean is simply \((15 \cdot 0.70 + 9 \cdot 0.76 + 6 \cdot 0.81)/30 = 0.74\), i.e. 74%. The sample standard deviation uses the formula \[\sqrt{\frac{\sum_{i=1}^{n}(x_{i}-\bar{x})^2}{n-1}}\] in contrast to the population standard deviation (or simply "standard deviation"), which is \[\sqrt{\frac{\sum_{i=1}^{n}(x_{i}-\bar{x})^2}{n}}\]
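You can check both formulas on the actual data from the question with Python's standard `statistics` module, where `stdev` divides by n - 1 and `pstdev` divides by n:

```python
import math
import statistics

# The 30 scores from the question: fifteen 70s, nine 76s, six 81s.
scores = [70] * 15 + [76] * 9 + [81] * 6

# Sample mean: add everything up, divide by n = 30.
sample_mean = statistics.mean(scores)      # (15*70 + 9*76 + 6*81) / 30 = 74

# Sample standard deviation: squared deviations divided by n - 1 = 29.
sample_sd = statistics.stdev(scores)       # about 4.43

# Population standard deviation: divided by n = 30 (shown for comparison
# only; with just a sample, stdev() is the one to report).
population_sd = statistics.pstdev(scores)  # about 4.36, slightly smaller

print(sample_mean, sample_sd, population_sd)
```

As expected, dividing by 29 instead of 30 makes the sample standard deviation slightly larger than the population version computed from the same numbers.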
so it would all be divided by 29 then instead of 30?
It would be divided by 30 for the sample mean, but by 29 for the sample standard deviation.
yeah, ok, gotcha! thanks!