You use the different tests in different places - different types of data, if you are interested in the probability distribution on the mean, etc, as discussed above - and it is just a matter of remembering when to use each one.
For example, the chi-square tests are used to test goodness of fit, as in whether a certain expected categorical distribution is similar to what you actually observe. Like if you are doing a genetics cross and you expect to get 1/4 red plants, 1/2 pink plants and 1/4 white plants as offspring, but what you actually get is 1/5 red, 3/5 pink and 1/5 white - then how confident are you that what you observed follows the expected distribution?
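Here is a minimal sketch of that genetics example in Python, assuming a hypothetical sample of 100 offspring whose counts match the 1/5, 3/5, 1/5 proportions above. It computes the chi-square statistic by hand; for 2 degrees of freedom the chi-square survival function happens to have the closed form exp(-x/2), so no stats library is needed:

```python
import math

# Hypothetical observed counts from 100 offspring: red, pink, white
observed = [20, 60, 20]
# Expected counts under the 1:2:1 Mendelian ratio
expected = [25, 50, 25]

# Chi-square statistic: sum of (O - E)^2 / E over all categories
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 3 categories gives 3 - 1 = 2 degrees of freedom; for df = 2 the
# chi-square survival function is exactly exp(-x/2)
p_value = math.exp(-chi2 / 2)

print(chi2)               # 4.0
print(round(p_value, 3))  # 0.135
```

With P around 0.135 you would not reject the null hypothesis at the conventional 0.05 level - a 20/60/20 split in a sample of 100 is plausibly just sampling noise around the 1:2:1 ratio.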
Also, degrees of freedom depend on the test you are doing. They usually follow from the dimensions of the data you have sampled. Many tests involve n - 1 degrees of freedom, so if you have a sample of 20 observations, then you have 19 degrees of freedom.
And P values are important. Very important. Go back to the null and alternative hypothesis. The P value is the probability of obtaining a test statistic at least as extreme as the one you observed, *if the null hypothesis is true*.
So if you get a P value of 0.000001, data that extreme would almost never occur under the null hypothesis, so you reject it. On the other hand, if you get a P value of 0.65, the data are entirely consistent with the null hypothesis and you should not reject it. Note the subtlety: the P value is *not* the probability that the null hypothesis is true. A large P value means you lack evidence against the null, not that the null is true.
The cutoff depends on your field, but for general purposes you reject the null hypothesis when P falls below 0.05, and a result below 0.01 is even stronger evidence. This cutoff is the significance level (alpha), and you should choose it before running the test, not after seeing the P value.
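The decision rule above can be sketched as a tiny helper - `decide` and its `alpha` default are my own illustrative names, not from any library:

```python
def decide(p_value, alpha=0.05):
    """Apply the standard decision rule at significance level alpha."""
    # Reject when the P value is at or below the pre-chosen cutoff;
    # otherwise we fail to reject (we never "accept" the null)
    return "reject H0" if p_value <= alpha else "fail to reject H0"

print(decide(0.000001))  # reject H0
print(decide(0.65))      # fail to reject H0
```

The wording "fail to reject" rather than "accept" is deliberate: a P value above alpha only means the data do not provide enough evidence against the null.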