By reversing the order of conditionality of a probability density function, ƒ(data | param) → L(param | data), we obtain the "likelihood function". Here we are looking at the binomial distribution. The x-axis represents the parameter of interest (in this case p, the probability of success) and the y-axis is proportional to the likelihood of that parameter value being the truth, given the observed data.
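As a minimal sketch of this idea: for the binomial distribution, the likelihood of a candidate value of p given k observed successes in n trials is just the binomial probability mass evaluated at the data, viewed as a function of p. The data values below (7 successes in 10 trials) are hypothetical, chosen for illustration.

```python
from math import comb

def binom_likelihood(k, n, p):
    """Likelihood of success probability p, given k successes in n trials.

    Same formula as the binomial PMF, but read as a function of p
    with the data (k, n) held fixed.
    """
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical data: 7 successes in 10 trials
n, k = 10, 7

# Evaluate the likelihood over a grid of candidate p values (the x-axis)
grid = [i / 1000 for i in range(1001)]
likelihoods = [binom_likelihood(k, n, p) for p in grid]

# The likelihood peaks at the sample proportion k/n
p_hat = max(grid, key=lambda p: binom_likelihood(k, n, p))
```

Plotting `likelihoods` against `grid` reproduces the curve shown: it peaks at p = k/n = 0.7, the maximum-likelihood estimate.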

These intervals contain all values of the parameter whose likelihood, given the observed data, is at least 1/a of the maximum likelihood. They are similar in concept to confidence intervals but have a slightly different interpretation (see here for more info). Try 1/6.8 for a support interval roughly equivalent to a 95% confidence interval.
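A support interval can be found numerically by scanning a grid of parameter values and keeping those whose likelihood relative to the maximum clears the cutoff. A sketch using the same hypothetical data (7 successes in 10 trials) and the 1/6.8 cutoff:

```python
from math import comb

def binom_likelihood(k, n, p):
    """Binomial likelihood of p given k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical data and the 1/6.8 support cutoff
n, k = 10, 7
p_hat = k / n                 # maximum-likelihood estimate
cutoff = 1 / 6.8              # roughly matches a 95% confidence interval

# Keep every p whose relative likelihood is at least the cutoff
grid = [i / 10000 for i in range(1, 10000)]
inside = [p for p in grid
          if binom_likelihood(k, n, p) / binom_likelihood(k, n, p_hat) >= cutoff]
lo, hi = min(inside), max(inside)
```

For this data the interval is roughly (0.39, 0.92): every p in that range is at least 1/6.8 as likely as the best-supported value, 0.7.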

How much more strongly the observed data support hypothesis 1 for the parameter over hypothesis 2: the ratio of their likelihoods given the observed data.

Likelihood ratio = L(H_{1} | data) / L(H_{2} | data)
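For the binomial case, the ratio reduces to evaluating the likelihood at the two hypothesized values of p and dividing. A sketch with the same hypothetical data, comparing H1: p = 0.7 against H2: p = 0.5 (both values chosen for illustration):

```python
from math import comb

def binom_likelihood(k, n, p):
    """Binomial likelihood of p given k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical data: 7 successes in 10 trials
n, k = 10, 7

# Likelihood ratio of H1 (p = 0.7) over H2 (p = 0.5)
lr = binom_likelihood(k, n, 0.7) / binom_likelihood(k, n, 0.5)
```

Here `lr` is about 2.28, meaning the data support p = 0.7 roughly 2.3 times as strongly as p = 0.5. Note the binomial coefficient cancels in the ratio, so only the p-dependent terms matter.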