Is it necessary to include both standard errors and exact $p$ values in a result table? Or is an exact $p$ value enough?
2 Answers
This is a style issue, and each journal has its own conventions.
However, if it were up to me, I'd require standard errors and make $p$ values optional.
$p$ values are already a hairy subject, but I would trust them even less if there were no indication of where a given $p$ value came from. From my personal standpoint, you should always:
- Report the exact $p$ value to $3$ decimal places (this is the APA standard, if I remember correctly), with values below that threshold reported as $p < .001$.
- Report all the inferential statistics associated with it (standard errors, test statistics, degrees of freedom, etc.).
- Visualize the data when possible to see what could be driving the $p$ values (such as a serious model misfit).
- Avoid flowery language about $p$ values, such as "strongly significant" or "important", to rank one predictor over another.
- Always report effect sizes when possible (e.g. Cohen's $d$ or $R^2$); see the sketch after this list.
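Putting those points together, here is a minimal sketch (in Python, with made-up data; the scipy/numpy stack and a simple two-group comparison are my own assumptions, not anything from the question) of what a reasonably complete report might contain: the test statistic, the exact $p$ value to three decimals, the standard errors of the group means, and Cohen's $d$.

```python
# Minimal sketch: report a two-group comparison with exact p value
# (to 3 decimals), standard errors, and Cohen's d as an effect size.
# The data below are simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=40)    # hypothetical group 1
treatment = rng.normal(loc=11.2, scale=2.0, size=40)  # hypothetical group 2

# Welch's t-test (no equal-variance assumption)
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

# Standard error of each group mean
se_control = control.std(ddof=1) / np.sqrt(len(control))
se_treatment = treatment.std(ddof=1) / np.sqrt(len(treatment))

# Cohen's d using a pooled standard deviation
pooled_sd = np.sqrt(((len(control) - 1) * control.var(ddof=1)
                     + (len(treatment) - 1) * treatment.var(ddof=1))
                    / (len(control) + len(treatment) - 2))
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

# Exact p value to 3 decimals; values below .001 reported as "< .001"
p_text = "< .001" if p_value < 0.001 else f"= {p_value:.3f}"
print(f"t = {t_stat:.2f}, p {p_text}, "
      f"SE(control) = {se_control:.2f}, SE(treatment) = {se_treatment:.2f}, "
      f"Cohen's d = {cohens_d:.2f}")
```

The exact numbers in the table will of course come from your own model; the point is simply that the $p$ value never appears on its own, but always next to the quantities that produced it and an effect size that puts it in context.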