These are issues to consider as you prepare manuscripts to submit to PHAIR. In some cases, these criteria will likely not apply to your manuscript or to the studies reported in your manuscript. In other cases, these criteria are aspirational and for various reasons a particular study might fall short in certain ways but still be considered publishable. However, we seek to publish studies that are of very high ethical and scientific quality, and the criteria below give authors a sense of the issues that the editorial team will consider most closely.

General Issues

  • The findings should be solid, meaning that they would likely replicate if the study were repeated (e.g., based on power, preregistration, or other aspects of study design quality).
  • Novelty of findings will generally not be considered independently of their importance. This means both that a manuscript providing solid new evidence regarding a previously reported (and cited) finding would not be rejected for lack of novelty, and, conversely, that a finding that is novel but not substantively important would generally not be considered worthy of publication.
  • Manuscripts should follow page, table/figure, and reference limits as specified on the Article Types page. They should also include all Submission Components, and meet the PsychOpen GOLD Formatting Guidelines.

Ethics

  • Text should be original and not taken from previously published work without credit. We will use an automated search to confirm this for every submission.
  • For studies that come from datasets that have been used for other studies, there should be a thorough discussion of overlap with previous studies and a description of the unique contribution of the current submission relative to previous studies. All data in the dataset from which the study was drawn should be reported, and a justification should be given for why variables from that dataset were or were not used. This information should usually be presented in the Supplemental Materials.
  • All potential Conflicts of Interest and Funding Sources must be declared by authors on the Title Page.
  • The manuscript should be scientific. The interpretation of empirical results should not reflect pro-animal advocacy. Tests of animal advocacy interventions, reactions to advocacy, etc., are encouraged, but they will be reviewed in terms of scientific merit alone, and no preference will be given based on their appearing to favor a particular moral or ethical position. Theoretical or opinion pieces that explicitly take a pro-animal advocacy position are also appropriate in some cases, but should be framed as such.
  • Language should be appropriate, not offensive, and consistent with APA language standards.

Diversity, Equity, and Inclusion

PHAIR is a society founded on concern for social justice and an intersectional worldview. We seek to promote diversity and inclusion whenever possible. Here are some examples to consider:

  • Authors should explicitly address the intersectional nature of study findings when relevant.
  • Authors should use non-discriminatory language throughout, avoid gender-biased expressions, and use inclusive terminology (e.g., avoid exclusively binary gender categories, avoid referring to non-humans as “it”).
  • Authors should make efforts to sample from underrepresented human populations, and to note limitations to generalizability when this was not done.
  • Anyone involved with the Journal should alert the Editorial leadership to any unjust, unfair, or exclusionary behavior, or to ways that the Editorial leadership can support scholars from underrepresented or underprivileged backgrounds. Such communications will be treated anonymously, as appropriate to the situation.

Use of Supplemental Materials

Because journal space is limited, the journal enforces relatively strict word limits. In many cases, fully reporting the details of a study will therefore require placing some information in the Supplemental Materials. This might include:

  • Evidence regarding the reliability and validity of measures
  • Deviations from preregistration
  • Power analyses
  • Specific details about the analytic approach
  • Constraints to the generalizability of findings
  • Speculations about the meaning of results that should be followed up in future studies
  • Additional references
  • Additional Tables and Figures

Supplemental Materials should be anonymous and formatted in a way that is easy to read and interpret. They will be reviewed by the reviewers and Editor, but unlike the main text, they will not be copyedited or printed as primary articles. Links to supplemental materials will be included in published articles.

Methods

All manuscripts should conform to JARS standards. The following methodological issues should also be considered. These are aspirational and general, and many will not apply to every manuscript. In many cases, details about methods should go in the Supplemental Materials in order to keep manuscripts within page limits.

  • Transparency
    • Papers should follow the TOP Guidelines as described on the Open-Science page (TOP provides a suite of tools to guide implementation of better, more transparent research; see the TOP Guidelines project page)
    • All methods, data, and scripts should be made available and fully documented, unless there is a specifically stated reason not to (e.g., posting data would be unethical because of risks to participant confidentiality). This is required at submission. Documentation should be thorough and clear enough that an independent researcher could reproduce the results and/or directly replicate the study in a new sample. Methods and analyses reported in the manuscript itself need not be this thorough, but should give the reader a clear idea of how hypotheses were tested.
    • Materials will ideally be posted to PsychArchives (www.psycharchives.org), ZPID's repository for psychological science, although we also accept materials that are posted on other public sites.
  • Sampling
    • Population should be clearly described and appropriate for the research question(s).
    • Demographics should be fully and clearly described, as appropriate to the population. In most cases, this includes descriptions of the distributions of gender identity, racial/ethnic/national background, and age, at a minimum.
    • Sample size should be explicitly justified. For studies in which data were collected by the authors, this should be based on a power analysis that was conducted prior to data collection and is fully reported. For studies using preexisting data, it may be helpful to report the minimum effect sizes detectable at a given level of power for the available sample size.
    • Sampling diversity (e.g., from underrepresented groups) is not necessary but is strongly desired, within the scope of the research question.
    • Any removal of data (e.g., outliers or incomplete/faulty responders) should be explicitly justified, ideally in a preregistration.
    • The timing of assessments in longitudinal studies should be justified in terms of timescale, frequency, and number of assessments. Attrition should be fully reported, and the possibility of attrition-related bias should be examined.
  • Measurement
    • There should generally be some justification for the type of assessment method used, unless this is obvious (e.g., use of questionnaires to measure explicit attitudes).
    • Reliability of measures should be reported for the sample used in the study and should be appropriate to the nature of the measure (e.g., inter-rater for observations, retest if stability is relevant to study question, omega if inference is about the internal consistency of items intended to measure a latent variable).
    • For unfamiliar scales, evidence for internal and external validity should be given from prior studies. For new scales created for the study, evidence for internal validity (i.e., dimensionality, reliability, item characteristics) and external validity (i.e., convergent, discriminant, and criterion validity) should be given from the current sample, and a justification should be given for creating a de novo scale rather than using an existing measure.
    • For translated measures, evidence regarding the success of translation/back-translation, such as results of invariance testing, should be provided.
    • If existing scales are shortened to brief forms, this should be justified and evidence should be given for validity vis-à-vis the parent scale.
  • Analysis
    • Latent variables are generally preferred over manifest variables for all analyses because of their enhanced reliability, precision, and power. However, descriptives and associations for manifest variables should also be reported (typically in the Supplemental Materials) and/or be easy to compute from the posted data.
    • There should be a clear justification for why between- or within-person analyses were performed, and the authors should avoid mixing these levels of analyses when going from results to interpretation (e.g., interpreting a between-person association as evidence of within-person if-then sequences).
    • It will often be important to attend to the possibility that nomothetic patterns may not hold for individuals. Studies that explicitly test idiographic models that examine the degree to which nomothetic findings hold for individuals in longitudinal data are encouraged.
    • If data are missing, there should be a clear and ideally preregistered justification and explanation of how this is handled. It is generally preferable to retain all cases and use multiple imputation or full information maximum likelihood estimation rather than listwise deletion.
    • Simple models are generally preferred over complex ones that are likely to have overfit the data and thus be unlikely to replicate in new samples. Likewise, latent variable models that have been trimmed within a given sample to improve model fit, in the absence of cross-validation, are unlikely to replicate and are thus generally discouraged. Mediation must be justified in terms of order and timing.
    • Mediation models from cross-sectional data, or in data whose timescale does not closely match the underlying theory of the process being studied, should not be interpreted as indicative of a temporal or causal process, and in most cases should not be tested at all. For more on mediation, please see
      • Bullock, J. G., Green, D. P., & Ha, S. E. (2010). Yes, but what's the mechanism? (don't expect an easy answer). Journal of Personality and Social Psychology, 98(4), 550–558. https://doi.org/10.1037/a0018933
      • Maxwell, S. E., & Cole, D. A. (2007). Bias in cross-sectional analyses of longitudinal mediation. Psychological Methods, 12(1), 23–44. https://doi.org/10.1037/1082-989X.12.1.23
      • Maxwell, S. E., Cole, D. A., & Mitchell, M. A. (2011). Bias in cross-sectional analyses of longitudinal mediation: Partial and complete mediation under an autoregressive model. Multivariate Behavioral Research, 46(5), 816–841. https://doi.org/10.1080/00273171.2011.606716
      • Nguyen, T. Q., Schmid, I., & Stuart, E. A. (2021). Clarifying causal mediation analysis for the applied researcher: Defining effects based on what we want to learn. Psychological Methods, 26(2), 255–271. https://doi.org/10.1037/met0000299
      • O'Laughlin, K. D., Martin, M. J., & Ferrer, E. (2018). Cross-sectional analysis of longitudinal mediation processes. Multivariate Behavioral Research, 53(3), 375–402. https://doi.org/10.1080/00273171.2018.1454822
      • Rohrer, J. M., Hünermund, P., Arslan, R. C., & Elson, M. (2022). That’s a lot to PROCESS! Pitfalls of popular path models. Advances in Methods and Practices in Psychological Science, 5(2), 25152459221095827. https://doi.org/10.1177/25152459221095827
      • Shrout, P. E. (2011). Commentary: Mediation analysis, causal process, and cross-sectional data. Multivariate Behavioral Research, 46(5), 852–860. https://doi.org/10.1080/00273171.2011.606718
    • Sufficient power for moderation effects often requires substantially larger samples than have been common in psychological research. Authors should consider the sample sizes they will need to adequately test interaction effects, both prior to data collection and when reporting findings. For guidance, please see
      • Murphy, K. R., & Russell, C. J. (2017). Mend it or end it: Redirecting the search for interactions in the organizational sciences. Organizational Research Methods, 20(4), 549–573. https://doi.org/10.1177/1094428115625322
      • Sommet, N., Weissman, D. L., & Elliot, A. (2022, September 7). How many participants do I need to test an interaction? Conducting an appropriate power analysis and achieving sufficient power to detect an interaction. https://doi.org/10.31219/osf.io/xhe3u
      • Vize, C. E., Baranger, D. A. A., Finsaas, M. C., Goldstein, B. L., Olino, T. M., & Lynam, D. R. (2022). Moderation effects in personality disorder research. Personality Disorders: Theory, Research, and Treatment. https://doi.org/10.1037/per0000582
    • Whenever possible, unrelated measures, control groups, or sensitivity analyses should be used to test the specificity of effects.
    • Specific and ideally preregistered rationale should be given for including or not including covariates.
    • Group comparisons should rest on random assignment or, where that is not possible, on statistical control (e.g., propensity score matching).
    • Measurement invariance should be tested across groups from different populations.
    • Efforts should be made to deal with autoregressive variance in longitudinal analyses. Measurement invariance should be examined across waves. Time-varying and time-invariant covariates should be distinguished. As described above, between- and within-person variance should be distinguished in longitudinal models.
  • Reporting
    • Confirmatory hypotheses should be explicitly distinguished from exploratory hypotheses. Only hypotheses that were preregistered prior to data collection/access should be described as confirmatory. Preregistrations should be anonymous to facilitate masked review.
    • Constraints on generalizability should be explicitly reported.
    • Effect sizes should be reported for all tests.
    • Confidence Intervals should be reported for all tests.
    • Power should be considered in the interpretation of all primary effects.
    • Efforts should be made to correct for the possibility of increased Type I error due to multiple testing.
    • It is acceptable to use rules of thumb for the interpretation of things like Type I error, model fit statistics, and effect sizes. However, it is always preferable to interpret findings in the context of the particular study rather than against generalized benchmarks.
    • Non-causal and causal hypotheses should be clearly distinguished. Strong causal interpretations of non-causal results should be avoided. However, interpreting an effect as plausibly causal on the basis of a non-causal test is appropriate under some circumstances.
    • All deviations from preregistrations should be clearly and explicitly reported in a supplementary document.
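To make the sampling and moderation guidance above concrete, here is a minimal, hypothetical Python sketch (illustrative only, not PHAIR policy) of the standard normal-approximation formula for per-group sample size in a two-sample comparison. The effect sizes are invented for demonstration; for real studies, dedicated tools such as G*Power or simulation-based approaches are preferable.

```python
# Illustrative sketch: approximate per-group n for a two-sided,
# two-sample z-test of a standardized mean difference d. Shows why
# interaction effects, whose standardized effect sizes are often a
# fraction of those for main effects, demand much larger samples.
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group sample size via the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = z.inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Halving the effect size roughly quadruples the required n:
print(n_per_group(0.5))   # → 63  (a medium main effect)
print(n_per_group(0.25))  # → 252 (an interaction half that size)
```

Because required n grows with the inverse square of the effect size, attenuated interaction effects can require several times the sample needed for the corresponding main effects, which is the point elaborated in the Sommet et al. and Vize et al. references above.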
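The guidance under Reporting about correcting for increased Type I error can also be illustrated with a short, self-contained sketch (again illustrative, not a required procedure): a Holm-Bonferroni step-down correction, one common way to control the familywise error rate. The p-values below are invented for demonstration.

```python
# Illustrative sketch: Holm-Bonferroni correction for a family of
# p-values. The i-th smallest p-value (0-indexed) is compared against
# alpha / (m - i); testing stops at the first non-rejection.

def holm_bonferroni(p_values, alpha=0.05):
    """Return booleans marking which hypotheses are rejected."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # all larger p-values are also retained
    return reject

pvals = [0.004, 0.030, 0.019, 0.260]  # hypothetical p-values
print(holm_bonferroni(pvals))  # → [True, False, False, False]
```

Holm's procedure controls the familywise error rate and is uniformly more powerful than the plain Bonferroni correction; in practice, established implementations (e.g., multipletests in statsmodels) are preferable to hand-rolled code.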