Overview
Full papers submitted to HCOMP 2022 will be evaluated according to the following review
criteria. Our intent in posting these review criteria online is to further improve the
transparency of the conference's peer-review process and to provide additional guidance
to authors in preparing their submissions (especially young researchers and researchers
from diverse disciplines).
Feedback or suggestions for further improving these review criteria? Tweet us @hcomp_conf or email us at hcompconference@gmail.com.
Relevance
- How relevant is the work to HCOMP? Are many conference attendees likely to be interested?
- From the Call for Papers:
- "To ensure relevance, submissions are encouraged to include research questions and
contributions of broad interest to crowdsourcing and human computation, as well as
discuss relevant open problems and prior work in the field."
- "When evaluation is conducted entirely within a specific domain, authors are
encouraged to discuss how findings might generalize to other communities and
application areas using crowdsourcing and human computation."
- This year, we especially encourage work that generates new insights into the
  connections between human computation and crowdsourcing, and humanity. For
  example: How can we support the well-being and welfare of participants in
  human-in-the-loop systems? How can we promote diversity and inclusion in
  the crowd workforce? How can crowdsourcing be used for social good, e.g.,
  to address societal challenges and improve people's lives? How can human
  computation and crowdsourcing studies advance the design of trustworthy, ethical,
  and responsible AI? How can crowd science inform the development of AI that
  extends human capabilities and augments human intelligence? Any submission
  related to this theme should be considered relevant to HCOMP. However,
  submissions need not address this theme to be considered relevant to HCOMP.
Novelty & Originality
- Does the paper introduce a new problem?
- Are new methods or theorems proposed?
- Is evaluation conducted on novel data?
- Are new evaluation procedures described for establishing validity?
- Is the problem or approach so novel that it may be difficult for the authors to rigorously
evaluate or for the reviewers to assess?
Significance & Impact
- How significantly will this work change future research and practice in the field?
- Where is it on the spectrum from incremental to transformative?
- Does it pose important new problems, challenge accepted knowledge and practice, or otherwise
prompt or enable new avenues of research?
- How likely is this paper to be cited?
Soundness & Validity
- Are research methods appropriate, sufficient, and correctly employed?
- Is the experimental design sound? Are the analyses thorough, the proofs valid, and the
findings supported by evidence?
- Is the work brought to an appropriate state of completion?
- HCOMP-specific Note: Do the authors consider the impact of their own task
design in any evaluation of crowd reliability & quality?
Presentation & Writing
- Is the English writing correct and comprehensible?
- Are the ideas, methods, results, and discussions well-presented?
- Are research questions and major findings clearly articulated?
- Are domain-specific terminology and methodology explained for a diverse HCOMP audience?
- HCOMP-specific Note: Is any language criticizing the crowd balanced with
praise where appropriate? Do the authors either avoid inflammatory characterizations
of crowd contributors (e.g., as incompetent, cheaters, or spammers) or provide
rigorous evidence justifying the use of such terms?
Citations & Prior Work
- Do the authors cite the prior work most relevant to their own study?
- Is the review of prior work correct and sufficiently detailed?
- Do the authors discuss all prior work needed to interpret and assess this paper?
Other Aspects to Consider
- DATA, CODE, & RESOURCES: Do the authors commit to sharing
any new datasets, source code, and/or other resources for others to use?
If so, are many people likely to use these new resources? How greatly
would they impact future research and practice?
- REPRODUCIBILITY: Are method descriptions sufficiently
detailed and clear? Are the resources used in the paper (e.g.,
data, code, computing infrastructure) already publicly available,
committed to be shared by the authors, or easily substitutable
with similar resources? Could someone else conduct similar
experiments to verify the results?