Full papers submitted to HCOMP 2020 will be evaluated according to the following review
criteria. Our intent in posting review criteria online is to further improve the transparency of the
conference's peer-review process and to provide additional guidance to authors in preparing
their submissions (especially for young researchers, as well as researchers from diverse
backgrounds).
"To ensure relevance, submissions are encouraged to include research questions and
contributions of broad interest to crowdsourcing and human computation, as well as
discuss relevant open problems and prior work in the field."
"When evaluation is conducted entirely within a specific domain, authors are
encouraged to discuss how findings might generalize to other communities and
application areas using crowdsourcing and human computation."
Presentation & Writing
Is the English writing correct and comprehensible?
Are the ideas, methods, results and discussions well-presented?
Are research questions and major findings clearly articulated?
Is any domain-specific terminology or methodology explained for a diverse HCOMP audience?
HCOMP-specific Note: Is any language criticizing the crowd balanced with
praise when appropriate? Do the authors either avoid inflammatory language in characterizing
crowd contributors (e.g., as incompetent, cheaters, spammers, etc.) or provide rigorous
evidence justifying the use of such terms?
Citations & Prior Work
Do authors cite prior work most relevant to their own study?
Is the review of prior work correct and sufficiently detailed?
Do authors discuss all prior work needed to interpret and assess this paper?
Novelty
Does the paper establish a new problem?
Are new methods or theorems proposed?
Is evaluation conducted on novel data?
Are new evaluation procedures described for establishing validity?
Is the problem or approach so novel that it may be difficult for the authors to rigorously
evaluate or for the reviewers to assess?
Significance & Impact
How significantly will this work change future research and practice in the field?
Where is it on the spectrum from incremental to transformative?
Does it pose important new problems, challenge accepted knowledge and practice, or otherwise
prompt or enable new avenues of research?
How likely is this paper to be cited?
Data, Code, & Resources
Do the authors commit to sharing any new datasets, source code, and/or other resources for
others to use?
If so, are many people likely to use these new resources?
How greatly would these new resources impact future research and practice?
Soundness & Validity
Are research methods appropriate, sufficient, and correctly employed?
Is the experimental design sound, and are the analyses thorough, proofs valid, and findings
supported by the evidence?
Is the work brought to an appropriate state of completion?
HCOMP-specific Note: Do the authors consider the impact of their own task
design in any evaluation of crowd reliability & quality?
Replicability
Are method descriptions sufficiently detailed and clear?
Are the resources used in the paper (e.g., data, code, computing infrastructure) already
publicly available, committed to be shared by the authors, or easily substitutable with
comparable alternatives?
Could someone else conduct similar experiments to verify the results?
Thank Your Crowd!
When writing acknowledgments in the final, camera-ready versions of accepted papers,
we encourage authors to thank their crowd contributors. After all, we couldn't conduct our
research or run our crowd-powered systems without the many individual contributors who choose to
participate.
Stay Connected: HCOMP Community
We welcome everyone who is interested in crowdsourcing and human computation to:
Join our Google Group (mailing list) to post and receive crowdsourcing and human computation
email announcements (e.g., calls for papers, job openings, etc.), including updates about the
conference. To subscribe, send an email to email@example.com.