# | Paper Title | Author Names |
---|---|---|
1 | Crowd Access Path Optimization: Diversity Matters | Besmira Nushi, ETH Zurich; Adish Singla, ETH Zurich; Anja Gruenheid, ETH Zurich; Erfan Zamanian, Brown University; Andreas Krause, ETH Zurich; Donald Kossmann, ETH Zurich |
2 | Modeling Temporal Crowd Work Quality with Limited Supervision | Hyun Joon Jung, University of Texas at Austin; Matt Lease, University of Texas at Austin |
3 | Identifying and Accounting for Task-Dependent Bias in Crowdsourcing | Ece Kamar, Microsoft Research; Ashish Kapoor, Microsoft Research; Eric Horvitz, Microsoft Research |
4 | Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms | Akash Das Sarma, Stanford University; Ayush Jain, University of Illinois; Arnab Nandi, The Ohio State University; Aditya Parameswaran, University of Illinois; Jennifer Widom, Stanford University |
5 | CrowdAR: Augmenting Live Video with a Real-Time Crowd | Elliot Salisbury, University of Southampton; Sebastian Stein, University of Southampton; Sarvapali Ramchurn, University of Southampton |
6 | To Play or not to Play: Interactions between Response Quality and Task Complexity in Games and Paid Crowdsourcing | Markus Krause, UC Berkeley |
7 | Tropel: Crowdsourcing Detectors with Minimal Training | Genevieve Patterson, Brown University; James Hays, Brown University; Pietro Perona, California Institute of Technology; Grant Van Horn, California Institute of Technology; Serge Belongie, Cornell University |
8 | Publishable Humanly-Usable Secure Password Creation Methods | Manuel Blum, CMU; Santosh Vempala, Georgia Tech |
9 | Learning Supervised Topic Models from Crowds | Filipe Rodrigues, CISUC; Mariana Lourenço, CISUC; Francisco Pereira, MIT; Bernardete Ribeiro, CISUC |
10 | From “In” to “Over”: Behavioral Experiments on Whole-Network Computation | Lili Dworkin, University of Pennsylvania; Michael Kearns, University of Pennsylvania |
11 | Using Anonymity and Communal Efforts to Improve Quality of Crowdsourced Feedback | Julie Hui, Northwestern University; Amos Glenn; Rachel Jue; Steven Dow, Carnegie Mellon University; Liz Gerber, Northwestern University |
12 | PISCES: Participatory Incentive Strategies for Effective Community Engagement in Smart Cities | Tridib Mukherjee, Xerox Research Centre India; Arpita Biswas, Xerox Research Centre India; Deepthi Chander, Xerox Research Centre India; Koustuv Dasgupta, Xerox Research Centre India; Koyel Mukherjee, Xerox Research Centre India; Mridula Singh, Xerox Research Centre India |
13 | Crowdlines: Supporting Synthesis of Diverse Information Sources through Crowdsourced Outlines | Kurt Luther, Virginia Tech; Nathan Hahn, Carnegie Mellon University; Steven Dow, Carnegie Mellon University; Aniket Kittur, Carnegie Mellon University |
14 | Crowdsourcing Feature Discovery via Adaptively Chosen Comparisons | James Zou, Microsoft Research; Kamalika Chaudhuri, UCSD; Adam Kalai, Microsoft Research |
15 | Crowdsourced Nonparametric Density Estimation using Relative Distances | Antti Ukkonen, FIOH; Behrouz Derakhshan, Rovio Entertainment; Hannes Heikinheimo, Reaktor |
16 | Combining Crowd and Expert Labels using Decision Theoretic Active Learning | An Nguyen, University of Texas at Austin; Byron Wallace, University of Texas at Austin; Matt Lease, University of Texas at Austin |
17 | Crowdsourcing from Scratch: A Pragmatic Experiment in Data Collection by Novice Requesters | Alexandra Papoutsaki, Department of Computer Science, Brown University; Hua Guo, Department of Computer Science, Brown University; Danae Metaxa-Kakavouli, Department of Computer Science, Brown University; Connor Gramazio, Department of Computer Science, Brown University; Jeff Rasley, Department of Computer Science, Brown University; Wenting Xie, Department of Computer Science, Brown University; Guan Wang, Department of Computer Science, Brown University; Jeff Huang, Department of Computer Science, Brown University |
18 | Reliable aggregation of boolean crowdsourced tasks | Luca De Alfaro, UC Santa Cruz; Vassilis Polychronopoulos, UC Santa Cruz; Michael Shavlovsky, UC Santa Cruz |
19 | Online Assignment of Heterogeneous Tasks in Crowdsourcing Markets | Shahin Jabbari, University of Pennsylvania; Sepehr Assadi, University of Pennsylvania; Justin Hsu, University of Pennsylvania |
20 | Crowdsourcing in the Field: A Case Study Using Local Crowds for Event Reporting | Elena Agapie, University of Washington; Andres Monroy-Hernandez, Microsoft Research; Jaime Teevan, Microsoft Research |
21 | Guardian: A Crowd-Powered Spoken Dialogue System for Web APIs | Ting-Hao Huang, Carnegie Mellon University; Walter S. Lasecki, University of Michigan; Jeff Bigham, Carnegie Mellon University |
# | Paper Title | Author Names |
---|---|---|
1 | Crowdsourcing based on GPS | Ketan Vazirabadkar, Pune Institute of Computer Technology; Siddhant Gadre, Pune Institute of Computer Technology; Nikhil Dwivedi, Pune Institute of Computer Technology; Rajeev Sebastian, Pune Institute of Computer Technology |
2 | Planning in Open Worlds with Crowd Sensing | Jie Gao, Jilin University Zhuhai Campus; Hankz Hankui Zhuo; Yasong Liu; Lei Li |
3 | Cheaper and Better: Selecting Good Workers for Crowdsourcing | Hongwei Li, Statistics Department, UC Berkeley; Qiang Liu |
4 | Moral Reminder as a Way to Improve Worker Performance on Amazon Mechanical Turk | Heeju Hwang, University of Hong Kong |
5 | Learning to Hire Teams | Adish Singla, ETH Zurich; Eric Horvitz, Microsoft Research; Pushmeet Kohli, Microsoft Research; Andreas Krause, ETH Zurich |
6 | Crowdsourcing Reliable Ratings for Underexposed Items | Beatrice Valeri, University of Trento; Shady Elbassuoni, American University of Beirut; Sihem Amer-Yahia, Laboratoire d'Informatique de Grenoble |
7 | The ActiveCrowdToolkit: An Open-Source Tool for Benchmarking Active Learning Algorithms for Crowdsourcing Research | Matteo Venanzi, University of Southampton; Oliver Parson, University of Southampton; Alex Rogers, University of Southampton; Nick Jennings, University of Southampton |
8 | The Rise of Curation in GitHub | Yu Wu, Penn State University; Jessica Kropczynski, Penn State University; Raquel Prates, Federal University of Minas Gerais; John Carroll, Penn State University |
9 | On the role of task design in crowdsourcing campaigns | Carlo Bernaschina, Politecnico di Milano; Ilio Catallo, Politecnico di Milano; Piero Fraternali, Politecnico di Milano; Davide Martinenghi, Politecnico di Milano |
10 | Modeling and Exploration of Crowdsourcing Micro-Tasks Execution | Pavel Kucherbaev, University of Trento; Florian Daniel, University of Trento; Stefano Tranquillini, University of Trento; Maurizio Marchese, University of Trento |
11 | Assigning Tasks to Workers by Referring to Their Schedules in Mobile Crowdsourcing | Mayumi Hadano, NTT Corporation; Makoto Nakatsuji, NTT Corporation; Hiroyuki Toda, NTT Corporation; Yoshimasa Koike, NTT Corporation |
12 | A Crowdsourced Approach for Obtaining Rephrased Questions | Hiroyoshi Tanji, University of Tsukuba; Nobuyuki Shimizu, Yahoo! Japan; Atsuyuki Morishima, University of Tsukuba; Ryota Hayashi, University of Tsukuba |
13 | Tight Policy Regret Bounds for Task Assignment to Improving and Decaying Workers | Hoda Heidari, University of Pennsylvania; Michael Kearns, University of Pennsylvania; Aaron Roth, University of Pennsylvania |
14 | A Game with a Purpose for Recommender Systems | Sam Banks, Insight Centre for Data Analytics, University College Dublin; Rachael Rafter, Insight Centre for Data Analytics, University College Dublin; Barry Smyth, Insight Centre for Data Analytics, University College Dublin |
15 | Proposal of Grade Training Method in Private Crowdsourcing System | Masayuki Ashikawa, Toshiba; Takahiro Kawamura, Toshiba Corporation; Akihiko Ohsuga, Graduate School of Information Systems, The University of Electro-Communications |
16 | Extracting structured information via automatic + human computation | Ellie Pavlick, University of Pennsylvania; Chris Callison-Burch, University of Pennsylvania |
17 | Understanding Socially Constructed Concepts Using Blogs Data | Francisco Iacobelli, Northeastern Illinois University; Alastair Gill, King's College London |
18 | LoRUS: A Mobile Crowdsourcing System for Efficiently Retrieving the Top-k Relevant Users in a Spatial Window | Anirban Mondal, Xerox Research; Gurulingesh Raravi, Xerox; Amandeep Chugh, Xerox Research; Tridib Mukherjee, Xerox Research Centre India |
19 | Flexible reward plans to elicit truthful predictions in crowdsourcing | Yuko Sakurai, Kyushu University; Satoshi Oyama, Hokkaido University; Masato Shinoda, Nara Women's University; Makoto Yokoo, Kyushu University |
20 | A cross-cultural study of motivations to participate in a crowdsourcing project to support people with disabilities | Helen Petrie, University of York; Fatma Layas, University of York; Christopher Power, University of York |
21 | Enabling Physical Crowdsourcing On-the-Go with Context-sensitive Notifications | Yongsung Kim, Northwestern University; Haoqi Zhang; Liz Gerber, Northwestern University; Darren Gergle; Emily Harburg; Shana Azria |
22 | Predicting the Quality of Crowdsourced Image Drawings from Crowd Behavior | Mehrnoosh Sameki; Danna Gurari, Boston University; Margrit Betke, Boston University |
23 | Boosting Engagement in Social Network Challenges for Content Creation | Marco Brambilla, Politecnico di Milano; Stefano Ceri, Politecnico di Milano; Chiara Leonardi, Politecnico di Milano; Andrea Mauri, Politecnico di Milano; Riccardo Volonterio, Politecnico di Milano |
24 | How Effective an Odd Message Can Be: Inappropriate Topics Encourage Dialog and Trust in Speech-Based Vehicle Interfaces | David Sirkin, Stanford University; Kerstin Fischer, University of Southern Denmark; Lars Christian Jensen, University of Southern Denmark; Brian Mok, Stanford University; Wendy Ju, Stanford University |
25 | Using Expertise for Crowd-sourcing | David Merritt, University of Michigan; Mark Newman, University of Michigan; Pei-Yao Hung, University of Michigan; Mark Ackerman, University of Michigan; Erica Ackerman |
26 | On Optimizing Human-Machine Task Assignments | Andreas Veit; Michael Wilber, Cornell University; Rajan Vaish; Serge Belongie, Cornell University; James Davis, UC Santa Cruz; Vishal Anand; Anshu Aviral; Prithvijit Chakrabarty; Yash Chandak; Sidharth Chaturvedi; Chinmaya Devaraj; Ankit Dhall; Utkarsh Dwivedi; Sanket Gupte; Sharath Sridhar; Karthik Paga; Anuj Pahuja; Aditya Raisinghani; Ayush Sharma; Shweta Sharma; Darpana Sinha; Nisarg Thakkar; K. Vignesh; Utkarsh Verma |
27 | Identifying topical content and experts in Twitter using text and visual content | Eleonora Ciceri, Politecnico di Milano |
28 | Exploring Cognitive Benefits as an Alternative Motivation for Engaging Older Adults in Crowdwork | Robin Brewer, Northwestern University; Anne Piper, Northwestern University; Meredith Morris, Microsoft |
29 | Dynamic Filter: An Adaptive Algorithm for Processing Data with the Crowd | Katherine Reed, Harvey Mudd College; Austin Shin, Harvey Mudd College; Beth Trushkowsky, Harvey Mudd College |
30 | End-to-End Crowdsourcing Approach for Unbiased High-Quality Transcription of Speech | Michael Levit, Microsoft; Shuangyu Chang, Microsoft; Omar Alonso, Microsoft; Anil Kumar Yadavalli, Microsoft |
31 | Spying with the Crowd | Malay Bhattacharyya, IIEST, Shibpur |
32 | Studying the Reality of Crowd-powered Healthcare | Malay Bhattacharyya, IIEST, Shibpur |
33 | Beyond Task-Based Crowdsourcing Database Research | Roman Lukyanenko, FIU; Jeffrey Parsons |
34 | Analyzing crowdsourced assessment of user traits through Twitter posts | Lucie Flekova, UKP Lab; Daniel Preotiuc-Pietro, University of Pennsylvania; Jordan Carpenter, University of Pennsylvania; Salvatore Giorgi, University of Pennsylvania; Lyle Ungar, University of Pennsylvania |
35 | Towards Generating Virtual Movement from Textual Instructions: A Case Study in Quality Assessment | Himangshu Sarma, University of Bremen; Robert Porzel, University of Bremen; Jan Smeddinck, University of Bremen; Rainer Malaka, University of Bremen |
36 | VoiceTranscriber: Crowd-powered Oral Narrative Summarization System | Hung-Chi Lee, National Taiwan University; Jane Hsu, National Taiwan University |
37 | Bull-O-Meter: Predicting the Quality of Natural Language Responses | Markus Krause, UC Berkeley |
38 | A Method to Automatically Choose Suggestions to Improve Perceived Quality of Peer Reviews Based on Linguistic Features | Markus Krause, UC Berkeley |
39 | Making Legacy Open Data Machine Readable by Crowdsourcing | Satoshi Oyama, Hokkaido University; Yukino Baba, Kyoto University; Ikki Ohmukai; Hiroaki Dokoshi; Hisashi Kashima |
40 | A GWAP Approach for Collecting Consumer Perception of a Product | Yu Kagoshima, Aoyama Gakuin University; Hajime Mizuyama, Aoyama Gakuin University; Tomomi Nonaka, Aoyama Gakuin University |
41 | Reactive Learning: Actively Trading off Larger Noisier Training Sets Against Smaller Cleaner Ones | Christopher Lin, University of Washington; Mausam, Indian Institute of Technology Delhi; Daniel Weld, University of Washington |
42 | Predicting Sales E-Mail Responders using a Natural Language Model | Anand Kulkarni, LeadGenius; Markus Krause, UC Berkeley |
43 | Crowdsourcing Crowdsourcing: Using Developers in the Crowd to Contribute Code to a Crowdsourcing Platform | Anand Kulkarni, LeadGenius; Max Mautner; Andrew Schriner, LeadGenius; Brianna Salinas, LeadGenius; Egor Vinogradov, LeadGenius |
44 | Product Concept Evaluation Game Combining Preference Market and GA | Miku Imai, Aoyama Gakuin University; Hajime Mizuyama, Aoyama Gakuin University; Tomomi Nonaka, Aoyama Gakuin University |
45 | This Image Intentionally Left Blank: Mundane Images Increase Citizen Science Participation | Alex Bowyer, University of Oxford; Veronica Maidel, Adler Planetarium; Chris Lintott, University of Oxford; Alexandra Swanson, University of Oxford; Grant Miller, University of Oxford |
46 | An Online Approach to Task Assignment and Sequencing in Expert Crowdsourcing | Ioanna Lykourentzou, LIST; Heinz Schmitz, Trier University of Applied Sciences |
47 | Chatmood: A Mixed-initiative Labeling Approach to Build Natural Dialogue Corpus for Sentiment Analysis | Chi-Chia Huang, National Taiwan University; Yi-Ching Huang, National Taiwan University; Jane Hsu, National Taiwan University |
48 | SimplyCity - A Simple Mobile Crowdsourcing Platform for Emerging Cities | Naveen Nandan, SAP Research |
49 | Panoptes, a Project Building Tool for Citizen Science | Alex Bowyer, University of Oxford; Chris Lintott, University of Oxford; Greg Hines, University of Oxford; Campbell Allan, University of Oxford; Ed Paget, Adler Planetarium |
50 | On Transcribing Russian with a Highly Mismatched Indian Crowd | Purushotam Radadia, TCS Innovation Labs-TRDDC; Shirish Karande; Sachin Lodha, TCS Innovation Labs-TRDDC |
51 | Enhancing Diversity and Coverage of Crowd-Generated Feedback through Social Interaction | Yi-Ching Huang, National Taiwan University; Hao-Chuan Wang, National Tsing Hua University; Jane Hsu, National Taiwan University |
52 | A Multi-Layer Feature-Assisted Approach in Crowd-Labeling | Ping-Hao Chen, National Taiwan University; Meng-Ying Chan; Chi-Chia Huang, National Taiwan University; Yi-Ching Huang, National Taiwan University; Jane Hsu, National Taiwan University |
53 | Crowdsourcing a large scale multilingual lexico-semantic resource | Fausto Giunchiglia, University of Trento; Mladjan Jovanovic; Mercedes Huertas-Miguelez, University of Trento; Khuyagbaatar Batsuren |
54 | Crowdsourcing Data Understanding: A Case Study using Open Government Data | Yukino Baba, Kyoto University; Hisashi Kashima |
55 | Crowdsourcing the Creation of Quality Multiple Choice Exams | Sarah Luger, University of Edinburgh |
56 | Dropout Prediction in Crowdsourcing Markets | Malay Bhattacharyya, IIEST, Shibpur |
57 | BlueSky: Charting entire idea spaces through iterative refinement | Gaoping Huang, Purdue University; Alexander Quinn, Purdue University |
58 | Labeling Synonyms for Query Expansion Using Crowdsourcing and a Search Engine | Bruce Smith, Intuit; Chris Collins, Intuit; Pankaj Andhale, Intuit |
59 | Difficulties with Self-Assessment in Computer Programming | Rowland Pitts, George Mason University |
60 | Feature Selection and Validation for Human Classifiers | Jay B. Martin, Stitch Fix Inc.; Eric Colson, Stitch Fix Inc.; Brad Klingenberg, Stitch Fix Inc. |
The HCOMP conference is cross-disciplinary, and we invite submissions across the broad spectrum of crowdsourcing and human computation work. Human computation and crowdsourcing are unique in their direct engagement with, and reliance on, both human-centered studies and traditional computer science. The HCOMP conference thus aims to promote the scientific exchange of advances in human computation and crowdsourcing among researchers, engineers, and practitioners across a spectrum of disciplines who might otherwise not have the opportunity to hear from one another.
The theme for HCOMP 2015 is “Intersections”: intersections between people and technology, intersections between different stakeholders, intersections of the physical and virtual, and intersections between diverse perspectives in the HCOMP community and beyond.
The conference was created by researchers from diverse fields to serve as a key focal point and scholarly venue for the review and presentation of the highest quality work on principles, studies, and applications of human computation and crowdsourcing. The meeting seeks and embraces work on human computation and crowdsourcing in multiple fields, including human-centered fields like human-computer interaction, psychology, design, economics, management science, and social computing, and technical fields like databases, systems, information retrieval, optimization, vision, speech, robotics, machine learning, and planning.
Paper submissions are double-blind, so please do not include author information in the paper. Papers should be formatted according to the AAAI style guidelines. We do not impose a strict page limit: we expect the average paper to run around 8 pages, but papers may be submitted at any length commensurate with their contribution.
Submissions are invited on principles, studies, and applications of systems that rely on programmatic access to human intellect to perform some aspect of computation, or where human perception, knowledge, reasoning, or physical activity and coordination contributes to the operation of larger computational systems, applications, and services.
The conference will include presentations of new research, works-in-progress and demo sessions, and invited talks. A day of workshops and tutorials will precede the main conference.
HCOMP 2015 builds on a series of four successful earlier workshops (2009, 2010, 2011, and 2012) and the two AAAI HCOMP conferences held in 2013 and 2014. All accepted full papers will be published as archival proceedings in the AAAI digital library. While we encourage visionary and forward-looking papers, the paper track will not accept work recently published or soon to be published in another conference or journal. To encourage the exchange of ideas, however, such work may be submitted to the non-archival works-in-progress and demo track; submissions of this kind should include the venue of previous or concurrent publication.
Schedule:
- Paper submissions (double-blind): May 1, 2015 (11:59pm Pacific Time)
- Optional rebuttals: June 6, 2015
- Workshop and tutorial proposals: June 8, 2015
- Notification for papers, workshops, and tutorials: June 19, 2015
- Camera-ready papers due: August 11, 2015
- Works-in-progress and demo submissions: August 25, 2015
- Notification for works-in-progress and demos: September 7, 2015
Summary: As the field of crowdsourcing grows rapidly, new challenges constantly arise in the ever-expanding applications of data collection, curation, and beyond. In turn, new and innovative strategies are continually being developed to solve these challenges. We would like to focus on recent breakthroughs that can be applied to the field of language technology and take crowdsourcing to the next level. Breakthroughs may be effective solutions to new problems or novel solutions to old ones. For more information about the workshop and details on submitting proposals, please see: http://hcomp2015.voicebox.com/
CrowdCamp is a one-day hack-a-thon for crowdsourcing, human computation, social media, and collective intelligence ideas. The focus is on creating a deliverable prototype, result, or insight within the workshop itself. Over the last five years, CrowdCamp projects have resulted in top-tier conference publications, blog posts, and ongoing research.
This year's CrowdCamp will be held before the main conference, on November 9th.
Submission to the Works-in-Progress track is done via the HCOMP CMT page (https://cmt.research.microsoft.com/HCOMP2015/). When you create a new submission, select the 'Works-in-Progress' option.
Works-in-progress and demo submissions to HCOMP are due on August 25, 2015 at 5pm Pacific Time. Submissions are limited to two pages in AAAI format, plus auxiliary information. Submissions are not anonymous, so do include author names.
We encourage practitioners and researchers to submit Works-in-Progress, as the track provides a unique opportunity for sharing valuable ideas, eliciting useful feedback on early-stage work, and fostering discussion and collaboration among colleagues. Accepted submissions will be presented as posters at the conference and made available to the community as two-page poster abstracts.
A Work-in-Progress is a concise report of recent findings or other types of innovative or thought-provoking work relevant to the HCOMP community. The difference between Works-in-Progress and other contribution types is that Work-in-Progress submissions represent work that has not reached a level of completion that would warrant the full refereed selection process. That said, appropriate submissions should make some contribution to the body of HCOMP knowledge, whether realized or promised. A significant benefit of a Work-in-Progress derives from the discussion between the author and conference attendees fostered by the face-to-face presentation of the work.
Each WIP poster will be provided a 30"x40" poster board and thumb tacks; make sure to print your poster ahead of time to fit these dimensions.
A demo is a high-visibility, high-impact forum of the HCOMP program that allows you to present hands-on demonstrations, share novel interactive technologies, and stage interactive experiences. We encourage submissions from any area of human computation and crowdsourcing. The demo track promotes and provokes discussion of the role of technology and invites contributions from industry, research, the arts, and design.
The demo track showcases this year's most exciting crowd and human computation prototypes and systems. If you have an interesting prototype, system, exhibit or installation, we want to know about it. Sharing hands-on experiences of your work is often the best way to communicate what you have created.
We look forward to seeing your submissions, and to seeing you at HCOMP in November!
Steven Dow (Carnegie Mellon University)
Walter Lasecki (University of Michigan)
The HCOMP 2015 Doctoral Consortium will provide doctoral students with a unique opportunity to meet each other and experienced researchers in the field. Students will be mentored by a group of faculty who are leaders in the diverse specialties that make up the HCOMP field.
Brent Hecht (University of Minnesota)
Laura Dabbish (Carnegie Mellon University)
Jeff Bigham (Carnegie Mellon University)
Lilly Irani (University of California, San Diego)
Matt Lease (University of Texas at Austin)
Haoqi Zhang (Northwestern University)
The Consortium will take place on November 8, 2015 in San Diego, CA, immediately before the main HCOMP 2015 conference.
Prospective Attendees: Applicants should have written, or be close to completing, a thesis proposal (or equivalent). We will give preference to students who have proposed, or are about to propose, but are far enough from completing their thesis that the feedback they receive at the event can shape their work. Before submitting, students should discuss this criterion with their advisor or supervisor. Those accepted are required to attend the event in person.
Selection Criteria: Selection will be based upon the expected potential of both the student and their proposed work, as well as the expected benefit to the student from participation. Priority will be given to students whose research goes beyond locally available expertise at their home institutions.
Required Materials: Applicants must submit: 1) a solely-authored overview of their doctoral research, and 2) a supplementary paragraph describing their motivation for attending and proposal status.
Doctoral Research Overview: You should submit a paper that describes your doctoral research. Your paper submission will be distributed to mentors and other attendees of the doctoral consortium. Proceedings of the Doctoral Consortium will NOT be archived. As such, students may freely submit their research contributions for official publication in other venues. Abstracts will be publicized on the conference website.
Format Requirements: Submitted papers must be written in English, formatted according to the AAAI format guidelines, and submitted as a single PDF file (embedding all required fonts). The paper should be no more than 3 pages in length, including all figures and references but excluding the one-page appendix. The first page must contain the title of the paper; the full author name, affiliation, and contact details; an abstract of up to 250 words; and up to 3 keywords describing the research topic areas.
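For authors new to the AAAI template, a minimal skeleton along the following lines may help. This is an illustrative sketch only, assuming the aaai.sty, times, helvet, and courier packages from the standard AAAI author kit are on your TeX path; the title, author, and keyword values are placeholders, and the kit itself remains the authoritative reference.

```latex
\documentclass[letterpaper]{article}
% Style files from the AAAI author kit (assumed available on the TeX path)
\usepackage{aaai}     % AAAI two-column proceedings format
\usepackage{times}    % Type 1 Times fonts, so the PDF embeds all fonts
\usepackage{helvet}
\usepackage{courier}

\title{Title of the Doctoral Research Overview}
\author{Full Author Name\\
Affiliation\\
email@example.edu}    % hypothetical contact details

\begin{document}
\maketitle

\begin{abstract}
Abstract of up to 250 words.
\end{abstract}

% Up to 3 keywords describing the research topic areas
\noindent\textbf{Keywords:} crowdsourcing; human computation; quality control

% ... body, figures, and references: 3 pages total ...

\appendix
\section*{Supplementary Paragraph}
% One-page appendix, placed after the references: motivation for
% attending, proposal status, and approximate defense date.

\end{document}
```

Compiling with pdflatex and the Type 1 fonts loaded above is one straightforward way to satisfy the requirement that all fonts be embedded in the submitted PDF.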
Supplementary Paragraph (Paper Appendix): The paper should also include an appendix (placed after the references) containing a short paragraph, written by the student, explaining 1) why they want to participate in the consortium at this point in their doctoral studies and how they expect to benefit from it, 2) their proposal status (writing the proposal or recently proposed), and 3) their expected defense date (approximate).
Submission Guidelines: Papers must be submitted via the CMT online submission system (https://cmt.research.microsoft.com/HCOMP2015/) to the "Doctoral Consortium" track. It is the responsibility of the student to ensure that their submissions use no unusual formatting and are printable on a standard printer. Submissions will be reviewed by the members of the Doctoral Consortium Program Committee.
Schedule:
- 14 September 2015: Submissions due
- 22 September 2015: Acceptance notifications
- 8 November 2015: Doctoral Consortium
Miscellany: Students who attend the doctoral consortium will receive significant assistance in offsetting the expenses of the conference; more details will be available soon. Students accepted into the consortium will be required to prepare and present a poster at the main conference.