Computer Science Department, University of Colorado, USA
Machine learning is ubiquitous, but most users treat it as a black box: a handy tool that suggests purchases, flags spam, or autocompletes text. I present the qualities that ubiquitous machine learning should have to allow for a future filled with fruitful, natural interactions with humans: interpretability, interactivity, and an understanding of human qualities. After introducing these properties, I present machine learning applications that begin to fulfill them. I begin with a traditional information processing task, making sense of and categorizing large document collections, and show that machine learning methods can provide interpretable, efficient techniques for doing so with a human in the loop. From there, I turn to techniques that help computers understand and detect when texts reveal their writer's ideology or duplicity. Finally, I end with a setting that combines all of these properties: language-based games and simultaneous machine translation.
Jordan Boyd-Graber is an assistant professor in the University of Colorado Boulder's Computer Science Department, having previously served as an assistant professor at the University of Maryland. He is a 2010 graduate of Princeton University, where he wrote his PhD thesis, "Linguistic Extensions of Topic Models", under David Blei. Jordan's research focuses on applying machine learning and Bayesian probabilistic models to problems that help us better understand social interaction or the human cognitive process. This research often leads him to use tools such as large-scale inference for probabilistic methods, natural language processing, multilingual corpus understanding, and human computation.
Department of Computer Science, University College London, UK
The need for search often arises from a person's need to achieve a goal or task, such as booking travel, organizing a wedding, buying a house, or investing in the stock market. Current search engines focus on retrieving documents relevant to the submitted query, as opposed to understanding and supporting the underlying information needs (or tasks) that have led the person to submit the query; as a result, search engine users often have to submit multiple queries to satisfy a single information need. For example, booking travel to a location such as London would require the user to submit several different queries, such as flights to London, hotels in London, and points of interest around London, as all of these queries relate to possible subtasks the user might have to perform in order to arrange their travel.
Historically, search engines have focused on identifying and retrieving documents relevant to a query submitted by a user, as opposed to helping the user achieve the actual task that has led them to issue the query. Ideally, a search engine should be able to understand the reason that caused the user to submit a query, and it should help the user achieve the actual task by guiding her through the steps (or subtasks) that need to be completed. Devising such task-based information retrieval systems poses several challenges that have to be tackled. I will start by describing the problems that need to be solved when designing such systems, as well as the progress that we have made in these areas. I will then focus on how a task-based perspective on information retrieval requires the design of new evaluation methodologies, and on the research challenges that remain to be tackled.
Emine Yilmaz is a senior lecturer (associate professor) in the Department of Computer Science at University College London, and also works as a research consultant for Microsoft Research Cambridge. Her main research interests are information retrieval and applications of information theory, statistics, and machine learning. She received a Google Faculty Research Award in 2014/2015 and has published extensively at major information retrieval venues such as SIGIR, CIKM, and WSDM. The sampling methods she designed for efficient retrieval evaluation have become some of the most commonly used methods at the Text REtrieval Conference (TREC) organized by NIST.
She has given tutorials at the SIGIR 2015, SIGIR 2012, and SIGIR 2010 conferences and at the RuSSIR/EDBT Summer School in 2011. She has also organized several workshops on Crowdsourcing (WSDM 2011, SIGIR 2011, and SIGIR 2010) and on User Modeling for Retrieval Evaluation (SIGIR 2013). She has served as one of the organizers of the ICTIR Conference in 2009, as the demo chair for the ECIR Conference in 2013, as the PC chair for the SPIRE 2015 and EVIA 2016 conferences, and as the Practice and Experience Track chair for the ACM WSDM 2017 Conference. She is also one of the organizers of the TREC Tasks Track in 2015 and 2016.
Gravity R&D Inc., Hungary
Gravity R&D has been providing recommendation services as SaaS solutions since 2009. Founded by top contenders in the Netflix Prize, the company can be considered an offspring of the competition. This talk shows how Gravity's recommendation technology evolved from a large pile of task-specific code into scalable services that serve billions of recommendation requests monthly. Given the company's academic origins and strong research focus, recommendation quality has always been the primary differentiating factor at Gravity. But we also learnt that machine learning competitions are different from scalable and robust services. We discuss some of the lessons learnt on the road to a solution that accommodates complex algorithms while remaining fast and scalable.
Domonkos Tikk is Chief Executive Officer at Gravity R&D Inc., a recommender solution vendor. Domonkos obtained his PhD in computer science in 2000 from the Budapest University of Technology and Economics. He has been working on machine learning, data mining, and text mining topics over the last decade. His team, Gravity, participated in the Netflix Prize challenge and was a leading member of The Ensemble team, which finished tied for first place in the challenge. The team members founded the company Gravity to exploit the results achieved in the Netflix Prize. Domonkos has published actively in the field of recommender systems, co-authoring about 25 papers in recent years. He also acted as co-chair of the recommender-system-related KDD Cup 2007, the RecSys Challenge in 2012 and 2014, and the RecSys Doctoral Symposium in 2011. In 2012 he gave a tutorial on best practices in recommender system challenges at ACM RecSys.