This past week, a number of heavy hitters in the machine learning community launched the first online, interactive, peer-reviewed machine learning journal, focused on publishing original research and expository papers that emphasize clear explanations of their core ideas.
The new journal is called Distill. The steering committee includes researchers from the University of Montreal (Deep Learning guru Yoshua Bengio), DeepMind, Google Brain (another Deep Learning guru, Ian Goodfellow – read his new book on Deep Learning), and Elon Musk's OpenAI (Andrej Karpathy), among others.
On first reading, Distill sounds like yet another medium of publication for desperate academics looking to pad their CVs in academia's out-of-control publish-or-perish culture. However, the steering committee views Distill as a different kind of journal, one that moves past the traditional printed publication medium and into the Web 2.0 world of interactive content. Interestingly, each submitted paper will have a corresponding GitHub repository, and the review process will revolve around Git primitives such as issues. I definitely haven't seen anything like this before, so it will be interesting to see how well it works for peer review.
Distill papers focus on four main goals (my summary from reading the journal’s website):
- New research results
- A clear explanation of the research ideas
- An interactive playground allowing the reader to better understand the presented research
- Free dissemination of the papers to the general public under the Creative Commons Attribution license
Of the above four goals, the first is rather standard for any serious, peer-reviewed research publication. Much progress has also been made on the last over the past ten years, with some major conferences and journals now making published articles available online at no cost. What truly differentiates Distill from its peers are the two remaining goals of clear explanations and interactive exploration. I think the sample paper explaining t-SNE (a state-of-the-art method for dimensionality reduction of data) is a prime example of what Distill's founders consider a clear explanation via exploratory interaction.
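For readers unfamiliar with t-SNE itself, here is a minimal sketch of running it with scikit-learn (assumed installed; the library and its `TSNE` API are not part of Distill's article, which instead explores the method interactively). It embeds synthetic high-dimensional data into two dimensions:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two well-separated 10-dimensional clusters of 25 points each.
X = np.vstack([rng.normal(0.0, 1.0, size=(25, 10)),
               rng.normal(8.0, 1.0, size=(25, 10))])

# Embed into 2-D. Perplexity roughly controls the neighborhood size
# each point considers and must be smaller than the sample count --
# one of the hyperparameters whose effect the Distill article lets
# readers explore interactively.
embedding = TSNE(n_components=2, perplexity=5,
                 random_state=0).fit_transform(X)
print(embedding.shape)  # (50, 2)
```

Static code like this is exactly what an interactive Distill figure improves on: instead of re-running the script for each perplexity value, the reader drags a slider and watches the embedding change.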
The two goals are not really independent: interactive sessions should make the core research ideas easier to understand. The journal gives authors modern HTML tooling to create interactive papers. However, making papers interactive will require extra effort from authors, and it will be interesting to see whether an academic community under enormous pressure to publish in high volume will embrace Distill and support its founders' efforts.
If Distill succeeds in publishing papers that satisfy these goals, it will be a welcome addition to the academic literature. If not, it will either be marginalized and eventually cease to exist, or transform into just another standard journal.
Although I find the motivating ideas behind Distill's creation interesting and I wish its founders the best of luck (not to mention that I will be reading any published articles), I will offer one small criticism.
Truth be told, research papers are generally difficult to understand not because of poor writing but because the target audience is expected to have a good, broad understanding of the research literature as well as a good handle on mathematics. These are necessary prerequisites for understanding a research paper, and they are the reason so many people spend years in post-graduate studies in an effort to master Machine Learning or any other academic discipline. For example, it is often next to impossible to understand a machine learning paper if the reader doesn't have a good understanding of linear algebra, calculus, and numerical optimization.
I understand that industry today has great need for machine learning talent, and any effort to skill people up for the job is welcome. But I would suggest that if industry needs more well-trained personnel, then we should increase funding to academia to train qualified workers. Instead, over the last few years, academic institutions have been stripped bare of good people by big businesses that can both afford to pay huge salaries and keep data (our data, by the way) hostage. The result is a diminished ability of traditional academia to train the workforce, a loss that online courses (MOOCs in particular) are trying to compensate for.
Today, we often come across poorly trained "data scientists" who learn to use some of the available software tools to do work of average quality (at best) without much understanding of the underlying mathematical principles. I would argue that this approach is far more dangerous to our future economic prosperity than investing money in the academic training of the workforce and waiting a few years for qualified scientists to reach industry. To me, Distill feels like yet another band-aid effort to bring the workforce up to date with the latest in one part of Artificial Intelligence. It will help, but I think our best option is to increase funding to our academic institutions, which, after all, are in the business of teaching people skills.