Sunday, January 1, 2017

Research Story: Factorized Machine Learning

This post is a follow-up to my earlier one on whether joins are even needed for ML over multi-table datasets. When joins are indeed needed, they often become a major bottleneck in the end-to-end process of building ML models. I tackled this problem in my dissertation by proposing the paradigm of factorized machine learning. This problem also recently attracted the attention of a few other researchers. This post covers this recent line of work, presents the interesting stories behind its progression, and discusses some recent results.

Introduction

Machine learning (ML) is critical for deriving insights from data in numerous data-driven applications. An overwhelming majority of such applications dealing with structured data have data in the so-called normalized form. In other words, features needed for ML are spread across multiple tables, connected by database dependencies such as primary key-foreign key (PK-FK) dependencies. Since almost all ML tools expect the entire feature vector to be present in a single table, data scientists are forced to join the base tables to materialize/denormalize a single table. A more detailed example of such scenarios was discussed in my previous post.

As data management folks, we know all too well that joins often introduce redundancy in the data, which could blow up the size of the intermediate single table. This also causes runtime wastage because the ML over the single table often has significant computational redundancy. In fact, in one real-world instance at a major Web company that I spoke to, such joins before ML by an analytics team brought down the cluster they shared with other teams because they shot through their allocated storage! Such runtime and storage bottlenecks caused by this problem of "learning after joins" will likely get exacerbated as more analytics workloads move to the cloud, where every bit of extra time or storage purchased requires more $$$. Of course, this problem seems obvious in hindsight but I did not realize its severity until after speaking to practitioners.


Enter Factorized Machine Learning

We highlighted the above problem in our first paper, "Learning Generalized Linear Models over Normalized Data," which appeared at SIGMOD'15 (PDF). The crux of our solution is the "factorized learning" technique, whose core insight is stunningly simple: decompose the ML computations and push them through joins to the base tables! Essentially, this applies the classical RDBMS idea of pushing selections/projections/aggregates through joins to ML. Of course, ML is not one algorithm and there are far too many to cover in a single piece of work. So, for tractability's sake, we picked a simple but popular set of ML algorithms to study the feasibility and efficiency of our idea: generalized linear models (GLMs). These models, which include linear regression and logistic regression, essentially construct a hyperplane (the coefficient vector w) in the d-dimensional feature space for regression or classification. An optimal hyperplane is defined by a loss function, denoted F, over the training data. GLMs are typically trained using gradient descent methods. For this initial work, we focused on the simple batch gradient descent (BGD) method, which is similar in its data access pattern to conjugate gradient and L-BFGS (I will discuss stochastic gradient and coordinate methods later).
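
To make the data access pattern concrete, here is the general shape of the loss and its gradient, written in LaTeX as a sketch in my notation: x_i is the feature vector of the i-th of n training examples, y_i its label, f the per-example loss, and g its GLM-specific scalar derivative (the exact forms of f and g for each GLM are in the paper):

    F(w) = \sum_{i=1}^{n} f\big(y_i, w^T x_i\big)
    \nabla F(w) = \sum_{i=1}^{n} g\big(y_i, w^T x_i\big)\, x_i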

Learning a GLM with BGD requires computing the gradient, ∇ F, of the GLM's loss function iteratively. At each iteration, fixing the coefficient vector w, this is simply a vectorial aggregation over the training examples in which each term multiplies the feature vector x with a GLM-specific scalar function (denoted g) of the inner product of w with x: wTx. Now, the feature vector is actually a concatenation of feature sub-vectors from the base tables created by the join. Denoting the base tables as S and R, we can view x as [xS, xR]. The decomposition now becomes clear: wTx = wTSxS + wTRxR. But there is more: the gradient computation requires post-multiplying the entire feature vector with the scalar output of g. This means the push-down is more complicated than for SQL aggregates like SUM or AVG. Given these observations, the factorized learning technique virtually constructs itself, with the following steps forming the relational-style "query" plan/dataflow (a code sketch follows the list):

  1. One sequential pass over R to compute the partial inner products over R's features; store these in an in-memory associative array HS indexed by the ID of the R tuples (the join attribute)
  2. One sequential pass over S to compute the partial inner products over the features in S, while simultaneously looking up into HS to retrieve and add the pre-computed partial inner products, apply g over the full inner product, and update an entry in another in-memory associative array HR indexed by the foreign key (the join attribute) to store the GROUP BY SUM of g; also, multiply g with xS and aggregate these to obtain the portion of ∇ F over S's features.
  3. A second sequential pass over R to multiply the scalars in HR with xR and aggregate these to obtain the portion of ∇ F over R's features.
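
To make the three passes concrete, here is the promised code sketch: a minimal NumPy version of one BGD gradient computation over a two-table PK-FK join. The array layouts, function name, and the specific g (logistic regression) are my own illustrative assumptions, not code from Orion, which implements these steps as UDAs inside the underlying data system.

    import numpy as np

    def factorized_bgd_gradient(w_S, w_R, X_S, X_R, fk, y):
        """One BGD gradient over S joined with R, without materializing the join.

        X_S : (n_S, d_S) features stored in S; X_R : (n_R, d_R) features stored in R;
        fk  : (n_S,) int array giving, for each S tuple, the row of R it joins with;
        y   : (n_S,) labels in {-1, +1}; w_S, w_R : coefficient sub-vectors.
        """
        # Step 1: one pass over R -- partial inner products over R's features,
        # kept in an array keyed by R's primary key (the associative array HS).
        HS = X_R @ w_R                            # shape (n_R,)

        # Step 2: one pass over S -- finish each inner product via a lookup into
        # HS, apply the GLM-specific scalar g (logistic regression here, purely
        # for illustration), accumulate the S-side portion of the gradient, and
        # GROUP BY the foreign key to build HR (the per-R-tuple sums of g).
        full_ip = X_S @ w_S + HS[fk]              # w^T x for every S tuple
        g = -y / (1.0 + np.exp(y * full_ip))      # illustrative g for logistic loss
        grad_S = X_S.T @ g                        # gradient portion over S's features
        HR = np.zeros(X_R.shape[0])
        np.add.at(HR, fk, g)                      # GROUP BY SUM of g per R tuple

        # Step 3: a second pass over R -- scale R's features by the grouped sums.
        grad_R = X_R.T @ HR                       # gradient portion over R's features
        return np.concatenate([grad_S, grad_R])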

The gradient value is technically unaffected, which means ML accuracy is not affected. But equally importantly, the above formulation lets us leverage decades of work on scalable relational dataflows, including RDBMSs, Hive, and Spark. Almost all of these systems let us define a user-defined aggregate (UDA) on top of a table. We thus implement each of the three steps as a UDA to realize factorized learning without affecting the scalability of the ML algorithm. Our prototype, named Orion, was built on top of PostgreSQL and Hive (subsequently extended to Spark and in-memory R). Orion is open-sourced on GitHub. Empirically, we found that factorized learning provides speed-ups over the regular materialized execution that are commensurate with the amount of redundancy introduced by the join.

Of course, several practical questions remain: Why not do a lazy join with an index (short response: it does not avoid computational redundancy but it creates a new performance trade-off)? What if HR/HS do not fit in memory (short response: we propose alternatives to scale with new trade-offs)? What if more than two tables are joined (short response: an optimal extension to star schema joins is NP-Hard and we introduce an effective heuristic)? What if the data is distributed (short response: factorized learning is easy to distribute)? All the technical details are in our SIGMOD'15 paper.


Extensions as a Means to Learn Advising

Apart from machine learning, this research was also a human learning experience for me! :) Orion provided the perfect setup to study extensions and generalizations with junior students both to teach them how to do research and for me to learn the ropes of being an advisor from my advisors. I realize I was fortunate in this regard because not many PhD students get such an opportunity. Anyway, with many brilliant undergrad and Masters students at Wisconsin, we extended factorized learning to a bunch of other classes of ML models:
  • Boqun Yan and Mona Jalal helped build an R-based toolkit named Santoku that extended the idea to probabilistic classifiers like Naive Bayes and decision trees, which ultimately rely on a collection of SQL-style aggregates to estimate probabilities. Mona and I demonstrated Santoku at VLDB'15 (PDF and code on GitHub).
  • Boqun Yan further worked on "factorizing" GLMs when using SGD. It turns out that there is no computational redundancy caused by joins, since SGD updates w after each example! The only savings are in IO/data movement reductions, which at scale could introduce a runtime-accuracy trade-off due to data re-ordering (a form of biased sampling). Mini-batch SGD bridges the gap between SGD and BGD.
  • Zhiwei Fan worked on factorizing GLMs when using stochastic coordinate descent (SCD). Unlike SGD, SCD does have computational redundancy and factorized learning helps avoid it. However, it turns out the overheads caused by individual coordinate updates dominate the savings in many cases. Once again, block CD bridges the gap between SCD and BGD.
  • Fengan Li worked on extending factorized learning to clustering algorithms such as K-Means, DBSCAN, and HAC. Each presented its own new trade-offs due to differences in their data access patterns.
  • Microsoft, which generously supported my dissertation work at the Gray Lab in Madison, helped me explore collaborations with many Microsoft teams that also faced the "learning after joins" problem. In particular, Robert McCann from their Web security team (who later became a co-author) gave me the opportunity to prototype our idea on the Cosmos platform and play with some large real datasets.
  • In the background, I was also mulling over a "generic" way to extend factorized learning to other ML algorithms rather than take a piecemeal approach. My internship at IBM Almaden's SystemML team in 2011 showed that linear algebra provides a common substrate to express many ML algorithms and scaling linear algebra automatically scales all these ML algorithms. But I was not sure if there would be enough interest to invest more time into this idea.


Tales from SIGMOD and VLDB

Our paper had its share of reviewer troubles (don't we all?). It first got rejected at VLDB with the key complaints being that "no RDBMS vendor will implement this" (seriously!) and that it was "too straightforward" (if only I had a nickel ...). However, I trusted my advisors' sage advice that a problem-setting paper like this would face such troubles (apparently, Jim Gray's now-famous data cube paper faced similar troubles!). True to their advice, after a rewrite of just the introduction, our paper got accepted to SIGMOD, with the reviewers now commending the paper as "elegant" and "exactly the kind of paper database folks should be writing about ML"! :) Ironically (or presciently?), our paper was scheduled for the "Advanced Query Optimization" session at SIGMOD. Apparently, ML has finally become a part of the bread-and-butter of our community!

At SIGMOD in Australia, the work was well-received and generated some excitement. In particular, Molham Aref, the head of the innovative database/analytics start-up LogicBlox, really liked our idea and explained how some of his company's retail customers faced this problem often. Molham later put me in touch with the rest of his top-notch team at LogicBlox and was incredibly gracious in advertising my work there. Interacting with them validated the utility of this line of work even more. I connected them to one of my students, Zhiwei, who did an internship at LogicBlox and worked on some nice problems, including applications of factorized learning, with real customer datasets. Another development was that the IBM SystemML folks also seemed interested in this work. Matthias Boehm asked me if we could extend this factorized learning idea to ... wait for it ... linear algebra! :) The soft-spoken but incredibly deep Dan Suciu also asked if we had formalized the scope of the "language" that our factorized ML idea applies to. All of this feedback encouraged me to push forward with the idea of integration with linear algebra (more below).

At VLDB in Hawaii, I got to meet the super-nice Dan Olteanu, the creator of "factorized databases," which is the namesake for our technique. Interestingly, Dan was aware of our work and stayed in touch with me on this topic. Eventually, his group came out with an interesting new algorithm for linear regression over factorized databases, which generalizes the factorized learning idea to a wide class of join queries (their SIGMOD'16 paper). As an interesting coincidence (or was it?), their paper was presented in the same session as my Hamlet paper! Another interesting coincidence: Dan Olteanu had started working part-time for LogicBlox and spread the word about my Hamlet paper! Small world? :) The indefatigable Peter Bailis also attended our demo and suggested that I start blogging about my cool new research ideas. Of course, since I enjoy writing, I did not think twice about starting this blog although I do wish I blogged more often!


Morpheus: Generalizing Factorized Learning using Linear Algebra

Preoccupied with teaching Wisconsin's undergrad database course and preparing for my job search, I was on the lookout for another student to help generalize factorized ML using linear algebra (LA) as a formal representation language for ML. The intuition for this project is simple: many data scientists use LA and LA-based frameworks such as R and Matlab to run ML algorithms and write new ones. The data management community has embraced LA and R as key environments for ML workloads, with tools such as IBM's SystemML, Oracle R Enterprise, and SparkR providing physical data independence for LA scripts: the "data frame" or matrix can be a larger-than-memory HDFS file or an in-RDBMS table. Thus, such tools provide a mechanism to let data scientists focus on their LA scripts rather than worry about scalability or distribution. Alas, when given normalized data, even these LA tools expect a single denormalized "data matrix," leading to the learning after joins problem again.

Lingjiao Chen, whom I met at a guest lecture I gave about Orion in Jeff's graduate database class, seemed the perfect fit for this new ambitious goal of solving the above problem: factorize linear algebra! Our insight is simple: since ML algorithms are just scripts in LA that use individual basic and derived LA operators, if we factorize these LA operators, we can automatically factorize many ML algorithms in one go. This could dramatically reduce the development overhead inherent in manually extending factorized learning to different ML algorithms. We set about creating a new framework and system that factorizes linear algebra; we call our framework Morpheus and we recently released a pre-print of the Morpheus paper on arXiv. The following figure illustrates the relationship between Morpheus and prior work on factorized ML, including my own work on Orion.

Our approach presented three main technical challenges: generality, closure, and efficiency, which I discuss next. For generality: unlike relational algebra, LA has far too many popular operators. We handle this by identifying a subset of LA that is large enough to let us cover a wide variety of ML algorithms, but small enough to be tractable. Also, unlike Orion, we wanted to handle both PK-FK joins and more general "M:N" joins. We do so by introducing a new logical multi-matrix data type to LA: the normalized matrix, which helps represent normalized data and encodes the join relationship using an indicator matrix. Essentially, we bring the classical database notion of logical data independence to LA systems!

Closure means that we want to rewrite an LA script only into a different LA script; this ensures that we are not forced to modify the internals of the LA system used, which could make practical adoption easier. We do so by devising an extensive framework of algebraic rewrite rules to transform LA operators over a given denormalized matrix into a set of LA operators over the normalized matrix. Herein lies the meat of this work: some operators are trivial to rewrite and are reminiscent of pushing relational aggregates through joins but some others present novel rewrite opportunities with no known counterparts in relational optimization. The paper goes into these technical details. Finally, we also identify new efficiency trade-offs in the rewrites and present a heuristic decision rule to predict when the rewrites will not improve runtime performance.
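
As a flavor of the rewrites, here is a minimal NumPy sketch of one of the simplest ones: right multiplication of the normalized matrix by a coefficient vector. The class and method names are my own illustration, not Morpheus's actual API. Logically, the data matrix is T = [XS, K XR], where K is the indicator matrix encoding the PK-FK join; the rewrite pushes the multiplication to the base tables so that T is never formed.

    import numpy as np

    class NormalizedMatrix:
        """Logical stand-in for T = [X_S, K @ X_R] that never materializes T.
        Names and methods here are illustrative, not the Morpheus API."""

        def __init__(self, X_S, X_R, fk):
            # fk plays the role of the indicator matrix K:
            # row i of T uses row fk[i] of X_R.
            self.X_S, self.X_R, self.fk = X_S, X_R, fk

        def rightmatmul(self, w):
            # Rewrite: T @ w  ==  X_S @ w_S + (X_R @ w_R)[fk]
            d_S = self.X_S.shape[1]
            w_S, w_R = w[:d_S], w[d_S:]
            return self.X_S @ w_S + (self.X_R @ w_R)[self.fk]

        def materialize(self):
            # The denormalized matrix, only for checking the rewrite on small data.
            return np.hstack([self.X_S, self.X_R[self.fk]])

    # Tiny sanity check on random data.
    rng = np.random.default_rng(0)
    X_S, X_R = rng.normal(size=(1000, 4)), rng.normal(size=(50, 6))
    fk = rng.integers(0, 50, size=1000)
    w = rng.normal(size=10)
    T = NormalizedMatrix(X_S, X_R, fk)
    assert np.allclose(T.rightmatmul(w), T.materialize() @ w)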

Overall, Morpheus can automatically factorize numerous ML algorithms. We illustrate several popular ones in the paper: logistic regression with gradient descent (as done in our Orion paper), linear regression (three algorithmic variants of this, including Dan Olteanu's SIGMOD'16 algorithm), K-Means clustering, and Gaussian non-negative matrix factorization for feature extraction. On the real-world normalized datasets from the Hamlet paper, we found Morpheus to yield speed-ups of over an order of magnitude. On synthetic data for M:N joins, we see speed-ups of over two orders of magnitude. All of this without forcing data scientists to learn factorized ML!

We have prototyped Morpheus as an easy-to-use library in R as well as on Oracle R Enterprise but the framework itself is abstract and applicable to any LA system. We plan to open-source Morpheus on R soon. Distributed extensions on top of SystemML and TensorFlow are also on their way.


Concluding Remarks

This whole story of the factorized learning idea, its follow-ons and extensions, and ultimately, its generalization is instructive (at least to me) about what is often a common theme in research: ideas rarely form and grow in isolation. Speaking and iterating with practitioners helps obtain new perspectives, evaluate the efficacy of research ideas in practice, gain new insights, and identify new problems to work on for the future. And it is often satisfying to go beyond just a research or vision paper; building real prototype systems, open sourcing them, and demonstrating them is a valuable part of the process. And finally, research, at least in academia, is not just about ideas, it is also about people: the students, the advisors, the student-advisors (I was one!), the collaborators, the reviewers (ugh!), the practitioners you talk to, the people you meet at conferences, and everybody else you brainstorm with or obtain feedback from.

From a technical perspective, this line of work shows the continued importance of data management-inspired ideas in the context of ML no matter where the ML computations are done. Far from being the "plumbers" that just build faster ML tools, I think the data management community could drive an ever-closer union of these two fields to open up new problems that neither community addresses on its own but solving which could substantially benefit numerous data-driven applications and maybe even create new ones!

Bringing this post to a close, I summarize the simple insight that drove this whole line of work on factorized machine learning, or more generally, on learning "over joins" rather than "after joins," in one succinct stanza:

A sum of sums is the same sum,
Factorizing avoids repeating some.
So, learn after joins no more,
Do learn over joins herefore!

I invite you to read our papers on this topic (Orion, Santoku, and Morpheus) and/or check out the code (Orion and Santoku; Morpheus coming soon). Questions, suggestions, and comments -- critical or otherwise -- are welcome!

Saturday, June 25, 2016

Research Paper: To Join or Not to Join?

There was once a paper,
Which got badly rejected at PODS.
Boldly, it went forth to SIGMOD,
Against it were overwhelming odds.
Branded "extremely controversial,"
Yet "profound," it waited a long while.
At last, deliverance was in sight,
Though many a reviewer it did rile!

This is a post about my SIGMOD 2016 paper titled To Join or Not to Join? Thinking Twice about Joins before Feature Selection. It is co-authored with my advisors, Jeff Naughton and Jignesh Patel, and our ML collaborator, Jerry Zhu.

Introduction

Machine learning (ML) is all the rage in CS right now. The "traditional" view of training data in ML is that it is a single table with all the features about the entity to model. However, in most real-world applications, datasets are often multi-table and connected by primary key-foreign key (KFK) relationships. Thus, data scientists often join the base tables to gather as many features as they can for ML. Here is a concrete example (from the paper):

Example: Consider a data scientist at an insurance company modeling their customers using ML to prevent customer churn. They have a table with customer features such as their age, gender, income, employer, etc. One of those features, the employer's ID, is a foreign key (FK) that refers to a separate table with information about companies and other organizations. Given this situation, the data scientist instinctively joins these two tables to create a single table in which the feature vectors from the customer and employer tables are concatenated. They then use it to train their favorite ML model, e.g., Naive Bayes, logistic regression, neural networks, etc.

Joins before ML do not just arise in insurance; they are ubiquitous and arise in practically every data-driven application. Recommendation systems join tables with ratings, users, and movies. Web security systems join tables about login events, user accounts, and webpages. The list goes on and on. In all these cases, feature selection is typically performed implicitly or explicitly along with ML to help improve accuracy. For example, explicit greedy subset search techniques such as backward or forward selection are popular for Naive Bayes, while implicit L1 or L2 regularization is popular for logistic regression and neural networks.

In this paper, we ask a simple but radical question: Do we really "need" these joins before ML? The motivation for this question is a simple information theoretic result we prove in the paper: the features brought in by such KFK joins are redundant. In the ML literature, many feature selection methods explicitly search for redundant features and remove them. This motivates us to short-circuit the feature selection process using database schema information: simply avoid KFK joins before ML! Clearly, this could help improve runtime performance because several features are avoided before feature selection without even looking at the data. Avoiding the join processing cost also saves some time but this component is usually minor. Empirically, we saw speed-ups of up to two orders of magnitude (186x) on real-world datasets.

But the million dollar question for avoiding joins is: Will avoiding a KFK join reduce ML accuracy? This paper is an in-depth exploration of this rather surprising and counter-intuitive question that combines a statistical learning theory-based analysis with a database-style practical approach to avoiding joins in the real world.

Summary

Our first technical results show that information theory may not be perspicacious enough to explain whether avoiding joins would reduce accuracy or not. Thus, we resort to applying statistical learning theory, which explains that the prediction (test) error of an ML classifier can be decomposed into three components: bias, variance, and noise. Informally speaking, the bias quantifies how complex an ML model's hypothesis space is -- more complex models have lower bias. The variance quantifies the instability of the training process -- more complex models have higher variance (conversely, fixing the model, having more training examples lowers the variance). At the heart of ML is a trade-off between bias and variance, which is often illustrated as follows: as the model complexity increases, the bias decreases but the variance increases, which leads to higher test errors than train errors -- a situation known colloquially as "overfitting."
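
For readers who like formulas, the textbook version of this decomposition for squared loss is below, written in LaTeX; treat it as intuition only, since the paper works with a decomposition suited to classifiers rather than this regression form:

    \mathbb{E}\big[(y - \hat{f}(x))^2\big]
      \;=\; \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
      \;+\; \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
      \;+\; \underbrace{\sigma^2}_{\text{noise}}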


Thus, our question now becomes: What is the effect of avoiding a KFK join on bias and variance? The paper goes into the technical details and arrives at the following answer: Avoiding a join will not increase the bias but it might increase the variance. The crux of why this happens is a simple two-fold reason, which I explain briefly next. First, the foreign key (the employer's ID) encodes all information about the employer because all employer features are merely functions of the employer's ID. In database parlance, this is also called a functional dependency. Thus, any ML classifier, which is simply a function of the feature vector, can be recast as another function that only uses the employer's ID and avoids other employer features. Essentially, a function of a function is just another function! Thus, the hypothesis space of the model does not shrink when avoiding the join, which means the bias will not increase. In contrast, it might turn out that some specific employer feature/features is/are sufficient to represent the true concept. But usually, the foreign key has a much larger domain than all the features it refers to (for example, there are only 50 states, but there are millions of employers). Thus, the hypothesis space will be far larger if the employer ID is used as a representative of all the employer features, which get avoided. Since the number of training examples is still the same whether or not we avoid the join, this implies that the variance might shoot up if we avoid the join.

How do we use the above theoretical analysis to help data scientists in practice? One practical way, which we take, is to bound the potential increase in variance and enable data scientists to apply a threshold on it; essentially, this creates a simple decision rule. We call this process avoiding the join "safely." Basically, the data scientist tolerates a certain risk of slightly higher test error in return for potentially much faster runtime performance. The paper goes into the details of how we adapt classical VC dimension-based bounds on variance from the learning theory literature for our purpose.

After a lot of analysis and conservative simplifications, we arrive at a stunningly simple and easy-to-check criterion in the end: the Tuple Ratio (TR) rule. Essentially, we only need to compute the ratio of the number of tuples in the two tables being joined and compare it with a threshold: in our example, this is the ratio of the number of customers to the number of employers. If this ratio is above the threshold, then it is "safe" to avoid the join. A key distinguishing aspect of our approach is that our TR threshold is tied only to the VC dimension of an ML model and is independent of the dataset instance (unlike traditional ML hyper-parameters, which require tuning per instance). We tuned our rule using a simulation study and found that for an error tolerance of 0.001, a reasonable value for the TR threshold is 20.
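
In code, the resulting decision rule is almost trivially cheap to apply. The sketch below uses my own function name and the threshold of 20 mentioned above for an error tolerance of 0.001; it is an illustration, not an interface from the paper:

    def safe_to_avoid_join(n_S, n_R, tr_threshold=20.0):
        """Tuple Ratio (TR) rule: n_S is the number of tuples in the referencing
        table (e.g., customers) and n_R in the referenced table (e.g., employers).
        If the ratio clears the threshold, avoiding the join is deemed "safe"."""
        return (n_S / n_R) >= tr_threshold

    # Example: 2 million customers referencing 40,000 employers gives TR = 50.
    print(safe_to_avoid_join(2_000_000, 40_000))    # True: avoid the join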

An extensive empirical validation using 7 real-world datasets (14 joins between them) shows that the TR rule with a threshold of 20 works out of the box for both Naive Bayes and logistic regression combined with several different feature selection methods! Overall, it correctly avoids 7 out of 11 joins that are safe to avoid and correctly does not avoid 3 out of 3 joins that are not safe to avoid. The remaining 4 cases are missed opportunities to avoid joins, which happens because our decision rule is conservative.

Concluding Remarks

Our paper opens up a new connection between learning theory and joins, which are fundamental relational operations that arise during the process of feature engineering for ML over multi-table datasets. Much work remains to be done, however, and this paper is only the beginning. Our ideas assume that the ML model has a finite VC dimension (specifically, we assume a linear VC dimension). It is not clear what happens if we use models with higher (potentially infinite) VC dimensions, e.g., neural networks, RBF-kernel SVMs, or decision trees. We also assume discrete feature spaces. It is not clear what happens with numeric feature spaces. Several such interesting questions remain unanswered and answering them requires new research that truly lies at the intersection of data management and machine learning, both of which are critical for today's data-driven world.

Interestingly, the SIGMOD reviewers found our work "extremely controversial," as the opening poem alludes to. In fact, we got six reviewers instead of the usual three! In the first round, two gave us "champion" accepts, while the rest, including an external ML expert, gave us a reject. After the revision round, however, all six accepted our paper. The key bone of controversy was whether foreign keys can indeed be used as features for ML. The short answer is: it depends but in most applications, foreign keys are routinely used as features for ML. The paper explains this issue in more detail.

To Join or Not to Join,
That is the question.
Tuple Ratio, we did coin,
Use at your discretion!

I invite you to read our research paper. Questions, suggestions, and comments -- critical or otherwise -- are welcome! Controversy is even more welcome! :)

Links

Project webpage.
Research paper at SIGMOD 2016.
Technical report.
Code on GitHub.
Real datasets.

Sunday, December 6, 2015

Vision Paper: Model Selection Management Systems

This is a post about my vision paper that appears in ACM SIGMOD Record, Dec 2015. It is titled Model Selection Management Systems: The Next Frontier of Advanced Analytics. It is co-authored with my advisors, Jeff Naughton and Jignesh Patel, and our collaborator from Microsoft, Robert McCann, who is an applied researcher and practitioner of ML.

Prelude

The field of advanced analytics -- the intersection of data management and machine learning (ML) -- is undoubtedly booming. It seems that not a month goes by without news of a new startup in this already crowded space: Alation, Dato, Palantir, and Scaled Inference, to name only a few. Established companies such as IBM, Microsoft, Oracle, Pivotal, etc. are also investing heavily in building ML-related infrastructure software to improve support for moneyed enterprise users, while Baidu, Google, Facebook, and other Web companies are also building vast internal ML infrastructure. However, after a spurt of research in the early 2000s on scalable in-RDBMS data mining and ML, academic and research efforts in this space by the data management community seem to have stagnated and fragmented. As data management researchers, do we have anything new and non-trivial to contribute to this booming field other than just to build faster or more scalable toolkits that implement individual ML algorithms? Or is all the interesting work in this space under the sole purview of engineers in industry?

Introduction

In our vision paper, we make the case for a new, cohesive, and potentially impactful direction of research in advanced analytics for the data management community. We start with the observation that building ML models in practice is seldom a one-shot "slam dunk". Based on numerous conversations with analysts at both enterprise and Web companies, we found that using ML is often an iterative exploratory process that involves three key challenging practical tasks -- feature engineering (FE), algorithm selection (AS), and parameter tuning (PT), collectively called model selection. Surprisingly, these three tasks have largely been overlooked by the data management community even though they are often the most time-consuming tasks for ML users. Consequently, there is little end-to-end systems support for this iterative process of model selection, which causes pain to analysts and wastes system resources. In our paper, we envision a unifying abstract framework for building systems that can support the end-to-end process of model selection and identify the research challenges posed by such systems.

Summary

To make the process of model selection easier and faster, we envision a unifying abstract framework based on the idea of a Model Selection Triple (MST) -- a combination of choices for FE, AS, and PT. While a large body of work in ML focuses on various theoretical aspects of model selection, in practice, analysts typically use an iterative exploratory process that combines ML techniques with their domain-specific expertise. Nevertheless, this iterative process has structure. We divide it into three phases -- Steering, Execution, and Consumption. In the Steering phase, analysts specify the precise MST they want to explore. The Execution phase runs the MST and obtains the results. In the Consumption phase, analysts post-process the results to steer the next iteration. They then modify their MST and iterate. Overall, the model selection process is iterative and exploratory because the space of MSTs is usually infinite and analysts have no way of telling a priori which MST will yield "satisfactory" accuracy and/or insights. Alas, most existing ML systems force analysts to explore only one MST per iteration, which overburdens the analysts and wastes system time.

We envision a new class of analytics systems we call Model Selection Management Systems (MSMS) that enables analysts to explore a logically related set of MSTs per iteration. The abstraction of sets of MSTs acts as a "narrow" waist that helps us decouple the higher layers (how to specify them) from the lower layers (how to implement and optimize them) in order to make it easier to build an MSMS. Moreover, we explain how some existing research systems such as Columbus, MLBase, etc., can be subsumed by our unified framework. However, realizing our vision immediately poses the challenge of how to enable analysts to easily specify sets of MSTs and how to handle them efficiently. We discuss how repurposing three key ideas from database research -- declarativity, optimization, and provenance -- could help us improve the respective phase of an iteration. This can help reduce both the number of iterations and the time per iteration of the model selection process, thus making it easier and faster. However, as we identify in the paper, applying these three ideas to the model selection process raises several non-obvious research challenges:

  • What should the declarative interfaces that enable analysts to intuitively specify a set of MSTs look like? We discuss the tradeoffs in the design decisions involved.
  • What are the new system optimization opportunities that can exploit the set-oriented nature of specifying MSTs to reduce the runtime of an iteration? We discuss a slew of new optimization ideas that combine data management and ML techniques.
  • How to perform provenance management for ML to enable analysts to consume results easily and improve the iterations? We explain the need for a notion of "ML provenance" and discuss its applications.
Solving the above challenges requires new research in data processing algorithms, systems, and theory at the intersection of data management, ML, and HCI. In our recent research, we have already started tackling some of the technical questions. I hope the data management community joins us in our quest to make the end-to-end process of using machine learning for data-powered applications easier and faster!

I invite you to read our 6-page vision paper. Questions, suggestions, and comments -- critical or otherwise -- are welcome!

Links

Vision paper.
Project webpage.
Associated survey to explain the gaps in the existing landscape of ML systems that motivated our vision.

Thursday, August 13, 2015

A poem on the Relational Data Model

We start off with a poem on the Relational Data Model, which truly transformed computing and our modern civilization!

The critical core of data applications,
Due to you, pointers we no longer chase.
Letting us get data with just declarations,
Incomparable you are in your grace.

Infusing a really revolutionary rigor,
Into a field that was fraught with pain.
Birthing a community filled with vigor,
And engaging our minds again and again.

Schools and banks, airlines and apps,
Almost all services benefit from you.
Yea, sans you, our economy might collapse,
'Tis indeed true and not just ballyhoo!

Rejecting you, some chose to walk away,
Hoping against hope you are not needed.
Repeating history and suffering dismay,
They crawled back to you and conceded.

Of all models, you, we did select,
To join forces with you was sensible.
For no matter what future we project,
You seem, in aggregate, indispensable.

Hello World!

So, I am finally joining the bandwagon of maintaining a research blog! Since my research interests revolve around data, this blog will carry articles and other posts that have something to do with data - how we store, query, analyze, understand, and more generally, use (and abuse!) data. From a "traditional" perspective, this would span the topics of data management (including databases), statistical machine learning, optimization, systems, and HCI. For those interested in tech blogs about data, there are many other excellent blogs as well (for example, this, this, and this). My posts are probably going to be a bit more eclectic. I expect to have five major kinds of posts (in no particular order):

1. Articles about my research in a language that is simpler/less technical than my papers.

2. Articles about other people's research that I find interesting/important.

3. Articles about general data applications and software that I find interesting/important.

4. Rants and raves about research events or my general experience as a researcher.

5. Poems about research in this field! :)

Better late than never to blog about data in this new golden age of data management and analysis! Feedback and discussions on my posts, online and offline, are always welcome. Thanks!