Many applications require data analysis of one form or another. MongoDB provides powerful support for running analytics natively using the aggregation framework. In this chapter, we introduce the aggregation framework and some of the fundamental tools this framework provides:

The aggregation framework
Aggregation stages
Aggregation expressions
Aggregation accumulators

In the next chapter, we’ll dive deeper, looking at more advanced aggregation features, including the ability to perform joins across collections.
Pipelines, Stages, and Tunables
The aggregation framework is a set of analytics tools within MongoDB that allow you to do analytics on documents in one or more collections.
The aggregation framework is based on the concept of a pipeline. The idea with an aggregation pipeline is that we take input from a MongoDB collection and pass the documents from that collection through one or more stages, each of which performs a different operation on its inputs. Each stage takes as input whatever the stage before it produced as output. The inputs and outputs for all stages are documents: a stream of documents, if you will.
If you’re familiar with pipelines in a Linux shell, such as bash, this is a very similar idea. Each stage has a specific job that it does. It’s expecting a specific form of document and produces a specific output, which is itself a stream of documents. At the end of the pipeline, we get access to the output, much in the same way that we would by executing a find query. By that I simply mean that we get a stream of documents back that we can then do additional work with, whether it’s creating a report of some kind, generating a website, or some other type of task.
Now, let’s dive in a little deeper and consider individual stages. An individual stage of an aggregation pipeline is a data processing unit. It takes a stream of input documents one at a time, processes each document one at a time, and produces an output stream of documents, again one at a time.
Each stage provides a set of knobs, or tunables, that we can control to parameterize the stage to perform whatever task we’re interested in doing. A stage performs a generic, general-purpose task of some kind, and we parameterize the stage for the particular collection that we’re working with and exactly what we would like that stage to do with those documents.
These tunables typically take the form of operators that we can supply that will modify fields, perform arithmetic operations, reshape documents, or do some sort of accumulation task, as well as a variety of other things.
Before we dive in to look at some concrete examples, I want to introduce one more aspect of pipelines that is especially important to keep in mind as you begin to work with them.
Frequently, we want to include the same type of stage multiple times within a single pipeline.
For example, we may want to perform an initial filter so that we don’t have to pass the entire collection into our pipeline. But then later on, following some additional processing, we might want to filter further, applying a different set of criteria, as sketched below.
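As a rough illustration of that shape, here is a minimal sketch of a pipeline that filters twice. The collection and field names (year, name, score) are hypothetical and purely for illustration; they are not part of the examples in this chapter.

db.collection.aggregate([
  // initial filter: only pass documents from a particular year into the pipeline
  { $match: { year: 2004 } },
  // some intermediate processing: reshape each document
  { $project: { _id: 0, name: 1, score: 1 } },
  // filter again, this time applying a different set of criteria
  { $match: { score: { $gte: 50 } } }
])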
To recap, pipelines work with a MongoDB collection. They’re composed of stages, each of which does a different data processing task on its input and produces documents as output to be passed to the next stage. Finally, at the end, a pipeline produces output that we can then do something with in our application. In many cases, in order to perform the analysis we need to do, we will include the same type of stage multiple times within an individual pipeline.
Getting Started with Stages: Familiar Operations
As our first steps in developing aggregation pipelines, we will look at building some pipelines that involve operations that are already familiar to you. For this we will look at the match, project, sort, skip, and limit stages.
To work through these aggregation examples, we will use a collection of company data. As an overview, let’s look at an example based on a document containing data on Facebook, Inc. In this collection, we have a number of fields that specify details on companies, such as the number of employees and when the company was founded.
There are also fields for the rounds of funding a company has gone through, important milestones for the company, whether or not a company has been through an initial public offering (IPO), and if so, the details of the IPO.
{
  "_id" : "52cdef7c4bab8bd675297d8e",
  "name" : "Facebook",
  "category_code" : "social",
  "founded_year" : 2004,
  "description" : "Social network",
  "funding_rounds" : [{
      "id" : 4,
      "round_code" : "b",
      "raised_amount" : 27500000,
      "raised_currency_code" : "USD",
      "funded_year" : 2006,
      "investments" : [
        {
          "company" : null,
          "financial_org" : {
            "name" : "Greylock Partners",
            "permalink" : "greylock"
          },
          "person" : null
        },
        {
          "company" : null,
          "financial_org" : {
            "name" : "Meritech Capital Partners",
            "permalink" : "meritechcapitalpartners"
          },
          "person" : null
        },
        {
          "company" : null,
          "financial_org" : {
            "name" : "Founders Fund",
            "permalink" : "foundersfund"
          },
          "person" : null
        },
        {
          "company" : null,
          "financial_org" : {
            "name" : "SV Angel",
            "permalink" : "svangel"
          },
          "person" : null
        }
      ]
    },
    {
      "id" : 2197,
      "round_code" : "c",
      "raised_amount" : 15000000,
      "raised_currency_code" : "USD",
      "funded_year" : 2008,
      "investments" : [
        {
          "company" : null,
          "financial_org" : {
            "name" : "European Founders Fund",
            "permalink" : "europeanfoundersfund"
          },
          "person" : null
        }
      ]
    }],
  "ipo" : {
    "valuation_amount" : NumberLong("104000000000"),
    "valuation_currency_code" : "USD",
    "pub_year" : 2012,
    "pub_month" : 5,
    "pub_day" : 18,
    "stock_symbol" : "NASDAQ:FB"
  }
}
As our first aggregation example, let’s do a simple filter looking for all companies that were founded in 2004.
db.companies.aggregate([
  {$match: {founded_year: 2004}},
])
This is equivalent to the following operation using the find query.
db.companies.find({founded_year: 2004})
Now, let’s add a project stage to our pipeline to reduce the output to just a few fields per document. Let’s exclude the _id, but include the name and founded_year. Our pipeline will be as follows.
db.companies.aggregate([
  {$match: {founded_year: 2004}},
  {$project: {
    _id: 0,
    name: 1,
    founded_year: 1
  }}
])
If we run this, we get output that looks like the following.
{"name": "Digg", "founded_year": 2004 } {"name": "Facebook", "founded_year": 2004 } {"name": "AddThis", "founded_year": 2004 } {"name": "Veoh", "founded_year": 2004 }
{"name": "Pando Networks", "founded_year": 2004 } {"name": "Jobster", "founded_year": 2004 }
{"name": "AllPeers", "founded_year": 2004 } {"name": "blinkx", "founded_year": 2004 } {"name": "Yelp", "founded_year": 2004 } {"name": "KickApps", "founded_year": 2004 } {"name": "Flickr", "founded_year": 2004 } {"name": "FeedBurner", "founded_year": 2004 } {"name": "Dogster", "founded_year": 2004 } {"name": "Sway", "founded_year": 2004 } {"name": "Loomia", "founded_year": 2004 } {"name": "Redfin", "founded_year": 2004 } {"name": "Wink", "founded_year": 2004 } {"name": "Techmeme", "founded_year": 2004 } {"name": "Eventful", "founded_year": 2004 } {"name": "Oodle", "founded_year": 2004 } ...
Let’s unpack this aggregation pipeline in a little more detail. The first thing you will notice is that we’re using the aggregate method. This is the method we call when we want to run an aggregation query. To aggregate, we pass in an aggregation pipeline. A pipeline is an array with documents as elements. Each of the documents must stipulate a particular stage operator. In the aggregation above, we have a pipeline that has two stages: a match stage for filtering and a project stage with which we’re limiting the output to just two fields per document.
The match stage filters against the collection and passes the resulting documents to the project stage one at a time. The project stage performs its operation, reshaping the documents, and passes the output out of the pipeline and back to us.
So now let’s extend our pipeline a bit further to include limit. In this case, we are still going to match using the same query, but limit our result set to five, and then project out the fields we want. For simplicity, let’s limit our output to just the name of each company.
db.companies.aggregate([
  {$match: {founded_year: 2004}},
  {$limit: 5},
  {$project: {
    _id: 0,
    name: 1
  }}
])
The result is as follows.
{"name": "Digg"}
{"name": "Facebook"}
{"name": "AddThis"}
{"name": "Veoh"}
{"name": "Pando Networks"}
Note that I have constructed this pipeline so that we limit before the project stage. If we ran the project stage first and then the limit as in the following query, we would get exactly the same results. However, the difference is that we would pass hundreds of documents through the project stage before finally limiting to five.
db.companies.aggregate([
  {$match: {founded_year: 2004}},
  {$project: {
    _id: 0,
    name: 1
  }},
  {$limit: 5}
])
Regardless of what type of optimizations the MongoDB query planner might be capable of in a given release, I pause here to encourage you to always consider the efficiency of your aggregation pipeline. Ensure that you are limiting the number of documents that need to be passed on from one stage to another as you build your pipeline.
This requires careful consideration of the entire flow of documents through a pipeline. In the case of the query above, we are really only interested in the first five documents that match our query, regardless of how they are sorted, so it’s perfectly fine to limit as our second stage.
However, if the order matters, then we’ll need to sort before the limit stage. Sort works in a manner similar to what we have seen already, except that in the aggregation framework, we specify sort as a stage within a pipeline as follows. In this case, we will sort by name in ascending order.
db.companies.aggregate([
  { $match: { founded_year: 2004 } },
  { $sort: { name: 1 } },
  { $limit: 5 },
  { $project: { _id: 0, name: 1 } }
])
This produces the following result from our companies collection.
{"name": "1915 Studios"}
{"name": "1Scan"}
{"name": "2GeeksinaLab"}
{"name": "2GeeksinaLab"}
{"name": "2threads"}
Note that we’re looking at a different set of five companies now, getting instead the first five documents in alphanumeric order by name.
Finally, to close this subsection, let’s take a look at including a skip stage. Here we will sort first. We will then skip the first 10 documents and, again, limit our result set to five documents.
db.companies.aggregate([
  {$match: {founded_year: 2004}},
  {$sort: {name: 1}},
  {$skip: 10},
  {$limit: 5},
  {$project: {
    _id: 0,
    name: 1
  }}
])
Let’s review our pipeline one more time. We have five stages. First, we’re filtering the companies collection, looking only for documents where founded_year is 2004. Then we’re sorting based on the name in ascending order, skipping the first 10, and limiting our results to five. Finally, we pass those five documents on to the project stage, where we reshape them such that our output documents contain just the company name.
Here, we’ve looked at constructing pipelines using stages that perform operations that should already be familiar to you. These operations are provided in the aggregation framework because they are necessary for the types of analytics we want to accomplish using the stages discussed in later sections. As we move through the rest of this chapter, we will take a deep dive into the other operations that the aggregation framework provides.
Expressions
As we move deeper into our discussion of the aggregation framework, it is important to have a sense for the different types of expressions available for use as you construct aggregation pipelines. The aggregation framework supports many different classes of expressions:
Boolean expressions allow us to AND, OR, and NOT other expressions.
Set expressions allow us to work with arrays as sets. In particular, we can get the intersection or union of two or more sets. We can also take the difference of two sets and perform a number of other set operations.
Comparison expressions enable us to express many different types of range filters.
Arithmetic expressions enable us to calculate the ceiling, the floor, the natural log, and the log, as well as perform simple arithmetic operations like multiplication, division, addition, and subtraction. We can even perform more complex operations, such as calculating the square root of a value.
String expressions allow us to concatenate strings, find substrings, perform operations having to do with case, and perform text search operations.
Array expressions provide a lot of power for manipulating arrays, including the ability to filter array elements, slice an array, or just take a range of values from a specific array.
Variable expressions, which we won’t dive into too deeply, allow us to work with literals, expressions for parsing date values, and conditional expressions.
Accumulators provide the ability to calculate sums, descriptive statistics, and many other types of values.
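To make a few of these classes more concrete, here is a minimal sketch of a project stage that uses several of them together. The collection name (employees) and field names (first_name, last_name, monthly_salary, hire_year, projects) are hypothetical and not part of the companies examples in this chapter.

db.employees.aggregate([
  {$project: {
    _id: 0,
    // string expression: concatenate two fields into one value
    full_name: {$concat: ["$first_name", " ", "$last_name"]},
    // arithmetic expression: compute a yearly figure from a monthly one
    yearly_salary: {$multiply: ["$monthly_salary", 12]},
    // comparison expression: true if the employee was hired in 2004 or later
    recent_hire: {$gte: ["$hire_year", 2004]},
    // array expression: just the first three elements of the projects array
    first_projects: {$slice: ["$projects", 3]}
  }}
])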
Project Stage and Reshaping Documents
Now I’d like to take a deeper dive into the project stage and reshaping documents. Here we will explore the types of reshaping operations that should be most common in the applications that you develop. We have seen some simple projections in aggregation pipelines. Now let’s look at some that are a little more complex.
First, let’s look at promoting nested fields. In the following pipeline, we are doing a match.
db.companies.aggregate([
  {$match: {"funding_rounds.investments.financial_org.permalink": "greylock"}},
  {$project: {
    _id: 0,
    name: 1,
    ipo: "$ipo.pub_year",
    valuation: "$ipo.valuation_amount",
    funders: "$funding_rounds.investments.financial_org.permalink"
  }}
]).pretty()
As an example of the relevant fields for documents in our companies collection, we can look at a portion of the Facebook document again.
{
  "_id" : "52cdef7c4bab8bd675297d8e",
  "name" : "Facebook",
  "category_code" : "social",
  "founded_year" : 2004,
  "description" : "Social network",
  "funding_rounds" : [{
      "id" : 4,
      "round_code" : "b",
      "raised_amount" : 27500000,
      "raised_currency_code" : "USD",
      "funded_year" : 2006,
      "investments" : [
        {
          "company" : null,
          "financial_org" : {
            "name" : "Greylock Partners",
            "permalink" : "greylock"
          },
          "person" : null
        },
        {
          "company" : null,
          "financial_org" : {
            "name" : "Meritech Capital Partners",
            "permalink" : "meritechcapitalpartners"
          },
          "person" : null
        },
        {
          "company" : null,
          "financial_org" : {
            "name" : "Founders Fund",
            "permalink" : "foundersfund"
          },
          "person" : null
        },
        {
          "company" : null,
          "financial_org" : {
            "name" : "SV Angel",
            "permalink" : "svangel"
          },
          "person" : null
        }
      ]
    },
    {
      "id" : 2197,
      "round_code" : "c",
      "raised_amount" : 15000000,
      "raised_currency_code" : "USD",
      "funded_year" : 2008,
      "investments" : [
        {
          "company" : null,
          "financial_org" : {
            "name" : "European Founders Fund",
            "permalink" : "europeanfoundersfund"
          },
          "person" : null
        }
      ]
    }],
  "ipo" : {
    "valuation_amount" : NumberLong("104000000000"),
    "valuation_currency_code" : "USD",
    "pub_year" : 2012,
    "pub_month" : 5,
    "pub_day" : 18,
    "stock_symbol" : "NASDAQ:FB"
  }
}
Going back to our match:
db.companies.aggregate([
  {$match: {"funding_rounds.investments.financial_org.permalink": "greylock"}},
  {$project: {
    _id: 0,
    name: 1,
    ipo: "$ipo.pub_year",
    valuation: "$ipo.valuation_amount",
    funders: "$funding_rounds.investments.financial_org.permalink"
  }}
]).pretty()
we are filtering for all companies that had a funding round in which Greylock Partners participated. The permalink value, “greylock”, is the unique identifier for such documents. Here is another view of the Facebook document with just the relevant fields displayed.
{
  ...
  "name" : "Facebook",
  ...
  "funding_rounds" : [{
      ...
      "investments" : [{
          ...
          "financial_org" : {
            "name" : "Greylock Partners",
            "permalink" : "greylock"
          },
          ...
        },
        {
          ...
          "financial_org" : {
            "name" : "Meritech Capital Partners",
            "permalink" : "meritechcapitalpartners"
          },
          ...
        },
        {
          ...
          "financial_org" : {
            "name" : "Founders Fund",
            "permalink" : "foundersfund"
          },
          ...
        },
        {
          ...
          "financial_org" : {
            "name" : "SV Angel",
            "permalink" : "svangel"
          },
          ...
        }]
      ...
    },
    {
      ...
      "investments" : [{
          ...
          "financial_org" : {
            "name" : "European Founders Fund",
            "permalink" : "europeanfoundersfund"
          },
          ...
        }]
    }],
  "ipo" : {
    "valuation_amount" : NumberLong("104000000000"),
    "valuation_currency_code" : "USD",
    "pub_year" : 2012,
    "pub_month" : 5,
    "pub_day" : 18,
    "stock_symbol" : "NASDAQ:FB"
  }
}
The project stage we have defined in this aggregation pipeline will suppress the _id and include the name. It will also promote some nested fields. This projection uses dot notation to express field paths that reach into the ipo field and the funding_rounds field to select values from those nested documents and arrays. The project stage will make those values top-level fields in the documents it produces as output, which look like the following.
{
  "name" : "Digg",
  "funders" : [
    [
      "greylock",
      "omidyarnetwork"
    ],
    [
      "greylock",
      "omidyarnetwork",
      "floodgate",
      "svangel"
    ],
    [
      "highlandcapitalpartners",
      "greylock",
      "omidyarnetwork",
      "svbfinancialgroup"
    ]
  ]
}
{
  "name" : "Facebook",
  "ipo" : 2012,
  "valuation" : NumberLong("104000000000"),
  "funders" : [
    [
      "accelpartners"
    ],
    [
      "greylock",
      "meritechcapitalpartners",
      "foundersfund",
      "svangel"
    ],
    ...
    [
      "goldmansachs",
      "digitalskytechnologiesfo"
    ]
  ]
}
{
  "name" : "Revision3",
  "funders" : [
    [
      "greylock",
      "svangel"
    ],
    [
      "greylock"
    ]
  ]
}
...
In the output, each document has a field for name and a field for funders. For those companies that have gone through an IPO, the ipo field contains the year the company went public and the valuation field contains the value of the company at IPO. Note that in all of these documents, these are top-level fields and that the values for those fields were promoted from nested documents and arrays.
The $ character used to specify the values for ipo, valuation, and funders in our project stage indicates that each value should be interpreted as a field path and used to select the value that should be projected for that field.
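As a small illustration of the difference between a field path and a literal value, consider the following sketch. It is not part of the original example; the $literal usage and the "example data" string are assumptions added purely to show the contrast.

db.companies.aggregate([
  {$match: {founded_year: 2004}},
  {$limit: 1},
  {$project: {
    _id: 0,
    // "$name" is a field path: project the value stored in the name field
    company: "$name",
    // "$ipo.pub_year" is a field path that reaches into an embedded document
    ipo_year: "$ipo.pub_year",
    // $literal produces the string itself rather than treating it as a field path
    source: {$literal: "example data"}
  }}
])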
One thing you might have noticed is that we’re seeing multiple values printed out for funders. In fact, we’re seeing an array of arrays. The reason is that funding rounds are represented by an array, and for every funding round we potentially get many investors.
Based on our review of the Facebook example document, we know that all of the funders are listed within an array called investments. What’s happening here is that our project stage specifies that we want to project the financial_org.permalink value for each entry in the investments array, for every funding round. So, an array of arrays of funders’ names is built up.
In later sections, we will look at how to perform arithmetic operations and other operations on strings, dates, and a number of other value types to project documents of all shapes and sizes. Just about the only thing we can’t do in a project stage is change the data type of a value. We have lots of power in what type of documents we construct using a project stage, and here we begin to see some hints of that.
$unwind
When working with array fields in an aggregation pipeline, it is often necessary to include one or more unwind stages. Unwind allows us to produce output such that there is one output document for each element in a specified array field.
Consider an example input document that has three keys and values, where the third key, key3, has an array as its value with three elements. Unwind, if run on this type of input document and configured to unwind the key3 field, will produce documents like those in the sketch below. The thing that might not be intuitive to you is that each of these output documents will still have a key3 field, but that field will hold a single value rather than an array value, with one output document for each of the elements that were in the array.
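The following rough sketch illustrates this behavior; the key names and values are hypothetical, chosen purely for illustration.

// A hypothetical input document:
{"key1" : "value1", "key2" : "value2", "key3" : ["elem1", "elem2", "elem3"]}

// After passing through {$unwind: "$key3"}, three output documents are produced,
// each with a single value in key3 rather than the array:
{"key1" : "value1", "key2" : "value2", "key3" : "elem1"}
{"key1" : "value1", "key2" : "value2", "key3" : "elem2"}
{"key1" : "value1", "key2" : "value2", "key3" : "elem3"}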
If there were 10 elements in this array, unwind would produce 10 output documents. Let’s go
back to our companies examples, and take a look at the use of unwind stages. We’ll start with the following aggregation pipeline. Note that in this pipeline, as in an earlier section, we are simply matching on a specific funder and promoting values from embedded funding_rounds documents using a project stage.
db.companies.aggregate([
  {$match: {"funding_rounds.investments.financial_org.permalink": "greylock"}},
  {$project: {
    _id: 0,
    name: 1,
    amount: "$funding_rounds.raised_amount",
    year: "$funding_rounds.funded_year"
  }}
])
Once again, here is our example of the data model for documents in this collection.
{
  "_id" : "52cdef7c4bab8bd675297d8e",
  "name" : "Facebook",
  "category_code" : "social",
  "founded_year" : 2004,
  "description" : "Social network",
  "funding_rounds" : [{
      "id" : 4,
      "round_code" : "b",
      "raised_amount" : 27500000,
      "raised_currency_code" : "USD",
      "funded_year" : 2006,
      "investments" : [
        {
          "company" : null,
          "financial_org" : {
            "name" : "Greylock Partners",
            "permalink" : "greylock"
          },
          "person" : null
        },
        {
          "company" : null,
          "financial_org" : {
            "name" : "Meritech Capital Partners",
            "permalink" : "meritechcapitalpartners"
          },
          "person" : null
        },
        {
          "company" : null,
          "financial_org" : {