Update 2016-04-27: Imke Feldmann sent me an M query that accomplishes the same goal as this post in less than half the code! The query is here. Imke’s blog is a great resource and you can also find her on Twitter. Thank you, Imke!
Basic JSON structures can be parsed pretty easily by Power Query, as they often represent tabular structures that are familiar to those who work with data. Power Query simply converts the JSON to a table and you’re good to go. Working with a JSON array in Power Query, however, can be difficult and may result in duplicate rows in your dataset.
JSON is built on two structures (Source):
- A collection of name/value pairs. In various languages, this is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array.
- An ordered list of values. In most languages, this is realized as an array, vector, list, or sequence.
In the example below, properties is a collection of name/value pairs, and coordinates is an array that contains the longitude, latitude, and depth of an earthquake. This is known from the USGS data feed documentation.
```json
{
  "features": [
    {
      "type": "Feature",
      "properties": {
        "mag": 6.4,
        "place": "95km NW of Port-Olry, Vanuatu",
        "time": 1460670627630,
        "type": "earthquake",
        "title": "M 6.4 – 95km NW of Port-Olry, Vanuatu"
      },
      "geometry": {
        "type": "Point",
        "coordinates": [
          166.3518,
          -14.5225,
          16
        ]
      },
      "id": "us20005i6p"
    }
  ]
}
```
When the array is imported, Power Query doesn’t know what those values represent, so it duplicates rows for each array value, as shown in the image below.
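The duplication comes from expanding a list column. A minimal sketch of what that step looks like in M, assuming the flattened JSON produced a list column named geometry.coordinates and the previous step is named #"Converted to Table" (both names are hypothetical):

```
// Expanding a list column repeats each feature row once per list element,
// so one earthquake becomes three rows: longitude, latitude, and depth.
// Step and column names here are assumptions, not the exact query.
#"Expanded Coordinates" = Table.ExpandListColumn(#"Converted to Table", "geometry.coordinates")
```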

The data needs to be pivoted so that there is a single earthquake per row, with the array values of longitude, latitude, and depth as column headings. But we have a problem: these names don’t exist in the data, so there are no values to pivot into column headings. We need to establish a new column containing values we can pivot, and this can be accomplished by performing the Power Query equivalent of SQL’s ROW_NUMBER() OVER(PARTITION BY xxxx ORDER BY xxxx) pattern. If you’re not familiar with that pattern, it allows you to number rows based on a specified group and order. In Power Query, this requires some tricky M code that I borrowed from Curt Hagenlocher.
First, we’ll create an index on the entire table. This allows for sorting and preserving the correct order of the array values.
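In M, this is a single Table.AddIndexColumn call. A minimal sketch, with step names assumed:

```
// Add a 0-based index across the whole table so the original
// array order can be recovered after grouping
#"Added Index" = Table.AddIndexColumn(#"Expanded Coordinates", "Index", 0, 1)
```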

Next we’ll group the table on the Earthquake ID. Power Query allows for grouping without performing an aggregation, which means you can group all rows for each ID. Power Query displays this as a Table of values on each row.
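In M, grouping with the “All Rows” aggregation looks something like the sketch below; the id column name and step names are assumptions:

```
// Group on the earthquake ID without aggregating; each group keeps
// its full table of rows in the "Values" column
#"Grouped Rows" = Table.Group(#"Added Index", {"id"}, {{"Values", each _, type table}})
```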

The next step is where the magic happens, and it is where the hand-edited M code comes into play. The expression below first sorts the grouped data on the index added previously, and then adds another index. This second index is added to each table within the grouped data, which results in values of 0, 1, 2 for each earthquake ID (Power Query starts counting at zero by default). You will not see the results of this step in the preview window until the grouped tables are expanded.
```
= Table.TransformColumns(#"Grouped Rows", {{"Values", each Table.AddIndexColumn(Table.Sort(_, "Index"), "GroupIndex")}})
```
After the table is expanded, the original index and grouped index are visible.
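The expansion itself is a Table.ExpandTableColumn call. A sketch, assuming the coordinate values sit in a column named Value and the step above is named #"Added Group Index" (both assumptions):

```
// Expand the grouped tables back to the top level;
// both the original Index and the new GroupIndex come along
#"Expanded Values" = Table.ExpandTableColumn(#"Added Group Index", "Values", {"Value", "Index", "GroupIndex"})
```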

Now that there is a number for each array value, the number can be converted to longitude, latitude, and depth with an expression.
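One way to write that expression is a conditional column that maps 0, 1, and 2 to the coordinate names; column and step names below are assumptions:

```
// Translate the per-group index into a coordinate name for pivoting
#"Added Coordinate" = Table.AddColumn(#"Expanded Values", "Coordinate",
    each if [GroupIndex] = 0 then "Longitude"
    else if [GroupIndex] = 1 then "Latitude"
    else "Depth")
```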

The data can finally be pivoted, so that there is one row per earthquake, with the coordinates stored in separate columns.
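In M, the pivot is a Table.Pivot call. One detail worth noting: the helper index columns need to be removed first, otherwise every row stays unique and nothing collapses. A sketch, using the assumed names from above:

```
// Drop the helper indexes so each earthquake collapses to a single row,
// then pivot the coordinate names into columns
#"Removed Indexes" = Table.RemoveColumns(#"Added Coordinate", {"Index", "GroupIndex"}),
#"Pivoted Coordinates" = Table.Pivot(#"Removed Indexes", {"Longitude", "Latitude", "Depth"}, "Coordinate", "Value")
```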

The video below shows an example of how the earthquake data can be visualized in Power Map.
The index per group, or ROW_NUMBER() OVER(PARTITION BY xxxx ORDER BY xxxx) as it’s called in SQL, is a very useful pattern that can be used for much more than pivoting. I use it in SQL fairly often.
Transform JSON Array in Power Query Source Code
This text file contains the complete M code used in the example above. You can paste it into the Power Query advanced editor to work with live data and try everything yourself.