Building The Data Warehouse-less Data Warehouse (Part 1 of 2)

*Update – part 2 of this series is now live, and can be found here*

In times of yore, those who maketh the houses of data would bring forth vast swathes of tables and hurl them down in the forts of staging. Here the ancient priests of Ee, Tee and El would perform arcane magicks and transform these rows of chaos into purest order. This order would be rebuilt into the fabled Data Warehouses, and all who looked upon them would gasp in awe and wonder.

But that was then. Today’s data world is different, isn’t it? Data is varied. It’s big and small, fast and slow. It’s tougher to wrangle and make into shapes fit for use. But through all of this, we still need the hallowed, fabled, data warehouse.

Or do we?

This post is basically me exploring a different approach to building a data warehouse, one that's based on the data lake paradigm and a form of ELT (Extract, Load and Transform) but leaves out the actual data warehouse part. I'm still going to build a star schema, but it's going to be file based, using modern data engineering tools to do the munging and "schema-tizing" before pulling the results into a semantic layer for reporting. It's also me exploring the capabilities of my current favourite tool – Azure Databricks.

What are the potential benefits of this approach? A few spring to mind, such as cost, flexibility and simplicity. Keeping all the processing within the data lake makes it easier to control and govern, and the reduced data movement (you're not copying into a separate data warehouse) makes for an altogether simpler architecture.

Conversely, I'm well aware that this approach brings its own challenges. Traditional data warehouse constructs like Slowly Changing Dimensions, Surrogate Keys and other elements of the Kimball checklist will be harder, or even impossible, with this approach, so it won't suit every scenario.

My aim here though was simple – can we build a BI solution without the data warehouse element, and is it a viable approach for certain scenarios?

In short, the solution looks like this:

File-based Data Warehouse Conceptual Model

The above construct largely follows the traditional ETL model with data flowing in from source systems, and comprises the following:

  1. Data ingested as raw files into a staging zone in the data lake
  2. Files processed using data engineering platform into cleansed outputs
  3. Scrubbed files then shaped and moved into relevant serving zones. In this example I’ve kept it simple, with one zone for the star schema files and one for a data assets folder that provides cleansed, curated data sets to be consumed by analysts and data scientists.
  4. The star schema files are subsequently loaded into a semantic layer for enterprise reporting and ad hoc slice n’ dice functionality, whilst the asset files are consumed using the variety of tools preferred by today’s analysts.

To bring this to life, I’ll be using a handful of Azure data services for the relevant solution components –

Solution Components

In this example, I'll be using an Azure SQL Database loaded with the AdventureWorksLT sample database as my data source.

Prerequisites

When provisioning the above services, make sure everything is deployed in the same Azure region.

Extract

There are plenty of options in this space that could move data from source to my lake, including ETL tools such as SSIS, Talend and Azure Data Factory. For this example I’m using Azure Data Factory (version 2), with copy activities moving data from my source SQL database and dropping as *.csv files. I’m also taking advantage of the new Databricks functionality built into Azure Data Factory that allows me to call a Databricks Notebook as part of the data pipeline.

Data Factory Copy Activity

The above data factory copies the source tables into my “raw” directory, and from there I process the files, with the end result being to create my dimension and fact files ready to be loaded into Azure Analysis Services.

Load Files Into Spark DataFrames

With the files safely copied into our raw landing zone, I can now read the source tables directly into Spark DataFrames. An example is shown below for the product table; we create a DataFrame for each source file.

Example DataFrame Creation
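As a rough sketch of what that DataFrame creation looks like in a Databricks (PySpark) notebook – the mount point and file names here are illustrative assumptions, not the exact paths from my environment:

```python
# Assumed lake layout: the store is mounted at /mnt/datalake and Data Factory
# has landed the AdventureWorksLT tables as CSVs under a "raw" folder.
raw_path = "/mnt/datalake/raw"

# Read the product table into a DataFrame. Header + schema inference keeps the
# example short; an explicit schema would be safer for a production load.
product_df = (spark.read
              .option("header", "true")
              .option("inferSchema", "true")
              .csv(raw_path + "/SalesLT.Product.csv"))

# Repeat for each source file, e.g. the product category table.
product_category_df = (spark.read
                       .option("header", "true")
                       .option("inferSchema", "true")
                       .csv(raw_path + "/SalesLT.ProductCategory.csv"))
```

The `spark` session is already available in a Databricks notebook, so no explicit SparkSession setup is needed.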

Transform

With the DataFrames created, I then create temporary views over them so that I can use SQL to define my dimensions and facts. You can of course manipulate them natively as DataFrames using Python/Scala/R if you're more familiar with those languages. I'm a SQL dude, and I'm used to building data warehouse routines in SQL, so I've opted for that method here.

Create Temporary Views From DataFrames
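Registering the views is a one-liner per DataFrame – something along these lines (view names are illustrative):

```python
# Expose each DataFrame to Spark SQL as a temporary view.
product_df.createOrReplaceTempView("product")
product_category_df.createOrReplaceTempView("product_category")
```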

With these views created, we can use good ol' SQL to create DataFrames that reflect the dimensions and facts we want to load into Azure Analysis Services.

Dimension Example
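A hedged sketch of what a product dimension query might look like, using the views registered above – the column list is illustrative rather than the exact one from my notebook:

```python
# Build the product dimension as a DataFrame from the temporary views.
dim_product_df = spark.sql("""
    SELECT p.ProductID   AS ProductKey,
           p.Name        AS ProductName,
           p.Color,
           p.ListPrice,
           pc.Name       AS ProductCategory
    FROM product p
    LEFT JOIN product_category pc
           ON p.ProductCategoryID = pc.ProductCategoryID
""")
```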

and fact tables:

Fact Example
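And a similar sketch for the sales fact, assuming sales_order_header and sales_order_detail views have been registered in the same way as the product views:

```python
# Build the sales fact, joining order detail lines to their header for the date.
fact_sales_df = spark.sql("""
    SELECT d.SalesOrderID,
           h.OrderDate,
           h.CustomerID,
           d.ProductID   AS ProductKey,
           d.OrderQty,
           d.UnitPrice,
           d.LineTotal
    FROM sales_order_detail d
    INNER JOIN sales_order_header h
            ON d.SalesOrderID = h.SalesOrderID
""")
```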

When I run my notebook now, I have my dimension and fact tables created as Spark DataFrames. In order for them to be consumed by Azure Analysis Services, I need to write the DataFrames out as CSV files that can then be imported into my tabular model.

Output Dimension and Fact Files

Writing files back from Spark to CSV wasn't as seamless as I thought it would be. Whilst the commands are fairly straightforward, because Spark is a distributed system it writes multiple files as output, including .crc and _SUCCESS metadata files. It also doesn't use the filename you specify, but instead names each output file after its partition.

We don't need the metadata files, and we do need a proper, consistent filename, so I wrote some extra code to rename and move the files back under our Data Warehouse directory.

Writing Out To CSV
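Here's a minimal sketch of that write-and-rename step, assuming Databricks dbutils and an illustrative /mnt/datalake folder layout (the real notebook's paths and file names will differ):

```python
def write_single_csv(df, temp_dir, final_path):
    """Write a DataFrame as one CSV file with a proper, consistent name."""
    # coalesce(1) forces a single part file; Spark still writes a directory
    # containing part-*.csv plus _SUCCESS/.crc metadata files.
    df.coalesce(1).write.mode("overwrite").option("header", "true").csv(temp_dir)

    # Find the single part file Spark produced...
    part_file = [f.path for f in dbutils.fs.ls(temp_dir)
                 if f.name.startswith("part-") and f.name.endswith(".csv")][0]

    # ...move it to its final name under the Data Warehouse folder and tidy up.
    dbutils.fs.mv(part_file, final_path)
    dbutils.fs.rm(temp_dir, True)  # recursive delete of the temp output

write_single_csv(dim_product_df, "/mnt/datalake/tmp/DimProduct", "/mnt/datalake/dw/DimProduct.csv")
write_single_csv(fact_sales_df,  "/mnt/datalake/tmp/FactSales",  "/mnt/datalake/dw/FactSales.csv")
```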

With all this done, it leaves us nicely with a set of files ready for ingestion into Azure Analysis Services, which is the subject of Part 2 of this series.

Outputted Dimension And Fact Files

Coming in the second and final part of this series…

  • Build an Azure Analysis Services model directly off the dimension and fact files within the Data Lake Store
  • String it all together using Azure Data Factory

Part 2 is now live, and can be found here….

Power BI Premium Vs Analysis Services

What? Is that even a valid question?

At first glance this isn’t an obvious topic, but I’ve had a couple of conversations recently with partners that echoed some of my own thoughts about where and how Power BI Premium sits with Analysis Services.

Let’s recap what Power BI Premium brings in terms of performance enhancements (read about its wider capabilities here):

  • Increased dataset limits
  • Real time cache/pin to memory
  • Secondary model copies
  • Geo-replicas
  • Incremental/Dedicated Refreshes

With the new support for very large datasets, incremental refreshes and so on, the following question is increasingly being asked:

“If I have Power BI Premium with all these enhancements, why would I need Analysis Services?”

To me, there’s one main reason above all others why you would use Analysis Services in conjunction with Power BI, and you can skip to the summary to see it, but in order to answer this, let’s first look at the two products in more detail:

Power BI

Until recently, we had Power BI Pro (or Free) and Analysis Services. Both had quite a clear, defined place in the BI ecosystem. Power BI is described as follows:

"…a suite of business analytics tools to analyze data and share insights"

These tools include Power BI Desktop (a powerful desktop data mashup and reporting tool), Power BI Mobile (mobile apps allowing you to access reports from anywhere) and the Power BI Service (allows you to publish and share reports throughout the organisation).

So, Power BI allows users to ingest data from a variety of sources, mash it up, build some reports and publish them to the Power BI Service for sharing and collaboration.

Power BI Premium is an add-on to Power BI Pro that brings dedicated capacity, performance and scalability (see earlier in this article).

Analysis Services

With Analysis Services, you can mashup and combine data from multiple sources, define metrics, and secure your data in a single, trusted semantic data model.

Sound familiar?

In many ways, Power BI and Analysis Services are very similar. In fact, Power BI has at its heart a Tabular Analysis Services engine. When you ingest and mash up data into Power BI Desktop, you’re actually populating a local version of Analysis Services.

Both tools share very similar features:

  • Hierarchies
  • Measures
  • Calculated columns
  • Row Level Security
  • Live Connection/DirectQuery

Data visualization and reporting aside (Power BI does this, Analysis Services doesn't – it's not a reporting tool), there are other differences, but the lines have become blurred in recent times.

The Main Difference

The key difference is that a data model created directly in Power BI is essentially a closed model, usable by Power BI only. Users can create pages (reports) in Power BI Desktop with multiple visualizations and then publish this workbook to the Power BI Service. Within the Service itself, users can edit the published reports on this dataset as well as adding new ones:

Edit Report In Power BI Service

The point here is that this model and its data can be reused within, and by, Power BI only. Users must log in to Power BI to edit the report, or they can publish a Power BI Desktop file as a central model and connect to it via a Power BI Service Live Connection. All of this functionality is pure Power BI.

Where Analysis Services differs is that it is essentially an open model. Yes, you create a data model by ingesting data and creating a user-friendly reporting model in exactly the same way you do with Power BI, but that's where you stop. You publish the Analysis Services model as a central, trusted data source and – and this is the crux of the matter – ANY tool that can submit DAX or MDX can use your Analysis Services model and get exactly the same answer. This is the ultimate "single version of the truth" that is the goal of so many BI projects. You create your central model, publish it, then connect to it from Excel, Power BI, or any other client tool that can query it.

Summary

In the end, Power BI Premium on its own is a compelling option if you're just a Power BI shop (and Excel, sort of) and nothing else. Your analysts and developers can develop Power BI models in Desktop, and these models can be used to create reports in the Service, or from Power BI Desktop using the Power BI Service Live Connection. My advice in this scenario is to start with Power BI for your data models, especially if your models are small or moderate in size, then consider moving to Analysis Services as a scale-up option. This is especially appealing now with the release of the Analysis Services Web Designer, which allows you to publish Power BI workbooks to Analysis Services and create a data model from them. It's an ideal route when you're running a PoC and want to promote it to something more "enterprise".

Scale Up Power BI to Analysis Services

However, if you want a central data model to act as the single version of the truth and have many different tools in use across the organization, then Analysis Services is your friend:

Power BI & Other Tools With Analysis Services

In conclusion, I recommend Analysis Services as a key component of any relational data warehouse platform in combination with Power BI. Whilst the decision isn't as clear cut if you only use Power BI, the ability to create a completely abstracted semantic layer that serves as your golden view of data, regardless of which visualization tool is used, is a point that can't be oversold.

Agree? Disagree? Please feel free to comment below.

Related Links

Power BI Premium

Azure Analysis Services

Power BI Service Live Connection

Why Analysis Services


Azure Analysis Services Web Designer

I’m a long time fan of Analysis Services, and this latest feature is a really cool addition to the offering. Currently in Preview, Azure Analysis Services Web Designer offers the following functionality that extends Azure Analysis Services, all through a simple web UI:

  • Add a new logical server in Azure
  • Create a new Analysis Services model from SQL DB, Azure SQL DW and… Power BI workbooks! (more data sources to come)
  • Browse an existing Analysis Services model and add new measures for quick validation
  • Open an existing Analysis Services model as a Visual Studio Project, in Excel, or in Power BI
  • Edit an existing Analysis Services model using TMSL (Tabular Model Scripting Language)

Example – Creating a Model from Power BI Desktop

In this example, I’ve created a simple Power BI workbook (.pbix file) that’s connecting to an Azure SQL DB instance running the sample AdventureWorks database:

Simple Power BI Report

To connect to the new service, I can go via the Azure portal or directly to https://analysisservices.azure.com:

Azure Analysis Services Web Designer Portal

On this occasion, I’ll use the server I already have and instead go straight to adding a new model:

Importing a Model from Power BI

Once created, I can browse the model in the Designer:

Browse Model

Or open it using Power BI, Excel or Visual Studio:

Open Model Options

Power BI From Published Model

This is in preview right now, with many features still to come. The ability to import Power BI models into Analysis Services is a massive feature in its own right, but aside from that it already shows how you can quickly create and modify models without having to delve into Visual Studio/SQL Server Data Tools. New features are coming every month, so keep an eye on the team blog (below) to follow its progress.

Further Reading

Introducing Azure Analysis Services Web Designer

Analysis Services Team Blog

Connecting to Power BI Datasets with Power BI Service Live Connection

One of the most useful features to come to Power BI in recent months is the Power BI Service Live Connection. You can find out more here.

What is it?

In a nutshell, it allows developers to connect to an already published Power BI dataset and create new reports without having to redo all the queries and data ingestion.

This scenario is great as an Analysis Services "lite" approach, allowing teams to better collaborate and reduce duplication of effort by sharing standardized models and content. As most of the effort with self-service BI tools goes into the data mashup, this approach lets you "model once and reuse many times".

How does it work?

As Power BI Service Live Connection is a preview feature, you first have to enable it in “options”:

Power BI Options – Power BI Service Live Connection

Once enabled, restart Power BI and create a new dataset using the normal Get Data functionality. This becomes your data model to publish and share. Instead of building any reports, simply publish the pbix file to the Power BI Service:

Adventure Works Data Model In Power BI

Blank Power BI Workbook with Data Model

Once published, this dataset can then be connected to from any other Power BI workbook using the new functionality. Developers no longer need to do all the heavy lifting work with the data mashing, and instead just use the already built data model to create their own reports and dashboards.

Get Data From Power BI Service Live Connection

Browse to the published data set:


You then have access to the published data model, and can build your own content. Note the “Edit Queries” option is grayed out.

New Power BI Workbook Connecting to the Published Dataset

This new functionality is a great addition to the Power BI toolset, and allows for a much improved sharing and collaboration experience within teams.

Find Out More:

Blog Announcement

Step by Step Guide