Incremental ETL Processing With Azure Data Factory v2

Data Factory V2 was announced at Ignite 2017 and brought with it a host of new capabilities:

  • Lift your SSIS workloads into Data Factory and run them using the new Integration Runtime (IR)
  • Ability to schedule Data Factory pipelines using wall-clock timers or on demand via event generation
  • The first proper separation of Control Flow and Data Flow, allowing more complex orchestrations such as looping, branching and other conditional flows
  • Incremental Loads using the new Lookup Activity

And it’s this last item that today’s article is about.

It’s fair to say that in its initial incarnation, Data Factory didn’t allow for more traditional ETL workloads without some complex coding (more than you were used to if you came from the world of SSIS and similar ETL tools). Data Factory V1 really excelled at big-data-focused ELT workloads, where data is moved between data services (SQL Server, Blob Storage, HDInsight and so forth) and activities are applied whilst the data is in place (SQL queries, Hive, U-SQL, Spark), but for those who wanted to move their traditional ETL delta extracts to Data Factory, it wasn’t quite there.

In this new public preview the product team have taken great steps towards remedying this, allowing ETL developers to implement proper incremental loading patterns with their relational data warehouses. It’s also worth knowing that I’ve seen where Data Factory is going, and it’s looking amazing…

The following example is based on the official tutorial here. The differences in this example are based on the scenario where you wish to perform incremental extracts from a source database to a staging area inside another database. This example uses Azure SQL Database as both the source and sink, but can be adapted for other data sources.

The solution files used in this example can be found here.

Prerequisites

This example assumes you have previous experience with Data Factory, and doesn’t spend time explaining core concepts. For an overview of Data Factory concepts, please see here.

So for today, we need the following prerequisites:

  • An Azure Subscription
  • An Azure SQL Database instance setup using the AdventureWorksLT sample database

That’s it!

Incremental Loading in Data Factory v2

The diagram below gives a high-level overview of the process, and should be familiar to anyone who has built similar data flows with other ETL tools such as SSIS:

Incremental Loading Overview

Process

There are four main steps to creating an end-to-end incremental extract:

  1. Identify the trigger column. In the Data Factory documentation this is referred to as the Watermark column so I’ll use this latter term for consistency going forwards. This column is one that can be used to filter new or updated records for each run. This column is normally an automatically updating datetime column (e.g. Modified_Date_Time) or an ever increasing integer. We use the maximum value of this column as the watermark.
  2. Create a data store to hold the watermark value. In this example we store the watermark value in our SQL Database.
  3. Create a pipeline that does the following:
    • Two Lookup Activities. The first looks up the last watermark value and the second retrieves the current watermark value (the maximum of the watermark column in the source table).
    • A Copy activity that copies rows from the source where the watermark column value is greater than the old watermark value. These delta rows are then written to our target table (see the query sketch after this list).
  4. Create a stored procedure that updates the watermark value, ready for the next run.
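To make this concrete, here’s a rough sketch of the source query the Copy activity ends up issuing (typically set as the source’s sqlReaderQuery). The activity names are illustrative – substitute the names of your own two Lookup Activities and the columns they return:

select *
from SalesLT.SalesOrderDetail
where ModifiedDate > '@{activity('LookupOldWatermark').output.firstRow.WatermarkValue}'
and ModifiedDate <= '@{activity('LookupNewWatermark').output.firstRow.NewWatermarkValue}'

At run time Data Factory resolves the two @{activity(…)} expressions to the stored watermark and the current maximum ModifiedDate respectively, so only the delta rows are copied.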

Building the Solution

Create a Staging Table

In this example, we’ll use the SalesOrderDetail table in the AdventureWorksLT database:

SalesOrderDetail

And we will use the ModifiedDate as the watermark column.

We therefore need to create a target or “staging” table to load our delta rows into.

Run the following SQL in your created database to create a staging version of the SalesOrderDetails table:

CREATE TABLE [SalesLT].[SalesOrderDetail_Staging]
(
    [SalesOrderID] [int] NOT NULL,
    [SalesOrderDetailID] [int] NOT NULL,
    [OrderQty] [smallint] NOT NULL,
    [ProductID] [int] NOT NULL,
    [UnitPrice] [money] NOT NULL,
    [UnitPriceDiscount] [money] NOT NULL,
    [LineTotal] [money] NULL,
    [rowguid] [uniqueidentifier] NOT NULL,
    [ModifiedDate] [datetime] NOT NULL,
    [ExtractDate] [datetime] DEFAULT GETDATE()
)

Create a table for our watermark values

Run the following command in your SQL Database to create a table named watermarktable that will store our watermark value:

create table watermarktable
(
    TableName varchar(255),
    WatermarkValue datetime
);

Seed the table with the name of our source table and an initial watermark value. In this example the table name is SalesLT.SalesOrderDetail:

insert into watermarktable
values ('SalesLT.SalesOrderDetail', '1-jan-2017')

Create a stored procedure in our SQL Database:

This stored procedure is used to update the watermark table when new rows have been extracted.

CREATE PROCEDURE sp_write_watermark @LastModifiedtime datetime, @TableName varchar(50)
AS
BEGIN
    UPDATE watermarktable
    SET [WatermarkValue] = @LastModifiedtime
    WHERE [TableName] = @TableName
END
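To sanity-check the procedure you can call it manually – the values below are purely illustrative:

exec sp_write_watermark @LastModifiedtime = '2017-09-01', @TableName = 'SalesLT.SalesOrderDetail'

In the pipeline itself this call is made by a Stored Procedure activity, which passes in the new watermark value returned by the second Lookup activity.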

Create Some Trigger Data

Run the following command on your SQL database to update a couple of rows in the source table. This gives us something to validate the process against: the rows are marked as modified with today’s date, which is greater than the value currently held in our watermark table, so a successful run will extract these changed rows.

update [SalesLT].[SalesOrderDetail]
set modifieddate = getdate()
where salesorderid = 71774

Create a Data Factory

Create a new Data Factory. For ease, do it via the portal following this guide.

(Ensure you create it using ADFv2):

Creating a Data Factory via the Azure Portal

Create your Data Factory Artifacts

You will need to create the following (I’ve included my own samples in the link at the beginning of this article):

  • AzureSQLLinkedService
  • A source SQL dataset (for our source table)
  • A target SQL dataset (for our destination table)
  • A watermark dataset (that stores our watermark value entries)
  • A pipeline that coordinates the activities

Modify the files to suit your needs and setup for your database server and database.

I won’t show all of the JSON here, but just to highlight, the key area is the new Lookup Activity shown below:

Pipeline Lookup Activity

Create a folder on your root C Drive to hold the created JSON files

I created a folder called “adf” and put all my JSON files there:


Deploy Data Factory Artifacts

For the current preview, you cannot deploy using the GUI, so you need to deploy programmatically. For this example I’m using PowerShell. Note that you will need to update your PowerShell installation with the latest AzureRM cmdlets by running the following command:

Install-Module AzureRM

After updating the AzureRM cmdlets, run the PowerShell deployment script included in the solution files linked at the start of this article. This will deploy the necessary artifacts to your Data Factory.


Run the pipeline

Once deployed, you can run the pipeline via PowerShell with the following commands:

$dataFactoryName = "[Your DATA FACTORY NAME]"
$resourceGroupName = "[Your RESOURCE GROUP NAME]"

$RunId = Invoke-AzureRmDataFactoryV2Pipeline -PipelineName "mainPipeline" -ResourceGroupName $resourceGroupName -DataFactoryName $dataFactoryName

This fires off the pipeline. You can monitor its progress using Monitor & Manage:

Monitor Data Factory

And this allows you to see the status of the running pipelines:

Pipeline Runs

With the pipeline processed successfully, I’ll check my database tables to confirm everything has updated correctly:
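The checks themselves are nothing more exotic than selecting the rows back out of the two tables created earlier:

select * from SalesLT.SalesOrderDetail_Staging;
select * from watermarktable;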

Delta Rows Extracted

Bingo!

And the watermark table has been updated successfully:

Updated Watermark Table

This completes this article covering a simple ETL incremental extraction using the new Data Factory. The roadmap for this product is very exciting, and I encourage you to check out the links below for further reading:

Introduction to Data Factory v2

Data Factory Ignite 2017


Power BI Premium Vs Analysis Services

What? Is that even a valid question?

At first glance this isn’t an obvious topic, but I’ve had a couple of conversations recently with partners that echoed some of my own thoughts about where and how Power BI Premium sits with Analysis Services.

Let’s recap what Power BI Premium brings in terms of performance enhancements (read about its wider capabilities here):

  • Increased dataset limits
  • Real time cache/pin to memory
  • Secondary model copies
  • Geo-replicas
  • Incremental/Dedicated Refreshes

This new support for very large datasets, incremental refreshes and so on has led to the following question being asked:

“If I have Power BI Premium with all these enhancements, why would I need Analysis Services?”

To me, there’s one main reason above all others why you would use Analysis Services in conjunction with Power BI (you can skip to the summary to see it), but to answer the question properly, let’s first look at the two products in more detail:

Power BI

Until recently, we had Power BI Pro (or Free) and Analysis Services. Both had quite a clear, defined place in the BI ecosystem. Power BI is described as the following:

“…a suite of business analytics tools to analyze data and share insights”

These tools include Power BI Desktop (a powerful desktop data mashup and reporting tool), Power BI Mobile (mobile apps allowing you to access reports from anywhere) and the Power BI Service (allows you to publish and share reports throughout the organisation).

So, Power BI allows users to ingest data from a variety of sources, mash it up, build some reports and publish them to the Power BI Service for sharing and collaboration.

Power BI Premium is an add-on to Power BI Pro that brings dedicated capacity, performance and scalability (see earlier in this article).

Analysis Services

With Analysis Services, you can mashup and combine data from multiple sources, define metrics, and secure your data in a single, trusted semantic data model.

Sound familiar?

In many ways, Power BI and Analysis Services are very similar. In fact, Power BI has at its heart a Tabular Analysis Services engine. When you ingest and mash up data into Power BI Desktop, you’re actually populating a local version of Analysis Services.

In fact with both tools you have very similar features:

  • Create hierarchies
  • Measures
  • Calculated columns
  • Row Level Security
  • Live Connection/Direct Querying

Data visualization and reporting aside (Power BI does this, Analysis Services doesn’t – it’s not a reporting tool), there are other differences too, but the lines have become blurred in recent times.

The Main Difference

The key difference is that a data model created directly in Power BI is essentially a closed model, available to Power BI only. Users can create pages (reports) in Power BI Desktop with multiple visualizations and then publish this workbook to the Power BI Service. Within the Service itself, users can edit the published reports on this dataset as well as adding new ones:

Edit Report In Power BI Service

The point here is that this model and its data can be reused within and by Power BI only. Users must log in to Power BI to edit the report, or they can publish a Power BI Desktop file as a central model and connect to it via a Power BI Service Live Connection. All of this functionality is pure Power BI.

Where Analysis Services differs is that it is essentially an open model. Yes, you create a data model by ingesting data and creating a user-friendly reporting model in exactly the same way you do with Power BI, but that’s where you stop. You publish the Analysis Services model as a central, trusted data source and – and this is the crux of the matter – ANY tool that can submit DAX or MDX can use your Analysis Services model and get exactly the same answer. This is the ultimate “single version of the truth” that is the goal of so many BI projects. You create your central model, publish it, and then connect to it from Power BI, Excel or any number of other client tools.


Summary

In the end, Power BI Premium on its own is a compelling option if you’re just a Power BI shop (and Excel, sort of) and nothing else. Your analysts and developers can develop Power BI models in Desktop, and these models can be used to create reports in the Service, or from Power BI Desktop using the Power BI Service Live Connection. My advice in this scenario is to start with Power BI for your data models, especially if your models are small or moderate in size, then consider moving to Analysis Services as a scale-up option. This is an especially appealing path now with the release of the Analysis Services Web Designer, which allows you to import Power BI workbooks into Analysis Services and create a data model from them. This is ideal where you’re running a PoC and want to promote it to something more “enterprise”.

Scale Up Power BI to Analysis Services


However, if you want a central data model to act as the single version of the truth and have many different tools in use across the organization, then Analysis Services is your friend:

Power BI & Other Tools With Analysis Services

In conclusion, I recommend Analysis Services as a key component of any relational data warehouse platform in combination with Power BI. Whilst the decision isn’t as clear-cut if you only use Power BI, the ability to create a completely abstracted semantic layer that serves as your golden view of data, regardless of which visualization tool is used, is a point that can’t be oversold.

Agree? Disagree? Please feel free to comment below.

Related Links

Power BI Premium

Azure Analysis Services

Power BI Service Live Connection

Why Analysis Services


Creating Animated Reports In Power BI With The Drilldown Player Custom Visual

This is cool…

Chris Webb's BI Blog

Last week I had the chance to do something I have not done before: build a Power BI report to be displayed on a big screen hanging on a wall. To make up for the loss of user interactivity, I used the new Drilldown Player custom visual to cycle through different selections and display a new slice of data every few seconds; Devin Knight’s blog post here has a great summary of how to use it. However I wasn’t happy about the look of the Drilldown Player visual in this particular report: the play/stop/pause buttons aren’t much use if you can’t click on them and the visual doesn’t show all of the values that it is cycling through. As a result I hid the visual behind another one and came up with a different way of displaying the currently-displayed selection.

Here’s a simple example of what I did. Imagine you…

View original post

Azure Analysis Services Web Designer

I’m a long time fan of Analysis Services, and this latest feature is a really cool addition to the offering. Currently in Preview, Azure Analysis Services Web Designer offers the following functionality that extends Azure Analysis Services, all through a simple web UI:

  • Add a new logical server in Azure
  • Create a new Analysis Services model from SQL DB, Azure SQL DW and… Power BI workbooks! (More data sources to come)
  • Browse an existing Analysis Services model and add new measures for quick validation
  • Open an existing Analysis Services model as a Visual Studio Project, in Excel, or in Power BI
  • Edit an existing Analysis Services model using TMSL (Tabular Model Scripting Language)

Example – Creating a Model from Power BI Desktop

In this example, I’ve created a simple Power BI workbook (.pbix file) that’s connecting to an Azure SQL DB instance running the sample AdventureWorks database:

Simple Power BI Report

To connect to the new service, I can access via the Azure portal or go direct to https://analysisservices.azure.com:

Azure Analysis Services Web Designer Portal

On this occasion, I’ll use the server I already have and instead go straight to adding a new model:

Importing a Model from Power BI

Once created, I can browse the model in the Designer:

Browse Model

Or open it using Power BI, Excel or Visual Studio:

Open Model Options

Power BI From Published Model

This is in preview right now, with many features still to come. The ability to import Power BI models into Analysis Services is a massive feature in its own right, but aside from that it already shows how you can quickly create and modify models without having to delve into Visual Studio/SQL Server Data Tools. New features are coming every month, so keep an eye on the team blog (below) to follow its progress.

Further Reading

Introducing Azure Analysis Services Web Designer

Analysis Services Team Blog

Connecting to Power BI Datasets with Power BI Service Live Connection

One of the most useful features to come to Power BI in recent months is the Power BI Service Live Connection. You can find out more here.

What is it?

In a nutshell, it allows developers to connect to an already published Power BI dataset and create new reports without having to redo all the queries and data ingestion.

This scenario is great as an Analysis Services “lite” approach, allowing teams to better collaborate and reduce duplication of effort by sharing standardized models and content. As most of the effort involved in self-service BI tools is spent doing the data mashing, with this approach you can “model once and reuse many times”.

How does it work?

As Power BI Service Live Connection is a preview feature, you first have to enable it in “options”:

Power BI Options

Once enabled, restart Power BI and create a new dataset using the normal Get Data functionality. This becomes your data model to publish and share. Instead of building any reports, simply publish the pbix file to the Power BI Service:

Adventure Works Data Model In Power BI

Blank Power BI Workbook with Data Model

Once published, this dataset can then be connected to from any other Power BI workbook using the new functionality. Developers no longer need to do all the heavy lifting work with the data mashing, and instead just use the already built data model to create their own reports and dashboards.


Browse to the published data set:

Workspace

You then have access to the published data model, and can build your own content. Note the “Edit Queries” option is grayed out.

New Power BI Workbook Connecting to the Published Dataset

This new functionality is a great addition to the Power BI toolset, and allows for a much improved sharing and collaboration experience within teams.

Find Out More:

Blog Announcement

Step by Step Guide

Optimizing SSIS Data Loads with Azure SQL Data Warehouse & PolyBase

To gain the best loading performance with Azure SQL Data Warehouse, you want to leverage the power of PolyBase, which allows for massively parallel data loading into the underlying storage.

In standard scenarios, ETL developers using SSIS will leverage a data flow that takes data from a source (database, file, etc.) and loads it directly into the target database:

Simple SSIS Data Flow (Table to Table)

Unfortunately, adopting this pattern with an SSIS data flow into Azure SQL Data Warehouse will not naturally leverage the power of PolyBase. In order to understand this we need to take a little peek under the covers of SQL Data Warehouse:

Azure SQL Data Warehouse Overview

As part of the underlying architecture, SQL DW has at its head the Control Node. This node manages and optimizes queries, and serves as the access point for client applications when accessing the SQL DW cluster. What this means for SSIS, is that when it makes its connection to the SQL DW using standard data connectors, this connection is made to the Control Node:

SSIS Front Loading

This method introduces a bottleneck as the Control node serves as the single throughput point for the data flow. As the Control node does not scale out, this limits the speed of your ETL flow.

As stated earlier, the most efficient way of loading into Azure SQL DW is to leverage the power of PolyBase in a process known as “Back Loading”:

Back Loading via PolyBase

Using this method, data is bulk loaded in parallel from staging files held in Blob Storage or Azure Data Lake Store, which allows for the full data processing power of PolyBase to be used.

Leveraging Polybase Within SSIS

Previously, ingesting data from files into SQL DW using PolyBase was a slightly drawn out process of uploading files to storage, defining credentials, file formats and external table definitions before finally loading the data into SQL DW using a CTAS (Create Table As Select) statement. This works perfectly well, but introduces a lot of extra SQL coding on top of the norm.
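For context, the manual route looks roughly like the following T-SQL. The storage account, container, file format and column list are all illustrative, and a database master key must already exist before the credential can be created:

-- 1. Credential for the storage account holding the staging files (illustrative identity/secret)
CREATE DATABASE SCOPED CREDENTIAL BlobStorageCredential
WITH IDENTITY = 'blobuser', SECRET = '<storage account access key>';

-- 2. External data source pointing at the staging container
CREATE EXTERNAL DATA SOURCE StagingBlobStore
WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://staging@<storageaccount>.blob.core.windows.net',
    CREDENTIAL = BlobStorageCredential
);

-- 3. File format describing the staged files
CREATE EXTERNAL FILE FORMAT PipeDelimitedText
WITH (FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS (FIELD_TERMINATOR = '|'));

-- 4. External table over the staged files (columns trimmed for brevity)
CREATE EXTERNAL TABLE dbo.SalesOrderDetail_External
(
    SalesOrderID int NOT NULL,
    OrderQty smallint NOT NULL,
    ProductID int NOT NULL,
    UnitPrice money NOT NULL,
    ModifiedDate datetime NOT NULL
)
WITH (
    LOCATION = '/salesorderdetail/',
    DATA_SOURCE = StagingBlobStore,
    FILE_FORMAT = PipeDelimitedText
);

-- 5. CTAS: PolyBase loads the data in parallel, so it never funnels through the Control node
CREATE TABLE dbo.SalesOrderDetail
WITH (DISTRIBUTION = ROUND_ROBIN)
AS
SELECT * FROM dbo.SalesOrderDetail_External;

The Azure SQL DW Upload Task described next generates and runs the equivalent of these statements for you.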

Thankfully, with the SSIS Azure Feature Pack, the above is now made much, much easier with the introduction of the Azure SQL DW Upload Task. This task automates the creation of the required SQL scripts and allows you to seamlessly incorporate a fully PolyBase-driven load into an SSIS ETL process.

Azure SQL DW Upload Control Flow

Azure SQL DW Upload Task

Further Reading

Azure SQL DW Best Practices

Azure SQL DW With Azure Data Lake Store (ADLS)

Polybase Guide

First post

Greetings. This is my first post for the SQL of the North blog. This blog will serve as a platform for sharing the latest and greatest from the Azure data & analytics world, as well as sharing hints, tips and best practices obtained in the field.

By way of background, my name’s Mike and I’m a DSA (Data Solutions Architect) at Microsoft, where I specialise in helping customers on their data journey into the cloud. I have over 15 years’ experience in the data and analytics space, mainly focused on the Microsoft platform, although I’ve picked up a few other useful skills and technologies along the way.

Finally – why “SQL of the North”? Well, I’m a northern (UK) lad by origin, and am also an avid Game of Thrones fan. I wanted something that had a bit of a link to that, and unfortunately the “Data Watch” was already taken 🙂