SSIS – Package design pattern for loading a data warehouse

I recently had a chat with some BI developers about the design patterns they’re using in SSIS when building an ETL system. We all agreed on creating multiple packages for the dimensions and fact tables, plus one master package that executes all of them.

These developers went even further and created multiple packages per dimension/fact table:

  • One extract package where the extract (E) logic of all dim/fact tables is stored
  • One dim/fact package with the transform (T) logic of a single dim/fact table
  • One dim/fact package with the load (L) logic of a single dim/fact table

I like the idea of building the Extract, Transform and Load logic separately, but I do not like the way the logic was spread over multiple packages.
I asked them why they chose this solution, and they gave multiple reasons:

  • Enable running the E/T/L parts separately, for example: run only the entire T phase of all dim/fact tables.
  • Run the extracts of all dimensions and fact tables simultaneously to keep the loading window on the source system as short as possible.

To me these are good reasons: running the E/T/L phases separately is something a developer often wants during the development and testing of an ETL system.
Keeping the loading window on the source system as short as possible is something that’s critical in some projects.

Despite the good arguments for designing their ETL system like this, I still prefer the idea of having one package per dimension/fact table, with the complete E/T/L logic, for the following reasons:

  • All the logic is in one place
  • Increased understandability
  • Easier unit testing
  • If there is an issue with a dimension or fact table, you only have to make changes in one place, which is safer and more efficient
  • You can see your packages as separate ETL “puzzle pieces” that are reusable
  • It’s good from a project management point of view; let your customer accept dimensions and fact tables one by one and freeze the appropriate package afterwards
  • A better overview in BIDS; an enormous number of packages does not make things clearer 😉
  • Simplified deployment after changes have been made
  • Changes are easier to track in source control systems
  • Easier team development; multiple developers can work on different dim/fact tables without getting in each other’s way

So basically my goal was clear: build a solution that offers everything the aforementioned developers asked for, but with one package per dimension/fact table; the best of both worlds.

Solution:

The solution I’ve created is based on a parent-child package structure. One parent (master) package executes multiple child (dim/fact) packages, with a single child package for each dimension and fact table. Each of these packages contains the following Sequence Containers in the Control Flow:
[Screenshot: ChildControlFlow – the Extract, Transform, Load and Process Sequence Containers of a child package]

Normally it would not be possible to execute only the Extract, Transform, Load or (cube) Process Sequence Container of the child (dim/fact) packages, let alone run a single phase of all child packages simultaneously.

To make this possible I have created four Parent package variable configurations, one for each ETL phase Sequence Container in the child package:
[Screenshots: the four parent package variable configurations]

Each of these configurations is set on the Disable property of one of the Sequence Containers:
[Screenshot: a configuration mapped to the Disable property of a Sequence Container]

Using this technique makes it possible to run separate Sequence Containers of the child package from the master package, simply by disabling or enabling the appropriate Sequence Containers with parent package variables.
Because the default value of the Disable property of the Sequence Containers is False, you can still run an entire child package standalone, without the need to change anything.
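
To see the idea expressed in code, here is a minimal sketch using the SSIS .NET object model (Microsoft.SqlServer.Dts.Runtime) that runs a single phase of a child package by toggling the Disable properties directly. This is only an illustration of what the parent package variables achieve at runtime, not part of the solution itself; the package path is hypothetical and the container names follow the SEQ Extract/Transform/Load/Process naming described in part 2.

```csharp
using System;
using Microsoft.SqlServer.Dts.Runtime;

class RunSinglePhase
{
    static void Main()
    {
        Application app = new Application();

        // Hypothetical path: point this at one of the child (dim/fact) packages.
        Package child = app.LoadPackage(@"C:\SSIS\DimCustomer.dtsx", null);

        // Mirror what the parent package variables do: disable every phase
        // except the one we want to run (here: the Transform phase).
        foreach (Executable executable in child.Executables)
        {
            DtsContainer container = (DtsContainer)executable;
            container.Disable = (container.Name != "SEQ Transform");
        }

        // Execute the package; only the enabled Sequence Container runs.
        DTSExecResult result = child.Execute();
        Console.WriteLine("Transform phase finished with result: " + result);
    }
}
```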

OK, so far, so good. But how do I execute only one phase of all the dimension and fact packages simultaneously? Well, it’s quite simple:

First, add four Sequence Containers to the master package, one for each phase of the ETL, just like in the child packages.


Then add Execute Package Tasks for all your packages in every Sequence Container.

[Screenshot: the master package with Execute Package Tasks in each Sequence Container]
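
Schematically, the master package now looks something like this (the child package names are hypothetical examples; each container holds the same set of Execute Package Tasks):

Master package (Control Flow)
  SEQ EXTRACT    → Execute Package Task per child: DimCustomer, DimProduct, FactSales, …
  SEQ TRANSFORM  → the same set of Execute Package Tasks
  SEQ LOAD       → the same set of Execute Package Tasks
  SEQ PROCESS    → the same set of Execute Package Tasks

The four containers run one after another, while the child packages inside each container run in parallel.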


If you executed this master package now, every child package would run four times, because each of the four Sequence Containers contains an Execute Package Task that runs the same package.
To get the required functionality I have created four variables inside each Sequence Container (scope). These are used as parent package variables to set the Disable properties in the child packages. So basically I’ve created 4 variables x 4 Sequence Containers = 16 variables for the entire master package.

Variables for the EXTRACT Sequence Container (vDisableExtract = False):
[Screenshot: variables scoped to the EXTRACT Sequence Container]

Variables for the TRANSFORM Sequence Container (vDisableTransform = False):
[Screenshot: variables scoped to the TRANSFORM Sequence Container]

The variables in the LOAD and PROCESS Sequence Containers are based on the same technique.
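
For clarity, the full set of sixteen variables follows the pattern below: within each Sequence Container scope, only the variable for that container’s own phase is False. The True values are implied by the design (each master container must run exactly one phase of the child packages), so read this as a reconstruction rather than a transcript of the screenshots:

Scope (master container)   vDisableExtract   vDisableTransform   vDisableLoad   vDisableProcess
EXTRACT                    False             True                True           True
TRANSFORM                  True              False               True           True
LOAD                       True              True                False          True
PROCESS                    True              True                True           False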

Results:

Run all phases of a standalone package: Just execute the package:
[Screenshot: executing a standalone child package]

Run a single phase of the ETL system (Extract/Transform/Load/Process): Execute the desired sequence container in the main package:

[Screenshot: RunAllTransforms]

Run a single phase of a single package from the master package:
[Screenshot: RunSinglePhaseOfOnePackage]

Run multiple phases of the ETL system, for example only the T and L: Disable the Sequence Containers of the phases that need to be excluded in the master package:

[Screenshot: RunMultiplePhasesAndExcludeOthers]

Run all the child packages in the right order from the master package:
When you add a breakpoint on, for example, the LOAD Sequence Container you see that all the child packages are at the same ETL phase as their parent: 
[Screenshot: RunCompleteMasterPackageBreakPoint]


When pressing Continue the package completes: 
[Screenshot: RunCompleteMasterPackageBreakPointCompleted]


Conclusion:

This parent/child package design pattern for loading a data warehouse gives you all the flexibility and functionality you need. It’s ready and easy to use during development and in production, without the need to change anything.

With only a single SSIS package for each dimension and fact table, you now have the functionality that separate packages would offer. You will be able to, for example, run all the Extracts for all dimensions and fact tables simultaneously, as the developers asked for, and still have the benefits that come with the one-package-per-dimension/fact-table approach.

Of course having a single package per dimension or fact table will not be the right choice in all cases, but I think it is a good standard approach.
The same applies to the ETL phases (Sequence Containers): I use E/T/L/P, but if you have different phases, that’s fine; you can still use the same technique.

Download the solution with template packages from the URLs below. The only thing you need to do is change the connection managers to the child packages (pointing to their location on your disk) and run the master package!

Download for SSIS 2008

Download for SSIS 2005
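
If you prefer to test the downloaded solution from code instead of BIDS, a minimal sketch along these lines should work. The path to the master package is an assumption; use whatever location and file name you saved the templates under:

```csharp
using System;
using Microsoft.SqlServer.Dts.Runtime;

class RunMaster
{
    static void Main()
    {
        Application app = new Application();

        // Assumed location and name of the downloaded master package; adjust as needed.
        Package master = app.LoadPackage(@"C:\SSIS\MasterPackage.dtsx", null);

        DTSExecResult result = master.Execute();
        Console.WriteLine("Master package finished with result: " + result);

        // On failure, the collected errors show which child package/phase broke.
        foreach (DtsError error in master.Errors)
            Console.WriteLine(error.Description);
    }
}
```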

If you have any suggestions, please leave them as a comment. I would like to know what your design pattern is as well!

ATTENTION: See Part-2 on this subject for more background information!

Background:

How to: Use the Values of Parent Variables in a Child Package: http://technet.microsoft.com/en-us/library/ms345179.aspx

35 comments

  1. Hi Jorg,
    Nice article and examples. I agree with the design of the master-child implementation and using package variables to control the execution. But I have a slightly different opinion on implementing the T and L part.
    I feel we do not need a separate package altogether for T and L. With a proper design, for example using package variables to decide whether to stage data or directly load it after the transform, and using variable values from config to pass hints whether to load data from the staged area or from the stream, the same can be implemented in one package. More layering, by creating more packages for a single ETL cycle of an entity, also means that if different developers are working on it, they need a standard interface for passing data. Also, separating T & L this much for each and every package makes staging inevitable after the transform.
    Extract, Transform & Load and cube processing definitely need a separate package, and controlling the same through a master package is definitely an effective design.

  2. Hi Siddharth,
    Thanks for reading and commenting!
    I agree with your comment and I believe we share the same ideas.
    You say you have a slightly different opinion on implementing the T and L part; can you clarify this some more?
    You say we don’t need a separate package for T and L, and I agree with that; that’s why I want to use a single package with all phases in it. So I wonder on which specific part your opinion differs, as it looks like we are saying the same thing here.
    Thanks,
    Jorg

  3. Sorry, I missed the detail. I understand that you are including all the phases in one package and executing the same package 4 times, each time executing a different phase of the same package.
    The tricky part of this design is that staging becomes inevitable to achieve the flexibility of executing each phase (E/T/L) at will. But this seems a valid trade-off compared to the flexibility achieved.
    Yes, you are correct. We share the same ideas; it’s just that I thought there was a different package for each E/T/L/P part of a single dim/fact, i.e. 4 packages per entity. Now I feel it’s even better than I thought it was 🙂

  4. Hi Jorg,
    Nice article! I am only wondering about the Extract step. How do you implement the extract of a table that is used in multiple dimensions? Do you keep track of the tables you have already extracted, or do you simply extract them multiple times (which would mean doing more work than necessary)?
    Regards,
    Pieter

  5. Hi Pieter,
    Nice to hear that you like the article and thanks for the useful comment!
    I would say that’s typically logic that should be handled in the master package. So I would still create the dimension packages as standalone “objects” that have all their logic inside them and handle things like this from the master package. How I would handle it exactly depends on the specific situation; it can be as easy as running only the extract of one of the two dimensions. If it’s more complicated you can create some more variables to handle this, just like Siddharth already suggested. I think there are many ways to solve challenges like that.
    Thanks,
    Jorg

  6. Jorg,
    ETL steps in all my child packages have an OLE DB source (E), a Data Conversion task (T), and an ADO.NET destination (L). I was trying to separate them into the ETLP blocks of the child package but couldn’t figure it out. I looked into your SQL 2008 example but, except for the blocks in the control flow section, I couldn’t find any tasks in the data flow section. I am new to this and totally confused. Do you have a complete example that I can refer to?

  7. Please disregard my previous question. I figured it out after reading the part-2.
    The following statement should have been made in part-1 so it’s very clear for starters like me. Overall I really appreciate your good work.
    I personally always use four Sequence Containers in my SSIS packages:
    – SEQ Extract (extract the necessary source tables to a staging database)
    – SEQ Transform (transform these source tables to a dimension or fact table)
    – SEQ Load (load this table into the data warehouse)
    – SEQ Process (process the data warehouse table to the cube)

  8. Hi Pandkothia,
    It’s nice to hear that you, as a starter, benefit from my blog posts 🙂
    I have added a link to part-2 of the article in the first part so others won’t be confused. Thanks for bringing this to my attention.
    –Jorg

  9. Guys, could someone describe to me exactly what goes in the Transform part of this exercise? For me, I get the data out of the source system and store it in a cache table (Extract); then I have a Kimball method component which loads the data into the dimension/fact table (Load). The query that the Kimball method component takes is an SQL query which effectively does the “transform” part, but this is transitory. I presume that to have a transform layer you would store the output of the Kimball method component in a number of places and then, in the Load, simply merge the data in. Could someone give me an idea what the benefit of this extra storage step is, as I can’t see any?

  10. Hi Jorg,
    Great article. I was wondering how transactions/checkpoints can be handled in this design while addressing re-runs when there is a failure in E/T/L/P?
    ~Sandy.

  11. Never mind! I was trying to “Save Link As”. I clicked on the link and it took me to a Skydrive page, where the download was indeed available. Thanks Jorg!

  12. Hi, I am unable to download the files. I want to see these packages running. Can you please send me the 2008 version to maheshdare@gmail.com

  13. It’s really an interesting design, but how is it one package per dimension if I have to make the same package 4 times: Customer_Extract.dtsx, Customer_Transfer.dtsx, Customer_Load.dtsx, Customer_Process.dtsx? And if that’s right, if I have 4 packages then I have to create 16 packages. Do you have any idea how we can reduce this number of packages?

  14. Hi, here is my question:
    As far as I understand SSIS sequences in the model above, in every sequence (for example TRANSFORM) the packages execute simultaneously. Here you have just a couple of dimensions and facts. But what if you have 70 dimensions and 10 facts: does this reduce performance when all dims and facts are being processed simultaneously? And second, what about the readability of such a big SSIS project?
    Thanks!

  15. I love the design with regard to separating out the phases of the ETL process, as well as the ability to control different aspects through configuration – my only concern would be the unwieldiness of deployments and configurations if the warehouse contains a large number of facts/dimensions.

  16. Hi Jorg,
    The post is awesome. This is the first time I have even heard about SSIS, for the purpose of my project, and I followed your method. I was able to do it easily, considering this is my first time. Everything works well for me except that when I use a “Derived Column” component to see what is loaded into my Fact table, the keys are “Missing Reference”. My master package structure is exactly like yours, except that I have two different sets of Dim and Fact packages. I created the variables exactly like yours and connected the two child packages with the parent one. I am not sure where I went wrong. Will you be able to help me with this? Thanks a lot in advance

  17. Oh, I forgot to mention: this problem only occurs when I try to run individual sequences from the master package or run the entire sequence from the master package. When I run the child packages individually it’s perfect! Thanks

  18. How do you justify your comment that having a smaller number of packages improves source code control? We have split up our packages to be very granular precisely because having larger packages makes it difficult for multiple developers to work on the same project at the same time (the file format for dtsx makes it almost impossible to logically merge two changed dtsx files in a source code control system).

  19. Hi Jorg,
    thank you for sharing this design pattern, I have begun implementing it with the project deployment model and so far it is a very feasible solution!

  20. I’ve been involved in developing both one monster package and granulated packages. In my own experience, based on enterprise projects, the granulated packages work best.

  21. Jorg,
    I have 180 packages in my solution. I have a master package to call them all, but was concerned about that many in one container, so when I generated the master package with BIML, I assigned them in sensible groups to different containers and sequenced those by dependency.
    I am still worried about having up to 20 packages all running together in one container. Is there guidance on how to configure that to run robustly? I have not seen any so far.
    Tks for any help.
    JK

  22. I’m in a similar situation as JoeK – we have 100+ packages that are called in each of the E-T-L-P phases of the master package. We’re using SQL Server 2014, with basic logging in SSISDB. Even with basic logging, this architecture slows down significantly when calling (and re-calling) this many sub-packages. The bottleneck is in writing all of the messages to the SSISDB tables.
    In 2014, unfortunately the only real workaround appears to be turning off the built-in SSIS logging feature and capturing the errors by other means in your SSIS packages. SQL Server 2016 allows custom logging levels to be set up (e.g. only record errors and warnings), which should also alleviate the issue.

  23. Hi,
    Really liked the post. How would this be applied with the new Project Deployment Model? I’m trying to integrate the pattern into it.
    I have tried converting the example package from 2008, but if I run the Extract part, all the phases of the child package run, not just the Extract.

  24. Thanks for sharing!
    During our ETL process, I’m redirecting failed rows from Extract (truncation) and Transform (truncation/data conversion) to staging tables and flat files. I’ve not figured out how to manage cleansing/re-processing the data yet, but I’d like to re-use existing packages/code if possible by using your method; however, I’d like to be able to execute the package from the command line (DTEXEC) with parameters, rather than having to develop parent packages for all my interfaces.
    I’ve not been able to figure out how to pass a parameter as a parent variable … any idea if this is possible?

  25. Hi Jorg,
    Thanks for your great article.
    The download link is empty. I am just wondering if you could please share your package via the email address provided below?

    Regards,
    Buhuo
