BI on your terms with SQL Server 2016

For the last few years Microsoft’s strategy was all about cloud first (or cloud only?), releasing
new BI products and updates to existing products in the cloud at a high pace,
with hardly any investment in on-premises BI. In 2015 Microsoft seems to
be changing course: the aim is now to enable hybrid scenarios,
investing a lot in both the cloud (Power BI/Azure) and on-premises with SQL Server
2016.
Microsoft’s message regarding BI for 2015/2016 is: “BI on your terms”.

BI on your terms means leveraging up-to-date possibilities for one, or a combination (hybrid), of the following architectures:

  • Cloud with Azure and Power BI

  • On-premises with SQL Server 2016

  • Server-driven or self-service

To be able to offer quality hybrid architectures, Microsoft is investing a lot in the
on-premises BI suite with SQL Server 2016 and has announced it will keep
investing in it in the coming years. So no longer cloud first like we have seen in previous
years, but a focus on hybrid possibilities, or, if you prefer, on-premises only.

For the first time in many years, an exciting version of SQL Server is coming in terms
of BI. The main topics are:

  • Hybrid BI (Cloud/On-Premises)

  • Modern Reports

  • Enhanced Analysis

  • Mobile BI

Below is an overview of the new BI-related features per SQL Server 2016 service or product.
As the length of this list shows, SQL Server 2016 will be a massive BI version!

Analysis Services Tabular

  • Enhanced modeling capabilities in the semantic layer

    • Many-to-many relationships

    • Bi-directional cross filtering. This means you can not only filter on the 1 side
      of a 1-to-many relationship in your tabular model, but also on the many side. For
      example, take two connected tables, Sales → Product:

      • Product: product, product category

      • Sales: sales date, connection to the Product table

        You can now select the products sold while filtering on sales date (the many
        side) and on product category (the 1 side) at the same time. This is not
        possible in today’s version of SSAS Tabular.

  • Time intelligence

    • Date/time columns are automatically converted to rich date/time tables, covering
      the range from the column’s MIN date to its MAX date

  • New DAX functions (see the sketch after this list)

    • A lot of new functions for calculations that currently require quite complex formulas,
      like current time, date difference, percentile, product, geomean, median, etc.

  • Performance improvements

    • For end users

      • Query engine optimized

    • For developers

      • Metadata operations; modeling-related operations are much faster

    • For data processing

      • Parallel partition processing

  • Expose on-premises tabular models in the cloud (hybrid) → a Power BI feature,
    already possible today with SQL Server 2012.
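
To give an idea of what the new DAX functions mentioned above look like, here is a minimal sketch. The table and column names (Sales, [Order Date], [Ship Date], [Sales Amount]) are hypothetical examples, not taken from any announcement:

-- Average number of days between order and ship date (DATEDIFF is one of the new functions)
Avg Days To Ship :=
    AVERAGEX ( Sales, DATEDIFF ( Sales[Order Date], Sales[Ship Date], DAY ) )

-- Median and geometric mean of the sales amount, which previously required complex workarounds
Median Sales Amount := MEDIAN ( Sales[Sales Amount] )
Geo Mean Sales Amount := GEOMEAN ( Sales[Sales Amount] )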

Analysis Services Dimensional

  • Netezza as a data source (Netezza Data Warehouse | IBM – NDM Technologies)

  • Performance improvements

    • Unnatural hierarchies

    • Distinct counts

    • Other performance improvements in areas where multidimensional is not performant
      at the moment

  • DBCC (Database Consistency Checker) support: checks the logical and physical
    integrity of objects in the specified database

  • Expose on-premises multidimensional cubes in the cloud with Power BI (hybrid)

SQL Server Database Engine

  • Integration of the R analytical engine: predictive analytics capabilities via T-SQL queries

  • PolyBase available without the need for PDW, which makes it possible to query both
    structured relational SQL data and unstructured Hadoop data through T-SQL statements

  • Data encryption for stored data and data in motion

  • Row-level security

  • Updates to the in-memory OLTP engine, for example updateable in-memory nonclustered
    columnstore indexes

  • Parsing and storing native JSON data (see the sketch after this list)

  • XEvents-based monitoring in Management Studio
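
As a small illustration of the JSON and row-level security features mentioned above, here is a minimal T-SQL sketch. The tables and columns (dbo.Product, dbo.Sales, SalesRep) are hypothetical; FOR JSON, OPENJSON and CREATE SECURITY POLICY are the announced constructs:

-- Return a query result as JSON text
SELECT ProductID, ProductName, ListPrice
FROM dbo.Product
FOR JSON PATH;

-- Shred a JSON string back into a relational rowset
DECLARE @json NVARCHAR(MAX) =
    N'[{"ProductID":1,"ProductName":"Bike","ListPrice":250.00}]';

SELECT *
FROM OPENJSON(@json)
WITH (
    ProductID   INT           '$.ProductID',
    ProductName NVARCHAR(100) '$.ProductName',
    ListPrice   DECIMAL(10,2) '$.ListPrice'
);

-- Row-level security: filter dbo.Sales so users only see their own rows
CREATE FUNCTION dbo.fn_SalesFilter (@SalesRep AS sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result WHERE @SalesRep = USER_NAME();
GO

CREATE SECURITY POLICY dbo.SalesPolicy
ADD FILTER PREDICATE dbo.fn_SalesFilter(SalesRep) ON dbo.Sales
WITH (STATE = ON);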

Reporting Services

  • New look and feel, and the possibility to apply themes and branding using CSS

  • New visualizations and chart types like tree maps and sunbursts

  • Improved, flexible parameter panel with support for:

    • Autocomplete

    • Search

    • Hierarchical tree display

  • Runs in all modern browsers, on both desktops and tablets (any device)

  • Integration of the R analytical engine

  • Power Query as a data source

  • Pin on-premises SSRS reports to Power BI dashboards (hybrid)

Integration Services

  • High availability support

  • Power Query integration

  • Azure Data Factory integration (hybrid)

    • Execute on-premises SSIS packages from Azure Data Factory

    • Azure Data Factory data flow task

    • Azure storage connector

    • Azure cmdlets

  • OData 4.0 support

  • Hadoop File System (HDFS) support

  • JSON support

  • New Oracle/Teradata connector (4.0)

  • Incremental deployment options

  • Custom logging levels

  • SSIS package templates to reuse ETL code

Mobile BI

  • In the cloud with Power BI

    • Power BI app for Windows Phone (coming soon) and iOS

  • On-premises with Datazen Server

    • Now available for free for SQL Server Enterprise Edition customers (2008 or later)

    • All major platforms: Windows Phone, Android, iOS

    • Beautiful interface and data visualizations

    • Layouts can be optimized for phone, tablet and laptop

SharePoint vNext integration

  • Edit mode for PowerPivot Excel workbooks in the browser

  • Support for Excel vNext (Office 2016) DAX functions

Master Data Services

  • Improved performance for large models

  • Row-level compression per entity

  • Improved user interface

  • Configurable retention settings

  • Enhanced security possibilities for read, write, delete and create operations, and
    support for multiple system administrators with specific permissions

  • The Excel add-in is 15 times faster and has been updated to support bulk entity-based
    staging operations

Visual Studio

  • Database and BI project types merged into one Visual Studio

  • A new scripting language for tabular models. Currently tabular models are wrapped
    in multidimensional constructs, and when you deploy, they are reverse engineered
    back into the tabular model. The new native language for tabular will be easy to
    understand, modify and deploy.

  • The SSIS designer supports previous versions of SQL Server

Of course there is also a lot of exciting news coming from the cloud side of
Microsoft BI; for example, the Azure Data Lake has been announced, following the
principles of my blog post about the relational data lake. You can expect a post about the
Azure Data Lake on this blog soon!



P.S. Don’t forget to suggest and vote for feature requests for SQL Server yourself at: http://aka.ms/SqlBiUserVoice

 

Implement SSAS MD cell security using dimension security with blazing performance

SQL Server Analysis Services (SSAS) Multidimensional (MD) is a great product, and in my opinion it’s still the only real option to go for when building complex enterprise BI solutions. It’s still very fast when implemented correctly and it’s mature and therefore very stable.

 

The only real downside is cell security, which, in my opinion, is useless. It makes performance drop dramatically because it evaluates security cell by cell. I have seen reports that run in a few seconds without cell security take 20 minutes with cell security implemented! Try to explain that to your customer... it’s obvious that you can’t.

 

Quite a few workarounds have existed for quite a while:

 

  • Make your measures invisible and create MDX calculations that either show or hide the measure value based on a dummy dimension. Drawbacks are that the measure is hidden but not really secured, and you need to create and maintain dummy dimensions/attributes. http://www.sqljason.com/2010/04/implementing-measure-security-in-ssas.html
  • SSAS offers us the “Measures dimension”, which gives you the possibility to secure measures like dimension members. Great, this is just what we need, but implementing it and creating MDX calculations based on the secured measures will give you errors for SSAS roles that do not have access to these measures. This is caused by the fact that the MDX script is executed after the security has been applied. So if a user who doesn’t have access to a measure (set by dimension security) tries to connect to the cube while the MDX script contains a reference to this secured measure, an error is raised.
  • Create a hidden dimension on which you apply dimension security, as described here: http://cwebbbi.wordpress.com/2011/12/22/replacing-cell-security-with-dimension-security/ Unfortunately this doesn’t work for measures.

 

For a few years I have implemented a workaround that uses the Measures dimension. To prevent errors in the MDX script as described in the second workaround above, I added IsError() checks around all the parts of my calculations that could raise an error. For example, a simple calculation like Quantity * Price, where the Price measure could be secured, looks like this:

 

CREATE MEMBER CURRENTCUBE.[Measures].[Amount] AS
  IIf (
    IsError ( [Measures].[Price] ),
    NULL,
    [Measures].[Quantity] * [Measures].[Price]
  ),
  NON_EMPTY_BEHAVIOR =
  {
    IIf (
      IsError ( [Measures].[Price] ),
      NULL,
      [Measures].[Price]
    )
  } ;

 

This calculation will not raise an error but return NULL if a user doesn’t have permission on the Price measure. Quite straightforward; the only trick here is the IsError() check in the NON_EMPTY_BEHAVIOR. Of course you have to do this in the script view of your SSAS calculations tab. In the form view it will look a bit strange, but it doesn’t cause any errors:

(screenshot: the calculation shown in the form view of the calculations tab)

 

Using this approach you are able to implement measure security using dimension security, while you are still able to create MDX calculations with NON_EMPTY_BEHAVIOR based on these secured measures. This made the report I talked about before run in 5 seconds instead of 20 minutes. I have used this approach for quite some years now, and it has always been sufficient.

 

At the moment I’m working on a very big enterprise cube that contains almost 100 dimensions, 25 measure groups and millions of records. I implemented measure security like this, and after a while I noticed some really heavy queries (reports) took quite some time, for example 30 seconds or 1 minute. Reason enough to rethink this approach. When I was at the SQL Server Days in Belgium last year, I discussed this approach with Chris Webb. Of course he understood what I was doing here, and we agreed to email about this approach later on. Chris pointed me to a blog post of his from some time ago: http://cwebbbi.wordpress.com/2008/01/22/dimension-security-tips-tricks-and-problems/. In this post Chris explains how to create secured calculations while using dimension security, by using named sets and scope statements:

 

CREATE MEMBER CURRENTCUBE.Measures.Test AS
  NULL ;

CREATE SET myset1 AS
  IIf (
    IsError ( StrToMember ( "Measures.[Internet Sales Amount]" ) ),
    { },
    { Measures.Test }
  ) ;

SCOPE ( myset1 ) ;
  This = Measures.[Internet Sales Amount] ;
END SCOPE ;

CREATE SET myset2 AS
  IIf (
    IsError ( StrToMember ( "Measures.[Internet Sales Amount]" ) ),
    { Measures.[Internet Tax Amount] },
    { Measures.[Internet Sales Amount], Measures.[Internet Tax Amount] }
  ) ;

SCOPE ( myset2 ) ;
  This = Measures.CurrentMember * 2 ;
END SCOPE ;

 

I did not know about this approach, so I tried it out. To get good test results I used a very heavy query that hopefully nobody would ever run: a calculation over all sales data (millions of records) against all customers, all products and all time. Unfortunately, the results were not very pleasing yet:

 

1. Calculation with IIf/IsError checks in both the calculation part and the non-empty part: ran in 50 seconds.

 

2. Calculation with the named set/scope approach: ran longer than 5 minutes, after which I stopped the query.

 

So Chris and I emailed again, and I wondered whether the lack of NON_EMPTY_BEHAVIOR (NEB) could be the cause of the slow performance of the named set/scope approach. Chris said that since SSAS 2008 the use of NEB is no longer necessary, something I had heard before. I always had the idea that adding NEB did make a difference in some cases, so I never stopped using it. So I more or less merged Chris’s approach and mine, adding the NEB using an IsError() function, and I replaced the StrToMember check with a direct reference to the measure in the named sets, because in my opinion that part was not necessary:

IsError ( StrToMember ( "Measures.[Internet Sales Amount]" ) ) became: IsError ( Measures.[Internet Sales Amount] ).

 

The result was almost unbelievable: the query now took not 5 minutes, not 50 seconds, but only 2 seconds! So adding NEB can still make a huge difference sometimes! After finding this out, I contacted Chris again and he was also very surprised by the result.

 

I have created an MDX calculation template with some explanation added as comments. The big performance gain is achieved because the IsError() check does not need to be executed every time the calculation is evaluated: it is already executed when the static named set is created (a single time), and afterwards that value just seems to be reused. I think the same applies to the IsError() check in the NEB; it seems to be executed only once and then reused. Anyway, the result is that your MDX calculations are just as fast with security applied as without. Something that was not possible for a long time for me, and I guess for everybody else too. Just use the template below as a standard for creating your calculations and benefit from this solution:

 

-- Calculation Template:
-- ----------------------------------------

-- 1: CHECKED CALCULATION: the secured "end product" used in other calculations
CREATE MEMBER CURRENTCUBE.[Measures].[Quantity_CHECKED] AS
  NULL
  , VISIBLE = 0 ;

-- 2: CHECK NAMED SET: the actual permission check is performed here
CREATE SET [Quantity_CHECK] AS
  IIf (
    IsError ( [Measures].[Quantity] )
    , { }
    , { [Measures].[Quantity_CHECKED] }
  ) ;

-- 3: SCOPE: assigns either the measure or "nothing" to the CHECKED calculation (1)
SCOPE ( [Quantity_CHECK] ) ;
  This = [Measures].[Quantity] ;
END SCOPE ;

-- 4: Second secured calculation created here
CREATE MEMBER CURRENTCUBE.[Measures].[Price_CHECKED] AS
  NULL
  , VISIBLE = 0 ;

CREATE SET [Price_CHECK] AS
  IIf (
    IsError ( [Measures].[Price] )
    , { }
    , { [Measures].[Price_CHECKED] }
  ) ;

SCOPE ( [Price_CHECK] ) ;
  This = [Measures].[Price] ;
END SCOPE ;

-- 5: Calculation based on the secured calculations.
-- NON_EMPTY_BEHAVIOR set with IIf/IsError on the measure. Using calculations inside the NEB is not possible.
CREATE MEMBER CURRENTCUBE.[Measures].[Sales Amount] AS
  [Measures].[Quantity_CHECKED] * [Measures].[Price_CHECKED]
  , NON_EMPTY_BEHAVIOR =
  {
    IIf (
      IsError ( [Measures].[Price] )
      , NULL
      , { [Measures].[Price] }
    )
  }
  , VISIBLE = 1 ;
-- ----------------------------------------

 

Your calculations tab in SSAS will then look like this:
(screenshot: the calculations tab with the template applied)

While investigating the inner workings of the first part of the script, I added some comments to make it easy for myself to remember. I guess they are also useful for anyone who wants to know what’s really happening here:

CREATE MEMBER CURRENTCUBE.[Measures].[Quantity_CHECKED] --> The calculation is initially NULL and will be filled based on user rights later on.

AS NULL,

VISIBLE = 0; --> Make it invisible, as this is a purely technical calculation that should not be used by end users. End users can use the measure this calculation is based on, if they have permission (set by dimension security on the Measures dimension).

CREATE SET [Quantity_CHECK] AS --> The named set is used to perform the actual check for user rights. As this is a static named set, the check is executed only once, at initialization, and is then reused.

IIf(IsError([Measures].[Quantity]) --> An IsError on the measure that needs to be checked will raise an error if a particular user has no rights on it, because in that case the measure simply does not exist.

, {} --> If an error was raised by the IsError function, set the value of this set to nothing: {}.

, {[Measures].[Quantity_CHECKED]}); --> If no error was raised, the user has rights to access the measure; in that case set the value of the set to the CHECKED calculation created in the previous step.

SCOPE([Quantity_CHECK]); --> Scope over the CHECK set, which is either {} (nothing) or the CHECKED calculation (based on user rights).

This=[Measures].[Quantity]; --> Assign the measure to This. If the user has rights, this passes the measure through to the CHECKED calculation; otherwise the assignment goes to {}/nothing instead of to the CHECKED calculation.

END SCOPE; --> So this SCOPE either passes the measure to the CHECKED calculation or to nothing (the empty set {}).

Will SSAS, Cubes and MDX be abandoned because of the BI Semantic Model?

At the PASS Summit that is happening in Seattle at the moment Microsoft announced the “BI Semantic Model” (BISM).

It looks like BISM is something like the UDM that we now know from SSAS. While the UDM is the bridge from relational data to multidimensional data, BISM is the bridge from relational data to the column-based VertiPaq engine. Some compare BISM to Business Objects universes.

The next version of SSAS will be able to run in either the old “UDM” mode or the new “BISM” mode; a combination is not possible. Of course this will have some radical consequences, because there are a few major differences between the two modes:

  • The switch from multidimensional cubes to the in-memory Vertipaq engine
  • The switch from MDX to DAX

So will multidimensional cubes and MDX be deprecated? No, not really: SSAS as we know it now will remain a product in the future and will stay supported. But it looks like Microsoft will concentrate on BISM, mainly because multidimensional cubes and MDX are very difficult to learn. Microsoft wants to make BI more approachable and less difficult, just like with self-service BI.
I would say it’s really time to start learning PowerPivot and DAX right now, if you have not already done so. If Microsoft focuses on the new BISM/VertiPaq technology, that will be the future if you ask me.

Chris Webb wrote an interesting article about BISM and it looks like he is not very enthusiastic about the strategy Microsoft takes here because this could be the end of SSAS cubes within a few years: “while it’s not true to say that Analysis Services cubes as we know them today and MDX are dead, they have a terminal illness. I’d give them two, maybe three more releases before they’re properly dead, based on the roadmap that was announced yesterday.”

What’s also very interesting is the comprehensive comment on this article from Amir Netz. He explains BISM and UDM will live together in Analysis Services in the future and MOLAP is here to stay: “Make no mistake about it – MOLAP is still the bread and butter basis of SSAS, now and for a very long time. MDX is mature, functional and will stay with us forever.”

Read the article from Chris Webb here and make sure you don’t miss the comment from Amir!

SSAS – Clear SSAS cache with an SSIS package

Often I see developers on different forums asking how they can clear the SSAS cache. You can achieve this by restarting SSAS, which is done quite often.
Restarting is not necessary though; it’s possible to clear the cache of an SSAS database with an XMLA script. Use the script below and replace “YourSsasDatabaseId” with the ID of your SSAS database and “YourSsasCubeId” with the ID of your SSAS cube.

<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>YourSsasDatabaseId</DatabaseID>
    <CubeID>YourSsasCubeId</CubeID>
  </Object>
</ClearCache>

Please note that you must use the IDs of the database/cube and not the names! You can find the IDs of these objects in the properties in BIDS or in SSMS. As you can see in the screenshot below, the cube/database name is not always the same as the ID. This particular cube has ‘Finance’ as its name and ‘DW’ as its ID!

(screenshot: cube properties)

You can run this script manually from SSMS or automatically using SSIS. To run it from SSMS, right-click your SSAS database and choose New Query > MDX. Although this is an XMLA script and not MDX, it can be executed as an MDX script.

Running this script from SSMS is useful, but I think most developers would want to clear the cache automatically. Fortunately SSIS has an ‘Analysis Services Execute DDL Task’ that can execute this script!

(screenshot: the Analysis Services Execute DDL Task)

Just configure the task for your SSAS instance and copy/paste the script into the SourceDirect box as shown below.

(screenshot: the task editor with the XMLA script pasted in the SourceDirect box)

With the possibilities SSIS offers, you can now clear the cache anytime you want. For example, you could run it right after you have processed the cube. A lot of developers restart the SSAS service after processing, but with this script that is no longer necessary!

Thanks to Jamie Thomson for sharing this script on his blog!

SSAS – Speed up dimensions using a NULL default cube measure

Recently I faced some problems with the performance of SSAS dimensions. The cube users were using a large dimension with more than 100,000 members that didn’t perform well.
They tried to add a leaf-level dimension member to an Excel 2007 pivot table. When dragging this dimension member onto the rows, they had to wait very long before the members were returned from SSAS and shown on the screen.

After some emailing with Chris Webb, he thought this could have something to do with the default cube measure. It seems that when you query dimension members without picking a measure, SSAS takes the first measure from the first measure group as its default measure. So even when you only query a dimension, SSAS is still using a measure!

You can find out which measure SSAS will take with the following query:

SELECT [Measures].DefaultMember ON 0 FROM [YourCube]

In this case the default measure that SSAS picked was from a measure group that was not connected to the dimension that was giving the performance problems. This, plus the fact that returning NULL 100,000 times is faster than returning some big float number, explained the performance issue.

Chris advised me to use a NULL calculation as the default measure, as he explains on his blog. The only problem here is that you can’t select a calculation as the default measure in the cube properties in BIDS (only normal measures are allowed):

(screenshot: the default measure in the cube properties)

Fortunately, pasting this MDX statement into the calculations script (use the script view on the calculations tab) did the trick. Just paste it right under the CALCULATE command (or somewhere else):

CREATE MEMBER CURRENTCUBE.MEASURES.UseAsDefaultMeasure
 AS NULL,
VISIBLE = 1;

ALTER CUBE CURRENTCUBE UPDATE DIMENSION Measures,
DEFAULT_MEMBER = [Measures].UseAsDefaultMeasure;

When you return to the form view, it should look like this:

(screenshot: the calculations script with the default measure)

The next time you deploy the cube, SSAS will use the [UseAsDefaultMeasure] calculation as its default measure.


Test results:

I now had a nice opportunity to test the cube performance with and without the NULL calculation as default cube measure. I cleared the cache before each query to get a good comparison.

In SSMS I used the following query:

SELECT [Measures].DefaultMember ON 0,
[Dimension].[AttributeHierarchy].Members ON 1
FROM [Cube]

The results are shown in the bar-chart below:

  • The query with the NULL default measure took 13 seconds.
  • The query without the NULL default measure took 4 minutes and 35 seconds.

(bar chart: query duration with and without the NULL default measure)

Conclusion:

Using a NULL default measure can be much faster when querying dimension members without selecting a measure. In this particular case the result was returned more than 20 times faster.

Thanks to Chris Webb for helping me out on this issue!

Start / Stop SQL Server, SSIS, SSAS, SSRS and SQL Server Agent at once with a batch file

A lot of developers have SQL Server and its different services running on their PC or notebook to develop or test BI solutions. Unfortunately this slows down your system quite a lot. To speed things up when not using SQL Server, I used to stop and start each service manually quite often.

Recently I found out that it’s possible to start and stop all services at once with a simple batch file. It now only takes a couple of seconds instead of a few minutes and some annoying steps.

Copy/paste the following into a .txt file and rename it to .bat to make it a batch file; execute it by double-clicking the file.

START SCRIPT:

NET START "SQL Server Agent (MsSqlServer)"
NET START "MsSqlServer"
NET START "MsSqlServerOlapService"
NET START "ReportServer"
NET START "SQL Server Integration Services"

STOP SCRIPT:

NET STOP "SQL Server Agent (MsSqlServer)"
NET STOP "MsSqlServer"
NET STOP "MsSqlServerOlapService"
NET STOP "ReportServer"
NET STOP "SQL Server Integration Services"

SSAS – MOLAP, ROLAP and HOLAP storage types

A big advantage of a BI solution is the existence of a cube. Data and aggregations are stored in an optimized format to offer very fast query performance.
Sometimes, a big disadvantage of storing data and aggregations in a cube is the latency it implies. SSAS processes data from the underlying relational database into the cube. After this is done, the cube is no longer connected to the relational database, so changes to this database will not be reflected in the cube. Only when the cube is processed again will the data in the cube be refreshed.

SSAS 2005 gives you the possibility to choose different storage types for the following objects:

  • Cubes
  • Partitions
  • Dimensions


MOLAP (Multidimensional Online Analytical Processing)
MOLAP is the most used storage type. It’s designed to offer maximum query performance to the users. Data AND aggregations are stored in an optimized format in the cube. The data inside the cube is only refreshed when the cube is processed, so latency is high.

ROLAP (Relational Online Analytical Processing)
ROLAP does not have the high-latency disadvantage of MOLAP. With ROLAP, the data and aggregations are stored in relational format. This means that there will be zero latency between the relational source database and the cube.
The disadvantage of this mode is performance: this type gives the poorest query performance because no objects benefit from multidimensional storage.
 

HOLAP (Hybrid Online Analytical Processing)
HOLAP is a storage type between MOLAP and ROLAP. Data is stored in relational format (ROLAP), so there is also zero latency with this storage type.
Aggregations, on the other hand, are stored in multidimensional format (MOLAP) in the cube to give better query performance. SSAS listens to notifications from the source relational database; when changes are made, SSAS gets a notification and processes the aggregations again.
With this mode it’s possible to offer zero latency to the users, but with medium query performance compared to MOLAP and ROLAP.

The different storage types of SSAS:

           Data storage          Aggregations storage   Query performance   Latency
MOLAP      Cube                  Cube                   High                High
HOLAP      Relational database   Cube                   Medium              Low (none)
ROLAP      Relational database   Relational database    Low                 Low (none)

Conclusion

SSAS offers three storage types that give you all the flexibility you need. You can choose between high performance and high latency on one side (MOLAP) and lower performance but low latency (ROLAP) on the other, with HOLAP as a middle way.

SSAS – Visualize Attribute Relationships

Complicated attribute relationships are difficult to get an overview of in SSAS 2005. Microsoft solved this problem in SQL Server 2008 with a graphical view of the attribute relationships. Unfortunately, most of us still work with SSAS 2005 and have to wait a couple of months before we can use the benefits of SSAS 2008.
Until SQL Server 2008 is fully available, it’s good to know there is a way to get some of the new features in SSAS 2005: BIDS Helper.
It is free to use and offers a lot more than just the visualization of your attribute relationships:
  • Aggregation Manager
  • Calculation Helpers
  • Deploy MDX Script
  • Dimension Health Check
  • Printer Friendly Dimension Usage
  • Show Extra Properties
  • Update Estimated Counts
  • Visualize Attribute Lattice