Monday, August 1, 2011

What is a metric (indicator) and what are the different types of metrics

Overview

The main role of any BI application is to show information based on some measurements. These measurements are called metrics. E.g., you measure how much you have sold, so total sales revenue is your metric. In this article the focus is on the different types of metrics and when and how to use the proper one in your application.

Details
Most of the time when you are designing BI apps, you have a few things to consider, such as what type of indicator you will choose to convey the knowledge in your app.

There are three main types of metrics that you can use in your business intelligence application:

1. Leading Indicators: These are indicators usually used to predict or forecast an activity. Let us see where we can use one practically. Consider that we have a requirement to create a BI app for a sales team. They want to measure activities like how many touches are required to convert a prospect into a customer. This is a leading indicator: you are not reporting on activity that has already happened; it shows how many calls/activities you need to do to achieve your goal.

2. Lagging Indicators: These are indicators reporting activity that happened in the past, e.g., financial amounts such as last year's sales revenue, growth, etc. Generally these indicators show where you stand currently, using past data. Sometimes people refer to these as Key Result Indicators as well, since they work on results or activity that happened in the past and report on them.

3. Key Performance Indicators (KPI): These indicators are somewhat like leading indicators; however, they have definite targets attached to them. A leading indicator will say 10 calls are needed to convert a prospect into a customer, and a lagging indicator will say you have made 5 calls; but with respect to quota, requirement, and time, you don't necessarily know whether that is good or bad or whether you are on the right track. KPIs are used for that: they show your performance in terms of being on track, behind, or ahead. E.g., if you want to see how sales revenue is doing with respect to sales quota (see the sketch after this list).
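
As a rough illustration, the query below computes a quota-attainment KPI for the current quarter. This is only a minimal sketch: the fact_sales and dim_quota tables and their columns are hypothetical names, not part of any specific application.

-- a KPI comparing actual revenue against quota for the current quarter
-- (fact_sales and dim_quota are hypothetical tables)
SELECT q.sales_rep_id,
       SUM(f.revenue)                                  AS actual_revenue,
       q.quota_amount,
       ROUND(100 * SUM(f.revenue) / q.quota_amount, 1) AS pct_of_quota
FROM   fact_sales f
JOIN   dim_quota  q ON q.sales_rep_id = f.sales_rep_id
WHERE  f.sale_date >= TRUNC(SYSDATE, 'Q')
GROUP  BY q.sales_rep_id, q.quota_amount;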

Conclusion
The proper metric is chosen based on which application you are designing. If it is BAM (Business Activity Monitoring), then Lagging Indicators or KPIs will convey the information; if it is sales forecasting, then Leading Indicators can be used.

Tuesday, March 1, 2011

Design aspects of Data Staging Layer

By: Milind Zodge

Overview

You usually have a two- to three-layer approach for a data warehousing solution. One of these layers is called the "staging layer".

Data from various sources is staged here temporarily until it is processed and transformed into the data warehouse. Data in this layer can be relational in nature.

Different data flows of data staging

Pull mode data staging



Push mode data staging

As shown in the above diagrams there are two ways data can flow into the data staging layer: data pull mode and data push mode. What I mean by this is, in pull mode you define a process to read from the data source, either using a date range or reading from a delta. The process reads the data and puts it into the staging tables. In push mode, the delta or changed data is transferred into the staging tables by means of middleware.
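
To make pull mode concrete, here is a minimal sketch of a date-range extract. The source table orders, the database link src_db, and the staging table stg_orders are hypothetical names used only for illustration.

-- pull mode: read yesterday's changes from the source over a database link
-- and put them into the staging table (all names are illustrative)
INSERT INTO stg_orders (order_id, order_date, amount, last_update_date)
SELECT o.order_id, o.order_date, o.amount, o.last_update_date
FROM   orders@src_db o
WHERE  o.last_update_date >= TRUNC(SYSDATE) - 1
AND    o.last_update_date <  TRUNC(SYSDATE);
COMMIT;
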
Those were the different ways data flows into data staging. Let us now see the different techniques you can use for this environment.
Different data staging techniques
Store and forward
In this technique, data is stored in the staging area and then used for transformation and loading into the Data Warehouse environment, similar to ELT.
Forward and Forward
In this technique, data is read directly from the ODS and inserted or updated directly in the Data Warehouse environment.
Different types of data loads
Full data load
This is used for the first-time data load into the data warehouse. Generally you fetch all the needed data from the source into the staging tables. This is a time-consuming process, and the processing time will gradually increase because of the data growth rate in the source system.
Delta data load
This is used to extract only the changed/new records, as sketched below.
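
A delta extract is often driven by a watermark. The sketch below assumes a hypothetical control table etl_control that stores the last successfully extracted timestamp, plus the same illustrative orders/stg_orders names used earlier.

-- delta load: extract only rows changed since the last extract
INSERT INTO stg_orders (order_id, order_date, amount, last_update_date)
SELECT o.order_id, o.order_date, o.amount, o.last_update_date
FROM   orders@src_db o
WHERE  o.last_update_date > (SELECT last_extract_date
                             FROM   etl_control
                             WHERE  table_name = 'ORDERS');

-- advance the watermark to the newest timestamp actually extracted
UPDATE etl_control
SET    last_extract_date = (SELECT MAX(last_update_date) FROM stg_orders)
WHERE  table_name = 'ORDERS'
AND    EXISTS (SELECT 1 FROM stg_orders);
COMMIT;
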
Different ways to store data in staging
File
Data can be stored in a file which can then be transformed and loaded into data warehouse.
Database
Staging data can be stored in the database table(s) either permanently or for some time till it gets loaded into the Data Warehouse.
Conclusion
Before designing the data staging layer one has to consider these aspects of the design: What should the data flow technique be? What should the data load technique be? What should the data storage technique be?

Sunday, August 1, 2010

CDC Technique for dimension table which is based on a multi-table query

By: Milind Zodge

In a Data warehousing project you have dimension and fact tables. Usually, if the data is coming from a single table, we can use the approach I presented in the last article, "Change data capture for Oracle 9i database without adding triggers on the source table".
There are also plenty of other options available, like CDC, using timestamps, etc. However, the problem comes when you have a dimension table which is constructed from a multi-table query. In that case none of the above approaches works directly.
Overview
Consider the case of a Sales Representative dimension. This dimension is based on several attributes like area, login, etc., and these attributes come from different tables. Now we will see what we can use to do an incremental update of this table.
The examples shown in this article are for an Oracle database; however, the same concept can be used with other database engines.

Step 1: Creating a Function which will return hash value
We will be using a hash value technique to compare the rows. There is really one more option: compare each field individually and see if any one of them has changed, and that way determine the changed row.

However, the hash value method is faster than that approach, and the code is more manageable with fewer conditional statements. Both methods produce the same result, though.
Create a function such that it will read a value as a text parameter and will return a hash value for it.

e.g.
CREATE OR REPLACE FUNCTION dim_hashvalue (p_input_str VARCHAR2) RETURN VARCHAR2 IS
  l_str VARCHAR2(32);
BEGIN
  -- MD5 returns a 16-byte value; convert it to hex so it can be stored and
  -- compared safely as plain text in a VARCHAR2 column
  l_str := RAWTOHEX(UTL_RAW.CAST_TO_RAW(
             dbms_obfuscation_toolkit.md5(input_string => p_input_str)));
  RETURN l_str;
END dim_hashvalue;
/

Step 2: Add a new column in the dimension table to hold a hashvalue
Create a new column "hashvalue" in the dimension table, and update its value using the function created above, applied to the required columns.
Make sure you use the same set of columns, in the same sequence, in the ETL logic when creating the hash value for a new row.
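
For example, assuming a hypothetical dim_sales_rep dimension whose tracked attributes are sales_rep_area and sales_rep_login, the column can be added and back-filled like this (the '|' separator keeps adjacent values from running together):

-- add the hash column (32 characters to hold the hex MD5 value)
ALTER TABLE dim_sales_rep ADD (hashvalue VARCHAR2(32));

-- back-fill it from the tracked attribute columns
UPDATE dim_sales_rep
SET    hashvalue = dim_hashvalue(sales_rep_area || '|' || sales_rep_login);
COMMIT;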

Step 3: Write ETL code
In the ETL code, read the records from this multi-table SQL in a cursor loop. For each record, compute the hash value. Get the old hash value by selecting the record from the dimension table using its key. If no record exists, insert the record. If the record exists, compare the two hash values: if they differ, update the record; otherwise skip it.
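
A minimal sketch of that loop is shown below. The source tables src_rep and src_area, the dimension dim_sales_rep, and all column names are hypothetical; the point is the compare-hash-then-insert/update/skip pattern.

DECLARE
  l_new_hash dim_sales_rep.hashvalue%TYPE;
  l_old_hash dim_sales_rep.hashvalue%TYPE;
BEGIN
  -- the multi-table query that builds the dimension rows
  FOR rec IN (SELECT r.rep_id, r.rep_login, a.area_name
              FROM   src_rep  r
              JOIN   src_area a ON a.area_id = r.area_id) LOOP
    l_new_hash := dim_hashvalue(rec.rep_login || '|' || rec.area_name);
    BEGIN
      SELECT hashvalue INTO l_old_hash
      FROM   dim_sales_rep
      WHERE  sales_rep_id = rec.rep_id;
      IF l_old_hash <> l_new_hash THEN
        -- attributes changed: update the row and its hash
        UPDATE dim_sales_rep
        SET    sales_rep_login = rec.rep_login,
               sales_rep_area  = rec.area_name,
               hashvalue       = l_new_hash
        WHERE  sales_rep_id = rec.rep_id;
      END IF;  -- unchanged rows are skipped
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        -- no existing row: insert a new dimension record
        INSERT INTO dim_sales_rep
          (sales_rep_id, sales_rep_login, sales_rep_area, hashvalue)
        VALUES
          (rec.rep_id, rec.rep_login, rec.area_name, l_new_hash);
    END;
  END LOOP;
  COMMIT;
END;
/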

Conclusion
This way you can achieve change data capture for a dimension table that is based on a multi-table select query.

Monday, March 1, 2010

Find out how to achieve change data capture for Oracle 9i database without adding triggers on the source table

By: Milind Zodge


In a Data warehousing project you need to pull data from different environments. The sources can be different databases, or even different kinds of data sources, like a combination of a database with flat files. If the source is purely a database, chances are that the source and target have different database versions, or are even different kinds of databases, like SQL Server, Oracle, etc. In this article I am focusing on getting data from an Oracle 9i database.

This article will give you another way of pulling changed data without modifying the source table structure and without adding triggers on the source table. It is meant for any Database Developer, Data Warehouse Developer, Data Warehouse Architect, Data Analyst, Manager, ETL Architect, or ETL Designer who wants to pull changed data for their project.

This article does not cover the details of how to create a materialized view log and a materialized view, nor the fundamentals of how they work; it just explains these objects briefly and how they are used in this solution. You can get more information on materialized views and materialized view logs from Oracle's web site.

Overview

Consider a case with Oracle 9i as the source database and Oracle 10g as the target database, where we want to pull only the changed records from the source table. There are three ways we can do this: first, add modified and inserted date columns to the source table and use them in the ETL script to incrementally fetch and process the data; second, add DML triggers on the source table that insert a record into a stage table; or third, use Oracle CDC to fetch the incremental data. In the first two cases you need to modify the table object. If you want to pull data from several systems, this can become a time-consuming effort: modifying the table structure or adding triggers may set off a series of meetings, because most of the time different departments in the company have their own schedules for developing applications and releasing new features. Since this modifies the object layout, it needs to be prioritized and go through the standard project lifecycle, including impact analysis. All these required activities take time, which will affect your project. If you are in a fix and want to get the changed data without modifying the existing table structure and without adding any triggers on the existing table, you will find this article helpful.

We needed to pull data from different databases into the Data Warehouse. All these databases had different versions, so using the asynchronous CDC feature of 10g was not an option. Adding triggers was a huge effort, as it would affect the online transaction processing system. So the challenge was to find a way to build an incremental load process for the data warehouse load that would save tremendous processing time. To overcome this problem we had two solutions. The first was to store the data in stage1, read a snapshot of data from the source system, compare it with stage1, load the changed or new records into stage2, and then use stage2 to transform and load the data into the Data Warehouse. This was a costly effort and not a scalable solution; the processing time would keep growing as more data gets loaded into the system.

The other solution was to use a materialized view log. This log is populated from the transaction activity on the table and can be used by materialized views. It is a three-step process: the first step is performed in the source database and the other two on the target database.

Step 1: Creating a Materialized Log in the source database

Create a materialized view log on the desired table. A materialized view log must be in the source database, in the same schema as the table, and a table can have only one materialized view log defined on it. There are two ways you can define this log: on rowid or on primary key. The log's name will be MLOG$_table_name, which is an underlying table. The log can hold primary keys, row ids, or object ids, and can also have other columns to support the fast refresh option of a materialized view created on top of it. When data changes are made to the master table, Oracle records these changes in the materialized view log as defined. The function of this log is to record the DML activity performed on the underlying table. E.g., CREATE MATERIALIZED VIEW LOG ON table_name WITH an option such as OBJECT ID, PRIMARY KEY, or ROWID.
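
Concretely, assuming a hypothetical source table orders with a primary key, the statement run in the source (9i) schema could look like this:

-- materialized view log on the source table, tracked by primary key
CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;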

Step 2: Creating a Materialized View in Target Database using this log

Create a materialized view based on the materialized view log created above. The materialized view is a replica of the desired table. It is like a table and needs to be refreshed periodically; you can define the needed refresh frequency to fast refresh this view based on the materialized view log. Whenever a DML operation is performed on the defined table, that activity is recorded in the log, which lives in the Oracle 9i source database. We then define a materialized view on this log in our target system, Oracle 10g. This view will only pull in the changes recorded in the log and apply them to its rows. One can define a desired frequency for refreshing this view. This process doesn't create any physical trigger on the source table; however, there is a little overhead, as the database has to store the row in the defined log table whenever a commit is issued.
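
A sketch of such a materialized view, assuming a hypothetical database link src9i pointing to the source database and the orders log from Step 1, could be:

-- fast-refreshable replica in the target (10g) database,
-- refreshed every 5 minutes from the source's materialized view log
CREATE MATERIALIZED VIEW mv_orders
  REFRESH FAST
  START WITH SYSDATE
  NEXT SYSDATE + 5/1440
  WITH PRIMARY KEY
AS
  SELECT * FROM orders@src9i;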

Step 3: Writing triggers on Materialized View

As we know, materialized views are like tables, hence we can write triggers on them. In the prior two steps we saw how the changed data is pulled from the source system and loaded into the materialized view defined in the target system. Now the question is how to use this view to determine the changes. For this purpose we will write database triggers on this materialized view: after insert, after update, and after delete. These triggers will capture which operation was performed on the row. We will also define a new table having the same structure as the staging/target table, with a few additional columns: first, an indicator of which operation occurred, whether it is insert, update, or delete; then a sequence number. The sequence number is important because a row may be newly inserted and also modified in the same time window, and the sequence number tells you the order of the activity.

Now whenever a DML operation is performed on the source table, the log captures the new information, and the materialized view is refreshed from the materialized view log based on the defined frequency. The appropriate trigger fires based on the operation performed on the data row and creates a new record in the staging table with the operation mode (I for Insert, U for Update, D for Delete) and the activity sequence number.
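
Below is a minimal sketch of the staging table and the after-insert trigger; the after-update and after-delete triggers follow the same pattern with 'U' and 'D' (using :OLD values for the delete). All table and column names are hypothetical.

-- sequence used to record the order of the captured activity
CREATE SEQUENCE stg_orders_seq;

-- staging table: same columns as the target plus operation and sequence
CREATE TABLE stg_orders_delta (
  activity_seq NUMBER,
  operation    CHAR(1),   -- I = insert, U = update, D = delete
  order_id     NUMBER,
  order_date   DATE,
  amount       NUMBER
);

CREATE OR REPLACE TRIGGER trg_mv_orders_ins
AFTER INSERT ON mv_orders
FOR EACH ROW
BEGIN
  INSERT INTO stg_orders_delta
    (activity_seq, operation, order_id, order_date, amount)
  VALUES
    (stg_orders_seq.NEXTVAL, 'I', :NEW.order_id, :NEW.order_date, :NEW.amount);
END;
/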

How this works

Whenever data is changed or added in the source table, the materialized view log captures that information. Based on the refresh frequency, the materialized view is refreshed using the log. During the refresh, new records are inserted into the view and existing records are updated. During this DML activity, the DML triggers are activated and insert rows into the stage table, which can then be used to transfer the data into the Data Warehouse or Data Mart.

Conclusion

No matter what you do, there will be some overhead on the database. The discussed solution has some overhead too; however, it is a handy alternative way to pull changed data.

Wednesday, January 6, 2010

Design technique for Date type columns in fact table for maximum performance

By: Milind Zodge

Overview
When you design a Data warehouse or Data Mart you come across many Date data type attributes in dimension and/or fact tables. In this article I have pointed out a design technique for Date columns in the fact table that gives the best performance.

Design
Consider a data mart having a fact table "Order" with columns like Order Number, Order Date, Shipped Date, and Amount, and a "Time" dimension which has an entry for each day. You use "time_id" for "Order Date"; however, most of the time "Shipped Date" is kept as a plain Date column.

Consider that in your reporting system you want to design a report showing the number of orders shipped in a particular year. You will have to format the shipped date column so that you can compare its year portion to get the result. If you have a massive fact table this query is going to take more time, as it will not use any index; well, you can create an index to solve this problem.

Now consider you have a report which reports the number of orders shipped in a particular month, day, quarter, etc. To speed up this operation you will have to create indexes, probably more than one. However, if we use an id column and an index on that column, we can avoid the above problem.

Add a shipped_date_id column along with the Shipped Date column in the fact table, and derive its value from the Time dimension. That way, whenever you query, you always use the index, as sketched below.
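
A minimal sketch of this design, assuming hypothetical fact_order and dim_time tables with illustrative column names:

-- add the surrogate key column and index it
ALTER TABLE fact_order ADD (shipped_date_id NUMBER);
CREATE INDEX ix_fact_order_shipped ON fact_order (shipped_date_id);

-- in the ETL, derive the id from the Time dimension
UPDATE fact_order f
SET    f.shipped_date_id = (SELECT t.time_id
                            FROM   dim_time t
                            WHERE  t.calendar_date = TRUNC(f.shipped_date));

-- reporting query: orders shipped in a given year, driven by the index
SELECT COUNT(*)
FROM   fact_order f
JOIN   dim_time t ON t.time_id = f.shipped_date_id
WHERE  t.calendar_year = 2009;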

Conclusion
This way you can achieve maximum performance without adding more indexes. You can just go to your time dimension, get the required ids, and join them with your fact table, which will use the index defined on the "shipped_date_id" column.

Tuesday, December 29, 2009

Data Virtualization or near real time data for reporting with Data Warehouse

By: Milind Zodge


Business requirement

Need to report near real time data segment along with consolidated data


Details

Data virtualization is getting a lot of attention these days, as the business need for data is changing. Previously the Data warehouse used to support DSS applications and reporting tools like dashboards and scorecards, which primarily need a summarized snapshot of the data.


However, the trend now seems to be moving towards a mix of consolidated data and near-real-time data. There are a few EII techniques and tools available for this; however, if you have to deliver this without spending a fortune, you can leverage your database layer.

You can create a Data warehouse modeled either top down or bottom up, and have ODS tables/schema to hold the ODS data without transformation. Various change data capture techniques can be used to keep the ODS data in sync with the source; in Oracle, for example, you can use Change Data Capture or Streams.

Now, since we are not transforming the ODS data, it needs to be transformed virtually. We can create a view layer combining these two layers to deliver data for operational reporting and near real time data needs. The key is to transform the ODS data and fit it together with the data mart or data warehouse data, as sketched below.
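
As a rough sketch, assuming a hypothetical consolidated fact table fact_sales and an untransformed ODS table ods_sales, the view layer could look like this; the ODS rows are transformed on the fly and appended to the warehouse data, and the filter on created_ts depends on how your warehouse load is tracked.

CREATE OR REPLACE VIEW v_sales_near_realtime AS
SELECT sales_id, sales_amount, sale_date
FROM   fact_sales
UNION ALL
SELECT o.src_sales_id       AS sales_id,
       o.amount_cents / 100 AS sales_amount,   -- transformed virtually
       TRUNC(o.created_ts)  AS sale_date
FROM   ods_sales o
WHERE  o.created_ts > (SELECT MAX(sale_date) FROM fact_sales);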

Tuesday, December 15, 2009

How to use Oracle's Metadata package for impact analysis

By: Milind Zodge

Overview

Business is always changing and you have to make some changes based on the business requirement.

Before doing any change you want to perform an impact analysis. Most data modeling tools have a provision to do this. In this article I am focusing on how to perform this task if you don't have such a tool.

Details

Consider a case where we have an Oracle database and want to alter a column's width, and we would like to see wherever this column is used or referenced.
We can use Oracle's metadata package as indicated below:
SET pagesize 0
SET long 90000
SET feedback off
SET echo off

SELECT DBMS_METADATA.GET_DDL('TABLE', ut.table_name)
FROM   USER_TABLES ut;

This will give the DDL scripts for all the tables. Now you can use any text tool like Notepad to search for the required column and find the references.
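
If you prefer to search the output outside the database client, you can, for example, spool it to a file first and search that file; the file name below is just an illustrative choice.

SPOOL table_ddl.sql
SELECT DBMS_METADATA.GET_DDL('TABLE', ut.table_name)
FROM   USER_TABLES ut;
SPOOL OFF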

Conclusion

There are various ways to do it; this is one of them. It will help you determine the impact exposure.