A Pragmatic Approach
Successful Data Warehousing requires an approach that both reflects an understanding of the business needs and supports their delivery with application and technology platforms. This approach must be flexible enough to accommodate the inevitable changes the Data Warehousing environment will require as it matures within a company.
To make a corporate Data Warehouse successful:
- Use a defined, repeatable process
- Adopt an iterative approach
- Deliver manageable components of data
Use a Defined, Repeatable Process
The model in Figure 1 defines a methodology for building a Data Warehouse. As the model shows, the process is highly iterative: the idea is to take one component of data all the way through the process (or a sub-process), then add another component and take it through the process again. This is the model most of the Data Warehousing industry now follows, developed because warehousing is essentially a process of discovery. The iterative approach lets you build in the new knowledge gained during data research, data design, build and testing, and application deployment as you move ahead in the overall project. As the project proceeds, the team's knowledge keeps growing, confirming correct information and replacing early guesses with more exact findings. Iterative approaches thus provide a model for ongoing learning and the constant incorporation of new information.
The Data Warehouse Methodology Template shows the steps of the process for building a Data Warehouse.
The next sections define the following steps of the Data Warehousing process:
- Project initiation
- Data business area definition
- Data research
- Data sourcing
- Data model completion
- Physical model implementation
- Performance tuning
- Business process definition
- Technical architecture definition
- Application definition
- Application engineering
- Distributed deployment
As the model in Figure 1 shows, the first four steps are sequential, with steps two through four repeated as necessary. The remaining steps divide into two branches, one for the data modeling, implementation, and performance tuning, and the other for the definition of the business process and its automation through application architecture and components. Each of these branches also incorporates an iterative development approach.
1. Project Initiation
Define the business case, project scope, and feasibility. Seek ongoing sponsorship and executive buy-in. Identify allies early, establish their goals, and include them in the scope.
2. Data Business Area Definition
Keep the scope focused and manageable by determining the areas of the business most important to the organization at this time. Their priority depends on questions such as the following:
- How much information is available through automation already?
- What are the industry trends and competition?
- What are the events driving the business at this time?
- How much is known and understood about this area of the business?
As these factors are clarified, an understanding of the following points will emerge:
- Certain areas can be researched and mined for data immediately. These constitute the "low-hanging fruit" that can deliver early returns for use in promoting the project.
- Some areas are well understood and the amount of research can be estimated and started immediately. These areas will define the near-term deliverables.
- Other areas are little understood and need further research before they can even be estimated. These are the more long-term areas of interest.
Conducting this type of analysis early and often keeps a Data Warehouse project focused on the real goals of the business and ensures that the research and data sourcing undertaken are feasible, meaningful, and realistic.
3. Data Research
Determine how the relevant business process will use the data. Inventory, question, and define business rules for the sending (source) applications. Expect the receiving business process to be transformed by the new awareness and understanding that come from having more information than ever before. Expect the requirements to change quickly, because the access to information an effective Data Warehouse provides will cause rapid growth and change in the functions using that information. Research data availability and value, and continually weigh the priority and benefit of desired data against the cost and time involved in acquiring it.
4. Data Sourcing
Identify the systems, suppliers, and vendors that will provide data. Prioritize the data needs based on revenue implications and strategic and competitive concerns, as well as on usage and actual availability. Feed the valuable information gathered during sourcing back into the next iteration of business area definition and data research as data discovery results, and equip and train the teams performing the sourcing to collect that information for the researchers.
5. Data Model Completion
Select a portion of the Enterprise model to implement through the analysis of the business area. Define data relationships and resolve anomalies such as many-to-many relationships.
Define schemas for data constructs, which will provide greater flexibility and understandable data structures for the business user. Define the facts to be stored, the dimensions to report on, and the hierarchies to be supported, and determine the limits of data sparsity. Define and model common business dimensions such as time, scenarios, geography, product, and customer using multi-dimensional modeling techniques such as summarization and star diagrams.
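As a minimal sketch only, the star construct described above can be expressed as one fact table surrounded by the common dimensions. The table and column names below are hypothetical, and SQLite is used purely for brevity:

    import sqlite3

    # Minimal star-schema sketch: one fact table surrounded by the common
    # business dimensions named above (time, geography, product, customer).
    # Table and column names are illustrative only.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE dim_time     (time_key     INTEGER PRIMARY KEY, day DATE, month TEXT, year INTEGER);
    CREATE TABLE dim_geography(geo_key      INTEGER PRIMARY KEY, city TEXT, region TEXT, country TEXT);
    CREATE TABLE dim_product  (product_key  INTEGER PRIMARY KEY, sku TEXT, category TEXT);
    CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, segment TEXT);

    -- The fact table stores the measures at the chosen grain and carries
    -- one foreign key per dimension.
    CREATE TABLE fact_sales (
        time_key     INTEGER REFERENCES dim_time(time_key),
        geo_key      INTEGER REFERENCES dim_geography(geo_key),
        product_key  INTEGER REFERENCES dim_product(product_key),
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        units_sold   INTEGER,
        revenue      REAL
    );
    """)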
6. Physical Model Implementation
Depending on the performance requirements and other architecture considerations of the system, choose one of the following:
- Relational DBMSs such as Sybase, SQL Server, Oracle, Informix/RS6000, DB2, and so forth
- Multidimensional databases such as PaBLO, PowerPlay/Cognos, Mercury, ESSBase/Arbor, Lightship Server, Acumate ES/Kenan, Express/Oracle, Gentium, or Holos
- Relational OLAP (or OLAP-on-Relational) tools such as Metaphor, Information Advantage's AXSYS, Prodea's Beacon, Redbrick Data Warehouse, Alpha Warehouse (ISI & Digital), or Decision Warehouse from Sun Systems
7. Performance Tuning
Determine the workload the database will need to support. Employ bitmap and other indexing schemes to improve response time significantly over traditional indexing methods by greatly reducing the number of read operations against the data. These schemes also let more users access the warehouse simultaneously, making it easier for users to pose a series of queries to analyze the data, and they achieve an acceptable level of responsiveness with lower hardware expenditure than traditional indexes.
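To illustrate the idea behind bitmap indexing (a conceptual sketch, not any particular vendor's implementation), the following fragment builds one bitmap per distinct value of two low-cardinality columns and answers a two-column predicate with a single bitwise AND, which is where the reduction in read operations comes from:

    # Conceptual sketch of bitmap indexing. Each distinct value of a
    # low-cardinality column gets one bitmap: bit i is set when row i holds
    # that value. A multi-column predicate becomes a bitwise AND.
    rows = [
        {"region": "EAST", "status": "ACTIVE"},
        {"region": "WEST", "status": "ACTIVE"},
        {"region": "EAST", "status": "CLOSED"},
        {"region": "EAST", "status": "ACTIVE"},
    ]

    def build_bitmaps(rows, column):
        bitmaps = {}
        for i, row in enumerate(rows):
            bitmaps[row[column]] = bitmaps.get(row[column], 0) | (1 << i)  # set bit i
        return bitmaps

    region_idx = build_bitmaps(rows, "region")
    status_idx = build_bitmaps(rows, "status")

    # WHERE region = 'EAST' AND status = 'ACTIVE' -> one AND over two bitmaps,
    # instead of reading every row.
    hits = region_idx["EAST"] & status_idx["ACTIVE"]
    print([i for i in range(len(rows)) if hits & (1 << i)])   # [0, 3]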
8. Business Process Definition
Define and validate the process to be automated through accessing the Data Warehouse. Utilize business modeling to clarify and redesign if necessary. Highlight decision-support requirements for analysis.
9. Technical Architecture Definition
Define the hardware and technical constructs to be applied in providing the business solution. Select core technologies (platform, networking, RDBMS, and so forth). Address structural issues. Data Warehousing architecture includes the following:
- Sourcing of data from legacy systems and other applications (data collection, editing, and preparation).
- Transformation and data integration, with storage issues resolved and atomic versus composite levels of granularity defined.
- Distribution of data to actual warehouse tables (further distribution to subject area databases or data marts is sometimes in order).
Give consideration to the nature of the data, answering the primary question of whether it serves the informational or the operational community.
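A skeletal sketch of the three architectural stages listed above, with hypothetical function, file, and table names, might look like the following:

    # Skeletal sketch of the three stages above: sourcing, transformation and
    # integration, and distribution. All names are hypothetical.
    import csv
    import sqlite3

    def extract(path):
        """Sourcing: collect and edit raw records from a legacy extract file."""
        with open(path, newline="") as f:
            return [row for row in csv.DictReader(f) if row.get("order_id")]

    def transform(records):
        """Transformation and integration: resolve types and the chosen grain."""
        return [(int(r["order_id"]), r["order_date"], round(float(r["amount"]), 2))
                for r in records]

    def load(conn, rows):
        """Distribution: write to the warehouse table (or onward to a data mart)."""
        conn.execute("CREATE TABLE IF NOT EXISTS warehouse_orders "
                     "(order_id INTEGER, order_date TEXT, amount REAL)")
        conn.executemany("INSERT INTO warehouse_orders VALUES (?, ?, ?)", rows)

    # Typical run:
    # load(sqlite3.connect("warehouse.db"), transform(extract("orders.csv")))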
10. Application Definition
Define the processes that will access and manipulate the data. Consider incorporating an application server in a three-tiered architecture, which addresses performance, reliability, and resource management by moving complex application logic onto a middle tier independent of the database servers and client PCs. The three tiers, sketched in code after the list, are as follows:
- Application server provides efficient data access, accelerated response, and scheduled background processing and serving of pre-processed reports.
- Client is dedicated to presentation logic and services, and has an API for invoking the applications in the middle layer.
- Database server is dedicated to data services and file services.
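The division of labor can be sketched as shown below. The class and table names are hypothetical, and all three tiers run in one process here purely for illustration; in practice each tier would sit on its own machine behind a network interface:

    # Highly simplified sketch of the three-tier split described above.
    import sqlite3

    class DatabaseServer:
        """Tier 3: dedicated to data and file services."""
        def __init__(self):
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
            self.conn.executemany("INSERT INTO sales VALUES (?, ?)",
                                  [("EAST", 120.0), ("WEST", 80.0), ("EAST", 40.0)])

        def query(self, sql, params=()):
            return self.conn.execute(sql, params).fetchall()

    class ApplicationServer:
        """Tier 2: business logic, efficient data access, pre-processed reports."""
        def __init__(self, db):
            self.db = db

        def revenue_by_region(self):
            return self.db.query("SELECT region, SUM(revenue) FROM sales GROUP BY region")

    class Client:
        """Tier 1: presentation only; invokes the middle tier's API."""
        def __init__(self, app):
            self.app = app

        def show_report(self):
            for region, total in self.app.revenue_by_region():
                print(f"{region}: {total}")

    Client(ApplicationServer(DatabaseServer())).show_report()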
11. Application Engineering
Develop the physical design, the test plans and scripts with definitions of expected results, and the build plan. Application definition, engineering, and deployment are iterative, with successive prototypes developed from early requirements-gathering versions through to third-level systems that will graduate to production.
Always put test data in test tables; never load it to production. Early prototyping efforts in Data Warehousing often deliver data that falls below the accuracy thresholds set for it. Those thresholds should be defined early in the process and signed off as part of the data research phase. Warehoused data is not always required to be 100% accurate; certain marketing applications, for instance, can tolerate a margin of error of up to 5%. Expectations must be documented and adhered to, but test data that falls below the agreed standard should not be loaded to production databases.
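A simple pre-load accuracy gate, sketched below with an illustrative threshold and validation rule, is one way to enforce such an agreement before any data is promoted to production:

    # Sketch of a pre-load accuracy gate; the threshold and record shape are
    # illustrative. Data that misses the agreed threshold stays in test
    # tables and is never promoted to production.
    ACCURACY_THRESHOLD = 0.95   # e.g. the documented 5% tolerance for a marketing feed

    def accuracy(records, is_valid):
        """Fraction of records that pass the agreed validation rule."""
        if not records:
            return 0.0
        return sum(1 for r in records if is_valid(r)) / len(records)

    def promote_to_production(records, is_valid):
        score = accuracy(records, is_valid)
        if score < ACCURACY_THRESHOLD:
            raise ValueError(f"accuracy {score:.1%} below threshold; keep in test tables")
        # ... load to production tables here ...
        return score

    # Example: a record is valid when it carries a positive amount.
    sample = [{"amount": 10.0}, {"amount": 5.5}, {"amount": -1.0}]
    print(accuracy(sample, lambda r: r["amount"] > 0))   # ~0.67: would not be promoted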
12. Distributed Deployment
Deploy architecture including such components as server, client, and application. Consolidate metadata (data about the data) for publication. Data credibility hinges on managed expectations and consensus in the definition stage.
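One lightweight way to consolidate metadata for publication is a catalog table such as the illustrative sketch below; the layout and names are assumptions, not a standard:

    # Illustrative metadata catalog: one row per warehouse column, recording
    # the business definition, source system, and load lineage.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
    CREATE TABLE metadata_catalog (
        table_name    TEXT,
        column_name   TEXT,
        business_name TEXT,
        definition    TEXT,
        source_system TEXT,
        last_loaded   TEXT
    )
    """)
    conn.execute(
        "INSERT INTO metadata_catalog VALUES (?, ?, ?, ?, ?, ?)",
        ("fact_sales", "revenue", "Net Revenue",
         "Invoiced amount net of returns, in USD", "ORDERS_LEGACY", "1999-06-30"),
    )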
Adopt an Iterative Approach
The single most important development in warehousing today is probably the iterative approach. Data Warehouse projects risk getting bogged down in traditional waterfall approaches, where phases are not allowed to overlap and project "scope-creep" causes the collection phase to expand, never becoming complete enough to move on to the data delivery phase.
Frequently this phenomenon is caused by the fact that the corporation's information needs are little understood in key areas. Required data has not been available in the past, so business users literally don't know what's needed, what will be of use, and what will not. Of course, some needs are obvious, but the more subtle requirements only surface after iterative exploration of what's available versus what's meaningful.
These problems have led to the industry's adoption of the iterative model of data research and discovery, leading to the iterative model for development of databases and applications to access the data. Until the availability question is settled, presentation will tend to shift and grow.
Deliver Manageable Components of Data
It is better to deliver 70% of the required information in a given business area than to deliver 5% in 10 different areas, because the data becomes meaningful (that is, becomes intelligence) when it can be placed in context. Of course, 100% would be ideal, but the timeframes required to deliver 100% accuracy and completeness of data are never acceptable to the business. And because the value of data has an ever-shrinking shelf life, 70% now can be much more valuable than 90% two months from now.
Not all information is equally valuable; a prioritization process is needed to find out what's most critical and to focus acquisition efforts there first. As the business picture shifts and changes with competition and trends, the priority of data delivery will change to reflect that activity.
When data can be isolated into manageable components and delivered rapidly, more business needs get met, and the sponsors are much more likely to sign on for another parcel of information to be delivered in a reasonable timeframe, just like the first.
Integration Is the Key
In Data Warehousing, as in data management, finding successful methods of managing integration is the key. Integration modeling (see Laura's book, Integration Models: Templates for Business Transformation) helps the modeler and data manager understand the viewpoints of the many business areas that must work together to define and support the development of a corporate Data Warehouse. It helps clarify the dynamics of data in the data management cycle, which must also be considered in planning a Data Warehouse. Knowing where you are in that cycle places any effort at warehousing in the context of the larger business picture. Integration modeling also helps you define the architecture views, templates, and data sharing and segmentation schemes required for making the Data Warehouse environment efficient.
Data Warehousing converges with business process re-engineering and Internet disciplines to complete the corporate data architecture for an enterprise-wide data strategy. The successful Data Warehouse often drives the reengineering of business data and processes by surfacing the contradictions and confusion in current application systems. Internet technologies are utilized both internally and externally to provide increasingly important portals, which act as windows on the world of business information.
Since 1984, Laura Brown has helped businesses and technical managers deliver systems solutions, and has worked as management consultant and senior technical advisor to Fortune 500 companies. She is President of System Innovations, a consulting firm specializing in enterprise application integration, Data Warehousing, and Internet design.
Laura is the author of Integration Models: Templates for Business Transformation (2000, Sams Publishing).
Laura can be reached via e-mail at lbrown@systeminnovations.net, or on the web at www.systeminnovations.net/.