A data warehouse is the main repository of an organization’s historical data, its corporate memory. It contains the raw material for management’s decision support system. The critical factor leading to the use of a data warehouse is that a data analyst can perform complex queries and analysis, such as data mining, on the information without slowing down the operational systems. Bill Inmon, an early and influential practitioner, formally defined a data warehouse in the following terms:
Data warehouse characteristics
Subject-oriented: The data in the database is organized so that all the data elements relating to the same real-world event or object are linked together.
Time-variant: The changes to the data in the database are tracked and recorded so that reports can be produced showing changes over time.
Non-volatile: Data in the database is never over-written or deleted – once committed, the data is static and read-only, retained for future reporting.
Integrated: The database contains data from most or all of an organization’s operational applications, and this data is made consistent.
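The time-variant and non-volatile characteristics can be sketched together: rather than over-writing a row when something changes, the warehouse appends a new, time-stamped version, so any past state can still be reported on. The following is a minimal illustrative sketch; all table, column, and function names are invented for this example.

```python
import sqlite3

# Hypothetical sketch: a non-volatile, time-variant customer table.
# Rows are only ever inserted, never updated or deleted, so every
# historical state remains available for reporting.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer_history (
        customer_id INTEGER,
        city        TEXT,
        valid_from  TEXT   -- date this version of the row became current
    )
""")

def record_customer(customer_id, city, valid_from):
    # Append a new version instead of over-writing the old one.
    conn.execute(
        "INSERT INTO customer_history VALUES (?, ?, ?)",
        (customer_id, city, valid_from),
    )

record_customer(1, "London", "2023-01-01")
record_customer(1, "Paris",  "2024-06-15")   # a move: the old row is retained

# A report "as of" any date simply picks the latest row at that time.
def city_as_of(customer_id, as_of):
    row = conn.execute(
        """SELECT city FROM customer_history
           WHERE customer_id = ? AND valid_from <= ?
           ORDER BY valid_from DESC LIMIT 1""",
        (customer_id, as_of),
    ).fetchone()
    return row[0] if row else None
```

Because nothing is deleted, a report run for the end of 2023 still sees the customer in London, while a current report sees Paris – exactly the "changes over time" reporting the time-variant property describes.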
History of Data Warehouses
Data warehouses are a distinct type of computer database, first developed during the late 1980s and early 1990s. They were developed to meet a growing demand for management information and analysis that could not be met by operational systems. Operational systems were unable to meet this need for a range of reasons:
- The processing load of reporting reduced the response time of the operational systems.
- The database designs of operational systems were not optimized for information analysis and reporting.
- Most organizations had more than one operational system, so company-wide reporting could not be supported by a single system.
- Development of reports in operational systems often required writing specific computer programs, which was slow and expensive.
As a result, separate computer databases began to be built that were specifically designed to support management information and analysis purposes. These data warehouses were able to bring in data from a range of different sources, such as mainframes, minicomputers, and personal computers, as well as office automation software such as spreadsheets, and integrate this information in a single place.
This capability, coupled with user-friendly reporting tools and freedom from operational impacts, has led to the growth of this type of computer system. As technology improved (lower cost for more performance) and user requirements increased (faster data load cycle times and more features), data warehouses have evolved through several fundamental stages:
Stages of Data Warehouse Evolution
Off-line Operational Databases: Data warehouses in this initial stage are developed by simply copying the database of an operational system to an off-line server, where the processing load of reporting does not impact the operational system’s performance.
Off-line Data Warehouse: Data warehouses in this stage of evolution are updated on a regular time cycle (usually daily, weekly or monthly) from the operational systems, and the data is stored in an integrated, reporting-oriented data structure.
Real-Time Data Warehouse: Data warehouses at this stage are updated on a transaction or event basis, every time an operational system performs a transaction (e.g. an order, a delivery or a booking).
Integrated Data Warehouse: Data warehouses at this stage are used to generate activity or transactions that are passed back into the operational systems for use in the daily activity of the organization.
The Architecture of the Data Warehouse
The concept of “data warehousing” dates back at least to the mid-1980s, and possibly earlier. In essence, it was intended to provide an architectural model for the flow of data from operational systems to decision support environments. It attempted to address the various problems associated with this flow, and the high costs associated with it. In the absence of such an architecture, there usually existed an enormous amount of redundancy in the delivery of management information. In larger corporations it was typical for multiple decision support projects to operate independently, each serving different users but often requiring much of the same data.
The process of gathering, cleaning and integrating data from various sources, often legacy systems, was typically replicated for each project. Moreover, legacy systems were frequently being revisited as new requirements emerged, each requiring a subtly different view of the legacy data.
Based on analogies with real-life warehouses, data warehouses were intended as large-scale collection/storage/staging areas for corporate data. From here data could be distributed to “retail stores” or “data marts” which were tailored for access by decision support users (or “consumers”).
While the data warehouse was designed to manage the bulk supply of data from its suppliers (e.g. operational systems), and to handle the organization and storage of this data, the “retail stores” or “data marts” could be focused on packaging and presenting selections of the data to end-users, to meet specific management information needs.
Somewhere along the way this analogy and architectural vision was lost, as some vendors and industry speakers redefined the data warehouse as simply a management reporting database. This is a subtle but important deviation from the original vision of the data warehouse as the hub of a management information architecture, where the decision support systems were actually the data marts or “retail stores”.
Storage in data warehouse
In OLTP (online transaction processing) systems, relational database design uses the discipline of data modeling and generally follows Codd’s rules of data normalization in order to ensure absolute data integrity. Complex information is broken down into its simplest structures (tables) in which all of the individual atomic-level elements relate to each other and satisfy the normalization rules.
Normalization defines increasingly stringent levels, or normal forms, and OLTP systems typically achieve third normal form. Fully normalized OLTP database designs often result in information from a single business transaction being stored in dozens to hundreds of tables.
Relational database managers are efficient at managing the relationships between tables, and they deliver very fast insert/update performance because only a small amount of data is affected in each transaction. OLTP databases are efficient because they are typically only dealing with the information around a single transaction.
In reporting and analysis, thousands to billions of transactions may need to be reassembled, imposing a huge workload on the relational database. Given enough time the software can usually return the requested results, but because of the negative performance impact on the machine and all of its hosted applications, data warehousing professionals recommend that reporting databases be physically separated from the OLTP database.
In addition, data warehousing suggests that data be restructured and reformatted to facilitate query and analysis by novice users. OLTP databases are designed to provide good performance for rigidly defined applications built by programmers fluent in the constraints and conventions of the technology. Add in frequent enhancements, and to many users a database becomes just a collection of cryptic names and seemingly unrelated, obscure structures that store data using incomprehensible coding schemes – all factors that, while improving performance, complicate use by untrained people.
Lastly, the data warehouse needs to support high volumes of data gathered over extended periods of time, is subject to complex queries, and must accommodate formats and definitions inherited from independently designed packages and legacy systems.
Designing the data warehouse’s data architecture is the realm of data warehouse architects. The goal of a data warehouse is to bring data together from a variety of existing databases to support management and reporting needs.
The generally accepted principle is that data should be stored at its most elemental level, because this provides the most useful and flexible basis for reporting and information analysis. However, because designs differ in their focus on specific requirements, there are alternative methods for designing and implementing data warehouses.
There are two leading approaches to organizing the data in a data warehouse: the dimensional approach advocated by Ralph Kimball and the normalized approach advocated by Bill Inmon. While the dimensional approach is very useful in data mart design, it can result in a rat’s nest of long-term data integration and abstraction complications when used in a data warehouse.
In the “dimensional” approach, transaction data is partitioned into either measured “facts” which are generally numeric data that captures specific values, or “dimensions” which contain the reference information that gives each transaction its context. As an example, a sales transaction would be broken up into facts such as the number of products ordered, the price paid, and dimensions such as date, customer, product, geographical location, and salesperson.
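The sales example above can be sketched as a small star schema: one fact table of numeric measures surrounded by dimension tables of reference data. This is a minimal illustration using Python’s built-in sqlite3 module; all table and column names are invented for the example.

```python
import sqlite3

# Hypothetical star-schema sketch of the sales example
# (table and column names invented for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension tables hold the reference data that gives each
    -- transaction its context.
    CREATE TABLE dim_product  (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT);

    -- The fact table holds the numeric measures, keyed to the dimensions.
    CREATE TABLE fact_sales (
        product_id  INTEGER REFERENCES dim_product(product_id),
        customer_id INTEGER REFERENCES dim_customer(customer_id),
        sale_date   TEXT,
        quantity    INTEGER,   -- measured fact
        price_paid  REAL       -- measured fact
    );

    INSERT INTO dim_product  VALUES (1, 'Widget');
    INSERT INTO dim_customer VALUES (10, 'Acme Ltd');
    INSERT INTO fact_sales   VALUES (1, 10, '2024-03-01', 5, 99.50);
""")

# A typical dimensional query: total quantity sold, by product name.
total = conn.execute("""
    SELECT p.name, SUM(f.quantity)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.name
""").fetchone()
```

Because every query follows the same simple pattern – pick measures from the fact table, slice them by one or more dimensions – business users can answer most questions with a single join per dimension.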
The main advantage of a dimensional approach is that the data warehouse is easy for business staff with limited information technology experience to understand and use. Also, because the data is pre-joined into the dimensional form, the data warehouse tends to operate very quickly. The main disadvantage of the dimensional approach is that it is quite difficult to add to or change later if the company changes the way in which it does business.
The “normalized” approach uses database normalization. In this method, the data in the data warehouse is stored in third normal form. Tables are then grouped together by subject areas that reflect the general definition of the data (customer, product, finance, etc.). The main advantage of this approach is that it is quite straightforward to add new information to the database; the primary disadvantage is that, because of the number of tables involved, it can be rather slow to produce information and reports.
Furthermore, since the segregation of facts and dimensions is not explicit in this type of data model, it is difficult for users to join the required data elements into meaningful information without a precise understanding of the data structure.
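To see why the normalized design demands this understanding of the structure, consider the same sales question answered against a third-normal-form schema: the information is spread over several subject-area tables, and even a simple report must join them all back together. The sketch below is illustrative only; all names are invented.

```python
import sqlite3

# Hypothetical sketch of the normalized (3NF) alternative: the same
# sales information spread over several subject-area tables
# (all names invented for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer   (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE product    (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders     (order_id INTEGER PRIMARY KEY,
                             customer_id INTEGER REFERENCES customer);
    CREATE TABLE order_line (order_id INTEGER REFERENCES orders,
                             product_id INTEGER REFERENCES product,
                             quantity INTEGER);

    INSERT INTO customer   VALUES (10, 'Acme Ltd');
    INSERT INTO product    VALUES (1, 'Widget');
    INSERT INTO orders     VALUES (100, 10);
    INSERT INTO order_line VALUES (100, 1, 5);
""")

# Even a simple question -- units of each product sold to each customer --
# needs a join across four tables in the normalized design.
row = conn.execute("""
    SELECT c.name, p.name, SUM(ol.quantity)
    FROM order_line ol
    JOIN orders   o ON o.order_id    = ol.order_id
    JOIN customer c ON c.customer_id = o.customer_id
    JOIN product  p ON p.product_id  = ol.product_id
    GROUP BY c.name, p.name
""").fetchone()
```

Nothing in the schema marks `quantity` as a measure or `customer` as a dimension; the user has to know which tables hold which role and how their keys connect, which is precisely the difficulty the paragraph above describes.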
Subject areas are just a method of organizing information and can be defined along any lines. The traditional approach has subjects defined as the subjects or nouns within a problem space. For example, in a financial services business, you might have customers, products and contracts. An alternative approach is to organize around the business transactions, such as customer enrollment, sales and trades.