Introduction to HBase, the NoSQL Database for Hadoop
Apache HBase is a NoSQL database that runs on top of Hadoop as a distributed and scalable big data store. This means that HBase can leverage the distributed storage provided by the Hadoop Distributed File System (HDFS) and benefit from Hadoop’s MapReduce programming model. It is meant to host large tables with billions of rows and potentially millions of columns, running across a cluster of commodity hardware. But beyond its Hadoop roots, HBase is a powerful database in its own right that blends real-time query capabilities with the speed of a key/value store and offline or batch processing via MapReduce. In short, HBase allows you to query for individual records as well as derive aggregate analytic reports across a massive amount of data.
As a little bit of history, Google was faced with a challenging problem: How could it provide timely search results across the entire Internet? The answer was that it essentially needed to cache the Internet and define a new way to search that enormous cache quickly. It defined the following technologies for this purpose:
- Google File System: A scalable distributed file system for large distributed data-intensive applications
- BigTable: A distributed storage system for managing structured data that is designed to scale to a large size: petabytes of data across thousands of commodity servers
- MapReduce: A programming model and an associated implementation for processing and generating large data sets
It was not long after Google published these documents that open source implementations of them began to appear, and in 2007, Mike Cafarella released code for an open source BigTable implementation that he called HBase. Since then HBase has become a top-level Apache project and is used at Facebook, Twitter, and Adobe, to name a few.
HBase is not a relational database and requires a different approach to modeling your data. HBase defines a four-dimensional data model in which the following four coordinates identify each cell (see Figure 1):
- Row Key: Each row has a unique row key; the row key does not have a data type and is treated internally as a byte array.
- Column Family: Data inside a row is organized into column families; every row in a table has the same set of column families, but a given column family does not need to hold the same column qualifiers in every row. Under the hood, HBase stores each column family in its own data file, so column families must be defined upfront, and changes to them are difficult to make.
- Column Qualifier: Within a column family, data is addressed by column qualifiers; you can think of column qualifiers as the columns themselves.
- Version: Each column can have a configurable number of versions, and you can access the data for a specific version of a column qualifier.
Figure 1. HBase Four-Dimensional Data Model
As shown in Figure 1, an individual row is accessible through its row key and is composed of one or more column families. Each column family has one or more column qualifiers (called “column” in Figure 1) and each column can have one or more versions. To access an individual piece of data, you need to know its row key, column family, column qualifier, and version.
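To make these four coordinates concrete, the following minimal sketch reads a single cell using the HBase Java client API (which the next article covers in detail). It assumes a running local HBase instance and a table like the 'PageViews' table created later in this article:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class GetExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "PageViews");
            // Coordinate 1, the row key, identifies the row to read
            Get get = new Get(Bytes.toBytes("rowkey1"));
            // Coordinates 2 and 3: column family and column qualifier
            get.addColumn(Bytes.toBytes("info"), Bytes.toBytes("page"));
            // Coordinate 4, the version: ask for up to the three most recent versions
            get.setMaxVersions(3);
            Result result = table.get(get);
            // Without an explicit timestamp, the newest version is returned
            byte[] value = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("page"));
            System.out.println(Bytes.toString(value));
            table.close();
        }
    }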
When designing an HBase data model, it is helpful to think about how the data is going to be accessed. You can access HBase data in two ways:
- Directly through a row key, or via a table scan over a range of row keys
- In a batch manner using MapReduce
This dual approach to data access is something that makes HBase particularly powerful. Typically, storing data in Hadoop means that it is good for offline or batch analysis (and it is very, very good at batch analysis) but not necessarily for real-time access. HBase addresses this by serving as a key/value store for real-time access while also supporting MapReduce for batch analysis.
Let’s first look at real-time access. As a key/value store, the key is the row key, and the value is the collection of column families, as shown in Figure 2.
Figure 2. HBase as a Key/Value Store
As you can see in Figure 2, the key is the row key we have been discussing, and the value is the collection of column families (with their associated columns, which in turn hold versions of the data). You can retrieve the value associated with a key; in other words, you can “get” the row associated with a row key, or you can retrieve a set of rows by specifying a starting row key and an ending row key, which is referred to as a table scan. You cannot query for values contained in columns in a real-time query, which leads to an important topic: row key design.
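Before we turn to row key design, here is a minimal sketch of a range scan with the HBase Java client API; the start and stop rows correspond to the STARTROW and ENDROW options used with the shell later in this article, and the 'PageViews' table is assumed from the example built below:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "PageViews");
            // Scan the half-open range [r, s): start row inclusive, stop row exclusive
            Scan scan = new Scan(Bytes.toBytes("r"), Bytes.toBytes("s"));
            ResultScanner scanner = table.getScanner(scan);
            for (Result row : scanner) {
                System.out.println(Bytes.toString(row.getRow()));
            }
            scanner.close();
            table.close();
        }
    }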
The design of the row key is important for two reasons:
- Table scans operate against the row key, so the design of the row key controls how much real-time/direct access you can perform against HBase.
- When running HBase in a production environment, it runs on top of the Hadoop Distributed File System (HDFS), and the data is distributed across the cluster based on the row key. If all your row keys start with “user-” then most likely the majority of your data will be isolated to a single node (which defeats the purpose of distributing the data in the first place). Your row keys, therefore, should be different enough to be distributed across the entire deployment.
The manner in which you design your row keys depends on how you intend to access those rows. If you store data on a per-user basis, then one strategy is to leverage the fact that row keys are ultimately stored as byte arrays in HBase: you can create a hash (such as an MD5 or SHA-1 hash code) of the user ID and then append the time (as a long) to the hash. Using a hash is important for two reasons: (1) it distributes values so that the data can be spread across the cluster, and (2) it ensures that the length (in bytes) of the key is consistent and hence easier to use in table scans. A sketch of this strategy appears below.
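Here is a minimal sketch of that strategy; the class and method names (and the resulting 24-byte layout) are illustrative assumptions rather than a prescribed format:

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class RowKeyExample {
        // Builds a row key of the form md5(userId) + timestamp: a 16-byte MD5
        // prefix for even distribution across the cluster, plus an 8-byte
        // big-endian long so a single user's rows sort by time.
        public static byte[] userEventKey(String userId, long timestamp) throws Exception {
            byte[] hash = MessageDigest.getInstance("MD5")
                    .digest(userId.getBytes(StandardCharsets.UTF_8));
            ByteBuffer key = ByteBuffer.allocate(hash.length + 8); // 16 + 8 = 24 bytes, always
            key.put(hash);
            key.putLong(timestamp);
            return key.array();
        }
    }

Because every generated key has the same length and a uniformly distributed prefix, a given user’s rows stay time-ordered while different users spread across the cluster.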
Enough with theory; the next section shows you how to set up an HBase environment and start using it via its shell.
Setting Up an HBase Environment
You can download HBase from the Apache website; at the time of this writing, the latest version is 0.98.5. The HBase team recommends that you install HBase in a UNIX/Linux environment; if you run Windows, you might want to download and install Cygwin and run HBase from there. After you have downloaded the file, decompress it to your hard drive. You also need Java installed to launch HBase, so if you have not already done so, download and install Java from Oracle’s website. Define an environment variable named “HBASE_HOME” that points to the root directory where you decompressed HBase, and then execute the start-hbase.sh script from HBase’s bin directory. It logs output to the following directory:
$HBASE_HOME/logs/
You can test your installation by opening a browser to the following URL:
http://localhost:60010
You should see a screen similar to Figure 3.
Figure 3. HBase Management Screen
Let’s begin by interacting with HBase using its command shell. Execute the following command from HBase’s bin directory:
./hbase shell
You should see output similar to the following:
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.5-hadoop2, rUnknown, Mon Aug 4 23:58:06 PDT 2014

hbase(main):001:0>
Let’s create a new table named 'PageViews' with a column family named 'info':
hbase(main):002:0> create 'PageViews', 'info'
0 row(s) in 5.3160 seconds

=> Hbase::Table - PageViews
Every table needs to have at least one column family, so we created one named 'info'. Now let’s look at our table. Execute the following command to list all tables:
hbase(main):002:0> list
TABLE
PageViews
1 row(s) in 0.0350 seconds

=> ["PageViews"]
As you can see, the list command returns a single table, namely 'PageViews'. We can get more information about the table by executing the describe command:
hbase(main):003:0> describe 'PageViews'
DESCRIPTION                                                          ENABLED
 'PageViews', {NAME => 'info', DATA_BLOCK_ENCODING => 'NONE',        true
 BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1',
 COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER',
 KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536',
 IN_MEMORY => 'false', BLOCKCACHE => 'true'}
1 row(s) in 0.0480 seconds
The describe command returns details about the table, including a list of all column families, which in our case is one: info. Now let's add some data to our table. The following command inserts a new row, with a value in the info column family:
hbase(main):004:0> put 'PageViews', 'rowkey1', 'info:page', '/mypage'
0 row(s) in 0.0850 seconds
The put command inserts a new record into the row with row key “rowkey1”. We specify that we are adding the value “/mypage” into the page column in the info column family. We can then retrieve the column families and their values for rowkey1 with the get command:
hbase(main):005:0> get 'PageViews', 'rowkey1'
COLUMN               CELL
 info:page           timestamp=1410374788088, value=/mypage
1 row(s) in 0.0250 seconds
You can see that column “info:page”, or more specifically the page column qualifier in the info column family, has the value “/mypage”, along with the timestamp recorded when it was inserted. Let's add another row before we do a table scan:
hbase(main):006:0> put 'PageViews', 'rowkey2', 'info:page', '/myotherpage'
0 row(s) in 0.0050 seconds
Now that we have two rows, let's query for all rows in the PageViews table:
hbase(main):007:0> scan 'PageViews'
ROW                  COLUMN+CELL
 rowkey1             column=info:page, timestamp=1410374788088, value=/mypage
 rowkey2             column=info:page, timestamp=1410374823590, value=/myotherpage
2 row(s) in 0.0350 seconds
As mentioned in the previous section, we cannot query per se; rather, we can scan the table. If you execute a scan with just the table name, it returns all rows in the table, which is probably not what you want. Instead, you can give it a start row and an end row to limit the returned values. Let's insert a row with a row key that starts with “s”:
hbase(main):012:0> put 'PageViews', 'srowkey2', 'info:page', '/myotherpage'
Now if we add the constraints that we want to see rows with a row key greater than or equal to 'r' and less than 's', we can construct the following table scan:
hbase(main):014:0> scan 'PageViews', { STARTROW => 'r', ENDROW => 's' }
ROW                  COLUMN+CELL
 rowkey1             column=info:page, timestamp=1410374788088, value=/mypage
 rowkey2             column=info:page, timestamp=1410374823590, value=/myotherpage
2 row(s) in 0.0080 seconds
This scan returned only the row keys that start with 'r'. The comparison operates on the full row key, so 'rowkey1' sorts after 'r' and was therefore returned. The scan includes rows that match the STARTROW value but excludes rows that match the ENDROW value. Note that the ENDROW value is optional, so if we execute the same scan with just the STARTROW, we receive every row with a key greater than or equal to 'r':
hbase(main):013:0> scan 'PageViews', { STARTROW => 'r' }
ROW                  COLUMN+CELL
 rowkey1             column=info:page, timestamp=1410374788088, value=/mypage
 rowkey2             column=info:page, timestamp=1410374823590, value=/myotherpage
 srowkey2            column=info:page, timestamp=1410375975965, value=/myotherpage
3 row(s) in 0.0120 seconds
Summary
HBase is an open source NoSQL database, commonly referred to as the Hadoop Database, that is based on Google's BigTable paper. HBase runs on top of the Hadoop Distributed File System (HDFS), which allows it to be highly scalable, and it supports Hadoop's MapReduce programming model. HBase permits two types of access: random access to rows through their row keys and offline or batch access through MapReduce queries.
This article presented HBase, described its features and benefits, and briefly reviewed the importance of HBase row key design. And it concluded by showing you how to set up a local HBase environment, use the shell to create a table, insert data into that table, retrieve a specific row, and finally how to perform a table scan.
The next article, “Programming HBase with Java,” presents the programmatic interface to HBase and demonstrates how to interact with it using Java. The final article in this three-part series, “HBase Data Analysis with MapReduce,” reviews how to use MapReduce to interact with HBase programmatically in an offline/batch manner.