The Big (Data) Problem

Data management is getting more complex than it has ever been before. Big Data is everywhere, on everyone's mind, and in many different forms: advertising, social graphs, news feeds, recommendations, marketing, healthcare, security, government, and so on.

In the last three years, thousands of technologies having to do with Big Data acquisition, management, and analytics have emerged; this has given IT teams the hard task of choosing, most of the time without a comprehensive methodology to guide the choice. When making such a choice for your own situation, ask yourself the following questions: When should I think about employing Big Data for my IT system? Am I ready to employ it? What should I start with? Should I really go for it despite feeling that Big Data is just a marketing trend?

All these questions are running around in the minds of most Chief Information Officers (CIOs) and Chief Technology Officers (CTOs), and they broadly cover the reasons and the ways you are putting your business at stake when you decide to deploy a distributed Big Data architecture.

This chapter aims to help you identify Big Data symptoms, in other words, when it becomes apparent that you need to consider adding Big Data to your architecture. It also guides you through the variety of Big Data technologies so that you can differentiate among them and understand what each is specialized for. Finally, at the end of the chapter, we build the foundation of a typical distributed Big Data architecture based on real-life examples.

Identifying Big Data Symptoms

You may choose to start a Big Data project based on different needs: because of the volume of data you handle, because of the variety of data structures your system has, because of scalability issues you are experiencing, or because you want to reduce the cost of data processing. In this section, you'll see what symptoms can make a team realize they need to start a Big Data project.

Size Matters

The two main areas that get people to start thinking about Big Data are data size and volume. Although most of the time these issues are true and legitimate reasons to think about Big Data, today they are not the only reasons to go this route. There are other symptoms that you should also consider, such as the type of data. How will you manage a growing variety of data types when traditional data stores, such as SQL databases, expect you to do the structuring up front, for example by creating tables? This is not feasible without adding a flexible, schemaless technology that handles new data structures as they come. When I talk about types of data, you should imagine unstructured data, graph data, images, videos, voices, and so on.

Yes, it's good to store unstructured data, but it's better if you can get something out of it. Another symptom comes out of this premise: Big Data is also about extracting added-value information from a high-volume variety of data. When, a couple of years ago, there were more read transactions than write transactions, common caches or databases were enough when paired with weekly ETL (extract, transform, load) processing jobs. That's no longer the trend today. Now, you need an architecture that is capable of handling data as it comes, with everything from long-running processing jobs to near real-time processing jobs.
The architecture should be distributed and not rely on rigid, high-performance, and expensive mainframes; instead, it should be based on a more available, performance-driven, and cheaper technology to give it more flexibility.

Now, how do you leverage all this added-value data, and how are you able to search it naturally? To answer this question, think again about the traditional data store in which you create indexes on different columns to speed up search queries. Well, what if you want to index all hundred columns because you want to be able to execute complex queries that involve a nondeterministic number of key columns? You don't want to do this with a basic SQL database; instead, you would rather consider using a NoSQL store for this specific need.

So simply walking down the path of data acquisition, data structuring, data processing, and data visualization in the context of current data management trends makes it easy to conclude that size is no longer the main concern.

Typical Business Use Cases

In addition to technical and architecture considerations, you may be facing use cases that are typical Big Data use cases. Some of them are tied to a specific industry; others are not specialized and can be applied to various industries. These use cases are generally based on analyzing application logs, such as web access logs, application server logs, and database logs, but they can also be based on other types of data sources, such as social network data. When you are facing such use cases, you might want to consider a distributed Big Data architecture if you want to be able to scale out as your business grows.

Consumer Behavioral Analytics

Knowing your customer, or what we usually call the "360-degree customer view," might be the most popular Big Data use case. This customer view is usually used on e-commerce websites and starts with an unstructured clickstream; in other words, it is made up of the active and passive website navigation actions that a visitor performs. By counting and analyzing the clicks and impressions on ads or products, you can adapt the visitor's user experience to their behavior, while keeping in mind that the goal is to gain insight in order to optimize the conversion funnel.

Sentiment Analysis

Companies care about how their image and reputation are perceived across social networks; they want to minimize all negative events that might affect their notoriety and to leverage positive events. By crawling a large amount of social data in near real time, they can extract the feelings and sentiments of social communities regarding their brand, and they can identify influential users and contact them in order to change or reinforce a trend, depending on the outcome of their interaction with such users.

CRM Onboarding

You can combine consumer behavioral analytics with sentiment analysis based on data surrounding the visitor's social activities. Companies want to combine these online data sources with their existing offline data, which is called CRM (customer relationship management) onboarding, in order to get better and more accurate customer segmentation. Companies can then leverage this segmentation to build a better targeting system and send profile-customized offers through marketing actions.

Prediction

Learning from data has become the main Big Data trend for the past two years.
Prediction-enabled Big Data can be very efficient in multiple industries, such as the telecommunication industry, where predictive analysis of router logs has become commonplace. Every time an issue is likely to occur on a device, the company can predict it and order parts to avoid downtime or lost profits. When combined with the previous use cases, you can use a predictive architecture to optimize the product catalog selection and pricing depending on the user's global behavior.

Understanding the Big Data Project's Ecosystem

Once you understand that you actually have a Big Data project to implement, the hardest thing is choosing the technologies to use in your architecture. It is not just about picking the most famous Hadoop-related technologies; it's also about understanding how to classify them in order to build a consistent distributed architecture. To get an idea of the number of projects in the Big Data galaxy, browse to https://github.com/zenkay/bigdata-ecosystem#projects-1 to see more than 100 classified projects. There, you see that you might consider choosing a Hadoop distribution, a distributed file system, a SQL-like processing language, a machine learning language, a scheduler, message-oriented middleware, a NoSQL datastore, data visualization, and so on.

Since this book's purpose is to describe a scalable way to build a distributed architecture, I don't dive into all categories of projects; instead, I highlight the ones you are likely to use in a typical Big Data project. You can adapt this architecture and integrate other projects depending on your needs. You'll see concrete examples of using such projects in the dedicated parts of the book.

To make the Hadoop technologies presented more relevant, we will work on a distributed architecture that meets the previously described typical use cases, namely these:
• Consumer behavioral analytics
• Sentiment analysis
• CRM onboarding and prediction

Hadoop Distribution

In a Big Data project that involves Hadoop-related ecosystem technologies, you have two choices:
• Download the projects you need separately and try to assemble the technologies yourself into a coherent, resilient, and consistent architecture.
• Use one of the most popular Hadoop distributions, which assemble the technologies for you.

Although the first option is completely feasible, you might want to choose the second one, because a packaged Hadoop distribution ensures compatibility between all installed components, ease of installation, configuration-based deployment, monitoring, and support. Hortonworks and Cloudera are the main actors in this field. There are a couple of differences between the two vendors, but as a starting Big Data package they are equivalent, as long as you don't pay attention to the proprietary add-ons. My goal here is not to present all the components within each distribution but to focus on what each vendor adds to the standard ecosystem. I describe most of the other components in the following pages, depending on what we need for our architecture in each situation.

Cloudera CDH

Cloudera adds a set of in-house components to the Hadoop-based components; these components are designed to give you better cluster management and search experiences. The following is a list of some of these components:
• Impala: A real-time, parallelized, SQL-based engine that searches for data in HDFS (Hadoop Distributed File System) and HBase. Impala is considered to be the fastest querying engine within the Hadoop distribution vendors' market, and it is a direct competitor of Spark from UC Berkeley.
• Cloudera Manager: Cloudera's console for managing and deploying Hadoop components within your Hadoop cluster.
• Hue: A console that lets the user interact with the data and run scripts for the different Hadoop components contained in the cluster.

Figure 1-1 illustrates Cloudera's Hadoop distribution with the following component classification:
• The components in orange are part of the Hadoop core stack.
• The components in pink are part of the Hadoop ecosystem projects.
• The components in blue are Cloudera-specific components.

Figure 1-1. The Cloudera Hadoop distribution

Hortonworks HDP

Hortonworks is 100-percent open source and packages stable components rather than the latest versions of the Hadoop projects in its distribution. It adds a component management console to the stack that is comparable to Cloudera Manager.

Figure 1-2 shows a Hortonworks distribution with the same classification that appeared in Figure 1-1; the difference is that the components in green are Hortonworks-specific components.

Figure 1-2. Hortonworks Hadoop distribution

As I said before, these two distributions (Hortonworks and Cloudera) are equivalent when it comes to building our architecture. Nevertheless, if we consider the maturity of each distribution, the one we should choose is Cloudera; Cloudera Manager is more complete and stable than Ambari in terms of features. Moreover, if you are considering letting users interact in real time with large data sets, you should definitely go with Cloudera, because its performance is excellent and already proven.

Hadoop Distributed File System (HDFS)

You may be wondering where the data is stored when it is ingested into the Hadoop cluster. Generally it ends up in a dedicated file system called HDFS. These are HDFS's key features:
• Distribution
• High-throughput access
• High availability
• Fault tolerance
• Tuning
• Security
• Load balancing

HDFS is the first-class citizen for data storage in a Hadoop cluster. Data is automatically replicated across the cluster data nodes. Figure 1-3 shows how the data in HDFS can be replicated over a cluster of five nodes.

Figure 1-3. HDFS data replication

You can find out more about HDFS at hadoop.apache.org.

Data Acquisition

Data acquisition or ingestion can start from different sources. It can be large log files, streamed data, ETL processing outcomes, online unstructured data, or offline structured data.

Apache Flume

When you are looking to ingest logs, I would highly recommend that you use Apache Flume; it's designed to be reliable and highly available, and it provides a simple, flexible, and intuitive programming model based on streaming data flows. Basically, you can configure a data pipeline without a single line of code, only through configuration.

Flume is composed of sources, channels, and sinks. A Flume source consumes an event from an external source, such as an Apache Avro source, and stores it into the channel. The channel is a passive storage system, such as a file system; it holds the event until a sink consumes it. The sink consumes the event, deletes it from the channel, and distributes it to an external target. Figure 1-4 describes the log flow between a web server, such as Apache, and HDFS through a Flume pipeline.

Figure 1-4. Flume architecture
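To give a feel for this configuration-only approach, the following is a minimal sketch (not taken from this book) of a Flume agent that tails a web server access log and ships the events to HDFS. The agent name, the log file path, and the HDFS URL and directory are hypothetical and would need to be adapted to your environment.

```properties
# Hypothetical Flume agent named "agent": tail a web access log and ship events to HDFS
agent.sources = r1
agent.channels = c1
agent.sinks = k1

# Source: follow the web server access log (path is an example)
agent.sources.r1.type = exec
agent.sources.r1.command = tail -F /var/log/httpd/access_log
agent.sources.r1.channels = c1

# Channel: hold events in memory until the sink consumes them
agent.channels.c1.type = memory
agent.channels.c1.capacity = 10000

# Sink: write the events into HDFS (namenode URL and path are examples)
agent.sinks.k1.type = hdfs
agent.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/access-logs
agent.sinks.k1.channel = c1
```

The whole pipeline (source, channel, sink, and their wiring) is expressed in one properties file; swapping the memory channel for a file channel, or the exec source for an Avro source, is also just a configuration change.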
With Flume, the idea is to use it to move the different log files generated by the web servers to HDFS, for example. Remember that we are likely to work on a distributed architecture that might have load balancers, HTTP servers, application servers, access logs, and so on. We can leverage all these assets in different ways, and they can all be handled by a Flume pipeline. You can find out more about Flume at flume.apache.org.

Apache Sqoop

Sqoop is a project designed to transfer bulk data between a structured data store and HDFS. You can use it either to import data from an external relational database into HDFS, Hive, or even HBase, or to export data from your Hadoop cluster to a relational database or data warehouse. Sqoop supports major relational databases such as Oracle, MySQL, and PostgreSQL. This project saves you from writing scripts to transfer the data; instead, it provides you with performant data transfer features.

Since the data can grow quickly in our relational database, it's better to identify fast-growing tables from the beginning and use Sqoop to periodically transfer the data into Hadoop so it can be analyzed. Then, from the moment the data is in Hadoop, it is combined with other data, and at the end, we can use a Sqoop export to inject the data into our business intelligence (BI) analytics tools. You can find out more about Sqoop at sqoop.apache.org.

Processing Language

Once the data is in HDFS, we use different processing languages to get the best out of our raw bulk data.

YARN: NextGen MapReduce

MapReduce was the main processing framework in the first generation of Hadoop clusters; it basically grouped sibling data together (Map) and then aggregated the data according to a specified aggregation operation (Reduce). In Hadoop 1.0, users had the option of writing MapReduce jobs in different languages: Java, Python, Pig, Hive, and so on. Whatever the users chose as a language, everyone relied on the same processing model: MapReduce.

Since Hadoop 2.0 was released, however, a new architecture has started handling data processing above HDFS. Now that YARN (Yet Another Resource Negotiator) has been implemented, other processing models are allowed and MapReduce has become just one among them. This means that users now have the ability to use a specific processing model depending on their particular use case. Figure 1-5 shows how HDFS, YARN, and the processing models are organized.

Figure 1-5. YARN structure

We can't cover all the languages and processing models here; instead, we'll focus on Hive and Spark, which cover our use cases, namely long-running batch processing and streaming.

Batch Processing with Hive

When you decide to write your first batch-processing job, you can implement it using your preferred programming language, such as Java or Python, but if you do, you had better be really comfortable with the map and reduce design pattern, which requires development time and complex coding, and is sometimes really hard to maintain. As an alternative, you can use a higher-level language, such as Hive, which brings users the simplicity and power of querying data from HDFS in a SQL-like way. Whereas you might need ten lines of code in MapReduce/Java, in Hive you need just one simple SQL query, similar to the sketch below.
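As an illustration, here is a hedged HiveQL sketch of that kind of one-query aggregation. The access_logs table and its product_id and action columns are hypothetical; the sketch assumes the clickstream logs have already been mapped to a Hive table over files in HDFS.

```sql
-- Hypothetical clickstream table: count clicks per product over the raw logs in HDFS
SELECT product_id, COUNT(*) AS clicks
FROM access_logs
WHERE action = 'click'
GROUP BY product_id
ORDER BY clicks DESC
LIMIT 10;
```

Hive translates this query into one or more MapReduce (or, in later versions, Tez or Spark) jobs for you, which is exactly the work you would otherwise hand-code.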
The main drawback of using a higher-level language rather than native MapReduce is performance. There is natural latency added by Hive on top of MapReduce; in addition, the performance of a user's SQL query can vary significantly from one query to another, as is the case in a relational database. You can find out more about Hive at hive.apache.org.

Hive is not a near real-time or real-time processing language; it's used for batch processing, such as a long-term processing job with a low priority. To process data as it comes, we need to use Spark Streaming.

Stream Processing with Spark Streaming

Spark Streaming lets you write a processing job much as you would for batch processing, in Java, Scala, or Python, but for processing data as you stream it. This can be really appropriate when you deal with high-throughput data sources such as a social network (Twitter), clickstream logs, or web access logs.

Spark Streaming is an extension of Spark; it leverages Spark's distributed data processing framework and treats streaming computation as a series of nondeterministic, micro-batch computations on small time intervals. You can find out more about Spark Streaming at spark.apache.org.

Spark Streaming can get its data from a variety of sources, but when it is combined, for example, with Apache Kafka, Spark Streaming can be the foundation of a strong, fault-tolerant, and high-performance system.

Message-Oriented Middleware with Apache Kafka

Apache Kafka is a distributed publish-subscribe messaging system originally written at LinkedIn in Scala. Kafka is often compared to Apache ActiveMQ or RabbitMQ, but the fundamental difference is that Kafka does not implement JMS (Java Message Service). However, Kafka is a persistent, high-throughput messaging system; it supports both queue and topic semantics, and it uses ZooKeeper to form the cluster of nodes.

Kafka implements the publish-subscribe enterprise integration pattern and supports parallelism and enterprise features for performance and improved fault tolerance. Figure 1-6 gives a high-level view of a typical publish-subscribe architecture, with messages transiting over a broker that serves a partitioned topic.

Figure 1-6. Kafka partitioned topic example

We'll use Kafka as a pivot point in our architecture, mainly to receive data and push it into Spark Streaming. You can find out more about Kafka at kafka.apache.org.

Machine Learning

It's never too soon to talk about machine learning in our architecture, specifically when we are dealing with use cases whose underlying model converges over time and can be illustrated with a small data sample. We can use a machine learning-specific language or leverage the existing layers, such as Spark with Spark MLlib (machine learning library).

Spark MLlib

MLlib enables machine learning for Spark. It leverages the Spark Directed Acyclic Graph (DAG) execution engine, and it brings a set of APIs that ease machine learning integration with Spark. It's composed of various algorithms, ranging from basic statistics, logistic regression, k-means clustering, and Gaussian mixtures to singular value decomposition and multinomial naive Bayes. With Spark MLlib's out-of-the-box algorithms, you can simply train on your data and build prediction models with a few lines of code. You can learn more about Spark MLlib at spark.apache.org/mllib.
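To give a feel for "a few lines of code," here is a minimal Scala sketch using the classic RDD-based MLlib API to train a k-means model. The input path and the space-separated numeric feature format are assumptions for illustration, not something defined by this chapter.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

object KMeansSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("kmeans-sketch"))

    // Hypothetical input: one line per visitor, space-separated numeric features
    val data = sc.textFile("hdfs:///data/visitor-features.txt")
      .map(line => Vectors.dense(line.split(' ').map(_.toDouble)))
      .cache()

    // Train a k-means model with 3 clusters and 20 iterations
    val model = KMeans.train(data, 3, 20)

    // Assign each visitor to a cluster; the assignments could feed a recommendation engine
    data.map(point => model.predict(point)).take(5).foreach(println)

    sc.stop()
  }
}
```

The training itself is a single call; most of the code is loading and shaping the input, which is typical of MLlib jobs.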
NoSQL Stores

NoSQL datastores are fundamental pieces of the data architecture because they can ingest a very large amount of data and provide scalability and resiliency, and thus high availability, out of the box and without effort. Couchbase and ElasticSearch are the two technologies we are going to focus on; we'll briefly discuss them now, and later in this book we'll see how to use them.

Couchbase

Couchbase is a document-oriented NoSQL database that is easily scalable, provides a flexible model, and delivers consistently high performance. We'll use Couchbase as a document datastore that complements our relational database. Basically, we'll redirect all reading queries from the front end to Couchbase to prevent high read throughput on the relational database. For more information on Couchbase, visit couchbase.com.

ElasticSearch

ElasticSearch is a NoSQL technology that is very popular for its scalable distributed indexing engine and search features. It's based on Apache Lucene and enables real-time data analytics and full-text search in your architecture.

ElasticSearch is part of the ELK platform, which stands for ElasticSearch + Logstash + Kibana, delivered by Elastic, the company. The three products work together to provide the best end-to-end platform for collecting, storing, and visualizing data:
• Logstash lets you collect data from many kinds of sources, such as social data, logs, message queues, or sensors; it then supports data enrichment and transformation, and finally it transports the data to an indexing system such as ElasticSearch.
• ElasticSearch indexes the data in a distributed, scalable, and resilient system. It's schemaless and provides libraries for multiple languages so they can easily and quickly enable real-time search and analytics in your application.
• Kibana is a customizable user interface in which you can build dashboards, from simple to complex, to explore and visualize the data indexed by ElasticSearch.

Figure 1-7 shows the structure of the Elastic products.

Figure 1-7. ElasticSearch products

As you can see in the previous diagram, Elastic also provides commercial products such as Marvel, a monitoring console based on Kibana; Shield, a security framework that, for example, provides authentication and authorization; and Watcher, an alerting and notification system. We won't use these commercial products in this book.

Instead, we'll mainly use ElasticSearch as a search engine that holds the data produced by Spark. After being processed and aggregated, the data is indexed into ElasticSearch to enable a third-party system to query the data through the ElasticSearch querying engine. On the other side, we also use ELK for processing logs and visualizing analytics, but from a platform operational point of view. For more information on ElasticSearch, visit elastic.co.

Creating the Foundation of a Long-Term Big Data Architecture

Keeping in mind all the Big Data technologies we are going to use, we can now go forward and build the foundation of our architecture.

Architecture Overview

From a high-level point of view, our architecture will look like any other e-commerce application architecture. We will need the following:
• A web application the visitor can use to navigate in a catalog of products
• A log ingestion application that is designed to pull the logs and process them
• A learning application for triggering recommendations for our visitor
• A processing engine that functions as the central processing cluster for the architecture
• A search engine to pull analytics for our processed data

Figure 1-8 shows how these different applications are organized in such an architecture.

Figure 1-8. Architecture overview
Log Ingestion Application

The log ingestion application is used to consume application logs such as web access logs. To ease the use case, a generated web access log is provided that simulates the behavior of visitors browsing the product catalog. These logs represent the clickstream logs that are used for long-term processing but also for real-time recommendation.

There can be two options in the architecture: the first can be handled by Flume, which transports the logs as they come in to our processing application; the second can be handled by ElasticSearch, Logstash, and Kibana (the ELK platform) to create access analytics. Figure 1-9 shows how the logs are handled by ELK and Flume.

Figure 1-9. Ingestion application

Using ELK for this part of the architecture gives us greater value, since the three products integrate seamlessly with each other and bring more value than using Flume alone and trying to obtain the same level of features.

Learning Application

The learning application receives a stream of data and builds predictions to optimize our recommendation engine. This application uses a basic algorithm to introduce the concept of machine learning based on Spark MLlib.

Figure 1-10 shows how the data is received by the learning application in Kafka, is then sent to Spark to be processed, and finally is indexed into ElasticSearch for further usage.

Figure 1-10. Machine learning

Processing Engine

The processing engine is the heart of the architecture; it receives data from multiple kinds of sources and delegates the processing to the appropriate model. Figure 1-11 shows how the data is received by the processing engine, which is composed of Hive for batch processing and Spark for real-time and near real-time processing.

Figure 1-11. Processing engine

Here we use Kafka combined with Logstash to distribute the data to ElasticSearch. Spark lives on top of a Hadoop cluster, although that is not mandatory. In this book, for simplicity's sake, we do not set up a Hadoop cluster, but instead run Spark in standalone mode. Obviously, however, you are able to deploy your work in your preferred Hadoop distribution.

Search Engine

The search engine leverages the data processed by the processing engine and exposes a dedicated RESTful API that will be used for analytics purposes.

Summary

So far, we have seen all the components that make up our architecture. In the next chapter, we will focus on the NoSQL part and will further explore two different technologies: Couchbase and ElasticSearch.