How PostgreSQL writes data

Dr.MohitBansal,Canada,Teacher
Published Date:26-10-2017
Understanding the Concepts of Replication

Replication is an important topic, and in order to get started, it is important to understand some basic concepts and theoretical ideas related to it. In this chapter, you will be introduced to various concepts of replication and learn which kind of replication is most suitable for which kind of practical scenario. By the end of the chapter, you will be able to judge whether a certain concept is feasible under various circumstances.

We will cover the following topics in this chapter:

• The CAP theorem and the physical limitations of replication
• Different types of replication
• Synchronous and asynchronous replication
• Sharding and data distribution

The goal of this chapter is to give you insight into the theoretical concepts. This is truly important in order to judge whether a certain customer requirement is actually technically feasible. You will be guided through the fundamental ideas and important concepts related to replication.

The CAP theorem and the physical limitations of replication

You might wonder why theory is covered in such a prominent place in a book that is supposed to be highly practical. Well, there is a very simple reason for that: some nice-looking marketing papers of some commercial database vendors might leave you with the impression that everything is possible and easy to do, without any serious limitation. This is not the case; there are physical limitations that every software vendor has to cope with. There is simply no way around the laws of nature, and shiny marketing cannot overcome them. The laws of physics and logic are the same for everybody, regardless of the power of someone's marketing department. In this section, you will be introduced to the so-called CAP theorem. Understanding the basic ideas of this theorem is essential to avoid requirements that cannot be turned into reality.
The CAP theorem was first described by Eric Brewer back in the year 2000. It has quickly developed into one of the most fundamental concepts in the database world. Especially with the rise of NoSQL database systems, Brewer's theorem (as the CAP theorem is often called) has become an important cornerstone of every distributed system.

Understanding the CAP theorem

Before we dig into the details, we need to discuss what CAP actually means. CAP is an abbreviation for the following three concepts:

• Consistency: This term indicates whether all the nodes in a cluster see the same data at the same time. A read-only node has to see all previously completed writes at any time.
• Availability: Reads and writes have to succeed all the time. In other words, a node has to be available to users at any point in time.
• Partition tolerance: This means that the system will continue to work even if arbitrary messages are lost on the way. A network partition event occurs when a system is no longer reachable (think of a network connection failure). A different way of considering partition tolerance is to think of it as message passing: if an individual system can no longer send or receive messages from other systems, it has effectively been partitioned out of the network. The guaranteed properties are maintained even when network failures prevent some machines from communicating with others.

Why are these three concepts relevant to normal users? Well, the bad news is that a replicated (or distributed) system can provide only two of these three features at the same time. It is theoretically impossible to offer consistency, availability, and partition tolerance all at once. As you will see later in this book, this can have a significant impact on which system layouts are safe and feasible to use.
There is simply no such thing as the solution to all replication-related problems. When you are planning a large-scale system, you might have to come up with different concepts, depending on your specific requirements. PostgreSQL, Oracle, DB2, and so on will provide you with CA ("consistent" and "available"), while NoSQL systems, such as MongoDB and Cassandra, will provide you with AP ("available" and "partition tolerant"). This is why NoSQL systems are often referred to as eventually consistent.

Consider a financial application. You really want it to be consistent and partition tolerant; keeping balances in sync is the highest priority. Now consider an application collecting weather data from some remote locations. If the data arrives a couple of minutes late, it is really no problem. In this case, you might want to go for AP: availability and partition tolerance might really be the most important properties here.

Depending on the use case, people have to decide what is really important and which attributes (consistency, availability, or partition tolerance) are crucial and which can be neglected. Keep in mind that there is no system which can fulfill all those wishes at the same time (neither open source nor paid software).

Understanding the limits of physics

The speed of light is not just a theoretical issue; it really does have an impact on your daily life. More importantly, it has serious implications when it comes to finding the right solution for your cluster. We all know that there is a cosmic speed limit called the speed of light. So why care? Well, let's do a simple mental experiment. Let's assume for a second that our database server is running at a 3 GHz clock speed. How far can light travel within one clock cycle of your CPU? If you do the math, you will figure out that light travels around 10 cm per clock cycle (in a pure vacuum).
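The numbers are easy to verify. The following sketch (the 3 GHz clock comes from the text; the roughly 9,000 km Europe-to-China distance is an assumed round figure) computes how far light travels per clock cycle, and the hard lower bound on a long-distance round trip, ignoring cables, routers, and switches entirely:

```python
# Back-of-the-envelope check of the speed-of-light argument.
SPEED_OF_LIGHT_M_S = 299_792_458   # speed of light in vacuum, m/s
CLOCK_HZ = 3_000_000_000           # 3 GHz CPU clock, as in the text

# Distance light covers during a single clock cycle:
cm_per_cycle = SPEED_OF_LIGHT_M_S / CLOCK_HZ * 100
print(f"{cm_per_cycle:.1f} cm per clock cycle")   # -> 10.0 cm

# Lower bound on a round trip to a server ~9,000 km away (assumed distance,
# pure vacuum, no network equipment): physics alone costs tens of milliseconds.
distance_m = 9_000_000
round_trip_ms = 2 * distance_m / SPEED_OF_LIGHT_M_S * 1000
print(f"{round_trip_ms:.0f} ms minimum round trip")  # -> 60 ms
```

No real link gets anywhere near this bound, so actual round trips are considerably worse.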
We can safely assume that an electric signal inside a CPU will be a lot slower than pure light in a vacuum. The core idea is this: 10 cm in one clock cycle is not much at all. For the sake of our mental experiment, let's now consider various distances:

• The distance from one end of the CPU to the other
• The distance from your server to some other server next door
• The distance from your server in Central Europe to a server somewhere in China

Considering the size of a CPU core on a die, you can assume that you can send a signal (even if it is not traveling anywhere close to the speed of light) from one part of the CPU to some other part quite quickly. It simply won't take 1 million clock cycles to add two numbers that are already in your CPU's first-level cache.

But what happens if you have to send a signal from one server to another server and back? You can safely assume that sending a signal from server A to server B next door takes a lot longer, because the cable is simply a lot longer. In addition, network switches and other network components add some latency as well. We are talking about the length of the cable here, not about its bandwidth.

Sending a message (or a transaction) from Europe to China is, of course, many times more time-consuming than sending some data to a server next door. Again, the important point is that the amount of data is not as relevant as the so-called latency. Consider the following:

• Long-distance transmission: To explain the concept of latency, let's use a very simple example. Let's assume you are a European sending a letter to China. You will easily accept the fact that the size of your letter is not the limiting factor here. It makes absolutely no difference whether your letter is two or 20 pages long; the time it takes to reach the destination is basically the same. Also, it makes no difference whether you send one, two, or 10 letters at the same time.
Given a reasonable number of letters, the size of the aircraft required (that is, the bandwidth) to ship them to China is usually not the problem. However, the so-called round trip might very well be an issue. If you rely on the response to your letter from China to continue your work, you will soon find yourself waiting for a long time.

• Why latency matters: Latency is an important issue. If you send a chunk of data from Europe to China, you should avoid waiting for the response. But if you send a chunk of data from your server to a server in the same rack, you might be able to wait for the response, because the electronic signal will simply be fast enough to make it back in time.

The basic problems of latency described in this section are not PostgreSQL-specific. The very same concepts and physical limitations apply to all types of databases and systems. As mentioned before, this fact is sometimes silently hidden and neglected in shiny commercial marketing papers. Nevertheless, the laws of physics stand firm. This applies to both commercial and open source software. The most important point to keep in mind is that bandwidth is not always the magical fix for a performance problem in a replicated environment. In many setups, latency is at least as important as bandwidth.

Different types of replication

Now that you are armed with a basic understanding of the physical and theoretical limitations, it is time to learn about the different types of replication. It is important to have a clear picture of these types to make sure that the right choice can be made and the right tool can be chosen. In this section, synchronous as well as asynchronous replication will be covered.

Synchronous versus asynchronous replication

Let's dig into some important concepts now. The first distinction we can make is whether to replicate synchronously or asynchronously. What does this mean?
Let's assume we have two servers and we want to replicate data from one server (the master) to a second server (the slave). The following diagram illustrates the concept of synchronous and asynchronous replication.

We can use a simple transaction like this one:

BEGIN;
INSERT INTO foo VALUES ('bar');
COMMIT;

In the case of asynchronous replication, the data can be replicated after the transaction has been committed on the master. In other words, the slave is never ahead of the master; in the case of writing, it is usually a little behind the master. This delay is called lag.

Synchronous replication enforces stricter consistency rules. If you decide to replicate synchronously (how this is done in practice will be discussed in Chapter 5, Setting Up Synchronous Replication), the system has to ensure that the data written by the transaction is on at least two servers at the time the transaction commits. This implies that the slave does not lag behind the master and that the data seen by the end users is identical on both servers.

Some systems will also use a quorum server to decide, so it is not always about just two or more servers. If a quorum is used, more than half of the servers must agree on an action inside the cluster.

Considering performance issues

As you learned earlier in the section about the speed of light and latency, sending unnecessary messages over the network can be expensive and time-consuming. If a transaction is replicated in a synchronous way, PostgreSQL has to make sure that the data reaches the second node, and this will lead to latency issues.

Synchronous replication can be more expensive than asynchronous replication in many ways, and therefore people should think twice about whether this overhead is really needed and justified. In the case of synchronous replication, confirmations from a remote server are needed, which of course causes some additional overhead.
A lot has been done in PostgreSQL to reduce this overhead as much as possible. However, it is still there. Use synchronous replication only when it is really needed.

Understanding replication and data loss

When a transaction is replicated from a master to a slave, many things have to be taken into consideration, especially when it comes to data loss. Let's assume that we are replicating data asynchronously in the following manner:

1. A transaction is sent to the master.
2. It commits on the master.
3. The master dies before the commit is sent to the slave.
4. The slave will never get this transaction.

In the case of asynchronous replication, there is a window (the lag) during which data can essentially be lost. The size of this window might vary depending on the type of setup. It can be very short (maybe a couple of milliseconds) or long (minutes, hours, or days). The important fact is that data can be lost. A small lag only makes data loss less likely; any lag larger than zero is susceptible to data loss. If data can be lost, we are sacrificing the consistency part of CAP (if two servers don't have the same data, they are out of sync).

If you want to make sure that data can never be lost, you have to switch to synchronous replication. As you have already seen in this chapter, a synchronous transaction is synchronous because it is valid only once it has been committed to at least two servers.

Single-master versus multimaster replication

A second way to classify replication setups is to distinguish between single-master and multimaster replication. "Single-master" means that writes can go to exactly one server, which distributes the data to the slaves inside the setup. Slaves may serve only reads, never writes. In contrast to single-master replication, multimaster replication allows writes to all the servers inside a cluster.
The following diagram shows how things work at a conceptual level.

The ability to write to any node inside the cluster sounds like an advantage, but it is not necessarily one. The reason is that multimaster replication adds a lot of complexity to the system. In the case of only one master, it is totally clear which data is correct and in which direction data will flow, and there are rarely conflicts during replication. Multimaster replication is quite different, as writes can go to many nodes at the same time, and the cluster has to be perfectly aware of conflicts and handle them gracefully. An alternative would be to use locks to solve the problem, but this approach has its own challenges. Keep in mind that the need to resolve conflicts will cause network traffic, and this can instantly turn into scalability issues caused by latency.

Logical versus physical replication

One more way of classifying replication is to distinguish between logical and physical replication. The difference is subtle but highly important:

• Physical replication means that the system moves the data as is to the remote box. So, if something is inserted, the remote box gets the data in binary format, not via SQL.
• Logical replication means that a change, which is equivalent to the data coming in, is replicated.

Let's look at an example to fully understand the difference:

test=# CREATE TABLE t_test (t date);
CREATE TABLE
test=# INSERT INTO t_test VALUES (now()) RETURNING *;
     t
------------
 2013-02-08
(1 row)

INSERT 0 1

We see two transactions going on here. The first transaction creates a table. Once this is done, the second transaction adds a simple date to the table and commits. In the case of logical replication, the change is sent to some sort of queue in logical form, so the system does not send the plain SQL that was executed, but maybe something like this:

test=# INSERT INTO t_test VALUES ('2013-02-08');
INSERT 0 1

Note that the function call has been replaced with the real value.
It would be a total disaster if the slave were to calculate now() once again, because the date on the remote box might be a totally different one. Some systems do use statement-based replication as their core technology. MySQL, for instance, replicates through its so-called bin-log, which is actually not all that binary but more like a form of logical replication. Of course, there are also counterparts in the PostgreSQL world, such as pgpool, Londiste, and Bucardo.

Physical replication works in a totally different way; instead of sending over some SQL (or something else) that is logically equivalent to the changes made, the system sends the binary changes made by PostgreSQL internally. Here are some of the binary changes our two transactions might have triggered (by far not a complete list):

1. Added an 8 K block to pg_class and put a new record there (to indicate that the table is present)
2. Added rows to pg_attribute to store the column names
3. Performed various changes inside the indexes on those tables
4. Recorded the commit status, and so on

The goal of physical replication is to create a copy of your system that is (largely) identical on the physical level. This means that the same data will be in the same place inside your tables on all boxes. In the case of logical replication, the content should be identical, but it makes no difference whether it is in the same place or not.

When to use physical replication

Physical replication is very convenient to use and especially easy to set up. It is widely used when the goal is to have identical replicas of your system (to have a backup or to simply scale up). In many setups, physical replication is the standard method that exposes the end user to the lowest possible complexity. It is ideal for scaling out the data.

When to use logical replication

Logical replication is usually a little harder to set up, but it offers greater flexibility.
It is also especially important when it comes to upgrading an existing database. Physical replication is totally unsuitable for version jumps, because you cannot rely on every version of PostgreSQL having the same on-disk layout. The storage format might change over time, and therefore a binary copy is clearly not feasible for a jump from one version to the next. Logical replication decouples the way data is stored from the way it is transported and replicated. Using a neutral protocol, which is not bound to any specific version of PostgreSQL, makes it easy to jump from one version to the next.

Since PostgreSQL 9.4, there is a feature called logical decoding. It allows users to extract the internal changes sent to the XLOG as SQL again. Logical decoding will be needed for a couple of the replication techniques outlined in this book.

Using sharding and data distribution

In this section, you will learn about basic scalability techniques, such as database sharding. Sharding is widely used in high-end systems and offers a simple and reliable way to scale out a setup. In recent years, it has become a standard way to scale up professional systems.

Understanding the purpose of sharding

What happens if your setup grows beyond the capacity of a single machine in a single-master setup? What if you want to run so many transactions that one server is simply not able to keep up with them? Let's assume you have millions of users and tens of thousands of them want to perform a certain task at the same time. Clearly, at some point, you cannot buy servers big enough to handle an infinite load anymore. It is simply impossible to run a Facebook- or Google-like application on a single server. At some point, you have to come up with a scalability strategy that serves your needs. This is where sharding comes into play. The idea of sharding is simple: what if you could split the data in such a way that it can reside on different nodes?
Designing a sharded system

To demonstrate the basic concept of sharding, let's assume the following scenario: we want to store information about millions of users, each of whom has a unique user ID. Let's further assume that we have only two servers. In this case, we could store even user IDs on server 1 and odd user IDs on server 2. The following diagram shows how this can be done.

As you can see in the diagram, we have nicely distributed the data. Once this is done, we can send a query to the system, as follows:

SELECT * FROM t_user WHERE id = 4;

The client can easily figure out where to find the data by inspecting the filter in the query. In our example, the query will be sent to the first node, because we are dealing with an even number. As we have distributed the data based on a key (in this case, the user ID), we can search for any person easily if we know the key. In large systems, referring to users through a key is common practice, so this approach is suitable.

By using this simple approach, we have also easily doubled the number of machines in our setup. When designing a system, we can come up with an arbitrary number of servers; all we have to do is invent a clever partitioning function to distribute the data inside our server farm. If we want to split the data between 10 servers, how about using user ID % 10 as the partitioning function? If you are interested in sharding, consider checking out shard_manager, which is available on PGXN.

When you are trying to break up data and store it on different hosts, always make sure that you are using a proper partitioning function. It can be very beneficial to split the data in such a way that each host holds more or less the same amount of it. Splitting users alphabetically might not be a good idea, because not all letters are equally likely.
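The even/odd split and the user ID % 10 idea boil down to a single modulo operation. A minimal sketch (the helper name is ours, not from any library):

```python
def partition(user_id: int, num_servers: int) -> int:
    """Map a user ID to a partition number via the modulo partitioning function."""
    return user_id % num_servers

# Two servers: even IDs map to partition 0 (the first node), odd IDs to partition 1.
print(partition(4, 2))    # -> 0: the query for id = 4 goes to the first node
print(partition(5, 2))    # -> 1: odd IDs live on the second node
print(partition(7, 10))   # -> 7: the same idea with ten servers (user ID % 10)
```

The client can evaluate this function locally, so no extra network round trip is needed to locate the data.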
We cannot simply assume that the letters from A to M occur as often as the letters from N to Z. This can be a major issue if you want to distribute a dataset across a thousand servers rather than just a handful of machines. As stated before, it is essential to have a proper partitioning function that produces evenly distributed results. In many cases, a hash function will provide you with nicely and evenly distributed data. This can be especially useful when working with character fields (such as names, e-mail addresses, and so on).

Querying different fields

In the previous section, you saw how we can easily query a person using their key. Let's take this a little further and see what happens if the following query is used:

SELECT * FROM t_test WHERE name = 'Max';

Remember that we distributed the data using the ID. In this query, however, we are searching for the name. The application will have no idea which partition to use, because there is no rule telling us what is where. As a logical consequence, the application has to ask every partition for the name. This might be acceptable if looking up the name were a rare corner case, but we cannot rely on that. Having to ask many servers instead of one is clearly a serious deoptimization, and therefore not acceptable.

We have two ways to approach the problem: coming up with a cleverer partitioning function, or storing the data redundantly. Coming up with a cleverer partitioning function would surely be the best option, but it is rarely possible if you want to query different fields. This leaves us with the second option: storing the data redundantly. Storing a set of data twice, or even more often, is not uncommon, and it is actually a good way to approach the problem. The following diagram shows how this can be done.

As you can see, there are two clusters in this scenario. When a query comes in, the system has to decide which data can be found on which node.
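The hash-based even distribution suggested above for character keys can be sketched as follows (a hypothetical helper built on Python's standard hashlib; a stable hash is used deliberately, since Python's built-in hash() is randomized per process):

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Pick a shard for a character key by hashing it first, so shards
    receive roughly equal shares regardless of first letters."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Names starting with common letters no longer pile up on the same shard:
for name in ["Max", "Anna", "Maria", "Martin", "Zoe"]:
    print(name, "->", shard_for(name, 10))
```

Note that this addresses even distribution only; it does not solve the multi-field query problem, which is why redundant storage remains necessary.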
For cases where the name is queried, we have (for the sake of simplicity) simply split the data in half alphabetically. In the first cluster, however, our data is still split by user ID.

Pros and cons of sharding

One important thing to understand is that sharding is not a simple one-way street. If someone decides to use sharding, it is essential to be aware of the upsides as well as the downsides of the technology. As always, there is no Holy Grail that magically solves all problems out of the box without any thought. Each practical use case is different, and there is no replacement for common sense and deep thinking.

First, let's take a look at the pros of sharding:

• It has the ability to scale a system beyond one server
• It is a straightforward approach
• It is widely supported by various frameworks
• It can be combined with various other replication approaches
• It works nicely with PostgreSQL (for example, using PL/Proxy)

Light and shade tend to go together, so sharding also has its downsides:

• Adding servers on the fly can be far from simple (depending on the type of partitioning function)
• Your flexibility might be seriously reduced
• Not all types of queries will be as efficient as they would be on a single server
• There is an increase in the overall complexity of the setup (failover, resyncing, maintenance, and so on)
• Backups need more planning
• You might face redundancy and additional storage requirements
• Application developers need to be aware of sharding to make sure that efficient queries are written

In Chapter 13, Scaling with PL/Proxy, we will discuss how you can efficiently use sharding along with PostgreSQL, and how to set up PL/Proxy for maximum performance and scalability.

Choosing between sharding and redundancy

Learning how to shard a table is only the first step in designing a scalable system architecture.
In the example shown in the previous section, we had only one table, which could be distributed easily using a key. But what if we have more than one table? Let's assume we have two tables:

• A table called t_user for storing the users in our system
• A table called t_language for storing the languages supported by our system

We might be able to partition the t_user table nicely and split it in such a way that it can reside on a reasonable number of servers. But what about the t_language table? Our system might support as many as 10 languages. It can make perfect sense to shard and distribute hundreds of millions of users, but splitting up 10 languages is clearly useless. In addition, we might need our language table on all nodes so that we can perform joins.

One solution to the problem is simple: keep a full copy of the language table on every node. This will not cause a storage-consumption problem, because the table is so small. Of course, there are many different ways to attack the problem, but a rule of thumb is to make sure that only large tables are sharded. For small tables, full replicas might make much more sense. Again, every case has to be thought through thoroughly.

Increasing and decreasing the size of a cluster

So far, we have always considered the size of a sharded setup to be constant. We have designed sharding in a way that allowed us to utilize a fixed number of partitions inside our cluster. This limitation might not reflect everyday requirements. How can you really tell for certain how many nodes will be needed at the time a setup is designed? People might have a rough idea of the hardware requirements, but actually knowing how much load to expect is more art than science. To reflect this, you have to design a system in such a way that it can be resized easily.

A commonly made mistake is that people tend to increase the size of their setup in unnecessarily small steps.
Somebody might want to move from five to maybe six or seven machines. This can be tricky. Let's assume for a second that we have split the data using user ID % 5 as the partitioning function. What if we want to move to user ID % 6? This is not so easy; the problem is that we have to rebalance the data inside our cluster to reflect the new rules.

Remember that we introduced sharding (that is, partitioning) because we have so much data and so much load that one server cannot handle the requests anymore. If we now come up with a strategy that requires rebalancing data, we are already on the wrong track. You definitely don't want to rebalance 20 TB of data just to add two or three servers to your existing system.

Practically, it is a lot easier to simply double the number of partitions. Doubling your partitions does not require rebalancing data, because you can simply follow this strategy:

1. Create a replica of each partition
2. Delete half of the data on each partition

If your partitioning function was user ID % 5 before, it should be user ID % 10 afterwards. The advantage of doubling is that data does not have to move between partitions. When it comes to doubling, users might argue that the size of the cluster might increase too rapidly. This is true, but if you are running out of capacity, adding 10 percent to your resources won't fix your scalability problem anyway.

Instead of just doubling your cluster (which is fine in most cases), you can also give more thought to writing a more sophisticated partitioning function that leaves the old data in place but handles more recent data more intelligently. Time-dependent partitioning functions can cause issues of their own, but the path may be worth investigating.

Some NoSQL systems use range partitioning to spread out data. Range partitioning means that each server holds a fixed slice of data for a given time frame.
This can be beneficial if you want to perform time series analysis or something similar. However, it can be counterproductive if you want to make sure that data is split evenly.

If you expect your cluster to grow, we recommend starting with more partitions than are initially necessary, and packing more than one partition onto a single server. Later on, it will be easy to move single partitions to additional hardware joining the cluster. Some cloud services are able to do that, but those aspects are not covered in this book. To shrink your cluster again, you can simply apply the opposite strategy and move more than one partition onto a single server. This leaves the door wide open for a future increase in the number of servers, and can be done fairly easily.

Consistent hashing is another approach to distributing data. This technique is widely used in NoSQL systems and allows clusters to be extended in a more flexible way; of course, the same technique can also be used with PostgreSQL. Let's assume we have three servers (A, B, and C). With consistent hashing, an array of, say, 1,000 slots is created. Then each server name is hashed a couple of times and entered into the array. The result might look like this:

43 B, 153 A, 190 C, 340 A, 450 C, 650 B, 890 A, 930 C, 980 B

Each server shows up multiple times. In the case of an insertion, we take the input key and calculate a hash value. Let's assume hash(some input value) equals 58. The result will go to server A. Why? There is no entry for 58, so the system moves forward in the list, and the first valid entry is 153, which points to A. If the hash value returned 900, the data would end up on C. Again, there is no entry for 900, so the system has to move forward in the list until something is found. If a new server D is added, new values for it will be added to the array (D might be at 32, 560, 940, and so on). The system has to rebalance some data, but of course not all of the data.
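The lookup just described can be sketched in a few lines. The slot array is the one from the text; a binary search finds the first slot at or after the hash value, wrapping around to the start of the ring if necessary:

```python
import bisect

# Slot array from the text: (position, server) pairs, sorted by position.
ring = [(43, "B"), (153, "A"), (190, "C"), (340, "A"), (450, "C"),
        (650, "B"), (890, "A"), (930, "C"), (980, "B")]
positions = [pos for pos, _ in ring]

def server_for(hash_value: int) -> str:
    """Move forward to the first slot >= hash_value; wrap around if none exists."""
    i = bisect.bisect_left(positions, hash_value)
    return ring[i % len(ring)][1]

print(server_for(58))   # -> A (first entry after 58 is 153, pointing to A)
print(server_for(900))  # -> C (first entry after 900 is 930, pointing to C)
print(server_for(990))  # -> B (past the end, so we wrap around to slot 43)
```

Adding server D would just mean inserting its slots into the sorted ring; only keys falling between a new slot and its predecessor have to move.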
This is a major advantage over a simple hashing mechanism, such as a plain key % number_of_servers function. Reducing the amount of data to be resharded is highly important, and the main advantage of consistent hashing is that it scales a lot better than the simpler approaches.

Combining sharding and replication

Once data has been broken down into useful chunks that can be handled by one server or partition, we have to think about how to make the entire setup more reliable and fail-safe. The more servers you have in your setup, the more likely it is that one of them will be down or unavailable for some other reason. Always avoid single points of failure when designing a highly scalable system.

In order to ensure maximum throughput and availability, we can again turn to redundancy. The design approach can be summed up in a simple formula, which should always be in the back of a system architect's mind: "One is none and two is one."

One server is never enough to provide high availability. Every system needs a backup that can take over in the event of a serious emergency. By merely splitting a set of data, we definitely do not improve availability, because we now have more servers that can fail. To fix this problem, we can add replicas to each of our partitions (shards), as shown in the following diagram.

Each partition is a separate PostgreSQL database instance, and each of those instances can have its own replica (or replicas). Essentially, it is the same concept as in a RAID 1+0 setup on the hardware side. Keep in mind that you can choose from the entire arsenal of features and options discussed in this book (for example, synchronous and asynchronous replication). All the strategies outlined in this book can be combined flexibly. A single technique is usually not enough, so feel free to combine various technologies in different ways to achieve your goals.
Various sharding solutions

In recent years, sharding has emerged as an industry-standard solution to many scalability-related problems. Thus, many programming languages, frameworks, and products already provide out-of-the-box support for sharding.

When implementing sharding, you can basically choose between two strategies:

• Rely on some framework or middleware
• Rely on PostgreSQL's own means to solve the problem

In the following chapters, we will discuss both options briefly. This little overview is not meant to be a comprehensive guide, but rather an overview to get you started with sharding.

PostgreSQL-based sharding

PostgreSQL cannot shard data out of the box, but it has all the interfaces and means required to allow sharding through add-ons. One of these add-ons, which is widely used, is called PL/Proxy. It has been around for many years and offers superior transparency as well as good scalability. It was originally developed by Skype to scale up their infrastructure. The idea behind PL/Proxy is basically to use a local virtual table to hide an array of servers making up the table. PL/Proxy will be discussed in depth in Chapter 13, Scaling with PL/Proxy.

Summary

In this chapter, you learned about basic replication-related concepts as well as physical limitations. We dealt with the theoretical concepts that are the groundwork for what is still to come in this book. In the next chapter, you will be guided through the PostgreSQL transaction log, and we will outline all the important aspects of this vital component. You will learn what the transaction log is good for and how it can be applied.
