How to install and configure Slony replication for PostgreSQL

Dr. Mohit Bansal, Canada, Teacher
Published: 26-10-2017
Configuring Slony

Slony is one of the most widespread replication solutions in the field of PostgreSQL. It is not only one of the oldest replication implementations, it is also the one with the most extensive support from external tools such as pgAdmin3. For many years, Slony was the only viable solution for replicating data in PostgreSQL. Fortunately, the situation has changed and a handful of new technologies have emerged. In this chapter, we will take a deep look at Slony, and you will learn how to integrate Slony into your replication setup. You will also find out which problems you can solve using Slony. A wide variety of use cases and examples will be covered.

The following topics will be covered in this chapter:

• Installing Slony
• The Slony system architecture
• Replicating tables
• Deploying DDLs
• Handling failovers
• Common pitfalls

Installing Slony

To install Slony, first download the most recent tarball from http://slony.info. As always, we will perform a source installation. For the purpose of this chapter, we have used the version of Slony available at http://main.slony.info/downloads/2.2/source/slony1-2.2.4.tar.bz2.

Once we have downloaded the package, we can extract it by running the following command:

tar xvfj slony1-2.2.4.tar.bz2

The tarball will inflate, and we can move forward and compile the code. To build the code, we must tell Slony where to look for pg_config. The purpose of pg_config is to provide the add-on module with all of the information about the build process of PostgreSQL itself. In this way, we can ensure that PostgreSQL and Slony are compiled in the same way. The pg_config program is quite common and is widely used by many tools built on top of PostgreSQL. In our demo setup, PostgreSQL resides in /usr/local/pgsql, so we can safely assume that pg_config resides in the bin directory of that installation:

./configure --with-pgconfigdir=/usr/local/pgsql/bin
make
sudo make install

Once we have executed configure, we can compile the code by calling make. Then we can switch to root (if PostgreSQL has been installed as root) and install the binaries in their final destination. The process is similar to that for other external tools built on top of PostgreSQL.

Understanding how Slony works

Before we start replicating our first database, we have to dive into Slony's architecture. It is important to understand how it works; otherwise, it will be next to impossible to use this software in a useful and reasonable way.

In contrast to transaction log streaming, Slony uses logical replication. This means that it does not ship internal binary data (such as the XLOG) but a logical representation of the data (in the case of Slony, plain text). Using textual data instead of the built-in transaction log has some advantages, but also some downsides, which will be discussed in detail in this chapter.

Don't mix up this logical representation of data with PostgreSQL 9.4's replication slots. Slony is not based on the PostgreSQL transaction log.

Dealing with logical replication

First of all, we have to discuss what logical replication really means. The backbone of every Slony setup is the so-called changelog trigger. Whenever Slony has to replicate the contents of a table, it creates a trigger on that table. This trigger then stores all changes made to the table in a log. A process called slon will later inspect this changelog and replicate those changes to the consumers.
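Before walking through the algorithm step by step, it can help to picture what such a changelog trigger does. The following sketch is purely illustrative and is not Slony's actual trigger code; every name in it (t_demo, t_demo_log, demo_logtrigger) is invented for this example, and Slony's real log tables (sl_log_*) and trigger functions look quite different.

-- a hand-rolled changelog trigger, for illustration only
CREATE TABLE t_demo (id serial PRIMARY KEY, payload text);

CREATE TABLE t_demo_log (
    log_id    bigserial PRIMARY KEY,
    operation text,                       -- 'I', 'U' or 'D'
    row_id    int,
    row_data  text,
    logged_at timestamptz DEFAULT now()
);

CREATE OR REPLACE FUNCTION demo_logtrigger() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        -- remember which row disappeared
        INSERT INTO t_demo_log (operation, row_id, row_data)
        VALUES ('D', OLD.id, NULL);
        RETURN OLD;
    ELSE
        -- 'INSERT' -> 'I', 'UPDATE' -> 'U'
        INSERT INTO t_demo_log (operation, row_id, row_data)
        VALUES (left(TG_OP, 1), NEW.id, NEW.payload);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER demo_logtrigger
    AFTER INSERT OR UPDATE OR DELETE ON t_demo
    FOR EACH ROW EXECUTE PROCEDURE demo_logtrigger();

A daemon (in Slony's case, slon) then only has to read the log table in order, apply the recorded changes on the consumers, and clean up the rows it has already processed.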
Let's take a look at the basic algorithm:

INSERT INTO table (name, tstamp) VALUES ('hans', now());

The trigger fires, the data ('hans', '2015-02-26 13:26:02') along with some bookkeeping information is stored in the log table, and the transaction COMMITs. After some time, the following will happen:

1. The slon daemon comes along and reads all changes since the last commit.
2. All changes are replayed on the slaves.
3. Once this is done, the log can be deleted.

[Diagram: the overall architecture of Slony]

Keep in mind that the transport protocol is pure text. The main advantage here is that there is no need to run the same version of PostgreSQL on every node in the cluster, because Slony abstracts away the version number. We cannot achieve this with transaction log shipping: in the case of XLOG-based replication, all nodes in the cluster must use the same major version of PostgreSQL. The downside of Slony is that data replicated to a slave has to go through an expensive parsing step. In addition, polling sl_log_ and firing triggers is expensive. Therefore, Slony certainly cannot be considered a high-performance replication solution.

The changelog is written only for certain tables. This also means that we don't have to replicate all of those tables at the same time; it is possible to replicate just a subset of the tables on a node. Because Slony is fairly independent of the PostgreSQL version, it can be used effectively for upgrade purposes. Of course, this is not a one-line process, but it is definitely a feasible option.

The slon daemon

As we have already stated, the slon daemon is in charge of picking up the changes made to a specific table or set of tables and transporting those changes to the desired destinations.

[Diagram: the basic architecture of Slony]

To make this work, we have to run exactly one slon daemon per database in our cluster. Note that we are talking about one slon daemon per database, not per instance. This is important to take into account when doing the actual setup.

As each database has its own slon daemon, these processes communicate with each other to exchange and dispatch data. Individual slon daemons can also act as relays and simply pass data on. This is important if you want to replicate data from database A to database C through database B. The idea is similar to what can be achieved with streaming replication and cascading replicas.

An important point about Slony is that there is no need to replicate an entire instance or an entire database. Replication is always related to a table or a group of tables. For each table (or group of tables), one database serves as the master, while as many databases as desired serve as slaves for that particular set of tables. It might very well be the case that one database node is the master of tables A and B while another database is the master of tables C and D. In other words, Slony allows data to be replicated back and forth. Which data flows from where to where is managed by the slon daemon.

The slon daemon itself consists of various threads serving different purposes, such as cleaning up, listening for events, and applying changes to a server. In addition, it performs synchronization-related tasks.

To interface with the slon daemon, you can use the command-line tool called slonik. It is able to interpret scripts and talk to Slony directly.
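Throughout this chapter, slonik commands will be wrapped in small shell scripts that pipe the commands into slonik via a here-document. As a quick orientation, the two usual ways of running slonik look roughly like this (the file name demo.slonik is a placeholder, and the echo statement is simply slonik's way of printing a message; consult the slonik documentation of your version for its exact command set):

# feed commands to slonik on standard input ...
slonik <<_EOF_
cluster name = first_cluster;
node 1 admin conninfo = 'dbname=db1 host=localhost user=hs';
echo 'slonik can reach the cluster';
_EOF_

# ... or keep the same commands in a file and pass it as an argument
slonik demo.slonik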
Replicating your first database

After this little introduction, we can move forward and replicate our first database. To do so, we create two databases in a database instance and simply replicate between these two databases. It makes no difference whether you replicate within an instance or between two instances; it works exactly the same way. Creating the two databases should be an easy task once your instance is up and running:

hs@hs-VirtualBox:~$ createdb db1
hs@hs-VirtualBox:~$ createdb db2

Now we can create a table that should be replicated from database db1 to database db2:

db1=# CREATE TABLE t_test (id serial, name text, PRIMARY KEY (id));
NOTICE:  CREATE TABLE will create implicit sequence "t_test_id_seq" for serial column "t_test.id"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "t_test_pkey" for table "t_test"
CREATE TABLE

Create this table in both databases in an identical manner, because the table structure won't be replicated automatically. Replicating data structures and adding new tables to the system are not possible out of the box.

Once this has been done, we can write a slonik script to tell the cluster about our two nodes. The slonik interface is a command-line interface that we use to talk to Slony directly. You can also work with it interactively, but this is far from comfortable. In fact, it is really hard, and in the past we have rarely seen people do this in production environments.

A script used to register these nodes would look as follows:

#!/bin/sh

MASTERDB=db1
SLAVEDB=db2
HOST1=localhost
HOST2=localhost
DBUSER=hs

slonik <<_EOF_
cluster name = first_cluster;

# define nodes (this is needed by pretty much all slonik scripts)
node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';

# init cluster
init cluster ( id=1, comment = 'Master Node');

# group tables into sets
create set (id=1, origin=1, comment='Our tables');
set add table (set id=1, origin=1, id=1, fully qualified name = 'public.t_test', comment='sample table');

store node (id=2, comment = 'Slave node', event node=1);
store path (server = 1, client = 2, conninfo='dbname=$MASTERDB host=$HOST1 user=$DBUSER');
store path (server = 2, client = 1, conninfo='dbname=$SLAVEDB host=$HOST2 user=$DBUSER');
_EOF_

First of all, we define a handful of environment variables. This is not strictly necessary, but it is quite handy to make sure that nothing is forgotten if something changes. Then our slonik script starts. The first thing we have to do is define a cluster name. This is important: in Slony, a cluster is more of a virtual thing and is not necessarily related to physical hardware. We will find out what this means later on, when talking about failovers. The cluster name is also used to prefix schema names in Slony.

In the next step, we define the nodes of this cluster. The idea is that each node has a number associated with a connection string. Once this has been done, we can call init cluster. During this step, Slony deploys all of the infrastructure needed to perform replication; we don't have to install anything manually here.

Now that the cluster has been initialized, we can organize our tables into replication sets, which are simply groups of tables. In Slony, we always work with replication sets: tables are grouped into sets and replicated together. This layer of abstraction allows us to quickly move groups of tables around.
In many cases, this is a lot easier than moving individual tables one by one.

Finally, we have to define paths. What is a path? A path is basically the connection string used to move from A to B. The main question here is: why are paths needed at all? We have already defined nodes earlier, so why define paths? The answer is that the route from A to B is not necessarily the same as the route from B to A. This is especially important if one of these servers is inside a DMZ while the other one is not. In other words, by defining paths, you can easily replicate between different private networks and cross firewalls, performing some NAT if necessary.

As the script is a simple shell script, we can easily execute it:

hs@hs-VirtualBox:~/slony$ sh slony_first.sh

Slony has done some work in the background. By looking at our test table, we can see what has happened:

db1=# \d t_test
                            Table "public.t_test"
 Column |  Type   |                      Modifiers
--------+---------+-----------------------------------------------------
 id     | integer | not null default nextval('t_test_id_seq'::regclass)
 name   | text    |
Indexes:
    "t_test_pkey" PRIMARY KEY, btree (id)
Triggers:
    _first_cluster_logtrigger AFTER INSERT OR DELETE OR UPDATE ON t_test FOR EACH ROW EXECUTE PROCEDURE _first_cluster.logtrigger('_first_cluster', '1', 'k')
    _first_cluster_truncatetrigger BEFORE TRUNCATE ON t_test FOR EACH STATEMENT EXECUTE PROCEDURE _first_cluster.log_truncate('1')
Disabled triggers:
    _first_cluster_denyaccess BEFORE INSERT OR DELETE OR UPDATE ON t_test FOR EACH ROW EXECUTE PROCEDURE _first_cluster.denyaccess('_first_cluster')
    _first_cluster_truncatedeny BEFORE TRUNCATE ON t_test FOR EACH STATEMENT EXECUTE PROCEDURE _first_cluster.deny_truncate()

A handful of triggers have been deployed automatically to keep track of changes; each type of event is covered by a trigger. Now that this table is under Slony's control, we can start replicating it. To do so, we have to come up with a slonik script again:

#!/bin/sh

MASTERDB=db1
SLAVEDB=db2
HOST1=localhost
HOST2=localhost
DBUSER=hs

slonik <<_EOF_
cluster name = first_cluster;

node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';

subscribe set ( id = 1, provider = 1, receiver = 2, forward = no);
_EOF_

After stating the cluster name and listing the nodes, we call subscribe set. The point here is that, in our example, set number 1 is replicated from node 1 (provider) to node 2 (receiver). The forward keyword is worth mentioning: it indicates whether the new subscriber should store the log information during replication, which makes it possible for future nodes to use this node as their provider. Any node that is intended to be a failover candidate must have forward = yes. In addition, this keyword is essential for cascaded replication (meaning A replicates to B, and B replicates to C).

If you execute this script, Slony will truncate the table on the slave and reload all of the data to make sure that things are in sync. In many cases, you may already know that the two sides are in sync, and you will want to avoid copying gigabytes of data over and over again. To achieve this, we can add OMIT COPY = yes, which tells Slony that we are sufficiently confident that the data is already in sync. This feature has been supported since Slony 2.0.
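To make the last point concrete, a variant of the subscribe script with the initial copy skipped might look like the sketch below. Treat the exact spelling of the option as an assumption and check the SUBSCRIBE SET documentation of your Slony release; and only use it if you are certain that master and slave already contain identical data:

slonik <<_EOF_
cluster name = first_cluster;

node 1 admin conninfo = 'dbname=db1 host=localhost user=hs';
node 2 admin conninfo = 'dbname=db2 host=localhost user=hs';

# skip the initial COPY -- only safe if both sides are already in sync
subscribe set ( id = 1, provider = 1, receiver = 2, forward = no, omit copy = yes);
_EOF_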
After defining what we want to replicate, we can fire up those two slon daemons in our favorite Unix shell:

slon first_cluster 'host=localhost dbname=db1'
slon first_cluster 'host=localhost dbname=db2'

This can also be done before we define the replication route, so the order is not of primary concern here. Now we can check whether replication is working nicely:

db1=# INSERT INTO t_test (name) VALUES ('anna');
INSERT 0 1
db1=# SELECT * FROM t_test;
 id | name
----+------
  1 | anna
(1 row)

db1=# \q
hs@hs-VirtualBox:~/slony$ psql db2
psql (9.4.1)
Type "help" for help.

db2=# SELECT * FROM t_test;
 id | name
----+------
(0 rows)

db2=# SELECT * FROM t_test;
 id | name
----+------
  1 | anna
(1 row)

We add a row to the master, quickly disconnect, and query the slave to check whether the data is already there. If you are quick enough, you will see that the data arrives with a small delay. In our example, we managed to catch an empty table just to demonstrate what asynchronous replication really means.

Let's assume you are running a book shop. Your application connects to server A to create a new user. The user is then redirected to a new page, which queries some information about the new user. Be prepared for the possibility that the data is not yet there on server B. This is a common mistake in many web applications dealing with load balancing, and the same kind of delay happens with asynchronous streaming replication.

Deploying DDLs

Replicating just one table is clearly not enough for a productive application, and there is usually no way to ensure that the data structure never changes. At some point, it is simply necessary to deploy changes to the data structures (so-called DDLs).

The problem is that Slony relies heavily on triggers. A trigger can fire when a row in a table changes. This works for ordinary tables, but it does not work for system tables. So, if you deploy a new table or change a column, there is no way for Slony to detect that. Therefore, you have to run a script to deploy such changes inside the cluster. PostgreSQL 9.3 and beyond offer some basic functionality for triggering on DDLs, but that is not enough for Slony. However, future versions of PostgreSQL might very well be capable of handling this; the same applies to PostgreSQL 9.4 replication slots. Let's see whether those features will be supported in the long run, and how.

We need a slonik script for that:

#!/bin/sh

MASTERDB=db1
SLAVEDB=db2
HOST1=localhost
HOST2=localhost
DBUSER=hs

slonik <<_EOF_
cluster name = first_cluster;

node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';

execute script ( filename = '/tmp/deploy_ddl.sql', event node = 1 );
_EOF_

The key to our success is execute script. We simply pass an SQL file to the call and tell it to consult node 1. The content of the SQL file can be quite simple; it should just list the DDLs we want to execute:

CREATE TABLE t_second (id int4, name text);

Running the file can be done just as before:

hs@hs-VirtualBox:~/slony$ ./slony_ddl.sh

The table will be deployed on both nodes. The following listing shows that the table has also made it to the second node, which proves that things have been working as expected:

db2=# \d t_second
  Table "public.t_second"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |
 name   | text    |

Of course, you can also create new tables without using Slony, but this is not recommended: all DDLs should go through Slony. The SQL deployment process has to reflect this, which is a major limitation of Slony. For example, adding columns to a table without Slony being aware of it will definitely end in disaster.
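Because every schema change has to pass through execute script, it is convenient to keep a small, reusable wrapper in your toolbox. The following sketch is not part of Slony itself; the script name and its argument handling are invented here, and it simply parameterizes the slonik script shown above:

#!/bin/sh
# deploy_ddl.sh -- hypothetical helper: push an SQL file through Slony's
# "execute script" so the DDL is applied on all nodes of the cluster.
# Usage: ./deploy_ddl.sh /path/to/change.sql

DDLFILE="$1"
MASTERDB=db1
SLAVEDB=db2
HOST1=localhost
HOST2=localhost
DBUSER=hs

if [ ! -r "$DDLFILE" ]; then
    echo "usage: $0 <sql-file>" >&2
    exit 1
fi

slonik <<_EOF_
cluster name = first_cluster;

node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';

execute script ( filename = '$DDLFILE', event node = 1 );
_EOF_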
Adding tables to replication and managing problems

Once we have added this table to the system, we can add it to the replication setup. However, doing so is a little complex. First of all, we have to create a new table set and then merge it with the one we already have (Bucardo and Londiste are a bit more convenient here, and DDLs are easier to handle). So, for a brief moment, we will have two table sets involved. The script goes like this:

#!/bin/sh

MASTERDB=db1
SLAVEDB=db2
HOST1=localhost
HOST2=localhost
DBUSER=hs

slonik <<_EOF_
cluster name = first_cluster;

node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';

create set (id=2, origin=1, comment='a second replication set');
set add table (set id=2, origin=1, id=5, fully qualified name = 'public.t_second', comment='second table');

subscribe set(id=1, provider=1, receiver=2);
merge set(id=1, add id=2, origin=1);
_EOF_

The key to our success is the merge call at the end of the script. It makes sure that the new tables are integrated into the existing table set. To make sure that nothing goes wrong, it makes sense to check the Slony metadata tables (such as sl_table) and see which IDs are actually available.

When the script is executed, we face an expected problem:

hs@hs-VirtualBox:~/slony$ sh slony_add_to_set.sh
<stdin>:7: PGRES_FATAL_ERROR select "_first_cluster".determineIdxnameUnique('public.t_second', NULL);  - ERROR:  Slony-I: table "public"."t_second" has no primary key

We have created the table without a primary key. This is highly important: there is no way for Slony to replicate a table without a primary key, so we have to add one. Basically, we have two choices of how to do this. The recommended way is to use execute script, just as shown before. If your system is idling, you can also do it the quick and dirty way:

db1=# ALTER TABLE t_second ADD PRIMARY KEY (id);
NOTICE:  ALTER TABLE / ADD PRIMARY KEY will create implicit index "t_second_pkey" for table "t_second"
ALTER TABLE
db1=# \q
hs@hs-VirtualBox:~/slony$ psql db2
psql (9.4.1)
Type "help" for help.

db2=# ALTER TABLE t_second ADD PRIMARY KEY (id);
NOTICE:  ALTER TABLE / ADD PRIMARY KEY will create implicit index "t_second_pkey" for table "t_second"
ALTER TABLE

However, this is not recommended; it is definitely more desirable to use the Slony interface to make changes like these. Remember that DDLs are not replicated by Slony, so any manual DDL is risky.

Once we have fixed the data structure, we can execute the slonik script again and see what happens:

hs@hs-VirtualBox:~/slony$ sh slony_add_to_set.sh
<stdin>:6: PGRES_FATAL_ERROR lock table "_first_cluster".sl_event_lock, "_first_cluster".sl_config_lock;select "_first_cluster".storeSet(2, 'a second replication set');  - ERROR:  duplicate key value violates unique constraint "sl_set-pkey"
DETAIL:  Key (set_id)=(2) already exists.
CONTEXT:  SQL statement "insert into "_first_cluster".sl_set (set_id, set_origin, set_comment) values (p_set_id, v_local_node_id, p_set_comment)"
PL/pgSQL function _first_cluster.storeset(integer,text) line 7 at SQL statement

What you see is a typical problem you will face with Slony: if something goes wrong, it can be really, really hard to get things back in order. This is a scenario you should definitely be prepared for.
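One routine check that helps to avoid ID collisions like the one above is to look at Slony's metadata before assigning new set and table IDs. The queries below are a sketch: the _first_cluster schema name follows from our cluster name, the sl_set columns are visible in the error message above, and the sl_table column names are assumptions that may differ slightly between Slony versions.

-- run against the origin database (db1 in our example)
SELECT set_id, set_comment
FROM _first_cluster.sl_set
ORDER BY set_id;

SELECT tab_id, tab_set, tab_relname
FROM _first_cluster.sl_table
ORDER BY tab_id;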
If you are working with Slony on a production system, always build yourself a well-tested library of scripts for the different tasks. It greatly reduces your risk if you don't have to come up with fixes on the fly during normal operations. Also, make sure you have enough scripts around to handle the most common issues, such as the one we have just outlined. Compiling those scripts is hard work, but thinking through the potential cases and providing ready-to-use shell scripts to talk to Slony is definitely worth the effort. Remember that ad hoc scripting is dangerous with Slony; there are too many things that can go wrong and that might be hard to fix later on.

So, to fix the problem, we can simply drop the table set again and start from scratch:

#!/bin/sh

MASTERDB=db1
SLAVEDB=db2
HOST1=localhost
HOST2=localhost
DBUSER=hs

slonik <<_EOF_
cluster name = first_cluster;

node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';

drop set (id=2, origin=1);
_EOF_

To kill a table set, we run drop set. It helps us get back to where we started. The script executes cleanly:

hs@hs-VirtualBox:~/slony$ sh slony_drop_set.sh

Now we can start again and add the table:

#!/bin/sh

MASTERDB=db1
SLAVEDB=db2
HOST1=localhost
HOST2=localhost
DBUSER=hs

slonik <<_EOF_
cluster name = first_cluster;

node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';

create set (id=2, origin=1, comment='a second replication set');
set add table (set id=2, origin=1, id=5, fully qualified name = 'public.t_second', comment='second table');

subscribe set(id=1, provider=1, receiver=2);
subscribe set(id=2, provider=1, receiver=2);
merge set(id=1, add id=2, origin=1);
_EOF_

This time the script executes cleanly, and everything is replicated as expected:

hs@hs-VirtualBox:~/slony$ sh slony_add_to_set_v2.sh
<stdin>:11 subscription in progress before mergeSet. waiting
<stdin>:11 subscription in progress before mergeSet. waiting

As already stated, we intentionally made a small mistake, and you have seen how tricky it can be to get things straight even after just a small mistake. One of the reasons for this is that a slonik script is not a transaction on the server side: if a script fails somewhere in the middle, it simply stops; it does not undo the changes made so far. This can cause exactly the kind of issues outlined in this section.

So, once you have made a change, you should always check whether everything is working nicely. One simple way to do this is to connect to the db2 database and run the following:

db2=# BEGIN;
BEGIN
db2=# DELETE FROM t_second;
ERROR:  Slony-I: Table t_second is replicated and cannot be modified on a subscriber node - role=0
db2=# ROLLBACK;
ROLLBACK

You start a transaction and try to delete a row. It is supposed to fail. If it does not, you can safely roll back and try to fix your problem. As you are using a transaction that never commits, nothing can go wrong.

Performing failovers

Once you have learned how to replicate tables and add them to sets, it is time to learn about failovers. Basically, we can distinguish between two types of failovers:

• Planned failovers
• Unplanned failovers and crashes

In this section, you will learn about both scenarios.
Planned failovers

A planned failover might be more of a luxury scenario. In some cases, you will not be so lucky and will have to rely on automatic failover or face an unplanned outage. Basically, a planned failover can be seen as moving a set of tables to another node; once that node is in charge of those tables, you can handle things accordingly.

In our example, we want to move all tables from node 1 to node 2 and, in addition, drop the first node. Here is the code required:

#!/bin/sh

MASTERDB=db1
SLAVEDB=db2
HOST1=localhost
HOST2=localhost
DBUSER=hs

slonik <<_EOF_
cluster name = first_cluster;

node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';

lock set (id = 1, origin = 1);
move set (id = 1, old origin = 1, new origin = 2);
wait for event (origin = 1, confirmed = 2, wait on=1);
drop node (id = 1, event node = 2);
_EOF_

After our standard introduction, we call move set. The catch here is that we have to take a lock to make this work. The reason is that we have to protect ourselves against changes made to the system while the failover is being performed. You must not forget this lock; otherwise, you might find yourself in a truly bad situation. Just as in all of our previous examples, nodes and sets are represented by their numbers. Once we have moved the set to its new location, we have to wait for the event to be confirmed, and then we can drop the node (if desired).

If the script is 100 percent correct, it executes cleanly:

hs@hs-VirtualBox:~/slony$ ./slony_move_set.sh
debug: waiting for 1,5000016417 on 2

Once we have failed over to the second node, we can delete data there, because Slony has removed the triggers that previously prevented this operation:

db2=# DELETE FROM t_second;
DELETE 1

The same has happened to the table on the first node. There are no more triggers, but the table itself is still in place:

db1=# \d t_second
  Table "public.t_second"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer | not null
 name   | text    |
Indexes:
    "t_second_pkey" PRIMARY KEY, btree (id)

You can now take the node offline and use it for other purposes. A planned failover is also the recommended strategy for upgrading a database to a new version of PostgreSQL with little downtime: replicate the entire database to an instance running the new version and perform a controlled failover. The actual downtime of such an upgrade is minimal, which makes it feasible even with a large amount of data.

Unplanned failovers

In the event of an unplanned failover, you may not be so lucky. An unplanned failover could be caused by a power outage, a hardware failure, or a site failure. Whatever it is, there is no need to be afraid; you can still bring the cluster back to a reasonable state easily. To do so, Slony provides the failover command:

failover (id = 1, backup node = 2);
drop node (id = 1, event node = 2);

This is all that you need to execute on one of the remaining nodes in order to fail over from one node to the other and remove the failed node from the cluster. It is a safe and reliable procedure.
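In practice, these two statements need the same slonik preamble as every other script in this chapter. A complete script for the unplanned case might therefore look like the following sketch, which simply wraps the commands above in the pattern used throughout this chapter (the connection details are the ones from our demo setup; in a real outage, node 1 will of course be unreachable):

#!/bin/sh
# Hypothetical failover wrapper: node 1 has died, node 2 takes over.

MASTERDB=db1
SLAVEDB=db2
HOST1=localhost
HOST2=localhost
DBUSER=hs

slonik <<_EOF_
cluster name = first_cluster;

node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';

failover (id = 1, backup node = 2);
drop node (id = 1, event node = 2);
_EOF_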
Summary

Slony is a widespread tool used to replicate PostgreSQL databases at the logical level. In contrast to transaction log shipping, it can replicate between different versions of PostgreSQL, and there is no need to replicate an entire instance; individual objects can be replicated flexibly.

In the next chapter, we will focus our attention on SkyTools, a viable alternative to Slony. We will cover installation, generic queues, and replication.

Using SkyTools

Having introduced Slony, we will take a look at another popular replication tool. SkyTools is a software package originally developed by Skype, and it serves a variety of purposes. It is not a single program but a collection of tools and services that you can use to enhance your replication setup. It offers solutions for generic queues, a simplified replication setup, data transport jobs, and a programming framework suitable for database applications (especially transport jobs).

Covering all aspects of SkyTools is not possible here. However, this chapter covers the most important features, concepts, and tools to give you an easy entry into this vast world of tools and ideas so that you can benefit from it as much as possible.

In this chapter, we will discuss the following topics related to SkyTools:

• Building generic queues
• Using londiste for replication
• Handling XLOG and walmgr

As with all the chapters of this book, the most recent version of SkyTools will be used.

Installing SkyTools

SkyTools is an open source package and can be downloaded freely from pgfoundry.org. For the purpose of this chapter, we have used version 3.2, which can be found at http://pgfoundry.org/frs/download.php/3622/skytools-3.2.tar.gz.
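The build itself follows roughly the same source-install routine we used for Slony earlier in this chapter. The session below is only a sketch: the prefix path is an arbitrary choice, and the available configure options can differ between SkyTools releases, so check the install notes and ./configure --help inside the extracted tarball.

# extract the tarball and build from source (paths are examples)
tar xvfz skytools-3.2.tar.gz
cd skytools-3.2
./configure --prefix=/usr/local/skytools
make
sudo make install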