Server Control
Dr. Mohit Bansal
Published: 26-10-2017
In this chapter, we will cover the following recipes:

- Starting the database server manually
- Stopping the server safely and quickly
- Stopping the server in an emergency
- Reloading the server configuration files
- Restarting the server quickly
- Preventing new connections
- Restricting users to only one session each
- Pushing users off the system
- Deciding on a design for multitenancy
- Using multiple schemas
- Giving users their own private database
- Running multiple servers on one system
- Setting up a connection pool
- Accessing multiple servers using the same host and port

Introduction

PostgreSQL consists of a set of server processes, the group leader of which is named the postmaster. Starting the server is the act of creating these processes, and stopping the server means terminating them. Each postmaster listens for client connection requests on a defined port number. Multiple concurrently running postmasters cannot share that port number. The port number is often used to uniquely identify a particular postmaster, and hence also the database server that it leads.

When we start a database server, we refer to a data directory, which contains the heart and soul, or at least the data, of our database. Subsidiary tablespaces may contain some data outside the main data directory, so the data directory is just the main central location, not the only place where data for that database server is held. Each running server has one data directory, and one data directory can have at most one running server (or instance). To perform any action for a database server, we must know the data directory for that server.

The basic actions we can perform on the database server are starting and stopping. We can also perform a restart, though that is just a stop followed by a start. In addition, we can reload the server, which means we can reread the server's configuration files. We should also mention a few other points.
The default port number for PostgreSQL is 5432. It has been registered with the Internet Assigned Numbers Authority (IANA), so it should already be reserved for PostgreSQL's use in most places. Because each PostgreSQL server requires a distinct port number, the normal convention is to use subsequent numbers for any additional server: 5433, 5434, and so on. Subsequent port numbers might not be as easily recognized by the network infrastructure, which might, in some cases, make life more difficult for you in large enterprises, especially the more security-conscious ones.

Port number 6432 has been registered with IANA for PgBouncer, the connection pooler that we will describe in the Setting up a connection pool recipe. This happened only recently, and many installations use nonstandard port numbers such as 6543 only because they were deployed earlier.

A database server is also sometimes referred to as a database cluster. I don't recommend that term for normal usage, because it makes people think about multiple nodes, not one database server on one system.

Starting the database server manually

Typically, the PostgreSQL server will start automatically when the system boots. You may have opted to stop and start the server manually, or you may need to start up or shut down for various operational reasons.

Getting ready

First, you need to understand the difference between the service and the server. The word "server" refers to the database server and its processes. The word "service" refers to the operating system wrapper through which the server gets called. The server works in essentially the same way on every platform, whereas each operating system and distribution has its own concept of a service.
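Before starting an additional server on one of these follow-on ports, it can help to confirm that the port is actually free. Here is a minimal sketch; the helper name is ours, and it assumes the ss utility is available:

```shell
# port_in_use PORT -- succeed if something is already listening on PORT.
# A hypothetical helper, not part of PostgreSQL; it relies on ss(8).
port_in_use() {
    ss -ltn 2>/dev/null | grep -q ":$1 "
}

# Example: refuse to start a second postmaster on a busy port.
# port_in_use 5433 || pg_ctl -D /var/lib/pgsql/data2 -o "-p 5433" start
```

The grep pattern matches the local-address column of the listening-socket listing, so it reports only ports that already have a listener.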
How to do it…

On all platforms, there is a specific command to start the server:

- Ubuntu and Debian:
  pg_ctlcluster 9.4 main start
- Red Hat/Fedora:
  pg_ctl -D /var/lib/pgsql/data start
- Solaris:
  pg_ctl -D /var/lib/pgsql/data start
- Mac OS:
  pg_ctl -D /var/lib/pgsql/data start
- FreeBSD:
  pg_ctl -D /var/lib/pgsql/data start

On some platforms, the service can be started in other ways as well:

- Red Hat/Fedora:
  service postgresql start
- Windows:
  net start postgres

How it works…

On Ubuntu/Debian, the pg_ctlcluster wrapper is a convenient utility that allows multiple servers to coexist more easily, which is especially good when you have servers with different versions.

Another feature specific to Ubuntu and Debian is the start.conf file, located next to the other configuration files (that is, in the same directory). Apart from the informational comments, it contains only a single word, with the following meaning:

- auto: The server will be started automatically on boot. This is the default when creating a new server. It is suitable for frequently used servers, such as those powering live services or everyday development activities.
- manual: The server will not be started automatically on boot, but can be started with pg_ctlcluster. This is suitable for custom servers that are seldom used.
- disabled: The server is not supposed to be started. This setting is only a protection against starting the server accidentally. The pg_ctlcluster wrapper won't let you start it, but a skilled user can easily bypass the protection.
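As a small convenience, the start command can be wrapped so that it is a no-op when the server is already up. A minimal sketch using pg_ctl's status subcommand; the function name and default data directory are our assumptions:

```shell
# start_if_down DATADIR -- start the server only if it is not already running.
# Hypothetical wrapper; pg_ctl status exits nonzero when no server is running.
start_if_down() {
    datadir="${1:-/var/lib/pgsql/data}"
    if pg_ctl -D "$datadir" status >/dev/null 2>&1; then
        echo "server already running in $datadir"
    else
        pg_ctl -D "$datadir" -w start
    fi
}
```

The -w flag makes pg_ctl wait until the server has actually finished starting before returning.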
If you need to reserve a port for a server not managed by pg_ctlcluster, for example when compiling directly from the source code, then you can create a cluster with start.conf set to disabled, and then use its port. Any new servers will be allocated different ports.

Stopping the server safely and quickly

There are several modes to stop the server, depending on the level of urgency. We'll compare the effects of each mode.

How to do it…

You can issue a database server stop command using fast mode, as follows:

pg_ctl -D datadir -m fast stop

You must use -m fast if you wish to shut down as soon as possible. Normal shutdown means "wait for all users to finish before we exit". That can take a very long time, though all the while new connections are refused. On Debian/Ubuntu systems, the command can be as follows:

pg_ctlcluster 9.0 main stop --force

How it works…

When you do a fast stop, all users have their transactions aborted and all connections are disconnected. This is not very polite to the users, but it still treats the server and its data with care, which is good.

PostgreSQL is similar to other database systems in that it performs a shutdown checkpoint before it closes. This means that the startup that follows will be quick and clean. The more work the checkpoint has to do, the longer it will take to shut down.

One difference between PostgreSQL and some other RDBMSes, such as Oracle, DB2, or SQL Server, is that transaction rollback is very fast. On those other systems, if you shut down the server in a mode that rolls back transactions, the shutdown can take a while, possibly a very long time. This difference is for internal reasons, and isn't in any way unsafe.

Some distributions (for example, Debian and Ubuntu) support the --force option, which is rather nice, because it first attempts a fast shutdown; if that fails, it performs an immediate shutdown, and after that it kills the postmaster.
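On platforms without that wrapper, the same escalation can be sketched in plain shell. This is only an illustration of the fast-then-immediate fallback; the function name and the 60-second timeout are our choices:

```shell
# stop_hard DATADIR -- try a fast stop; fall back to immediate if it fails.
# A sketch of the Debian/Ubuntu --force behaviour, not an official tool.
stop_hard() {
    datadir="$1"
    pg_ctl -D "$datadir" -m fast -w -t 60 stop ||
        pg_ctl -D "$datadir" -m immediate stop
}
```

With -w and -t, pg_ctl waits up to the given number of seconds for the fast stop to complete, and exits nonzero if it did not, which triggers the immediate stop.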
See also

The technology that provides immediate rollback for PostgreSQL is called MVCC (multiversion concurrency control). More information on this is provided in the Identifying and fixing bloated tables and indexes recipe in Chapter 9, Regular Maintenance.

Stopping the server in an emergency

If nothing else is working, we may need to stop the server quickly, without caring about disconnecting the clients gently. "Break the glass in case of emergency!"

How to do it…

The basic command to perform an emergency stop of the server is the following:

pg_ctl -D datadir stop -m immediate

We must use the immediate stop mode.

How it works…

When you do an immediate stop, all users have their transactions aborted and all connections are disconnected. There is no clean shutdown, nor is there politeness of any kind. An immediate mode stop is similar to a database crash. Some cached files will need to be rebuilt, and the database itself needs to undergo crash recovery when it comes back up.

Note for DBAs with Oracle experience: immediate mode is the same thing as shutdown abort. The PostgreSQL immediate mode stop is not the same thing as shutdown immediate on Oracle.

Reloading the server configuration files

Some PostgreSQL configuration parameters can only be changed by reloading the entire configuration file.

How to do it…

On all platforms, there is a specific command to reload the server:

- Ubuntu and Debian:
  pg_ctlcluster 9.4 main reload
- Red Hat/Fedora:
  service postgresql reload
  You can also use the following command:
  pg_ctl -D /var/lib/pgsql/data reload
- Solaris:
  pg_ctl -D /var/lib/pgsql/data reload
- Mac OS:
  pg_ctl -D /var/lib/pgsql/data reload
- FreeBSD:
  pg_ctl -D /var/lib/pgsql/data reload

You can also reload the configuration files while still connected to PostgreSQL.
This can be done from the command line, if you are a superuser, as follows:

postgres=# SELECT pg_reload_conf();

The output is rather short:

 pg_reload_conf
----------------
 t
(1 row)

This function is also often executed from an admin tool, such as pgAdmin3. If you do this, you should realize that it's possible to introduce a new authentication rule that is violated by the current session. It won't force you to disconnect, but once you do disconnect, you may not be able to reconnect.

Any error in a configuration file will be reported in the message log, so we recommend looking there immediately after reloading. You will quickly notice (and fix) syntax errors, because they prevent any login even before reloading. Other errors, such as typos in parameter names or wrong units, will only be reported in the log; moreover, only some non-syntax errors will prevent reloading the whole file, so it's best to check the log in any case.

How it works…

To reload the configuration files, we send the SIGHUP signal to the postmaster, which then passes it on to all connected backends. That's why some people call reloading the server "sighup-ing".

If you look at the pg_settings catalog table, you'll see that there is a column named context. Each setting has a time and a place where it can be changed. Some parameters can only be reset by a server reload, so the value of context for those parameters will be sighup.
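Following the advice above, a reload can be paired with a quick scan of the tail of the server log. A minimal sketch; the function name is ours, the log path must be supplied, and scanning the last 50 lines is an arbitrary choice:

```shell
# reload_and_check LOGFILE -- reload the configuration, then scan the tail
# of the server log for problems. Hypothetical helper, not a PostgreSQL tool.
reload_and_check() {
    logfile="$1"
    psql -At -c "SELECT pg_reload_conf()" >/dev/null
    if tail -n 50 "$logfile" | grep -Eiq 'error|fatal'; then
        echo "reload reported problems, inspect $logfile"
    else
        echo "reload clean"
    fi
}
```

This only catches messages that actually contain ERROR or FATAL; subtler complaints (for example, about wrong units) still warrant reading the log yourself.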
Here are a few of the parameters you might want to change during server operation (there are others, too):

postgres=# SELECT name, setting, unit,
                  (source = 'default') AS is_default
           FROM pg_settings
           WHERE context = 'sighup'
           AND (name LIKE '%delay' OR name LIKE '%timeout')
           AND setting != '0';

             name             | setting | unit | is_default
------------------------------+---------+------+------------
 authentication_timeout       | 60      | s    | t
 autovacuum_vacuum_cost_delay | 20      | ms   | t
 bgwriter_delay               | 10      | ms   | f
 checkpoint_timeout           | 32      | s    | f
 deadlock_timeout             | 1000    | ms   | t
 max_standby_delay            | 30      | s    | t
 wal_sender_delay             | 200     | ms   | t
 wal_writer_delay             | 200     | ms   | t
(8 rows)

There's more…

As reloading the configuration file is achieved by sending the SIGHUP signal, we can reload the configuration file for just a single backend using the kill command. As you might expect, you may get some strange results from doing this, so don't try it at home.

First, find the PID of the backend using pg_stat_activity. Then, from the OS prompt, issue the following:

kill -SIGHUP pid

Alternatively, we can do both at once, as shown in this command:

kill -SIGHUP \
  `psql -t -c "select procpid from pg_stat_activity limit 1"`

This is only useful with a sensible WHERE clause.

Restarting the server quickly

Some of the database server parameters require you to stop and start the server fully. Doing this as quickly as possible can be very important in some cases. The best time to do this is usually a quiet time, with lots of planning, testing, and forethought. Sometimes, not everything goes according to plan.

How to do it…

The basic command to restart the server is the following:

pg_ctl -D datadir restart -m fast

A restart is just a stop followed by a start, so it sounds very simple. In many cases it will be simple, but there are times when you'll need to restart the server while it is fairly busy. That's when we need to start pulling some tricks to make the restart happen faster. First, the stop performed needs to be a fast stop.
If we do a default or "smart" stop, then the server will just wait for everyone to finish. If we do an immediate stop, then the server will crash, and we will need to crash-recover the data, which will be slower overall.

The running database server has a cache full of data blocks, many of them dirty. PostgreSQL is similar to other database systems in that it performs a shutdown checkpoint before it closes. This means that the startup that follows will be quick and clean. The more work the checkpoint has to do, the longer it will take to shut down.

The actual shutdown will happen much faster if we issue a normal checkpoint first, as the shutdown checkpoint will then have much less work to do. So, flush all the dirty shared buffers to disk with the following command, issued as a database superuser:

psql -c "CHECKPOINT"

The next consideration is that, once we restart, the database cache will be empty again and will need to warm up. The larger the database cache, the longer it takes for the cache to get warm again; 30 to 60 minutes before returning to full speed is not uncommon. So, what was a simple restart can actually have a large business impact if handled badly.

There's more…

There is an extension called pgfincore that implements a set of functions to manage PostgreSQL data pages in the operating system's file cache. One possible use is to preload some tables so that PostgreSQL will load them faster when requested. The general idea is that you can provide more detailed information to the operating system cache, which can therefore behave more efficiently. The pgfincore extension is a stable project started in 2009. More details, including the source code, are available on the project's website. Note, however, that most distributions include a prebuilt pgfincore package, which makes installation easier.
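The checkpoint-then-restart sequence above can be collected into one step. A minimal sketch; the function name is ours, and it assumes psql connects as a superuser:

```shell
# fast_restart DATADIR -- issue a manual CHECKPOINT to shrink the shutdown
# checkpoint, then restart in fast mode. Hypothetical wrapper script.
fast_restart() {
    datadir="$1"
    psql -c "CHECKPOINT" &&
        pg_ctl -D "$datadir" restart -m fast
}
```

The && ensures we only proceed to the restart once the manual checkpoint has completed, which is the whole point of the trick.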
Preventing new connections

In certain emergencies, you may need to lock down the server completely, or just prevent specific users from accessing the database. It's hard to foresee all the situations in which you might need to do this, so we present a range of options.

How to do it…

Connections can be prevented in a number of ways, as follows:

- Pause and resume the session pool. See the Setting up a connection pool recipe later in this chapter on controlling connection pools.
- Stop the server. See the Stopping the server safely and quickly and Stopping the server in an emergency recipes, but note that this is not recommended.
- Restrict the connections for a specific database to zero, by setting the connection limit to zero:

  ALTER DATABASE foo_db CONNECTION LIMIT 0;

  This prevents normal users from connecting to that database, though it still allows superuser connections.
- Restrict the connections for a specific user to zero, by setting the connection limit to zero (see the Restricting users to only one session each recipe):

  ALTER USER foo CONNECTION LIMIT 0;

  This prevents new connections by that user, but it still allows connections if the user is a superuser, so luckily you cannot shut yourself out accidentally.
- Change the host-based authentication (HBA) file to refuse all incoming connections, and then reload the server:

  - Create a new file named pg_hba_lockdown.conf, and add the following two lines to it. This puts in place rules that completely lock down the server, including superusers.
You should have no doubt that this is a serious and drastic action:

    # TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
    local   all       all                 reject
    host    all       all   0.0.0.0/0     reject

    If you still want superuser access, then try something like the following:

    # TYPE  DATABASE  USER      CIDR-ADDRESS  METHOD
    local   all       postgres                peer
    local   all       all                     reject
    host    all       all       0.0.0.0/0     reject

    This will prevent connections to the database by any user except the postgres operating system user ID, which can connect locally to any database. Be careful not to confuse the second and third columns: the second column is the database, and the third column is the username. It's worth keeping the header line just for that reason. The peer method should be replaced by other authentication methods if a more complex configuration is in use. In versions prior to 9.1, you need to use the ident method to obtain this behavior for local connections.

  - Copy the existing pg_hba.conf file to pg_hba_access.conf so that it can be restored later, if required.
  - Copy pg_hba_lockdown.conf to pg_hba.conf.
  - Reload the server, following the recipe earlier in this chapter.

How it works…

The pg_hba.conf file is where we specify the host-based authentication rules. We do not specify the authentications themselves; we just specify which authentication mechanisms will be used. This is the top-level set of rules for PostgreSQL authentication. The rules are specified in the file and applied by the postmaster process when connections are attempted. To prevent denial-of-service attacks, the HBA rules never involve database access, so we do not know whether a user is a superuser or not. As a result, you can lock out all users; note, though, that you can always re-enable access by editing the file and reloading.

Restricting users to only one session each

If resources need to be closely controlled, you may wish to restrict users so that they can connect at most once to the server. The same technique can be used to prevent connections entirely for a given user.
How to do it…

We can restrict users to only one connection using the following command:

postgres=# ALTER ROLE fred CONNECTION LIMIT 1;
ALTER ROLE

This will cause any additional connections to receive the following error message:

FATAL: too many connections for role "fred"

You can eliminate this restriction by setting the value to -1. It's possible to set the limit to zero or any positive integer. You can set this to a number other than max_connections, though it is up to you to make sense of that if you do. Setting the value to zero completely prevents normal connections. Note that even if you set the connection limit to zero for superusers, they will still be able to connect.

How it works…

The connection limit is applied during session connection. Raising the limit will never affect any connected users. Lowering the limit doesn't have any effect on them either, unless they try to disconnect and reconnect. So, if you lower the limit, you should immediately check whether there are more sessions connected than the new limit you just set. Otherwise, there may be some surprises if there is a crash:

postgres=# SELECT rolconnlimit FROM pg_roles WHERE rolname = 'fred';
 rolconnlimit
--------------
            1
(1 row)

postgres=# SELECT count(*) FROM pg_stat_activity WHERE usename = 'fred';
 count
-------
     2
(1 row)

If you have more connected sessions than the new limit, you can ask users politely to disconnect, or apply the next recipe, Pushing users off the system. Users can't raise or lower their own connection limit, in case you were worried that they might be able to override this somehow.

Pushing users off the system

Sometimes, we may need to remove groups of users from the database server for various operational reasons. Here's how.

How to do it…

You can terminate a user's session with the pg_terminate_backend() function included with PostgreSQL. That function takes the PID, or process ID, of the user's session on the server.
This process is known as the backend, and it is a separate system process from the program that runs the client.

Some of the columns used in this recipe had different names in version 9.1 and earlier:

- pid was called procpid
- query was called current_query

The queries in this recipe are written for PostgreSQL 9.4, the most recent stable version at the time of writing this book. They work without changes on versions 9.2 and 9.3. To run them on earlier versions, you only have to replace each occurrence of pid with procpid, and query with current_query.

To find the PID of a user, we can look at the pg_stat_activity view. We can use it in a query, like this:

SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE ...

There are a couple of things to note if you run this query. If the WHERE clause doesn't match any sessions, then you won't get any output from the query. Similarly, if it matches multiple rows, you will get a fairly useless result, that is, a list of boolean true values. Moreover, unless you are careful enough to exclude your own session from the query, you will disconnect yourself! What's even funnier is that you'll disconnect yourself halfway through disconnecting the other users, as the query will run pg_terminate_backend() in the order in which sessions are returned from the outer query.

Therefore, I suggest a safer and more useful query that gives a useful response in all cases, which is as follows:

postgres=# SELECT count(pg_terminate_backend(pid))
           FROM pg_stat_activity
           WHERE usename NOT IN
                 (SELECT usename FROM pg_user WHERE usesuper);
 count
-------
     1
(1 row)

This assumes that superusers are performing administrative tasks. Other good filters might be the following:

- WHERE application_name = 'myappname'
- WHERE waiting
- WHERE query = '<IDLE> in transaction'
- WHERE query = '<IDLE>'

How it works…

The pg_terminate_backend() function sends a signal directly to the operating system process for that session.
It's possible that the session may have closed by the time pg_terminate_backend() is called. As PID numbers are assigned by the operating system, it could even happen that you try to terminate a given session (call it session A), but actually terminate another session (call it session B). Here is how that could happen: suppose you take note of the PID of session A and decide to disconnect it. Before you actually issue pg_terminate_backend(), session A disconnects, and right afterwards a new session B is given exactly the same PID. So, when you terminate that PID, you hit session B instead.

On the one hand, you need to be careful. On the other hand, this case is really unlikely, and is mentioned only for completeness. For it to happen, all of the following must occur:

- One of the sessions you are trying to close terminates independently in the very short interval between the moment pg_stat_activity is read and the moment pg_terminate_backend() is executed.
- Another session on the same database server is started in the even shorter interval between the old session closing and the execution of pg_terminate_backend().
- The new session gets exactly the same PID value as the old session, which is less than a 1-in-32,000 chance on a 32-bit Linux machine.

Nonetheless, probability theory is tricky, even for experts. It's better to be aware that there is a tiny risk, especially if you run the query many times per day over a long period of time, in which case the probability of getting caught at least once builds up.

It's also possible that new sessions could start after we get the list of active sessions. There's no way to prevent that, other than by following the Preventing new connections recipe.

Finally, you should note that, starting with version 9.3, any user can terminate a session, unless it belongs to a different user. In older PostgreSQL versions, only superusers were able to terminate sessions.
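The safer query from this recipe can be wrapped in a small helper for repeated use. A minimal sketch; the function name is ours, and it assumes psql connects as a superuser:

```shell
# push_users_off -- terminate all non-superuser sessions and report how many
# were hit. A sketch built on the recipe's query, not an official tool.
push_users_off() {
    psql -At -c "SELECT count(pg_terminate_backend(pid))
                 FROM pg_stat_activity
                 WHERE usename NOT IN
                       (SELECT usename FROM pg_user WHERE usesuper)"
}
```

Because the outer count() swallows the individual boolean results, the helper prints a single number rather than a useless list of true values.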
Deciding on a design for multitenancy

There are many reasons why we might want to split up groups of tables or applications: security, resource control, convenience, and so on. Whatever the reason, we often need to separate groups of tables (I avoid saying the word "database", just to avoid various kinds of confusion). This topic is frequently referred to as multitenancy, though it is not a fully accepted term yet. The purpose of this recipe is to discuss the options and point to other, more detailed recipes.

How to do it…

If you want to run multiple physical databases on one server, then you have four main options, which are as follows:

- Option 1: Run multiple sets of tables in different schemas in one database of a PostgreSQL instance (covered in the Using multiple schemas recipe)
- Option 2: Run multiple databases in the same PostgreSQL instance (covered in the Giving users their own private database recipe)
- Option 3: Run multiple PostgreSQL instances on the same virtual/physical system (covered in the Running multiple servers on one system recipe)
- Option 4: Run separate PostgreSQL instances in separate virtual machines on the same physical server

Which is best? That's certainly a question many people ask, and something on which many views exist. The answer lies in looking at the specific requirements, which are as follows:

- If our goal is the separation of physical resources, then option 3 or 4 works best. Separate database servers can easily be assigned different disks, individual memory allocations can be assigned, and we can take one server up or down without impacting the others.
- If our goal is security, then option 2 is sufficient.
- If our goal is merely the separation of tables for administrative clarity, then option 1 or 2 can be useful.

Option 2 allows complete separation for security purposes. This, however, prevents someone with privileges on both groups of tables from performing a join between those tables.
So, if there is a possibility of future cross-analytics, it might be worth considering option 1. However, it might also be argued that such analytics should be carried out on a separate data warehouse, not by co-locating production systems.

Option 3 presents a difficulty in many of the PostgreSQL distributions: the default installation uses a single location for the database, making it a little harder to configure that option. Ubuntu/Debian handles this aspect particularly well, making option 3 more attractive in that environment.

Option 4 can be applied using virtualization technology, but that is outside the scope of this book.

How it works…

I've seen people use PostgreSQL with thousands of databases, but it is my opinion that the majority of people use only one database, such as postgres (or, at least, only a few databases). I've also seen people with a great many schemas. One thing you will find is that almost all admin GUI tools become significantly less useful if there are hundreds or thousands of items to display. In most cases, administration tools use a tree view, which doesn't cope gracefully with a large number of items.

Using multiple schemas

We can separate groups of tables into their own "namespaces", referred to as "schemas" by PostgreSQL. In many ways, they can be thought of as being similar to directories, though that is not a precise description.

Getting ready

Make sure you've read the Deciding on a design for multitenancy recipe, so that you're certain this is the route you wish to take. Other options exist, and they may be preferable in some cases.

How to do it…

1. Schemas can easily be created using the following commands:

   CREATE SCHEMA finance;
   CREATE SCHEMA sales;

2. We can then create objects directly within those schemas, using "fully qualified" names, like this:

   CREATE TABLE finance.month_end_snapshot (...);

The default schema in which an object is created is known as the current schema.
We can find out our current schema using the following query:

postgres=# SELECT current_schema;

This returns output like the following:

 current_schema
----------------
 public
(1 row)

3. When we access database objects, we use the user-settable search_path parameter to identify the schemas to search. The current schema is the first schema in the search_path parameter; there is no separate parameter for the current schema.

So, if we want to let only a specific user look at certain sets of tables, we can modify their search_path parameter. This parameter can be set for each user, so that the value will be set when they connect. The SQL for this would be something like the following:

ALTER ROLE fiona SET search_path = 'finance';
ALTER ROLE sally SET search_path = 'sales';

The public schema is not mentioned on the search_path, so it will not be searched. All tables created by fiona will go into the finance schema by default, whereas all tables created by sally will go into the sales schema by default.

4. The users for finance and sales will be able to see that the other schema exists, and change search_path to use it, but we can grant or revoke privileges so that they can neither create objects nor read data in the other's schema:

REVOKE ALL ON SCHEMA finance FROM public;
GRANT ALL ON SCHEMA finance TO fiona;
REVOKE ALL ON SCHEMA sales FROM public;
GRANT ALL ON SCHEMA sales TO sally;

An alternative technique is to allow one user to create objects in only one schema, but grant usage rights on all other schemas. We can set up that arrangement like this:

REVOKE ALL ON SCHEMA finance FROM public;
GRANT USAGE ON SCHEMA finance TO fiona;
GRANT CREATE ON SCHEMA finance TO fiona;
REVOKE ALL ON SCHEMA sales FROM public;
GRANT USAGE ON SCHEMA sales TO sally;
GRANT CREATE ON SCHEMA sales TO sally;

5. Note that you need to grant privileges for usage on the schema, as well as specific rights on the objects in the schema.
So, you will also need to issue specific grants for objects, as shown here:

GRANT SELECT ON month_end_snapshot TO public;

You can also set default privileges so that they are picked up when objects are created, using the following command:

ALTER DEFAULT PRIVILEGES FOR USER fiona IN SCHEMA finance
GRANT SELECT ON TABLES TO PUBLIC;

How it works…

Earlier, I said that schemas work like directories, a little at least. The PostgreSQL concept of search_path is similar to the concept of a PATH environment variable. The PostgreSQL concept of the current schema is similar to the concept of the current working directory. There is no cd command to change the directory; the current working directory is changed by altering search_path. A few other differences exist; for example, PostgreSQL schemas are not arranged in a hierarchy the way filesystem directories are.

Many people create a user with the same name as the schema, to make this work in a way similar to other RDBMSes, such as Oracle.

Both the finance and sales schemas exist within the same PostgreSQL database, and run on the same database server. They use a common buffer pool, and there are many global settings that tie the two schemas fairly closely together.

Giving users their own private database

Separating data and users is a key part of administration. There will always be a need to give users a private, secure, or simply risk-free area (a "sandpit") to use the database. Here's how.

Getting ready

Again, make sure you've read the Deciding on a design for multitenancy recipe, so that you're certain this is the route you wish to take. Other options exist, and they may be preferable in some cases.

How to do it…

1. We can create a database for a specific user with some ease. From the command line, as a superuser, the actions would be as follows:

postgres=# CREATE USER fred;
CREATE ROLE
postgres=# CREATE DATABASE fred OWNER = fred;
CREATE DATABASE
2. As database owners, users have login privileges, so they can connect to any database by default. There is a command named ALTER DEFAULT PRIVILEGES; however, it does not currently apply to databases, tablespaces, or languages. The ALTER DEFAULT PRIVILEGES command also currently applies only to roles (that is, users) that already exist. So, we need to revoke the privilege to connect to our new database from everybody except the designated user. There is no REVOKE … FROM PUBLIC EXCEPT command, so we need to revoke everything, and then regrant everything we need, all in one transaction, such as the following:

postgres=# BEGIN;
BEGIN
postgres=# REVOKE connect ON DATABASE fred FROM public;
REVOKE
postgres=# GRANT connect ON DATABASE fred TO fred;
GRANT
postgres=# COMMIT;
COMMIT
postgres=# CREATE USER bob;
CREATE ROLE

3. Then, try to connect as bob to the fred database:

$ psql -U bob fred
psql: FATAL: permission denied for database "fred"
DETAIL: User does not have CONNECT privilege.

This is exactly what we wanted.

How it works…

If you didn't catch it before, PostgreSQL allows transactional DDL in most places, so either both of the REVOKE and GRANT commands in the preceding section take effect, or neither does. This means that the fred user never loses the ability to connect to the database. Note that CREATE DATABASE cannot be performed as part of a transaction, though nothing serious happens as a result.
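The whole sequence can be collected into one provisioning script. A minimal sketch; the function name is ours, it must run as a superuser, and it trusts its argument to be a valid identifier:

```shell
# create_private_db USERNAME -- create a user plus a same-named database that
# only that user (and superusers) can connect to. Hypothetical helper script.
create_private_db() {
    user="$1"
    psql -c "CREATE USER $user" &&
    psql -c "CREATE DATABASE $user OWNER = $user" &&
    psql <<EOF
BEGIN;
REVOKE connect ON DATABASE $user FROM public;
GRANT connect ON DATABASE $user TO $user;
COMMIT;
EOF
}
```

The REVOKE and GRANT run in one transaction, as in the recipe, so the owner never loses the ability to connect; the CREATE DATABASE stays outside it because it cannot be run inside a transaction.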
