Beginning PHP and PostgreSQL 8: From Novice to Professional (part 8)

598 CHAPTER 26 ■ POSTGRESQL ADMINISTRATION

…your PostgreSQL system, you need to run the VACUUM VERBOSE command on each database, and set this value to the total number of pages for all databases. This setting requires 6 × max_fsm_pages bytes of memory, but it is critical for optimum performance, so don't set this value too low. This value requires a full restart of PostgreSQL for any changes to take effect.

Managing Planner Resources

The PostgreSQL planner is the part of PostgreSQL that determines how to execute a given query. It bases its decisions on the statistics collected via the ANALYZE command and on a handful of options in the postgresql.conf file. Here we review the two most important options.

effective_cache_size

This setting tells the planner the size of the disk cache it can expect to be available for a single index scan. Its value is measured in disk pages, which are normally 8,192 bytes each, and it has a default value of 1,000 (8MB of RAM). A lower value suggests to the planner that sequential scans will be favorable, and a higher value suggests that an index scan will be favorable. In most cases this default is too low, but determining a more appropriate setting can be difficult. The value you want depends on both PostgreSQL's shared_buffers setting and the kernel's disk cache available to PostgreSQL, taking into account the amount other applications will use and the fact that this cache is shared among concurrent index scans. It is worth noting that this setting does not control the amount of cache that is actually available; it is merely a suggestion to the planner, and nothing more. This value requires a full restart of PostgreSQL for any changes to take effect.

random_page_cost

Of the settings that control planner costs, this is by far the one most often modified by PostgreSQL experts. This setting controls the planner's estimate of the cost of fetching nonsequential pages from disk.
The measure is a number representing the multiple of the cost of a sequential page fetch (which by definition is equal to 1), and it has a default value of 4. Setting this value lower increases the tendency to use an index scan, and setting it higher increases the tendency toward a sequential scan. On a system with fast disk access, or on a database in which most if not all of the data can safely be held in RAM, a value of 2 or lower is not out of the question, but you'll need to experiment with your hardware and workload to find the setting that is best for you. This value requires a full restart of PostgreSQL for any changes to take effect.

Managing Disk Activity

One of the most common performance bottlenecks is disk input/output (I/O). In general, it is more expensive to read from and write to a hard drive than to compute information or retrieve it from RAM. Thus, a number of settings have been created to help manage this process, as discussed in this section.

fsync

This setting controls whether PostgreSQL should use the fsync() system call to ensure that all updates are physically written to disk, rather than relying on the OS and hardware to ensure this. This is significant because, while PostgreSQL can ensure that a database-level crash will be handled appropriately, without fsync, PostgreSQL cannot ensure that a hardware- or OS-level crash will not lead to data corruption, requiring restoration from backup. The reason this is an option at all is that the use of fsync adds a performance penalty to regular operations. The default is to ensure data integrity, and thus leave fsync on; however, in some limited scenarios, you may want to turn off fsync.

Gilmore_5475.book Page 598 Thursday, February 2, 2006 7:56 AM
These scenarios include databases that are read-only in nature, and restoring a database load from backup, where you can easily (and most likely will want to) restore from backup again if you encounter a failure. Just remember that turning off fsync opens you up to a higher risk of data corruption, so do not do this casually or without good backups. This value requires a full restart of PostgreSQL for any changes to take effect.

checkpoint_segments

This setting controls the maximum number of log file segments that can occur between automatic write-ahead logging (WAL) checkpoints. Its value is a number of segments, with a default value of 3. Increasing this setting can lead to serious performance gains on write-intensive databases, such as those that do bulk data loading, mass updates, or a high volume of transaction processing. Increasing this value requires additional disk space. To determine how much, you can use the following formula:

16MB × ((2 × checkpoint_segments) + 1)

Also be aware that this benefit may be reduced if your xlog files are kept on the same physical disk as your data files.

checkpoint_warning

This setting, added in PostgreSQL 7.4, controls whether the server will emit a warning if checkpoints occur more frequently than the number of seconds given here. The value is measured in seconds; the default is 30. This value requires a full restart of PostgreSQL for any changes to take effect.

checkpoint_timeout

This setting controls the maximum amount of time allowed between WAL checkpoints. The value is measured in seconds; the default value is 300 seconds. This value is usually best kept between 3 and 10 minutes, with the upper end of that range becoming more appropriate the more the write load tends to group into bursts of activity. In some cases, where very large data loads must be processed, you can set this value even higher, even as much as 30 minutes, and still see some benefits.
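A minimal postgresql.conf sketch tying these settings together might look like the following. The specific values are illustrative assumptions for a hypothetical dedicated server with fast disks, not recommendations from the text; tune them for your own hardware and workload:

```
# postgresql.conf - illustrative sketch; values are assumptions, tune for your hardware
effective_cache_size = 65536   # measured in 8KB pages: 65536 x 8KB = 512MB expected cache
random_page_cost = 2           # fast disks favor index scans; the default is 4
fsync = true                   # leave on for data integrity; off only for rebuildable data
checkpoint_segments = 10       # extra disk needed: 16MB x ((2 x 10) + 1) = 336MB
checkpoint_timeout = 300       # seconds between forced WAL checkpoints
checkpoint_warning = 30        # warn if checkpoints occur more often than every 30s
```

Remember that, as noted above, several of these settings take effect only after a full restart of PostgreSQL.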
Using Logging for Performance Tuning

While most of the logging options are used for error reporting or audit logging, the two options covered in this section can be used to gather critical performance-related information.

log_duration

This setting causes the execution time of every statement to be logged when statement logging is turned on. This can be used to profile the queries being run on a server, to get a feel for both quick and slow queries, and to help determine overall speed. The default is set to FALSE, meaning statement durations will not be printed.

log_min_duration_statement

This setting, added in version 7.4, is similar to log_duration, but in this case the statement and its duration are printed only if the execution time exceeds the time allotted here. The value is measured in milliseconds, with the default being –1 (meaning no queries are logged). This setting is best set in multiples of 1,000, depending on how responsive you need your system to be. It is also often recommended to start with a really high value (30,000, or 30 seconds), handle those queries first, and gradually reduce the setting as you deal with any queries that are found.

■Tip There is a popular external tool called Practical Query Analysis (PQA) that can be used to do more advanced analyses of PostgreSQL log data to find slow query bottlenecks. You can find out more about this tool on its homepage at http://pqa.projects.postgresql.org/.

Managing Run-Time Information

When administering a database server, you will often need to see information about the current state of the server and to gather profiling information about the queries being executed on the system. The following settings help control the amount of information made available through PostgreSQL.

stats_start_collector

This setting controls whether PostgreSQL will collect statistics.
The default is for this setting to be turned on, and you should verify this setting if you intend to do any profiling on the system.

stats_command_string

This setting controls whether PostgreSQL should collect statistics on the currently executing command within each session. The information collected includes both the query being executed and the start time of the query. This information is made available in the pg_stat_activity view. The default is to leave this setting turned off, because it incurs a small performance penalty. However, unless you are under the most dire of server loads, you are strongly recommended to turn this setting on.

stats_row_level

This setting controls whether PostgreSQL should collect row-level statistics on database activity. This information can be viewed through the pg_stat_* and pg_statio_* system views. It can be invaluable for determining system use, including such things as which indexes are underused and thus not needed, and which tables have a high number of sequential scans and thus might need an index. The default is to leave this setting off, because it incurs a performance penalty when turned on. However, the tuning information that can be obtained often outweighs this penalty, so you may want to turn it on.

Working with Tablespaces

Before PostgreSQL 8.0, administrators had to be very careful to monitor disk usage from both size and speed standpoints, and often had to settle for some balance between the two for their database. While this was certainly possible, in some scenarios it proved rather inflexible for the needs of some systems. Because of this, some administrators would go through the cumbersome steps of creating symbolic links on the file system to add this flexibility.
Unfortunately, this was somewhat dangerous, because PostgreSQL had no knowledge of these underlying changes and thus, in the normal course of events, could sometimes break these fragile setups. PostgreSQL 8.0 solved this with the addition of the tablespace feature. Tablespaces within PostgreSQL provide two major benefits:

• They allow administrators to control where relations are stored on disk, to better account for disk space issues that may be encountered as database size grows.

• They allow administrators to take advantage of different disk subsystems for different objects within the database, based on the usage patterns of those objects.

Because working with tablespaces requires disk access, you need to be a superuser to create any new tablespaces; however, once created, a tablespace can be made usable by anyone.

Creating a Tablespace

The first step in creating a new tablespace is to define an area on the hard drive for that tablespace to reside in. A tablespace can be created in any empty directory on disk that is owned by the operating system user used to run PostgreSQL (usually postgres). Once we have that directory defined, we can create our tablespace from within PostgreSQL with the following command syntax:

CREATE TABLESPACE tablespacename [OWNER username] LOCATION 'directory'

If no owner is given, the tablespace will be owned by the user who issued the command. As an example, let's create a tablespace called extraspace on a spare hard drive, mounted at /mnt/spare:

phppg=# CREATE TABLESPACE extraspace LOCATION '/mnt/spare';
CREATE TABLESPACE

If we now examine the pg_tablespace system table, we see our tablespace listed there along with the default system tablespaces:

phppg=# select * from pg_tablespace;
  spcname   | spcowner | spclocation | spcacl
------------+----------+-------------+--------
 pg_default |        1 |             |
 pg_global  |        1 |             |
 extraspace |        1 | /mnt/spare  |

We see our tablespace listed under the spcname column.
The owner of the tablespace is listed in spcowner, the location on disk under spclocation, and any privileges under spcacl.

Altering a Tablespace

The ALTER TABLESPACE command allows us to change the name or owner of a tablespace. The command takes one of two forms. The first form renames a current tablespace to a new name:

ALTER TABLESPACE tablespacename RENAME TO newtablespacename;

The second form changes the owner of a tablespace to a new owner:

ALTER TABLESPACE tablespacename OWNER TO newowner;

Note that this does not change the ownership of the objects within that tablespace.

Dropping a Tablespace

Of course, from time to time, we may want to drop a tablespace that we have created. This is accomplished simply enough with the DROP TABLESPACE command:

DROP TABLESPACE tablespacename;

Note that all objects within a tablespace must first be deleted separately, or the DROP TABLESPACE command will fail.

Vacuum and Analyze

Compared to most database systems, PostgreSQL is a relatively low-maintenance database system. However, PostgreSQL does have a few tasks that need to be run regularly, whether manually, through automated system tools, or via some other means. These two tasks are the periodic vacuuming and analyzing of your tables. This section explains why we need to run these processes and introduces the commands involved in doing so.

Vacuum

PostgreSQL employs a Multiversion Concurrency Control (MVCC) system to handle highly concurrent loads without locking. One aspect of an MVCC system is that multiple versions of a given row may exist within a table at any given time; this may happen if, for example, one user is selecting a row while another is updating it. While this is good for high concurrency, at some point these multiple row versions must be resolved.
That point is at transaction commit, when the server marks any versions of a row that are no longer valid as such; a row version in this state is referred to as a "dead tuple." In an MVCC system, these dead tuples must eventually be removed, because otherwise they waste disk space and can slow down subsequent queries. Some database systems choose to do this housecleaning at transaction commit time, scanning in-progress transactions and moving records around on disk as needed. Rather than put this work in the critical path of running transactions, PostgreSQL leaves it to a background process, which can be scheduled in a fashion that incurs minimal impact on the mainline system. This work is handled by PostgreSQL's VACUUM command. The syntax for VACUUM is simple enough:

VACUUM [FULL | FREEZE] [VERBOSE] [ANALYZE] [table [column]];

The VACUUM command breaks down into two basic use cases, each a variation of the above syntax and each accomplishing different tasks. The first case, sometimes referred to as a "regular" or "lazy" vacuum, is called without the FULL option, and is used to recover disk space found in empty disk pages and to mark space as reusable for future transactions. This form of VACUUM is nonblocking, meaning concurrent reads and writes may occur on a table as it is being vacuumed. Calling this version of the command without a table name vacuums all tables in the database; specifying a table vacuums only that table.

■Caution If you are managing your vacuuming manually, you can normally get away with vacuuming only specific tables under normal operations, but you do need to do a complete vacuum of the database once every one billion transactions in order to keep the transaction ID counter (an internal counter used for managing which transactions are valid) from getting corrupted.
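As a concrete sketch of the lazy form, the following commands assume a hypothetical table named customers; the table name is ours for illustration, not the book's:

```sql
-- Lazy vacuum of one table: nonblocking, reclaims empty pages and marks space reusable
VACUUM VERBOSE customers;

-- Lazy vacuum of every table in the current database, refreshing planner statistics too
VACUUM ANALYZE;
```

The second form is the kind of database-wide pass the Caution above recommends running periodically.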
The other case for VACUUM, referred to as the "full" version, is invoked with the FULL keyword. This version of VACUUM is much more aggressive about reclaiming dead tuple space. Rather than just reclaim available space and mark space for reuse, it physically moves tuples around, maximizing the amount of space that can be recovered. While this is good for performance and for managing disk space, the downside is that VACUUM FULL must exclusively lock the table while it is being worked on, meaning that no concurrent read or write operations can take place on the table while it is being vacuumed. Because of this, the generally recommended practice is to use regular "lazy" vacuums and reserve VACUUM FULL for cases in which a large majority of the rows in a table have been removed or updated.

There is actually a third version of the VACUUM command, known as VACUUM FREEZE. This version is meant for freezing a database into a steady state, where no further transactions will be modifying data. Its primary use is in creating new template databases, and it is not needed in most, if any, routine maintenance plans.

The ANALYZE option can be used with both cases of VACUUM. If it is present, PostgreSQL runs an ANALYZE command for each table after it is vacuumed, updating the statistics for each table. We discuss the ANALYZE command more in just a moment.

The VERBOSE option provides valuable output that can be studied to determine information about the physical makeup of the table, including how many live rows are in the table, how many dead rows have been reclaimed, and how many pages are being used on disk for the table and its indexes.

Analyze

When you execute a query with PostgreSQL, the server examines the query to determine the fastest plan for retrieving the query results.
It bases these decisions on statistical information that it holds about each table, such as the number of rows in the table, the range of values in its columns, or the distribution of those values. In order for the server to consistently choose good plans, this statistical information must be kept up to date. This task is accomplished through the ANALYZE command, using the following syntax:

ANALYZE [ VERBOSE ] [ table [ (column [, ...] ) ] ]

The ANALYZE command can be called at the database level, where all tables are analyzed; at the table level, where a single table is analyzed; or even at the column level, where a single column of a specific table is analyzed. In all cases, PostgreSQL examines the table to determine various pieces of statistical information and stores that information in the pg_statistic table. On larger tables, ANALYZE looks at only a small, statistical sample of the table, allowing even very large tables to be analyzed in a relatively short period of time. Also, ANALYZE requires only a read lock on the table being analyzed, so it is possible to run ANALYZE while concurrent operations are happening within the database. The VERBOSE option outputs a progress report and a summary of the statistical information collected.

The recommended practice is to run ANALYZE at regular intervals, with the interval between runs based on how frequently (or infrequently) the statistical makeup of the table changes due to new inserts, updates, or deletes on the data within the table.

Autovacuum

In versions prior to PostgreSQL 8.1, the execution of VACUUM and ANALYZE commands had to be managed manually, or with an extra autovacuum process. Beginning in version 8.1, this automated process has been integrated into the PostgreSQL core code, and it can be enabled by setting the autovacuum parameter to TRUE in the postgresql.conf file.
When autovacuum is enabled, PostgreSQL launches an additional server process that periodically connects to each database in the system and reviews the number of inserted, updated, or deleted rows in each table to determine whether a VACUUM or ANALYZE command should be run. The frequency of these checks can be controlled through the autovacuum_naptime setting in the postgresql.conf file. PostgreSQL starts by vacuuming any database that is close to transaction ID wraparound; if no database meets that criterion, PostgreSQL vacuums the database that was processed least recently.

In addition to controlling how often each database is checked, you can control the criteria under which a given table will be vacuumed or analyzed. The primary way of setting these criteria is through the autovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor settings for vacuuming, and the autovacuum_analyze_threshold and autovacuum_analyze_scale_factor settings for analyzing, all of which are found in the postgresql.conf file. The autovacuum process uses these settings to compute a "vacuum threshold" for each table, based on the following formula:

vacuum threshold = vacuum base threshold + (vacuum scale factor × number of tuples)

While these settings are applied on a global basis, you can also set these parameters for individual tables in the pg_autovacuum system table. This table allows you to enter a row for each table in your database and set individual base threshold and scale factor settings for those tables, or even to disable running VACUUM or ANALYZE commands on given tables as needed. One reason you might want to disable running VACUUM or ANALYZE commands on a table is that the table has a narrowly defined use (for example, strictly inserts only), where the statistics of the data involved are not likely to change much over time.
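A sketch of these knobs in postgresql.conf follows. The numeric values are illustrative assumptions rather than the book's recommendations (check the defaults shipped with your 8.1 installation), and the worked arithmetic simply applies the threshold formula above:

```
# postgresql.conf - autovacuum sketch; values are illustrative assumptions
autovacuum = true
autovacuum_naptime = 60                # seconds between checks of each database
autovacuum_vacuum_threshold = 1000     # vacuum base threshold, in tuples
autovacuum_vacuum_scale_factor = 0.4   # fraction of the table's tuples
autovacuum_analyze_threshold = 500
autovacuum_analyze_scale_factor = 0.2

# Worked example for a table of 50,000 tuples with the values above:
#   vacuum threshold = 1000 + (0.4 x 50000) = 21,000 changed tuples
```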
Conversely, a situation in which you might want to increase the likelihood of a table being vacuumed is one in which the table has a high rate of updates, perhaps updating all of its rows in a matter of minutes.

At the time of this writing, the autovacuum feature hasn't quite settled in the code for 8.1, and given that it is a relatively new feature in PostgreSQL, it will likely change somewhat over the next few PostgreSQL releases. However, the advantages it offers in ease of administration are very compelling, and thus you are encouraged to read more about it in the 8.1 documentation and use it when you can.

Backup and Recovery

Although not strictly needed for good performance, backing up your database should be a natural part of any production system. These tasks are not difficult to perform in PostgreSQL, but it is important to fully understand exactly what you are getting with your backups before a failure occurs. There is nothing worse than having a hard drive go out and then realizing you weren't doing proper backups. Three commands cover database backups and restores, covered next.

pg_dump

Because the database is the backbone of many enterprise systems, and those systems are expected to run 24 hours a day, 7 days a week, it is imperative that you have a way to take online backups without the need to bring the system down. In PostgreSQL, this is accomplished with the pg_dump command:

pg_dump [option] [dbname]

The options for pg_dump are listed in Table 26-4.

Table 26-4. pg_dump Options

Option                     Explanation

Connection Options
-h, --host=host            Specifies the host to connect to; defaults to PGHOST or the local machine.
-p, --port=port            Specifies the port to connect on; defaults to PGPORT or the compiled-in port.
-U username                Specifies the user to connect as; defaults to the current system user.
-W                         Forces a password prompt even if the server does not require one.

Backup Options
-a, --data-only            Outputs only the data from the database, not the schema. Used in plain-text dumps.
-b, --blobs                Includes large objects in the dump. Used in non-text dumps. On by default in 8.1.
-c, --clean                Outputs SQL to drop objects before creating them. Used in plain-text dumps.
-C, --create               Includes a command to create the database itself. Used in plain-text dumps.
-d, --inserts              Dumps data using INSERT commands instead of COPY. Will slow restores.
-D, --column-inserts       Specifies column names in INSERT commands. Even slower than -d.
-f, --file=file            Dumps output to the specified file rather than to standard out.
-F, --format=c|p|t         Specifies the format for the dump: custom, plain-text, or tar archive.
-i, --ignore-version       Ignores a version mismatch between the database and pg_dump.
-n, --schema=schema        Dumps only the objects in the specified schema.
-o, --oids                 Includes OIDs with the data for each row. Normally not needed.
-O, --no-owner             Omits commands to set object ownership. Used in plain-text dumps.
-s, --schema-only          Dumps only the database schema, and no data.

Because there are quite a few options for pg_dump, let's take a look at some of the more common scenarios you may encounter when backing up your PostgreSQL database. The following command connects as the postgres user, dumps an archive of the mydb database in the custom archive format, and redirects that output into a file called mydb.pgr:

pg_dump -U postgres -Fc mydb > mydb.pgr

The next command connects to a database called phppg running on a host called production, producing a schema-only dump, without owner information but with the commands to drop objects before creating them, in a file called production_schema.sql:

pg_dump -h production -s -O -c -f production_schema.sql phppg
-S, --superuser=username   Specifies a superuser to use when disabling triggers.
-t, --table=table          Dumps only the specified table.
-v, --verbose              Produces verbose output while dumping.
-x, --no-privileges, --no-acl   Does not emit GRANT/REVOKE commands in the dump output.
--disable-dollar-quoting   Forces function bodies to be dumped with standard SQL string syntax.
--disable-triggers         Emits commands to disable triggers when loading data in plain-text dumps.
-Z, --compress=0..9        Sets the compression level to use in the custom dump format.

The following command connects to a database called customer as the user postgres on a server running on port 5480 and produces a data-only dump that disables triggers on data reload, redirected into the file data.sql:

pg_dump -U postgres -p 5480 -a --disable-triggers customer > data.sql

The last command produces a schema-only dump of the customer table in the company database, excluding the privilege information:

pg_dump -t customer --no-privileges -s -f data.sql company

As you can see, the pg_dump program is extremely flexible in the output that it can produce. The important thing is to verify your backups and test them by reloading them into development servers before you have a problem.

■Tip As you may have noticed, we used the file extensions .pgr and .sql for the output files in the preceding examples. While you can actually use any file name and any file extension, we usually recommend using .sql for plain SQL dumps, and .pgr for custom-formatted dumps that will require pg_restore to reload them.

pg_dumpall

Although the pg_dump program works very well for backing up a single database, if you have multiple databases installed in a particular cluster, you may want to use the pg_dumpall program.
This program works in many of the same ways as pg_dump, with a few differences:

• pg_dumpall dumps information that is global between databases, such as user and group information, which pg_dump does not back up.

• All output from pg_dumpall is in plain-text format; it does not support the custom or tar archive formats like pg_dump does.

• Due to format limitations, pg_dumpall does not dump large object information. If you have large objects in your database, you need to dump them separately using pg_dump.

• The pg_dumpall program always dumps its output to standard out, so its output must be redirected to a file rather than written to a specified file name.

Aside from these differences, pg_dumpall works and acts like pg_dump, so if you are familiar with pg_dump, you will understand how to operate pg_dumpall.

■Tip Remember that pg_dumpall dumps all databases to a single file. If you foresee a need to restore individual databases in a more portable fashion, you may want to stick with using pg_dump for your backup needs.

CHAPTER 27 ■ THE MANY POSTGRESQL CLIENTS
Gilmore_5475C27.fm Page 612 Tuesday, February 7, 2006 2:25 PM

[...] introduction to psql: This chapter introduces the psql client along with many of the options that you'll want to keep in mind to maximize its usage.

• Commonplace psql tasks: You'll see how to execute many of psql's commonplace commands, including how to log on and off a PostgreSQL server, use configuration files to set environment variables and tweak psql's behavior, read in and edit commands found within [...]

[...] the PostgreSQL server. Bundled with the PostgreSQL distribution, psql is akin to MySQL's mysql client and Oracle's SQL*Plus tool. With it, you can create and delete databases, tablespaces, and tables, execute transactions, execute general queries such as table selections and insertions, and [...]
pg_restore

The pg_restore program is used to restore database dumps that have been created using either pg_dump's tar or custom archive formats. The basic syntax of pg_restore is straightforward:

pg_restore [option] [filename]

If the file name is omitted from the command, pg_restore takes its input from standard [...]

[...] The final command restores only the data, disabling triggers as it loads, into the database called test, from the custom-formatted archive file called mydb.pgr:

pg_restore -a --disable-triggers -d test -Fc mydb.pgr

Upgrading Between Versions

PostgreSQL development seems to be moving faster than ever these days. At the time of this writing, PostgreSQL 8.1 was being finalized in an effort to begin testing [...] (second section), and revision (third section) releases. Revision releases (for example, upgrading from 8.0.2 to 8.0.3) are the easiest to handle, because the on-disk format for database files is usually guaranteed to remain the same, meaning that upgrading is as simple as stopping your server, installing the binaries from the newer version of PostgreSQL right over top of the older version, and then restarting [...]

[...] executing the command and then exiting psql to make adjustments to those commands from within an editor can become quite tedious. To save yourself from the tedium, you can edit these files without ever leaving psql by executing \e. For example, to edit the audit.sql file used in the previous example, execute the following command:

corporate=> \e audit.sql

This will open the file within whatever editor has been [...]

ENCODING = 'SQL_ASCII'
HISTFILE = '~/.psql_history'
HISTSIZE = '500'

Storing Configuration Information in a Startup File

PostgreSQL users have two startup files at their disposal, both of which can be used to affect psql's behavior on the system-wide and user-specific levels, respectively. The system-wide psqlrc file is located within PostgreSQL's etc/ directory on Linux and within %APPDATA%\postgresql\ on [...]

[...] initially connect to a new installation of PostgreSQL, you'll want to connect to the template1 database and use that to create a new database. If there are schema objects or extensions that you need to load into PostgreSQL that you want all future databases to have access to, you can load them into the template1 database. The template0 database is mainly provided as a backup in case you manage to modify your [...]

[...] global variables (see the later section "Storing psql Variables and Options"). Therefore, to connect a user website to the database corporate found on the PostgreSQL server located at IP address 192.168.3.45, you'd execute the following command:

%>psql -h 192.168.3.45 corporate website

To see the other syntax variations for this task, see the section "Logging Onto and Off the Server," later in this chapter [...]

[...] conf/config.inc.php-dist file, located in this newly uncompressed directory (which at the time of writing is titled phpPgAdmin), and save it as config.inc.php to the same directory. Open a Web browser and proceed to the phpPgAdmin home directory, for example, http://www.example.com/phpPgAdmin/index.php. You will be presented with a welcome screen, which prompts for a username, password, choice of language, and a [...]
