Oracle Database 10g: The Complete Reference, Part 4


The read-only status for tablespaces is displayed via the Status column of the USER_TABLESPACES data dictionary view, as shown in the following example:

alter tablespace USERS read only;

Tablespace altered.

select Status from USER_TABLESPACES
 where Tablespace_Name = 'USERS';

STATUS
---------
READ ONLY

alter tablespace USERS read write;

Tablespace altered.

select Status from USER_TABLESPACES
 where Tablespace_Name = 'USERS';

STATUS
---------
ONLINE

nologging Tablespaces

You can disable the creation of redo log entries for specific objects. By default, Oracle generates log entries for all transactions. If you wish to bypass that functionality (for instance, if you are loading data and you can completely re-create all the transactions), you can specify that the loaded object or the tablespace be maintained in nologging mode. You can see the current logging status for tablespaces by querying the Logging column of USER_TABLESPACES.

Temporary Tablespaces

When you execute a command that performs a sorting or grouping operation, Oracle may create a temporary segment to manage the data. The temporary segment is created in a temporary tablespace, and the user executing the command does not have to manage that data. Oracle will dynamically create the temporary segment and will release its space when the instance is shut down and restarted. If there is not enough temporary space available and the temporary tablespace datafiles cannot autoextend, the command will fail.

Each user in the database has an associated temporary tablespace; there may be just one such tablespace for all users to share. A default temporary tablespace is set at the database level, so all new users will have the same temporary tablespace unless a different one is specified during the create user or alter user command.

As of Oracle Database 10g, you can create multiple temporary tablespaces and group them. Assign the temporary tablespaces to tablespace groups via the tablespace group clause of the create temporary tablespace or alter tablespace command. You can then specify the group as a user's default tablespace. Tablespace groups can help to support parallel operations involving sorts.

Tablespaces for System-Managed Undo

You can use Automatic Undo Management (AUM) to place all undo data in a single tablespace. When you create an undo tablespace, Oracle manages the storage, retention, and space utilization for your rollback data via system-managed undo (SMU). When a retention time is set (in the database's initialization parameter file), Oracle will make a best effort to retain all committed undo data in the database for the specified number of seconds. With that setting, any query taking less than the retention time should not result in an error as long as the undo tablespace has been sized properly. While the database is running, DBAs can change the UNDO_RETENTION parameter value via the alter system command.

As of Oracle Database 10g, you can guarantee that undo data is retained, even at the expense of current transactions in the database. When you create the undo tablespace, specify retention guarantee as part of your create database or create undo tablespace command.
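The following is a minimal sketch of these commands; the tablespace name, datafile path, sizes, and retention value are illustrative, not requirements:

create undo tablespace UNDOTBS2
  datafile '/u01/oradata/orcl/undotbs2_01.dbf' size 500M
  retention guarantee;

-- Best-effort retention of committed undo for 900 seconds; with
-- retention guarantee in place, the retention is enforced.
alter system set UNDO_RETENTION = 900;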
Use care with this setting, because it may force transactions to fail in order to guarantee the retention of old undo data in the undo tablespace.

Supporting Flashback Database

As of Oracle Database 10g, you can use the flashback database command to revert an entire database to a prior point in time. DBAs can configure tablespaces to be excluded from this option; the alter tablespace flashback off command tells Oracle to exclude that tablespace's transactions from the data written to the flashback database area. See Chapter 28 for details on flashback database command usage.

Transporting Tablespaces

A transportable tablespace is a tablespace that can be "unplugged" from one database and "plugged into" another. To be transportable, a tablespace, or a set of tablespaces, must be self-contained: the tablespace set cannot contain any objects that refer to objects in other tablespaces. Therefore, if you transport a tablespace containing indexes, you must move the tablespace containing the indexes' base tables as part of the same transportable tablespace set. The better you have organized and distributed your objects among tablespaces, the easier it is to generate a self-contained set of tablespaces to transport.

To transport tablespaces, you need to generate a tablespace set, copy or move that tablespace set to the new database, and plug the set into the new database. Because these are privileged operations, you must have database administration privileges to execute them. As a developer, you should be aware of this capability, because it can significantly reduce the time required to migrate self-contained data among databases. For instance, you may create and populate a read-only tablespace of historical data in a test environment and then transport it to a production database, even across platforms. See Chapter 46 for details on transporting tablespaces.

Planning Your Tablespace Usage

With all these options, Oracle can support very complex environments. You can maintain a read-only set of historical data tables alongside active transaction tables. You can place the most actively used tables in datafiles that are located on the fastest disks. You can partition tables (see Chapter 17) and store each partition in a separate tablespace. With all these options available, you should establish a basic set of guidelines for your tablespace architecture. This plan should be part of your early design efforts so you can take the best advantage of the available features. The following guidelines should be a starting point for your plan.

Separate Active and Static Tables

Tables actively used by transactions have space considerations that differ significantly from static lookup tables. The static tables may never need to be altered or moved; the active tables may need to be actively managed, moved, or reorganized. To simplify the management of the static tables, isolate them in a dedicated tablespace. Within the most active tables, there may be further divisions: some of them may be extremely critical to the performance of the application, and you may decide to move them to yet another tablespace. Taking this approach a step further, separate the active and static partitions of tables and indexes, as sketched in the example that follows.
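As a concrete sketch of this separation (the tablespace names, file paths, and tables below are hypothetical examples, not objects from this book's schema):

create tablespace STATIC_DATA
  datafile '/u02/oradata/orcl/static01.dbf' size 100M;

create tablespace ACTIVE_DATA
  datafile '/u03/oradata/orcl/active01.dbf' size 2G;

-- A rarely changing lookup table belongs in the static tablespace
create table COUNTRY_LOOKUP (
  Country_Code  CHAR(2) primary key,
  Country_Name  VARCHAR2(50))
 tablespace STATIC_DATA;

-- A heavily updated transaction table belongs in the active tablespace
create table ORDER_ENTRY (
  Order_ID      NUMBER primary key,
  Country_Code  CHAR(2),
  Order_Date    DATE)
 tablespace ACTIVE_DATA;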
Ideally, this separation will allow you to focus your tuning efforts on the objects that have the most direct impact on performance while eliminating the impact of other object usage on the immediate environment.

Separate Indexes and Tables

Indexes may be managed separately from tables; you may create or drop indexes while the base table stays unchanged. Because their space is managed separately, indexes should be stored in dedicated tablespaces. You will then be able to create and rebuild indexes without worrying about the impact of that operation on the space available to your tables.

Separate Large and Small Objects

In general, small tables tend to be fairly static lookup tables, such as a list of countries. Oracle provides tuning options for small tables (such as caching) that are not appropriate for large tables (which have their own set of tuning options). Because the administration of these types of tables may be dissimilar, you should try to keep them separate. In general, separating active and static tables will take care of this objective as well.

Separate Application Tables from Core Objects

The two sets of core objects to be aware of are the Oracle core objects and the enterprise objects. Oracle's core objects are stored in its default tablespaces: SYSTEM, SYSAUX, the temporary tablespace, and the undo tablespace. Do not create any application objects in these tablespaces or under any of the schemas provided by Oracle. Within your application, you may have some objects that are core to the enterprise and could be reused by multiple applications. Because these objects may need to be indexed and managed to account for the needs of multiple applications, they should be maintained apart from the other objects your application needs.

Grouping the objects in the database according to the categories described here may seem fairly simplistic, but it is a critical part of successfully deploying an enterprise-scale database application. The better you plan the distribution of I/O and space, the easier it will be to implement, tune, and manage the application's database structures. Furthermore, database administrators can manage the tablespaces separately: taking them offline, backing them up, or isolating their I/O activity. In later chapters, you will see details on other types of objects (such as materialized views) as well as the commands needed to create and alter tablespaces.

Chapter 21: Using SQL*Loader to Load Data

In the scripts provided for the practice tables, a large number of insert commands are executed. In place of those inserts, you could create a file containing the data to be loaded and then use Oracle's SQL*Loader utility to load the data. This chapter provides you with an overview of the use of SQL*Loader and its major capabilities. Two additional data-movement utilities, Data Pump Export and Data Pump Import, are covered in Chapter 22.
SQL*Loader, Data Pump Export, and Data Pump Import are described in great detail in the Oracle Database Utilities guide provided with the standard Oracle documentation set.

SQL*Loader loads data from external files into tables in the Oracle database. SQL*Loader uses two primary files: the datafile, which contains the information to be loaded, and the control file, which contains information on the format of the data, the records and fields within the file, the order in which they are to be loaded, and even, when needed, the names of multiple files that will be used for data. You can combine the control file information into the datafile itself, although the two are usually separated to make it easier to reuse the control file.

When executed, SQL*Loader will automatically create a log file and a "bad" file. The log file records the status of the load, such as the number of rows processed and the number of rows committed. The "bad" file will contain all the rows that were rejected during the load due to data errors, such as nonunique values in primary key columns. Within the control file, you can specify additional commands to govern the load criteria. If these criteria are not met by a row, the row will be written to a "discard" file. The log, bad, and discard files will by default have the extensions .log, .bad, and .dsc, respectively. Control files are typically given the extension .ctl.

SQL*Loader is a powerful utility for loading data, for several reasons:

■ It is highly flexible, allowing you to manipulate the data as it is being loaded.
■ You can use SQL*Loader to break a single large data set into multiple sets of data during commit processing, significantly reducing the size of the transactions processed by the load.
■ You can use its Direct Path loading option to perform loads very quickly.

To start using SQL*Loader, you should first become familiar with the control file, as described in the next section.

The Control File

The control file tells Oracle how to read and load the data. The control file tells SQL*Loader where to find the source data for the load and the tables into which to load the data, along with any other rules that must be applied during the load processing. These rules can include restrictions for discards (similar to where clauses for queries) and instructions for combining multiple physical rows in an input file into a single row during an insert. SQL*Loader will use the control file to create the insert commands executed for the data load.

The control file is created at the operating-system level, using any text editor that enables you to save plain text files. Within the control file, commands do not have to obey any rigid formatting requirements, but standardizing your command syntax will make later maintenance of the control file simpler. The following listing shows a sample control file for loading data into the BOOKSHELF table:

LOAD DATA
INFILE 'bookshelf.dat'
INTO TABLE BOOKSHELF
(Title         POSITION(01:100)  CHAR,
 Publisher     POSITION(101:120) CHAR,
 CategoryName  POSITION(121:140) CHAR,
 Rating        POSITION(141:142) CHAR)

In this example, data will be loaded from the file bookshelf.dat into the BOOKSHELF table.
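Conceptually, SQL*Loader turns each parsed record into an insert statement such as the following sketch (the values shown are illustrative ones that a record in bookshelf.dat might carry, not output from the utility):

insert into BOOKSHELF (Title, Publisher, CategoryName, Rating)
values ('MY LEDGER', 'KOCH PRESS', 'ADULTNF', '4');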
The bookshelf.dat file will contain the data for all four of the BOOKSHELF columns, with whitespace padding out the unused characters in those fields. Thus, the Publisher column value always begins at position 101 in the file, even if the Title value is less than 100 characters. Although this formatting makes the input file larger, it may simplify the loading process. No length needs to be given for the fields, since the starting and ending positions within the input data stream effectively give the field length.

The infile clause names the input file, and the into table clause specifies the table into which the data will be loaded. Each of the columns is listed, along with the position where its data resides in each physical record in the file. This format allows you to load data even if the source data's column order does not match the order of columns in your table. To perform this load, the user executing the load must have INSERT privilege on the BOOKSHELF table.

Loading Variable-Length Data

If the columns in your input file have variable lengths, you can use SQL*Loader commands to tell Oracle how to determine when a value ends. In the following example, commas separate the input values:

LOAD DATA
INFILE 'bookshelf.dat'
BADFILE '/user/load/bookshelf.bad'
TRUNCATE
INTO TABLE BOOKSHELF
FIELDS TERMINATED BY ","
(Title, Publisher, CategoryName, Rating)

NOTE
Be sure to select a delimiter that is not present within the values being loaded. In this example, a comma is the delimiter, so any comma present within any text string being loaded will be interpreted as an end-of-field character.

The fields terminated by "," clause tells SQL*Loader that during the load, each column value will be terminated by a comma. Thus, the input file does not have to be 142 characters wide for each row, as was the case in the first load example. The lengths of the columns are not specified in the control file, since they will be determined during the load.

In this example, the name of the bad file is specified by the badfile clause. In general, the name of the bad file is given only when you want to redirect the file to a different directory.

This example also shows the use of the truncate clause within a control file. When this control file is executed by SQL*Loader, the BOOKSHELF table will be truncated before the start of the load. Since truncate commands cannot be rolled back, you should use care when using this option. In addition to truncate, you can use the following options:

■ append   Adds rows to the table.
■ insert   Adds rows to an empty table. If the table is not empty, the load will abort with an error.
■ replace   Empties the table and then adds the new rows. The user must have DELETE privilege on the table.

Starting the Load

To execute the commands in the control file, you need to run SQL*Loader with the appropriate parameters. SQL*Loader is started via the SQLLDR command at the operating-system prompt (in UNIX, use sqlldr).

NOTE
The SQL*Loader executable may consist of the name SQLLDR followed by a version number. Consult your platform-specific Oracle documentation for the exact name. For Oracle Database 10g, the executable file should be named SQLLDR.
When you execute SQLLDR, you need to specify the control file, username/password, and other critical load information, as shown in Table 21-1. Each load must have a control file, since none of the command-line parameters identify the two most critical pieces of load information: the input file and the table being loaded.

You can separate the arguments to SQLLDR with commas. Enter them with the keywords (such as userid or log), followed by the parameter value. Keywords are always followed by an equal sign (=) and the appropriate argument.

Keyword                  Description
----------------------   ------------------------------------------------------------
Userid                   Username and password for the load, separated by a slash.
Control                  Name of the control file.
Log                      Name of the log file.
Bad                      Name of the bad file.
Discard                  Name of the discard file.
Discardmax               Maximum number of rows to discard before stopping the load.
                         The default is to allow all discards.
Skip                     Number of logical rows in the input file to skip before
                         starting to load data. Usually used during reloads from the
                         same input file following a partial load. The default is 0.
Load                     Number of logical rows to load. The default is all.
Errors                   Number of errors to allow. The default is 50.
Rows                     Number of rows to commit at a time. Use this parameter to
                         break up the transaction size during the load. The default
                         for conventional path loads is 64; the default for Direct
                         Path loads is all rows.
Bindsize                 Size of the conventional path bind array, in bytes. The
                         default is operating-system dependent.
Silent                   Suppress messages during the load.
Direct                   Use Direct Path loading. The default is FALSE.
Parfile                  Name of the parameter file that contains additional load
                         parameter specifications.
Parallel                 Perform parallel loading. The default is FALSE.
File                     File to allocate extents from (for parallel loading).
Skip_Unusable_Indexes    Allows loads into tables that have indexes in unusable
                         states. The default is FALSE.
Skip_Index_Maintenance   Stops index maintenance for Direct Path loads, leaving the
                         affected indexes in unusable states. The default is FALSE.
Readsize                 Size of the read buffer. The default is 1MB.
External_table           Use an external table for the load. The default is NOT_USED;
                         other valid values are GENERATE_ONLY and EXECUTE.
Columnarrayrows          Number of rows for the Direct Path column array. The default
                         is 5,000.
Streamsize               Size, in bytes, of the Direct Path stream buffer. The
                         default is 256,000.
Multithreading           A flag to indicate whether multithreading should be used
                         during a Direct Path load.
Resumable                A TRUE/FALSE flag to enable or disable resumable operations
                         for the current session. The default is FALSE.
Resumable_name           Text identifier for the resumable operation.
Resumable_timeout        Wait time for the resumable operation. The default is 7,200
                         seconds.

TABLE 21-1. SQL*Loader Options

If the userid keyword is omitted and no username/password is provided as the first argument, you will be asked for it. If a slash is given after the equal sign, an externally identified account will be used.
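Putting several of these keywords together, a typical invocation might look like the following sketch (the file names and parameter values are hypothetical):

sqlldr userid=practice/practice control=bookshelf.ctl log=bookshelf.log rows=500 errors=10 skip=2

This command commits after every 500 rows, stops the load once more than 10 rows have been rejected, and skips the first two logical rows of the input file, as you might after a partial load.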
You also can use an Oracle Net database specification string to log into a remote database and load the data into it. For example, your command may start as follows:

sqlldr userid=usernm/mypass@dev

The direct keyword, which invokes the Direct Path load option, is described in "Direct Path Loading" later in this chapter. The silent keyword tells SQLLDR to suppress certain informative data:

■ HEADER suppresses the SQL*Loader header.
■ FEEDBACK suppresses the feedback at each commit point.
■ ERRORS suppresses the logging (in the log file) of each record that caused an Oracle error, although the count is still logged.
■ DISCARDS suppresses the logging (in the log file) of each record that was discarded, although the count is still logged.
■ PARTITIONS disables the writing of the per-partition statistics to the log file.
■ ALL suppresses all of the preceding.

If more than one of these is entered, separate each with a comma and enclose the list in parentheses. For example, you can suppress the header and errors information via the following keyword setting:

silent=(HEADER,ERRORS)

NOTE
Commands in the control file override any in the calling command line.

Let's load a sample set of data into the BOOKSHELF table, which has four columns (Title, Publisher, CategoryName, and Rating). Create a plain text file named bookshelf.txt. The data to be loaded should be the only two lines in the file:

Good Record,Some Publisher,ADULTNF,3
Another Title,Some Publisher,ADULTPIC,4

NOTE
Each line is ended by a carriage return. Even though the first line's last value is not as long as the column it is being loaded into, the row will stop at the carriage return.

The data is separated by commas, and we don't want to delete the data previously loaded into BOOKSHELF, so the control file will look like this:

LOAD DATA
INFILE 'bookshelf.txt'
APPEND
INTO TABLE BOOKSHELF
FIELDS TERMINATED BY ","
(Title, Publisher, CategoryName, Rating)

Save that file as bookshelf.ctl, in the same directory as the input data file. Next, run SQLLDR and tell it to use the control file. This example assumes that the BOOKSHELF table exists under the PRACTICE schema:

sqlldr practice/practice control=bookshelf.ctl log=bookshelf.log

When the load completes, you should have one successfully loaded record and one failure. The successfully loaded record will be in the BOOKSHELF table:

select Title from BOOKSHELF
 where Publisher like '%Publisher';

TITLE
-----------
Good Record

A file named bookshelf.bad will be created, and it will contain one record:

Another Title,Some Publisher,ADULTPIC,4

Why was that record rejected? Check the log file, bookshelf.log, which will say, in part:

Record 2: Rejected - Error on table BOOKSHELF.
ORA-02291: integrity constraint (PRACTICE.CATFK) violated - parent key not found

Table BOOKSHELF:
  1 Row successfully loaded.
  1 Row not loaded due to data errors.

Row 2, the "Another Title" row, was rejected because the value for the CategoryName column violated the foreign key constraint: ADULTPIC is not listed as a category in the CATEGORY table.
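A corrective reload can then use the bad file itself as input. The following control file is a hedged sketch (the name bookshelf_fix.ctl is hypothetical, and it assumes the underlying problem, here the missing CATEGORY row, has been fixed first):

LOAD DATA
INFILE 'bookshelf.bad'
APPEND
INTO TABLE BOOKSHELF
FIELDS TERMINATED BY ","
(Title, Publisher, CategoryName, Rating)

Run it the same way as the original load:

sqlldr practice/practice control=bookshelf_fix.ctl log=bookshelf_fix.log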
This works because the rows that failed are isolated into the bad file; once the data has been corrected, that file can serve directly as the input for a later load.

Logical and Physical Records

In Table 21-1, several of the keywords refer to "logical" rows. A logical row is a row that is inserted into the database. Depending on the structure of the input file, multiple physical rows may be combined to make a single logical row. For example, a single logical record such as

Good Record,Some Publisher,ADULTNF,3

may arrive split across multiple physical lines in the input file. The control file's concatenate and continueif clauses tell SQL*Loader how to assemble such physical records into logical records.
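A minimal control-file sketch, assuming each logical record is split across exactly two physical lines (the input file name is hypothetical):

LOAD DATA
INFILE 'bookshelf_split.dat'
CONCATENATE 2
APPEND
INTO TABLE BOOKSHELF
FIELDS TERMINATED BY ","
(Title, Publisher, CategoryName, Rating)

Here, concatenate 2 joins every two physical records into one logical record before the field delimiters are applied.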
tablespace; otherwise, the job will fail In your parameter file (or on the expdp command line), set the NETWORK_LINK parameter equal to the name of your database link The Data Pump Export will write the data from the remote database to the directory defined in your local database Using EXCLUDE, INCLUDE, and QUERY You can exclude or include sets of tables from the Data Pump Export via the EXCLUDE and... very little tolerance for errors and discards On the other hand, if you do not have control over the source for the input datafile, you need to set errors and discardmax high enough to allow the load to complete After the load has completed, you need to review the log file, correct the data in the bad file, and reload the data using the original bad file as the new input file If rows have been incorrectly... 13, 20 04 1 :48 :33 PM 41 5 Color profile: Generic CMYK printer profile Composite Default screen 41 6 Part III: ORACLE Series TIGHT / Oracle Database 10g: TCR / Loney / 225351-7 / Chapter 22 Blind Folio 22 :41 6 Beyond the Basics Parameter Description KILL_JOB Kill the current job and detach related client sessions PARALLEL Alter the number of workers for the Data Pump Import job START_JOB Restart the attached... Default screen ORACLE Series TIGHT / Oracle Database 10g: TCR / Loney / 225351-7 / Chapter 21 Blind Folio 21 :40 3 Chapter 21: Using SQL*Loader to Load Data ■ Streamline the data-writing process by creating multiple database writer (DBWR) processes for the database ■ Remove any unnecessary triggers during the data loads If possible, disable or remove the triggers prior to the load, and perform the trigger . Load Data 40 3 ORACLE Series TIGHT / Oracle Database 10g: TCR / Loney / 225351-7 / Chapter 21 Blind Folio 21 :40 3 P:10Comp Oracle8 351-7CDVenturaook.vp Friday, August 13, 20 04 1 :48 : 24 PM Color. for implementation details. 40 4 Part III: Beyond the Basics ORACLE Series TIGHT / Oracle Database 10g: TCR / Loney / 225351-7 / Chapter 21 Blind Folio 21 :40 4 P:10Comp Oracle8 351-7CDVenturaook.vp Friday,. tablespace set to the new database, and plug the set into the new database. Because these are privileged operations, you must have database administration privileges to execute them. As a developer, you

Ngày đăng: 08/08/2014, 20:21

Từ khóa liên quan

Tài liệu cùng người dùng

Tài liệu liên quan