DBA Blogs

Globally block a specific SQL Plan Directive?

Tom Kyte - 18 hours 21 min ago
I work with a team of DBAs to administer an Oracle-based COTS application suite for over forty client departments within our organization. This past summer we upgraded all client environments from Oracle 12c Release 1 to Oracle 19c, on entirely new and upgraded virtual hardware platforms. Our largest client department has > 20,000 active users. The COTS software package and its data model are primarily designed for transaction throughput and supporting client user browsing actions. It generates large volumes of dynamic SQL statements in response to user actions. For historical reasons, and to support older versions of Oracle, many of the dynamically generated queries include the /* BIND_AWARE */ directive. This has not caused problems in the past, but since the upgrade to Oracle 19c, these directives frequently cause the SQL optimizer to choose vastly inferior, inefficient execution plans. For example, the optimizer may choose a FULL INDEX SCAN on a secondary index of a table containing 300 million rows. Executing the same query after removing the /* BIND_AWARE */ directive causes the optimizer to access the same table using a UNIQUE scan of the primary key. This results in an execution time in milliseconds, as opposed to several minutes (worst case) for the original dynamic query that includes the directive. Since these queries are generated many times per second during heavy usage periods, they have caused some serious performance problems, to the point of rendering the systems unusable. The COTS vendor has looked at the problem, along with recommendations we received from Oracle Support, but acknowledges that it will be a large effort to identify and remove those directives for clients running recent releases of Oracle. We have verified to our satisfaction that the directives are the cause of the problem because of the differences in the execution plans when they are present or not.
This surprised our DBAs because they understood that the Oracle 19c Optimizer ignored directives, but that is apparently not the case. They have been able to work around the problem for some frequently occurring queries by applying SQL patches to force the Optimizer to ignore all directives, but a patch only applies to a single SQL statement / SQL ID. The problem is the large number of dynamically generated queries containing the /* BIND_AWARE */ directive, which results in an indefinite number of distinct SQL statements / SQL IDs. It's like Heracles fighting the Hydra - cut off one head and two more grow back. Is there any way in Oracle 19c to globally disable a specific SQL Plan Directive like /* BIND_AWARE */ ? We do not want to disable all directives because there are others related to specific admin tasks that should be enabled, but we want to be able to suppress any and all occurrences of /* BIND_AWARE */, at least for specific schemas. So far I have not been able to identify any such feature in 19c. Thanks, Patrick
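For reference, the per-statement workaround described above (a SQL patch that makes the optimizer ignore embedded hints for one SQL ID) looks roughly like this; the sql_id and patch name below are placeholders, not values from this system:

```sql
-- Hedged sketch: DBMS_SQLDIAG.CREATE_SQL_PATCH attaches the
-- IGNORE_OPTIM_EMBEDDED_HINTS hint to one specific statement.
-- The limitation is exactly as described: it applies to a single
-- SQL ID, so dynamically generated statements each need their own patch.
DECLARE
  l_patch_name VARCHAR2(128);
BEGIN
  l_patch_name := DBMS_SQLDIAG.CREATE_SQL_PATCH(
    sql_id    => 'abcd1234efgh5',               -- placeholder SQL ID
    hint_text => 'IGNORE_OPTIM_EMBEDDED_HINTS',
    name      => 'ignore_bind_aware_patch');    -- placeholder name
END;
/
```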
Categories: DBA Blogs

How to change Launch Configuration to Launch Template in AWS

Pakistan's First Oracle Blog - Thu, 2022-12-01 21:59

Here is a step-by-step guide on how to change a launch configuration to a launch template in AWS for an Auto Scaling group. It's actually quite simple and straightforward.

AWS sent out a notification this week to accounts that make use of Amazon EC2 Auto Scaling launch configurations: Amazon EC2 launch configurations will deprecate support for new instance types. After December 31, 2022, no new Amazon Elastic Compute Cloud (Amazon EC2) instance types will be added to launch configurations. After this date, existing launch configurations will continue to work, but new EC2 instance types will only be supported through launch templates.


In order to update the ASG, follow the steps below:


1. Create a launch template, paste the user data script into it, and save it. Also make sure that you are using the correct AMI ID in it.


2. Once the launch template is created, navigate to your respective Auto Scaling group. In the details section of the ASG, click the "Edit" button in the launch configuration section. There you will see an option at the top: "Switch to launch template".


3. Then select your newly created launch template and save the changes.

Here is the documentation on how to create a launch template.

Here is the documentation on how to replace a launch configuration with a launch template.

The existing instances will keep running; only new instances will be launched using the launch template. On the ASG console, you can check that an instance was launched using the launch template in the instance management section.

To test from the instance's perspective (e.g., whether the application is running and the instance is working properly), you can log in to the instance and verify the details. Switching to a launch template will not automatically launch a new instance in the ASG; you would have to change the desired capacity to launch a new instance using the launch template.

Categories: DBA Blogs

tracking blocking sessions

Tom Kyte - Thu, 2022-12-01 04:46
Hello sir, I want to see the blocking sessions that occur during the day, so that I can fix the sessions or SQLs that are causing the others to lock up. There is no trace of the blocking (locker) SQL in GV$ACTIVE_SESSION_HISTORY. The sql_id and top_level_sql_id columns identify the blocked SQL, but not the blocking SQL. How can I get it? Thanks for your help.
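A hedged sketch of one common approach: self-join GV$ACTIVE_SESSION_HISTORY on the blocking-session columns to see what the blocker was running at the moment of each wait sample. Note that if the blocker was idle at that sample (e.g., holding a lock while doing nothing), it has no ASH row, which is one reason the blocking SQL often seems to leave "no trace"; hence the outer join.

```sql
-- Sketch only: joins each waiting sample to the blocker's sample taken
-- at the same time on the same instance. Column names are from
-- GV$ACTIVE_SESSION_HISTORY.
SELECT w.sample_time,
       w.session_id       AS waiter_sid,
       w.sql_id           AS waiter_sql_id,
       w.blocking_session AS blocker_sid,
       b.sql_id           AS blocker_sql_id,
       b.event            AS blocker_event
FROM   gv$active_session_history w
LEFT JOIN gv$active_session_history b
       ON  b.session_id       = w.blocking_session
       AND b.session_serial#  = w.blocking_session_serial#
       AND b.inst_id          = w.blocking_inst_id
       AND b.sample_id        = w.sample_id
WHERE  w.blocking_session IS NOT NULL
ORDER  BY w.sample_time;
```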
Categories: DBA Blogs

Default privileges for new objects

Tom Kyte - Thu, 2022-12-01 04:46
Hi, is there a method to set default privileges for all newly created objects in Oracle, such as tables, sequences, procedures, functions and triggers? For example, a SELECT privilege assigned to an Oracle user. How can this be done without having to write additional code beyond, e.g., CREATE TABLE, SEQUENCE, FUNCTION, etc.? Best regards, Dackard.
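Oracle has no per-object default privileges, so a commonly seen workaround (a sketch under assumptions: schema and role names below are placeholders) is a schema-level DDL trigger that schedules a grant after each CREATE. The grant runs through the scheduler because a GRANT cannot execute inside the triggering DDL's own transaction.

```sql
-- Hedged sketch: after any CREATE in schema app_owner, schedule a job
-- that grants SELECT on the new object to reporting_role.
CREATE OR REPLACE TRIGGER app_owner.grant_on_create
AFTER CREATE ON app_owner.SCHEMA
BEGIN
  IF ora_dict_obj_type IN ('TABLE', 'VIEW', 'SEQUENCE') THEN
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'GRANT_' || ora_dict_obj_name,   -- placeholder naming
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN EXECUTE IMMEDIATE ' ||
                    '''GRANT SELECT ON app_owner."' || ora_dict_obj_name ||
                    '" TO reporting_role''; END;',
      enabled    => TRUE,
      auto_drop  => TRUE);
  END IF;
END;
/
```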
Categories: DBA Blogs

Is there any provision for copying data in parallel from a main table to a table in a different schema?

Tom Kyte - Thu, 2022-12-01 04:46
Hello Team, I would like to know whether there is any provision for copying data in parallel from a main table (in one schema) to a table in a different schema without hampering current traffic. We need an approach that does this quickly. Please note: we want to create a new schema that contains the same tables as the main schema. As soon as a SQL operation happens on a main schema table, the change should be copied to the new schema as well, as fast as possible. We will use this new schema for reporting purposes only, so we need to develop this approach. Please guide us on this as soon as possible. Thanks & Regards, Shital
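One technique that fits this requirement (a hedged sketch; table and schema names are placeholders) is a fast-refresh materialized view: changes to the source table are captured in a materialized view log and applied to the reporting copy incrementally, without blocking DML on the source.

```sql
-- Sketch: keep a near-real-time reporting copy of main_schema.orders
-- in report_schema, refreshed incrementally every minute.
CREATE MATERIALIZED VIEW LOG ON main_schema.orders
  WITH PRIMARY KEY, ROWID;

CREATE MATERIALIZED VIEW report_schema.orders_mv
  BUILD IMMEDIATE
  REFRESH FAST ON DEMAND
  START WITH SYSDATE NEXT SYSDATE + 1/1440   -- refresh every minute
  AS SELECT * FROM main_schema.orders;
```

REFRESH FAST ON COMMIT is also possible for truly synchronous copies, but it requires additional privileges across schemas and adds cost to every commit on the source.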
Categories: DBA Blogs

PL/SQL question: how to write a block that accepts start and end booking dates and displays all hotel reservations between those dates.

Tom Kyte - Thu, 2022-12-01 04:46
Create table hotel_reservation with the following fields: booking id, booking start date, booking end date, room type, room rent. Write a PL/SQL block to accept start and end booking dates and display all hotel reservations between those dates. *** In this question I am stuck in the BEGIN block of the PL/SQL: *** <code>create table hotel_reservation(
  booking_id         number(20),
  booking_start_date date,
  booking_end_date   date,
  room_type          varchar2(50),
  room_rent          number(20));

insert into hotel_reservation values(1, TO_DATE('10-05-2022', 'dd-mm-yyyy'), TO_DATE('12-05-2022', 'dd-mm-yyyy'), 'Double', 12000);
insert into hotel_reservation values(2, TO_DATE('21-07-2022', 'dd-mm-yyyy'), TO_DATE('25-07-2022', 'dd-mm-yyyy'), 'Single', 5000);
insert into hotel_reservation values(3, TO_DATE('01-02-2022', 'dd-mm-yyyy'), TO_DATE('02-02-2022', 'dd-mm-yyyy'), 'Luxury', 30000);
insert into hotel_reservation values(4, TO_DATE('30-06-2022', 'dd-mm-yyyy'), TO_DATE('01-07-2022', 'dd-mm-yyyy'), 'Double', 10000);
insert into hotel_reservation values(5, TO_DATE('15-10-2022', 'dd-mm-yyyy'), TO_DATE('15-10-2022', 'dd-mm-yyyy'), 'Quad', 11000);

select * from hotel_reservation;

DECLARE
  book_id    hotel_reservation.booking_id%type;
  book_sdate hotel_reservation.booking_start_date%type := TO_DATE('10-05-2022', 'dd-mm-yyyy');
  book_edate hotel_reservation.booking_end_date%type   := TO_DATE('15-10-2022', 'dd-mm-yyyy');
  r_type     hotel_reservation.room_type%type;
  r_rent     hotel_reservation.room_rent%type;
BEGIN
  SELECT booking_id, booking_start_date, booking_end_date, room_type, room_rent
  INTO   book_id, book_sdate, book_edate, r_type, r_rent
  FROM   hotel_reservation
  WHERE  booking_start_date = book_sdate
  AND    booking_end_date   = book_edate;

  dbms_output.put_line('hotel_reservation ' || book_id || ' ' || book_sdate || ' ' || book_edate || ' ' || r_type || ' ' || r_rent);
END;</code>
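A hedged sketch of the part this question is stuck on: SELECT ... INTO works only when exactly one row matches, so a range query that can return several reservations is usually written with a cursor FOR loop instead.

```sql
-- Sketch: display every reservation whose dates fall inside the
-- accepted start/end range, using the table from the question.
DECLARE
  book_sdate hotel_reservation.booking_start_date%TYPE :=
    TO_DATE('10-05-2022', 'dd-mm-yyyy');
  book_edate hotel_reservation.booking_end_date%TYPE :=
    TO_DATE('15-10-2022', 'dd-mm-yyyy');
BEGIN
  FOR r IN (SELECT booking_id, booking_start_date, booking_end_date,
                   room_type, room_rent
            FROM   hotel_reservation
            WHERE  booking_start_date >= book_sdate
            AND    booking_end_date   <= book_edate)
  LOOP
    DBMS_OUTPUT.PUT_LINE('hotel_reservation ' || r.booking_id || ' ' ||
      r.booking_start_date || ' ' || r.booking_end_date || ' ' ||
      r.room_type || ' ' || r.room_rent);
  END LOOP;
END;
/
```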
Categories: DBA Blogs

Split String with table function

Tom Kyte - Thu, 2022-12-01 04:46
Hi, I use this table function for string splitting (a space is used as the separator): <code>create or replace FUNCTION fmv_space_to_table(p_list IN VARCHAR2)
  RETURN fmv_test_type
AS
  l_string      VARCHAR2(32767) := p_list || ' ';
  l_comma_index PLS_INTEGER;
  l_index       PLS_INTEGER := 1;
  l_tab         fmv_test_type := fmv_test_type();
BEGIN
  LOOP
    l_comma_index := INSTR(l_string, ' ', l_index);
    EXIT WHEN l_comma_index = 0;
    l_tab.EXTEND;
    l_tab(l_tab.COUNT) := TRIM(SUBSTR(l_string, l_index, l_comma_index - l_index));
    l_index := l_comma_index + 1;
  END LOOP;
  RETURN l_tab;
END fmv_space_to_table;</code> The table function works fine: <code>select * from table( fmv_space_to_table( 'A2345 123456 7890 2344'))</code> Output: A2345, 123456, 7890, 2344. When the table function "fmv_space_to_table" is used in a query with only number values, it works fine, e.g. <code>... s.productnumber IN ( select * from table( fmv_space_to_table( '123456 7890')) )</code> When the table function "fmv_space_to_table" is used in a query with letter + number values, it doesn't work, e.g. <code>... s.productnumber IN ( select * from table( fmv_space_to_table( 'A2345')) )</code> Error: ORA-00904: "A2345": ungültige ID (invalid identifier). Cause of the error: my table function returns A2345 as output, but in this case 'A2345' is needed. Question: how can the table function "fmv_space_to_table" be extended so that it can also be used with letter + number values?
Categories: DBA Blogs

ORA-00001 on merge, concurent transactions

Tom Kyte - Thu, 2022-12-01 04:46
When two sessions run simultaneous merges into a table with a unique constraint, the second one throws ORA-00001. Steps to reproduce: <code>create table tab1 (col1 number);
create unique index ind1 on tab1(col1);

--1st session:
merge into tab1 d
using ( select 1 col1 from dual ) s
on (s.col1 = d.col1)
when not matched then insert (col1) values(s.col1);

--second session:
merge into tab1 d
using ( select 1 col1 from dual ) s
on (s.col1 = d.col1)
when not matched then insert (col1) values(s.col1);
--second session now hangs

--first session:
commit;

--second session:
--throws ora-00001</code> As far as I know, this might be how Oracle has behaved since the MERGE statement was introduced, and it might be considered to work as expected. My objection is that MERGE behaves differently depending on whether the first session has committed before the second session starts its merge. The second session obviously knows it cannot just insert, so it hangs and waits for the first session to finish. And the second session is only blocked (hangs) if it is working with the same key of the unique index. So, again, the second session is obviously aware that there is an uncommitted session working on the same record. Yet when the first session commits the record to the table (resulting in an insert), the second session has already decided that it too should insert, and violates the constraint. But why? Shouldn't the second session re-evaluate what the correct action is? If we try this with [merge when matched], the second session hangs until the first one commits, but even though the constrained column is updated, there is no issue for the second session to update the same record. Does it make sense, or do you think "it works as intended" and shouldn't be addressed?
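One hedged mitigation (a sketch, not a fix of the MERGE semantics themselves): catch the duplicate-key error and retry the MERGE. On the retry, the second session sees the now-committed row, so the NOT MATCHED branch is simply skipped. The names match the reproduction above.

```sql
-- Sketch: retry-on-ORA-00001 wrapper around the MERGE from the test case.
BEGIN
  MERGE INTO tab1 d
  USING (SELECT 1 col1 FROM dual) s
  ON (s.col1 = d.col1)
  WHEN NOT MATCHED THEN INSERT (col1) VALUES (s.col1);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    -- The row now exists; rerunning the MERGE matches it and does nothing
    -- (or would update it, if a WHEN MATCHED clause were present).
    MERGE INTO tab1 d
    USING (SELECT 1 col1 FROM dual) s
    ON (s.col1 = d.col1)
    WHEN NOT MATCHED THEN INSERT (col1) VALUES (s.col1);
END;
/
```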
Categories: DBA Blogs

In SQL developer compare of two tables

Tom Kyte - Wed, 2022-11-30 10:26
Hi, how can I compare two tables in SQL Developer (same database) and perform the corresponding add, delete, and modify actions, applying the differences from the source table to the target table? Can I get an example with a procedure for this task? For example, table A holds old data and table B holds new data, so we have to compare the two tables, and whenever new data comes in, the target has to be updated with the adds, deletes, or modifications. I need a procedure with an example for this, please. Thanks
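A hedged sketch of the synchronization logic itself (table and column names `table_a`, `table_b`, `id`, `val` are placeholders): a MERGE covers the add and modify cases, and a separate DELETE removes rows that are no longer in the source.

```sql
-- Sketch: make target table_a match source table_b.
MERGE INTO table_a a
USING table_b b
ON (a.id = b.id)
WHEN MATCHED THEN
  UPDATE SET a.val = b.val            -- modify changed rows
WHEN NOT MATCHED THEN
  INSERT (id, val) VALUES (b.id, b.val);  -- add new rows

DELETE FROM table_a a                 -- delete rows absent from the source
WHERE NOT EXISTS (SELECT 1 FROM table_b b WHERE b.id = a.id);
```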
Categories: DBA Blogs

Oracle dump into csv file have a empty line at start.

Tom Kyte - Wed, 2022-11-30 10:26
I am executing a SQL*Plus session to dump a table's data into a .csv file. I need the header details along with the data in the dumped CSV file. When I use the "set heading on" option, I get the header details along with the data, plus one empty line at the start. I tried multiple options like "set newpage NONE" and others, but the first empty line still appears. Can you please help solve this issue? My code snippet is below: <code>sqlplus -s /nolog << !EOF!
WHENEVER OSERROR EXIT 9;
WHENEVER SQLERROR EXIT SQL.SQLCODE;
connect sys/${DB_PASSWORD}@//${CONN_STR}:${CONN_PORT}/XE as ${DB_USERNAME}
--/*************************/--
set echo off
set feedback off
set linesize 4000
set newpage NONE
set pagesize 0
set serveroutput off
set termout off
set flush off
SET NUMWIDTH 5
SET COLSEP ","
set markup csv on
set sqlprompt ''
set trimspool on
SET VERIFY OFF
SET TERM OFF
--/*******************************/--
--/**** Clear Screen & Buffer ****/--
--/*******************************/--
clear screen
clear buffer
--/*****************************/--
--/**** Start spooling file ****/--
--/*****************************/--
SPOOL $FILE
set heading on
set pagesize 0 embedded on
select * from $table_name;
SPOOL OFF
EXIT
!EOF!</code> Output: <code>sh-4.2$ cat aprm_mirrordb_persons_2022_11_16.csv
"PERSONID","LASTNAME"
1,"das"
2,"das2"
3,"abc"
4,"def"
5,"testdata only temporary purpose"
6,"testdata only temporary purpose"
7,"testdata only temporary purpose"</code>
Categories: DBA Blogs

Average number of rows over a time period

Tom Kyte - Wed, 2022-11-30 10:26
I find myself needing to generate a data set giving the average number of rows over a time period. The period needs to be daily, by hour, covering the same hour across previous days. As an example: get the average number of records from 1 am to 2 am for the last 180 days. In effect, count the number of rows on 17 Nov 2022 between 1 am and 2 am, then count the number of rows on 16 Nov between 1 am and 2 am, then count the number of rows on 15 Nov between 1 am and 2 am, etc. Putting the date and time in the WHERE clause is easy enough but is manual. I would like to dynamically create these values for the WHERE clause based on a start date and time passed to the query. Therefore, if the start date and time passed in is 11-17-2022 01:00, an hour is added to get the end hour, and then 1 day is subtracted repeatedly for the past 180 days to get the average row count for the date range. Is there some function(s) within Oracle that I could use to dynamically create the dates and times based on the start date and time? Update 23 Nov 2022: based on some research I've been doing, here is some code I have come up with so far. I thought it best to put this in a procedure.
<code>
CREATE OR REPLACE PROCEDURE TestAvgATMTransactions (
  DaysToLookBack             INT,
  CurrentStartDateTimeInput  TIMESTAMP,
  CurrentEndDateTimeInput    TIMESTAMP,
  PreviousStartDateTimeInput TIMESTAMP,
  PreviousEndDateTimeInput   TIMESTAMP,
  RTXNTYPCDToCount           VARCHAR2
)
IS
BEGIN
  DECLARE
    Low_ATM_Tran_Count_PWTH EXCEPTION;

    TYPE two_cols_rt IS RECORD (
      PreviousStartDateTime TIMESTAMP,
      PreviousEndDateTime   TIMESTAMP
    );
    TYPE DateTimesToQuery IS TABLE OF two_cols_rt;
    StartEndDateTime DateTimesToQuery;

    PRAGMA EXCEPTION_INIT(Low_ATM_Tran_Count_PWTH, -20001);

    PWTHCount RTXN.RTXNNBR%TYPE; -- the average transactions during the given period
    AvgTransactions INT;         -- the current average transactions
  BEGIN
    -- This does generate the days given how far I want to look back;
    -- however, it does not include the time for some reason.
    -- Need to figure out why, as the time is crucial.
    -- (Chris Saxon - https://livesql.oracle.com/apex/livesql/file/content_LIHBDFVF9IUU6AFZB4H6NVLWL.html)
    SELECT TO_TIMESTAMP (PreviousStartDateTimeInput, 'mm-dd-yyyy hh24:mi') + LEVEL - 1, -- AS StartDateTime,
           TO_TIMESTAMP (PreviousEndDateTimeInput, 'mm-dd-yyyy hh24:mi') + LEVEL - 1    -- AS EndDateTime
    BULK COLLECT INTO StartEndDateTime
    FROM DUAL
    CONNECT BY LEVEL < DaysToLookBack;

    DBMS_OUTPUT.PUT_LINE (StartEndDateTime.COUNT);
  END;

  -- Based on the article "Bulk data processing with BULK COLLECT and FORALL in PL/SQL"
  -- by Steven Feuerstein (https://blogs.oracle.com/connect/post/bulk-processing-with-bulk-collect-and-forall)
  -- FORALL seems like it would do what I want, but not sure
  FORALL index IN 1 .. StartEndDateTime.COUNT
    -- This code needs to get the average for the given RTXNTYPCD for every day
    -- in my two_cols_rt record.
    -- Yes, needs work, as this will not give me the overall average of transactions
    SELECT COUNT(RTXNSOURCECD)
    INTO   PWTHCount
    FROM   RTXN
    WHERE  datelastmaint BETWEEN PreviousStartDateTimeInput AND PreviousEndDateTimeInput
    AND    RTXNSOURCECD = 'ATM'
    AND    RTXNTYPCD = RTXNTYPCDToCount;
  -- end FORALL

  -- This gets the current count for the given RTXNTYPCD
  SELECT COUNT(RTXNSOURCECD)
  INTO   AvgTransactions
  FROM   RTXN
  WHERE  datelastmaint BETWEEN CurrentStartDateTimeInput AND CurrentEndDateTimeInput
  AND    RTXNSOURCECD = 'ATM'
  AND    RTXNTYPCD = RTXNTYPCDToCount;

  -- If the current count for the given RTXNTYPCD
  -- is less than the average for the period, raise the error
  IF AvgTransactions < PWTHCount THEN
    RAISE Low_ATM_Tran_Count_PWTH;
  END IF;
END;
</code> Thank you, Michael
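The per-day counting above can also be done with plain aggregation, avoiding the generated date list entirely. A hedged sketch (table and column names follow the procedure above; note that days with zero matching rows do not contribute a zero to this average):

```sql
-- Sketch: average rows per day in the 1 am-2 am window over the last
-- 180 days, grouping by calendar day and averaging the daily counts.
SELECT AVG(cnt) AS avg_rows_per_day
FROM (
  SELECT TRUNC(datelastmaint) AS d, COUNT(*) AS cnt
  FROM   rtxn
  WHERE  datelastmaint >= TRUNC(SYSDATE) - 180
  AND    TO_CHAR(datelastmaint, 'HH24') = '01'   -- the 1 am-2 am window
  GROUP  BY TRUNC(datelastmaint)
);
```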
Categories: DBA Blogs

Oracle application containers

Tom Kyte - Wed, 2022-11-30 10:26
I have created an application root container and two application PDBs. In the application root I created a table products as follows: <code>CREATE TABLE products SHARING=EXTENDED DATA
 (prod_id   NUMBER(4) CONSTRAINT pk_products PRIMARY KEY,
  prod_name VARCHAR2(15));

ALTER TABLE hr_app_owner.products ENABLE containers_default;</code> If containers_default is enabled on this table, then the query (select * from products;) hangs. When containers_default is disabled, the query works using the CONTAINERS clause (select * from CONTAINERS(products);). Is this a restriction of SHARING=EXTENDED DATA? I tried to search through the documentation without any luck. Thanks in advance.
Categories: DBA Blogs

SQL Query performance issue

Tom Kyte - Wed, 2022-11-30 10:26
Dear Team, we have a delete statement that, when executed, runs indefinitely (we even let it run for more than 3 days and it was still running) until we killed the session. We have been struggling with Support for more than a month, and they have no idea so far what the possible cause could be. We need your help to find out why/where it is hanging. The delete command is as follows: <code>delete from T where rec_date <=to_date('15-JAN-2013')
/</code> Below are the details about the table: <code>SQL> select count(*) FROM T;

    COUNT(*)
------------
   100175652

SQL> ed
Wrote file c:\app\afiedt.buf

  1  select count(*) FROM T
  2* where rec_date <=to_date('15-JAN-2013')
SQL> /

    COUNT(*)
------------
      328786</code> Given the number of records, it shouldn't run for this much time. The table involved has a child table, too (we had already deleted the records from the child table before executing the above query). If we re-import (into a different schema) only the parent table (T) and try deleting from that table, it completes in 10 minutes. However, if we re-import the parent and child tables, the hang behavior can be seen.
<b><u>10046 trace</u></b> <code>call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.03       0.02          0          0          0          0
Execute      1    257.64    1332.39    3575286    1360867    2214317          0
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2    257.67    1332.42    3575286    1360867    2214317          0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 64
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  DELETE  T (cr=0 pr=0 pw=0 time=0 us starts=8)
        21         21         21   TABLE ACCESS FULL T (cr=1360867 pr=1361006 pw=0 time=1990133 us starts=33923681 cost=1990133 size=6508050 card=216935)

Elapsed times include waiting on following events:
  Event waited on                            Times   Max. Wait  Total Waited
  ----------------------------------------  Waited  ----------  ------------
  library cache lock                              1        0.00          0.00
  library cache pin                               1        0.00          0.00
  row cache lock                                  5        0.00          0.00
  Disk file operations I/O                       36        0.00          0.00
  db file sequential read                   2214297        1.03       1155.11
  db file scattered read                      10646        0.03         19.67
  resmgr:cpu quantum                             99        0.10          1.63
  gc current grant 2-way                         77        0.00          0.13
  latch: gc element                              16        0.00          0.00
  resmgr:internal state change                    1        0.09          0.09
  latch: gcs resource hash                        1        0.00          0.00
  latch: object queue header operation            1        0.00          0.00</code> DDL of parent/child tables (table/constraint names renamed): <code>Create Table T (
  Serv_Prov_Code     Varchar2(15 Char) Not Null Enable,
  B1_Con_Nbr         Number Not Null Enable,
  B1_Con_History_Nbr Number Not Null Enable,
  B1_Con_Genus       Varchar2(15 Char) Not Null Enable,
  B1_Con_Comment     Varchar2(4000),
  B1_Con_Des         Varchar2(255 Char),
  B1_Con_Eff_Dd1     Date,
  B1_Con_Expir_Dd    Date,
  B1_Con_Impact_Code Varc...</code>
Categories: DBA Blogs

Steampipe Brings SQL to AWS Cloud APIs

Pakistan's First Oracle Blog - Tue, 2022-11-29 21:30
A number of database administrators from SQL Server, Oracle and other relational database backgrounds have transitioned into cloud engineering in AWS over the last few years. Most of them still yearn for and miss their SQL queries for pulling out data.


I have been using Boto3 API calls in my Python and NodeJS scripts for some time to pull data out of AWS services. This data then gets pushed into tables in a MySQL RDS instance or an Oracle Express Edition database, so that I can run some SQL queries to slice and dice it as per my requirements. Selecting specific columns, filtering with a WHERE clause, grouping with GROUP BY, and various other native SQL features bring this AWS data to life. But as you can already imagine, this is a lot of work and extra overhead, not to mention the cost.

This is where Steampipe enters the picture. As Steampipe describes itself: Steampipe exposes APIs and services as a high-performance relational database, giving you the ability to write SQL-based queries to explore dynamic data. Mods extend Steampipe's capabilities with dashboards, reports, and controls built with simple HCL.




It's a sort of live database for which you don't have to create any database infrastructure. It uses your native AWS credentials, and its plugin does the magic. One thing I have encountered while using it is that it starts lagging when a multi-account scenario is in play, and you may face a few issues, especially as the volume of retrieved data increases. But for starters and a single account, it works like a charm, and you can rely on good old Structured Query Language.
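To give a flavor of what this looks like in practice, here is a typical query against the Steampipe AWS plugin (a sketch; the `aws_ec2_instance` table and its columns follow the plugin's published schema):

```sql
-- Sketch: list running EC2 instances, grouped visually by region,
-- straight from the AWS API via Steampipe's SQL interface.
SELECT instance_id,
       instance_type,
       instance_state,
       region
FROM   aws_ec2_instance
WHERE  instance_state = 'running'
ORDER  BY region, instance_type;
```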




Categories: DBA Blogs

Retrieving and understanding a response file for Oracle GoldenGate Deployment configuration

DBASolved - Sun, 2022-11-27 12:46

To say that Oracle GoldenGate has undergone a transformation over the last seven years is an understatement.  Back in 2017, […]

The post Retrieving and understanding a response file for Oracle GoldenGate Deployment configuration appeared first on DBASolved.

Categories: DBA Blogs

Supported database services

Tom Kyte - Wed, 2022-11-16 16:46
Will this service be available for Exacs?
Categories: DBA Blogs

How to call external sql script from within a PL/SQL block

Tom Kyte - Wed, 2022-11-16 16:46
Hi Tom, this is probably super simple once you show me how, but I haven't been able to find the answer in the documentation. How do I call an external SQL script from within PL/SQL? I know I can embed the actual SQL text inside the PL/SQL block, but I would rather keep the SQL in a separate file to be called externally when I need it outside the PL/SQL code. E.g. <code>BEGIN
  -- Check for some condition.
  -- if condition true then
  --   execute foo.sql
END;
/</code> In SQL*Plus, we execute external scripts using one @, e.g. @foo.sql. And within a script, we execute other scripts with two @, e.g. @@bar.sql. Not sure how to do the same within a PL/SQL block. Thanks, -Peter
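PL/SQL itself cannot run a SQL*Plus script (the @ commands are client features), but the script layer can do the branching for you. A hedged sketch of one common SQL*Plus idiom (file names are placeholders): have a query compute the script name into a substitution variable, then run it.

```sql
-- Sketch: SQL*Plus picks which script to run based on a condition
-- evaluated in the database, then executes it with @@.
COLUMN script_name NEW_VALUE run_me NOPRINT
SELECT CASE
         WHEN TO_CHAR(SYSDATE, 'DY') = 'MON' THEN 'foo.sql'   -- condition true
         ELSE 'noop.sql'                                      -- do-nothing script
       END AS script_name
FROM dual;

@@&run_me
```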
Categories: DBA Blogs

Index MIN/MAX Optimization

Tom Kyte - Wed, 2022-11-16 16:46
Dear Team, I have a logging table with the following structure: <code>SQL> desc t
 Name          Null?    Type
 ------------- -------- ---------------------
 LOG_ID                 NUMBER
 TRACEID                VARCHAR2(250 CHAR)
 TYPE                   VARCHAR2(250 CHAR)
 NODE                   VARCHAR2(250 CHAR)
 URL                    VARCHAR2(4000 CHAR)
 TOKEN                  VARCHAR2(4000 CHAR)
 METHOD                 VARCHAR2(250 CHAR)
 TIMESTAMPREQ           VARCHAR2(100 CHAR)
 BODY                   CLOB
 RESPONSE               CLOB
 TIMESTAMPRES           VARCHAR2(100 CHAR)
 REC_DATE               DATE</code> In addition to other indexes, there is one index on the columns (Trunc(Rec_Date), Type, Node, Method). I try to get the min or max of trunc(rec_date), but surprisingly it doesn't use the index min/max optimization. Below are the commands and trace file output: <code>SELECT min(trunc(rec_date)) FROM t where trunc(rec_date) is not null

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.03       0.09          0          1          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2     94.40     567.28     256176     507776          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4     94.43     567.38     256176     507777          0          1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 242
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=507776 pr=256176 pw=0 time=0 us starts=567286694)
   4176565    4176565    4176565   TABLE ACCESS FULL T (cr=507776 pr=256176 pw=0 time=86468 us starts=709057596 cost=86468 size=34748199 card=3860911)

SELECT min(trunc(rec_date)) FROM t where rec_date is not null

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          1          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      5.84      17.41          5     259917          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      5.84      17.41          5     259918          0          1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 242
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=259917 pr=5 pw=0 time=0 us starts=17415568)
   4184139    4184139    4184139   TABLE ACCESS FULL T (cr=259917 pr=5 pw=0 time=86242 us starts=106246737 cost=86242 size=34748199 card=3860911)</code> Can you please help me understand why t...
Categories: DBA Blogs

CREATE VIEW . . . BEQUEATH DEFINER vs. WITH GRANT OPTION

Tom Kyte - Wed, 2022-11-16 16:46
I have a view, cust.viewname, that references a table in a different schema I'll call "data". According to Oracle documentation for CREATE VIEW, the optional clause BEQUEATH DEFINER causes the view to run using the permissions of the view owner. This is the default if not specified. The view schema has SELECT privs to a table in the data schema, but not WITH GRANT OPTION. The view owner has access to the data table, but when I try to create the view, it throws <b>ORA-01720: grant option does not exist for 'DATA.TABLENAME'</b> <code>CREATE OR REPLACE VIEW cust.viewname BEQUEATH DEFINER AS SELECT field1, field2, ..., fieldN FROM data.tablename;</code> If the view operates as the owner, and the view owner has access to the referenced objects, then why should grants to objects in other schemas require WITH GRANT OPTION? Is there a way around WITH GRANT OPTION? Our cloud provider refuses to allow WITH GRANT OPTION, but we have an ongoing need to create views in CUST that reference objects in other schemas and will be used by users other than CUST. Thanks in advance.
Categories: DBA Blogs
