Feed aggregator

CLOB column over DB Link

Tom Kyte - Wed, 2021-09-15 22:26
Hi Tom, We have a query which has to get a set of rows over a db link, with a CONTAINS predicate on a CLOB column. E.g., there is a DB A and a DB B with table T1 (c1 varchar2(10), c2 clob). I want to run a query from DB A, using a dblink to DB B, to query T1 with a CONTAINS predicate on c2, and based on the query the rows have to return c1 from t1. Can you suggest any way in which we can get this done? We have a couple of restrictions though: We can't insert into DB A from DB B. Table T1 contains around 100 thousand rows and has around 20 columns. Thanks Ramkumar
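
One workaround worth sketching here (my illustration, not part of the original thread): CONTAINS generally cannot be evaluated against a remote table from the calling site, but DB B can evaluate it locally inside a view, and DB A can then select from that view over the link; only the scalar column c1 crosses the link. The view, link, and search-term names below are all illustrative.

-- On DB B: evaluate the Oracle Text predicate locally, behind a view
CREATE OR REPLACE VIEW t1_matches AS
SELECT c1
  FROM t1
 WHERE CONTAINS(c2, 'search_term') > 0;   -- assumes a CONTEXT index on c2

-- On DB A: query the view across the database link
SELECT c1
  FROM t1_matches@db_b_link;

If the search term needs to vary per query, the view could read it from an application context on DB B (set through a remote procedure call in the same session) rather than hard-coding it.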
Categories: DBA Blogs

Using Buildkite to perform Snyk Open Source and Snyk Code (SAST) tests

Pas Apicella - Tue, 2021-09-14 23:15

Buildkite is a platform for running fast, secure, and scalable continuous integration pipelines on your own infrastructure. In the example below I will run my Buildkite pipeline on my MacBook to perform two Snyk tests: one for open-source dependencies and the other a SAST test of the code itself.

Snyk is an open source security platform designed to help software-driven businesses enhance developer security.

You will need an account on Snyk and Buildkite to follow the steps below.

Steps

1. First, in Snyk, let's create a service account whose token I will use to authenticate. You could use your personal Snyk API token, but a service account is all you need to run Snyk tests, so it makes sense to use that instead.


2. Next, let's store that service account token somewhere from which I can safely inject it into my pipeline at the appropriate step. In this example I am using Google Secret Manager, but there are other choices, of course.


Note: we will be using the secret name "PAS_BUILDKITE_SA_SNYK_TOKEN" shortly.

3. You will need a Buildkite agent on your local infrastructure; in my case I am using my MacBook, so that's done as follows:

https://buildkite.com/docs/agent/v3/macos 

pasapicella@192-168-1-113:~/demos/integrations/buildkite$ ./start-agent.sh

   _           _ _     _ _    _ _                                _
  | |         (_) |   | | |  (_) |                              | |
  | |__  _   _ _| | __| | | ___| |_ ___    __ _  __ _  ___ _ __ | |_
  | '_ \| | | | | |/ _` | |/ / | __/ _ \  / _` |/ _` |/ _ \ '_ \| __|
  | |_) | |_| | | | (_| |   <| | ||  __/ | (_| | (_| |  __/ | | | |_
  |_.__/ \__,_|_|_|\__,_|_|\_\_|\__\___|  \__,_|\__, |\___|_| |_|\__|
                                                 __/ |
 https://buildkite.com/agent                    |___/

2021-09-15 11:09:33 NOTICE Starting buildkite-agent v3.32.3 with PID: 50130
2021-09-15 11:09:33 NOTICE The agent source code can be found here: https://github.com/buildkite/agent
2021-09-15 11:09:33 NOTICE For questions and support, email us at: hello@buildkite.com
2021-09-15 11:09:33 INFO   Configuration loaded path=/usr/local/etc/buildkite-agent/buildkite-agent.cfg
2021-09-15 11:09:33 INFO   Registering agent with Buildkite...
2021-09-15 11:09:35 INFO   Successfully registered agent "y.y.y.y.tpgi.com.au-1" with tags []
2021-09-15 11:09:35 INFO   Starting 1 Agent(s)
2021-09-15 11:09:35 INFO   You can press Ctrl-C to stop the agents

4. You're now ready to create a pipeline. A pipeline is a template of the steps you want to run. There are many types of steps: some run scripts, some define conditional logic, and others wait for user input. When you run a pipeline, a build is created. Each of the steps in the pipeline ends up as a job in the build, and the jobs then get distributed to available agents.

In the example below our pipeline is created from a GitHub repo, and we then select the default branch. From that point on, incoming webhooks are sent to Buildkite by the source control provider (GitHub, GitLab, Bitbucket, etc.) to trigger builds; in this scenario we are using GitHub.


5. Let's go ahead and edit the build steps directly using YAML. My final YAML is as follows, and I explain below why it looks this way; in short, I just want to run two Snyk tests rather than actually deploy anything for this demo.

steps:
  - commands:
      - "snyk config set api=$$SNYK_SA_TOKEN_VAR"
      - "snyk test --severity-threshold=$$SEVERITY_THRESHOLD"
      - "snyk code test --org=$$SNYK_ORG"
    plugins:
      - avaly/gcp-secret-manager#v1.0.0:
          credentials_file: /Users/pasapicella/snyk/clouds/gcp/buildkite-secrets-gcp.json
          env:
            SNYK_SA_TOKEN_VAR: PAS_BUILDKITE_SA_SNYK_TOKEN
    env:
      SEVERITY_THRESHOLD: "critical"
      SNYK_ORG: "pas.apicella-41p"
    label: "Employee API Snyk Test"

A few things to note here:

  • I am using a GCP secret manager plugin to retrieve my Snyk SA token, stored under the name "PAS_BUILDKITE_SA_SNYK_TOKEN"
  • I am using a Google service account JSON file so I can authenticate with GCP and retrieve my secret into "SNYK_SA_TOKEN_VAR"; you will need a service account with privileges to at least READ from Google Secret Manager
  • I am using some local, non-sensitive ENV variables which get used at the appropriate time
  • I have three commands, of which the first sets my Snyk API token for the Snyk CLI
  • I have not installed the Snyk CLI because it already exists on my MacBook
  • I am only looking for my Snyk tests to fail if they find any CRITICAL issues
  • I should be running a "mvn package" here, but I can still execute a "snyk test" without it for demo purposes as we have a pom.xml
  • I could also build a container in the pipeline from the source code and then run a "snyk container test" as well; in fact I could even run "snyk iac test" against any IaC files in the repo
  • If a test fails we could easily run "snyk monitor" to load the results into the Snyk App, but for this demo we don't do that

6. Now we can manually run a build or wait for a triggering event on our repo. Here are some screenshots of what it looks like, including some failures where we find vulnerabilities in a separate Node.js repo.

It would make more sense to create a Buildkite plugin for Snyk rather than execute commands from a script, and an example of one is linked below. Having said that, the commands you run for a "snyk test" are simple enough to include directly in the pipeline YAML without a plugin, especially if your infrastructure is already set up to run the Snyk CLI.

https://github.com/seek-oss/snyk-buildkite-plugin


Hopefully you have seen how easy it is to continuously avoid known vulnerabilities in your dependencies and code by integrating Snyk into your continuous integration pipeline with Buildkite.

More Information
Snyk
Categories: Fusion Middleware

Register for a Free Webinar with PFCLForensics for Breached Oracle Databases

Pete Finnigan - Tue, 2021-09-14 07:46
I will be giving a free webinar hosted with our reseller/distributor in Slovenia and the Balkans region - Palsit. The free webinar is at 09:00 UK time or 10:00 CET on the 22nd of September 2021. In this webinar....[Read More]

Posted by Pete On 14/09/21 At 01:28 PM

Categories: Security Blogs

How to Call Fusion REST api in PLSQL using apex web service

Tom Kyte - Mon, 2021-09-13 15:26
Hi, I'm trying to consume Oracle Fusion REST web services in PL/SQL using apex_web_service and I'm getting the below error. Could you please guide me on this? Also, please suggest how to enable basic authentication in the web service, i.e. how to set the basic auth. ORA-20987: APEX - One or more cookies set in apex_util.g_request_cookies contain an invalid value. - Contact your application administrator. Details about this incident are available via debug id "49001 <code> CREATE OR REPLACE FUNCTION ATP_REST RETURN CLOB AS l_clob CLOB; l_result VARCHAR2(32767); BEGIN APEX_WEB_SERVICE.g_request_cookies.delete(); APEX_WEB_SERVICE.g_request_cookies(1).name := ''; --- I'm passing the username (basic auth) APEX_WEB_SERVICE.g_request_cookies(1).value := ''; --- pwd l_clob := APEX_WEB_SERVICE.make_rest_request( p_url => 'https://ehpv-dev8.fa.em2.oraclecloud.com/bpm/api/4.0/tasks', p_http_method => 'GET' -- p_parm_name => APEX_UTIL.string_to_table('p_int_1:p_int_2'), -- p_parm_value => APEX_UTIL.string_to_table(p_int_1 || ':' || p_int_2) ); DBMS_OUTPUT.put_line('l_clob=' || l_clob); RETURN l_clob; END; / </code> Regards, Praveen Paulraj
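
For what it's worth, a minimal sketch (my own, not a confirmed answer): APEX_WEB_SERVICE.MAKE_REST_REQUEST accepts basic-auth credentials through its p_username/p_password parameters, while g_request_cookies is for cookies only, which is what raises the ORA-20987 here. The URL is the poster's; the credentials are placeholders.

CREATE OR REPLACE FUNCTION atp_rest RETURN CLOB AS
  l_clob CLOB;
BEGIN
  -- Basic authentication goes in as parameters; no cookies required
  l_clob := apex_web_service.make_rest_request(
              p_url         => 'https://ehpv-dev8.fa.em2.oraclecloud.com/bpm/api/4.0/tasks',
              p_http_method => 'GET',
              p_username    => 'your_fusion_user',       -- placeholder
              p_password    => 'your_fusion_password');  -- placeholder
  RETURN l_clob;
END;
/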
Categories: DBA Blogs

Oracle 12c EMON processing features

Tom Kyte - Mon, 2021-09-13 15:26
I read the article titled <b>Event Monitor Process: Architecture and Known Issues (Doc ID 105067.1)</b>: "Notification implements a high-watermark scheme where, when the backlog of notification events hits an in-memory limit, the messages are, in the case of 9.2, spilled onto disk into the SYS.AQ_EVENT_TABLE_Q queue if a watermark has been set. In 9.2 the default value for the watermark is 0, which means that no messages will spill onto disk and the entire shared pool could be used for notification events. In 10.1 onwards the procedures DBMS_AQADM.GET_WATERMARK and DBMS_AQADM.SET_WATERMARK are available to set the amount of memory available for notification events, but the messages are no longer spilled onto disk. Instead the enqueueing processes are subject to flow control until the backlog has been cleared by the emon process." What does it mean for me as a developer? I've a 12.2 EE database under Linux (64-bit). I registered a notification callback procedure. How can I know, when enqueuing a message to the queue, whether my message will be approved by EMON or placed in the backlog? And why should I know that? Does "until the backlog has been cleared by the emon process" mean that the entire backlog will be lost without any further processing by EMON? I have noticed many times that there are messages in the queue while at the same time EMON's job PLSQL_NTFN hangs in the state "Waiting for messages in the queue", and these messages seem never to be processed. Maybe that is the case of clearing the backlog? And where can I find information (in server logs, in database objects) about clearing the backlog and about the MsgIds of messages that have been cleared? TIA, Andrew.
Categories: DBA Blogs

INLIST ITERATOR

Tom Kyte - Mon, 2021-09-13 15:26
Hi Tom, Some questions about SQL tuning. 1) I found that when using "IN" in the where clause, INLIST ITERATOR is shown in the explain plan in a cost-based database (and, using the index correctly, the response is fast). However, there is no such INLIST ITERATOR under rule-based optimization (a full table scan is used and the response is slow). Does INLIST ITERATOR only occur under the cost-based optimizer? Is it possible to force the optimizer to use INLIST ITERATOR in a rule-based database (without any hints added to the SQL statement or using alter session set optimizer_mode = choose)? Or is it possible to rewrite the "IN" into other forms such that the index can be used in a rule-based database? I have tried to rewrite "IN" to "OR" but the index still cannot be used. The only way the index can be used is by using UNION ALL over the values of the "IN". 2) If the database is rule-based (optimizer_mode=rule), and the table has statistics, will Oracle use cost-based optimization to answer the query? I remember that Oracle will use rule-based if the optimizer_mode is set to rule (from the Oracle documentation), no matter whether the table has statistics. But I found that in some situations Oracle will use cost-based. Thanks, David
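
For illustration, the UNION ALL rewrite the question mentions (my own sketch with made-up names): each branch carries a simple equality predicate, which the rule-based optimizer can drive through an index on the column, unlike the IN-list or OR form.

-- IN-list form; under the RBO this may full-scan t:
SELECT * FROM t WHERE status IN ('A', 'B', 'C');

-- Hand-expanded equivalent; each branch can use an index on status:
SELECT * FROM t WHERE status = 'A'
UNION ALL
SELECT * FROM t WHERE status = 'B'
UNION ALL
SELECT * FROM t WHERE status = 'C';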
Categories: DBA Blogs

Defaulting Argument Values in a BASH script

The Anti-Kyte - Mon, 2021-09-13 13:49

I recently found out (again) how to default argument values in a shell script, so I thought I’d write it down this time.
The ABBA flavour in what follows is because we're talking about BASH – the Bourne Again SHell – and Bjorn Again are an ABBA tribute band. This is quite handy because the ABBA back catalogue is rather versatile, having already lent itself to an explanation of Oracle password complexity.

Welcome to the inside of my head. Sorry about the mess…

Let’s start with a simple script – abba.sh :

#!/bin/bash

if [ $# = 0 ];  then
    echo 'No argument values passed in'
else
    TRACK=$1
    echo $TRACK
fi;

…which we now make executable :

chmod u+x abba.sh

Run this – first with no arguments passed in and then with a single argument: with no arguments the script prints "No argument values passed in", and with an argument it simply echoes the argument back.

If we want to default the value of $TRACK (and let’s face it, who doesn’t love a bit of Dancing Queen), we can do the following (saved as abba2.sh) …

#!/bin/bash

TRACK="${1:-Dancing Queen}"
if [ $# != 1 ];  then
    echo 'No argument values passed in'
fi;
echo $TRACK

Now, when we run this, we can see that it'll accept an argument as before. However, if no argument is passed in, the argument count is unaffected but the variable is initialized to its default value, so the script reports that no arguments were passed in and then echoes "Dancing Queen".

FastAPI on Kubernetes with NGINX Ingress

Andrejus Baranovski - Mon, 2021-09-13 03:29
A simple tutorial about a complex thing - how to expose a FastAPI app to the world on Kubernetes with the NGINX Ingress Controller. I explain the structure of the Kubernetes Pod for FastAPI along with the Kubernetes Service. I show how FastAPI properties should be set to be accessible through an Ingress path definition. You will learn how to check the logs for the NGINX Ingress Controller and the FastAPI Pod.

 

Considering Reclaimed Wooden Flooring for Your Home

OraQA - Mon, 2021-09-13 01:33

Home is a place where you find real peace of mind. This is the main reason why most contemporary individuals see their home as a recreational place to live in. When it comes to adding additional charm to your home, you may want to install wooden flooring. Obviously, wooden flooring is a great choice to go with when it comes to transforming the look and feel of a home.

Wooden flooring is a great flooring option for homeowners, but the problem comes with its price: many homeowners find wooden flooring an expensive option to go with.

Are you also among those homeowners?

If so, then reclaimed wood flooring can be the best solution for your flooring requirements.

What Is Reclaimed Wooden Flooring?

As the name suggests, reclaimed wood flooring is made out of used wood. Whether it comes from damaged wooden furniture or other items made out of wood, you can reclaim the same wood for other requirements. In other words, you create wooden flooring out of used wood.

Engineered flooring, or reclaimed wooden flooring, is surely a great choice for modern homeowners. Since it is an eco-friendly flooring option, most contemporary homeowners are choosing it as their flooring option.

Can I Make My Flooring Stylish and Attractive with Engineered Wood Flooring?

There are individuals who assume that they may not be able to enjoy the benefits of wooden flooring manufactured out of fresh wood. Of course, that is a wrong perception to go with. If you are also dealing with this misconception, you need to eliminate it as soon as possible.

You need to remember that engineered wood flooring is surely a great choice. Moreover, you can easily make your reclaimed wooden flooring more stylish and amazing than ever before. Obviously, choosing used wood for manufacturing wood flooring doesn't mean that you can't achieve a desired design. You can do whatever you want with reclaimed wood.

What Are My Flooring Budget and Requirements?

If you don't want to regret your buying decision at a later stage, you first need to know your budget and flooring requirements. Obviously, you won't choose a wooden flooring option that you don't actually need. Similarly, you will never want to choose an expensive flooring option that doesn't fit within your limited budget.

So, if you don't want to cope with unwanted financial trouble, you first need to take your budget and flooring requirements into consideration. Taking these two significant factors into account can help you make your life better than ever before.

Where to Find Best Manufacturer of Reclaimed Wooden Flooring?

If you want to choose top-quality wooden flooring, you first need to find the right wood flooring manufacturer. Of course, the right manufacturer can produce amazing quality, style and design in reclaimed wood flooring for modern homes.

When it comes to searching for the best reclaimed wooden flooring manufacturer, you need to look nowhere else but the internet. By searching online, you can learn about the top manufacturers of engineered wood flooring.

The post Considering Reclaimed Wooden Flooring for Your Home appeared first on ORA QA.

Review: Intent based networking for dummies

Dietrich Schroff - Sat, 2021-09-11 16:38

I found the book Intent-Based Networking for Dummies on LinkedIn, posted by Juniper.

The book contains 5 chapters across 44 pages.



Chapter one (expressing intent and seeing the basics of IBN) tries to give a motivation for intent-based networking. And the story goes like this: "humans are slow, expensive, error prone, and inconsistent. [...] the systems are vulnerable to small mistakes that can have enormous costs to business."
In addition we have "inadequate automation", "data overload", and "stale documentation". (At this point I think we are generally doomed and should stop networking at all.)
BUT with IBN "you can manage what requires automation, make your system standardized and reliable, and ensure you're free to move and adjust heading into the future." The promise of IBN is a change from node-to-node management to an autonomic system: "The system self-operates, self-adjusts, and self-corrects within the parameters of your expressed technical objectives."
So everything should work like this: you express your intent - this intent is translated, and then orchestration and configuration roll out the changes onto your network.
I think one good phrase for IBN is: "You say what, it says how"


The second chapter is named "Looking at the characteristics of IBN". This chapter does not give any helpful information at all. One nice concept is mentioned here, the "Simple Pane of Glass": "It's an important concept and a valuable benefit of having a single source of truth: You can see your entire network from a single, consistent perspective." But I think this is not possible for networks. Depending on your perspective (ethernet, vlans, ips, mpls, ...) the view is completely different. Just think about hardware ports vs virtual ports...
 

"Detailing the IBN architecture" is the titel of chapter 3. This chapter is with 9 pages the biggest chapter inside the booklet. In this chapter an example is drilled through: The intent "I want a VLAN connecting servers A, B, C, and D." is analyzed and the steps from define, translate, verify, deploy and monitor are shown.
In addition there are some subsection where the reference design, abstractions, inventory are put into relation to each other. This is illustrated with very nice figures. Really a good chapter!
 

In chapter four the book moves forward from fulfillment to assurance. "This chapter shows you why your IBN system (IBNS) requires sophisticated, deep analytics that can detect when a deployed service is drifting out of spec and either automatically make the adjustments to bring it back into compliance or alert you to the problem."
It starts by differentiating uncontrolled changes from controlled changes. This is nothing specific to IBN; I think it is useful for any kind of operation in IT.
 

Chapter 5 is, as always in this "Dummies" series, a recap of the chapters before.


All in all, a nice booklet which serves as a good introduction to this new kind of network management system. But whether IBN can keep its promises - let's see...
 




Questions on Locking

Tom Kyte - Fri, 2021-09-10 14:06
Hi Tom, Here are a few questions on locking. 1) How do I use v$lock to figure out lock details? Details like who holds the lock, on which table, who else is waiting, what type of lock it is (shared or exclusive?), and since when it has been locked. 2) Why and when are exclusive/shared locks used? Could you give me an example please? Thanks for the answer, I understand better. 3) Could you tell me the significance of v$mystat and v$locked_object? Thank you. 4) Do we have page-level locking in Oracle? Thanks in advance, Gururaj
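
As a starting point for question 1, here is a common shape for a V$LOCK query (a sketch, not Tom's answer): LMODE is the mode held (shared vs exclusive), REQUEST > 0 marks a waiter, BLOCK = 1 marks a blocker, CTIME is the seconds since the lock was taken, and for TM locks ID1 is the locked object's ID.

SELECT s.sid,
       s.serial#,
       s.username,
       l.type,        -- TM = table (DML) lock, TX = transaction lock
       l.lmode,       -- mode held: 0 = none ... 4 = share ... 6 = exclusive
       l.request,     -- mode requested; > 0 means the session is waiting
       l.block,       -- 1 = this session is blocking someone else
       l.ctime,       -- seconds since the lock was acquired or requested
       o.object_name  -- resolvable for TM locks via id1
  FROM v$lock l
  JOIN v$session s ON s.sid = l.sid
  LEFT JOIN dba_objects o ON o.object_id = l.id1 AND l.type = 'TM'
 ORDER BY l.block DESC, l.ctime DESC;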
Categories: DBA Blogs

Memory used for decode, regexp

Tom Kyte - Thu, 2021-09-09 19:46
Hi Tom, I want to understand how and where memory is allocated in an instance to evaluate decode and regexp functions in Oracle. If we are reading data from a table and writing it to some other location (not Oracle), which would be better: firing the query with the regexp function in the where clause, OR taking the data out of Oracle without the regexp filter and applying the regexp on some other server? E.g. Select col1, col2 from table1 where regexp_like(col1, '[^a-zA-Z0-9]') and date_col between to_date('08/02/2021 00.00.00') and to_date('08/02/2021 06.00.00') OR Select col1, col2 from table1 where date_col between to_date('08/02/2021 00.00.00') and to_date('08/02/2021 06.00.00')
Categories: DBA Blogs

PFCLForensics is released a tool for forensic analysis of a breached database

Pete Finnigan - Thu, 2021-09-09 17:46
We have had a very busy year despite the Covid pandemic. I personally managed to catch Covid last January and was very unwell for weeks with it, and then spent many many weeks recovering after that. Then I managed to get....[Read More]

Posted by Pete On 09/09/21 At 01:53 PM

Categories: Security Blogs

[Recap] Day 4 and Day 5: OIC Agents [Oracle Integration Cloud] [1Z0-1042]

Online Apps DBA - Thu, 2021-09-09 04:12

An Oracle Integration Cloud specialist is responsible for planning, coordinating, and supervising all activities related to the integration of enterprise information systems. This blog will share the Oracle Integration Cloud Day 4 and Day 5 live session reviews and FAQs. […]

The post [Recap] Day 4 and Day 5: OIC Agents [Oracle Integration Cloud] [1Z0-1042] appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Async in managed ODP.NET

Tom Kyte - Thu, 2021-09-09 01:26
When will Oracle be introducing proper async support into the managed provider?
Categories: DBA Blogs

Update all listeners to new Oracle Home after upgrade

Tom Kyte - Thu, 2021-09-09 01:26
Is there a way to update all listeners to a new Oracle Home after an upgrade? Going into each listener manually (say 10 of them) to update the new OHome makes it seem like there should be an easier way, but I'm unable to locate anything on it.
Categories: DBA Blogs

Reorganization Within Same Tablespace and To New Tablespace

Tom Kyte - Thu, 2021-09-09 01:26
Hi Tom, I just want to ask: is there any difference between reorganizing objects (tables, indexes, etc.) within the same tablespace and into a new tablespace? Normally I reorganize into another tablespace, but this method needs extra space to move the objects into before we are able to shrink the old tablespace. Recently I did a reorganization within the same tablespace to reuse the space, but it seems a reorganization within the same tablespace needs to start with the objects at the end of the blocks so that we are able to shrink the size of the tablespace. There have been incidents where I needed to reorg big objects a few times before they moved to lower blocks and let us resize. I just wanted to know whether there are any other differences between reorganizing within the same tablespace and into a new tablespace. Thanks.
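
For reference, a minimal sketch of the two variants being compared (object and file names are made up): a move to a new tablespace needs free space there, while a move within the same tablespace rebuilds the segment elsewhere in the same tablespace, which is why segments at the end of the file must be relocated before the datafile can be resized. In both variants, dependent indexes are left UNUSABLE and must be rebuilt.

-- Variant 1: reorganize into a different tablespace
ALTER TABLE big_table MOVE TABLESPACE new_ts;

-- Variant 2: reorganize within the current tablespace
ALTER TABLE big_table MOVE;

-- Either way, rebuild the now-UNUSABLE indexes
ALTER INDEX big_table_pk REBUILD;

-- Once the segments at the end of the file have moved, the file can shrink
ALTER DATABASE DATAFILE '/u01/oradata/DB/old_ts01.dbf' RESIZE 10G;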
Categories: DBA Blogs

DBSAT running on Amazon Linux

Tom Kyte - Wed, 2021-09-08 07:06
For Mr. Lopes: I have recently discovered the DBSAT, but my site runs Oracle instances off of the Amazon cloud platform. Are there any known issues for running DBSAT on Amazon Linux 2? Thanks.
Categories: DBA Blogs

Object reference in DBA_TAB_PRIVS but no object in DBA_OBJECTS

Tom Kyte - Wed, 2021-09-08 07:06
I have found records in DBA_TAB_PRIVS that do not have corresponding records in DBA_OBJECTS. <code>select * from dba_tab_privs where owner='ODSMGR' and table_name like 'BIN$%';</code> Returns privileges defined against a table. <code>select * from dba_objects where owner='ODSMGR' and object_name like 'BIN$%';</code> Returns no records. There is a process in this database that is trying to process REVOKE statements for the privs it finds in dba_tab_privs, but the statements are blowing up because the object doesn't seem to exist (in DBA_OBJECTS). How can I process these REVOKE statements (to get them out of DBA_TAB_PRIVS) when they don't show up in DBA_OBJECTS? Thanks
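
One avenue worth checking (my suggestion, not part of the original question): BIN$ names normally belong to objects dropped into the recycle bin, and grants on them can linger in DBA_TAB_PRIVS; if that is the case here, purging the recycle bin removes the objects together with their residual privileges, with no REVOKE needed.

-- Were these objects dropped into the recycle bin?
SELECT owner, object_name, original_name, type, droptime
  FROM dba_recyclebin
 WHERE owner = 'ODSMGR';

-- If so, purging removes the objects and their leftover grants
PURGE DBA_RECYCLEBIN;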
Categories: DBA Blogs

XML to JSON with nested XML elements

Tom Kyte - Wed, 2021-09-08 07:06
Hi, For the below query, I am able to extract the header, but I need the lines array in the same SQL as JSON output. DB version: 12.2.0.1.0 <code>SELECT JSON_OBJECT('hdr_id' VALUE hdr_id ,'prno' VALUE prno ) FROM XMLTABLE( '/hdr' PASSING XMLTYPE('<hdr> <hdr_id>2238770</hdr_id> <prno>64922</prno> <creation_date>2021-09-01</creation_date> <status>in process</status> <lines> <line> <line_id>2618885</line_id> <line_num>1</line_num> <item_description>Test1</item_description> <uom>each</uom> <unit_price>400</unit_price> <quantity>1</quantity> </line> <line> <line_id>2618886</line_id> <line_num>2</line_num> <item_description>Test2</item_description> <uom>each</uom> <unit_price>555</unit_price> <quantity>1</quantity> </line> </lines> </hdr>') COLUMNS hdr_id VARCHAR2(20) PATH 'hdr_id', prno VARCHAR2(20) PATH 'prno' );</code> Desired output: <code>{ "hdr_id": "2238770", "prno": "64922", "lines": [ { "line_num": 1, "item_description": "Test1" }, { "line_num": 2, "item_description": "Test2" } ] }</code>
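
One possible shape for the query (a sketch, not a confirmed answer): expose the <lines> fragment as an XMLTYPE column from the outer XMLTABLE, join a second, correlated XMLTABLE over it, and build the array with JSON_ARRAYAGG. The input XML is abbreviated to the fields used.

SELECT JSON_OBJECT(
         'hdr_id' VALUE h.hdr_id,
         'prno'   VALUE h.prno,
         'lines'  VALUE JSON_ARRAYAGG(
                          JSON_OBJECT('line_num'         VALUE l.line_num,
                                      'item_description' VALUE l.item_description)
                          ORDER BY l.line_num))
  FROM XMLTABLE('/hdr'
         PASSING XMLTYPE('<hdr><hdr_id>2238770</hdr_id><prno>64922</prno><lines><line><line_num>1</line_num><item_description>Test1</item_description></line><line><line_num>2</line_num><item_description>Test2</item_description></line></lines></hdr>')
         COLUMNS hdr_id     VARCHAR2(20) PATH 'hdr_id',
                 prno       VARCHAR2(20) PATH 'prno',
                 lines_frag XMLTYPE      PATH 'lines') h,
       XMLTABLE('/lines/line'
         PASSING h.lines_frag
         COLUMNS line_num         NUMBER        PATH 'line_num',
                 item_description VARCHAR2(100) PATH 'item_description') l
 GROUP BY h.hdr_id, h.prno;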
Categories: DBA Blogs
