Power to Build


Category Archives: Scripting

Gotcha: UPDATE in Oracle PL/SQL


A friend asked me why his stored procedure wasn’t working. He was calling it from a program, so there was no easy way to test it beyond observing that the update didn’t happen.

I looked at the procedure; it had a single UPDATE statement. The meat of the code is shown below (I am paraphrasing):

CREATE OR REPLACE PROCEDURE upd_employee (
    p_employee_nbr  VARCHAR2,
    p_employee_name VARCHAR2,
    p_address       VARCHAR2
    …)
AS
BEGIN
    UPDATE employee
       SET employee_name = p_employee_name,
           address = p_address,
           created_dt = SYSDATE

     WHERE employee_nbr = p_employee_nbr;
EXCEPTION
    WHEN OTHERS THEN
        RAISE;
        COMMIT;
END upd_employee;

When I saw this, the first thing I noticed was that there is no real error checking. There is an EXCEPTION section, so it’s natural to think it will take care of all errors. Not so!

The UPDATE statement is an anomaly. When an UPDATE statement is executed, it simply sets the variable SQL%ROWCOUNT to the number of rows updated and moves on. When the UPDATE fails to find any record to update, SQL%ROWCOUNT will be 0. As far as the database is concerned, that is simply not an error.
(This is unlike a SELECT..INTO, where you get a NO_DATA_FOUND exception. Actually, not many developers handle that either. That’s another story for later.)

Coming back to the issue, I suspected that the UPDATE did not happen and since there is no error handling, it “failed” silently.

The first thing I did was to prove this theory. Since the procedure isn’t checking for anything specific, I could write a crude test like so:

BEGIN
upd_employee (
'019189097',
'Scott Employee',
'123 State St, Los Angeles'
);
Dbms_Output.put_line(SQL%ROWCOUNT);
END;
/

I am essentially testing it as a black box, like his program did. But I am in an interactive session, where I can print debug messages using Dbms_Output.put_line. Now you can see the real story: I printed SQL%ROWCOUNT and it was 0, as expected.
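This silent behavior is easy to reproduce outside Oracle too. Here is a sketch using Python’s built-in sqlite3 module (the table and values are made up to mirror the story): an UPDATE that matches no row raises no error, it just reports zero rows affected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (employee_nbr TEXT PRIMARY KEY, "
             "employee_name TEXT)")
conn.execute("INSERT INTO employee VALUES ('019189097', 'Scott Employee')")

# UPDATE with a key that matches nothing: no exception is raised,
# the cursor simply reports 0 rows affected (like SQL%ROWCOUNT = 0)
cur = conn.execute("UPDATE employee SET employee_name = 'X' "
                   "WHERE employee_nbr = ?", ("999",))
rows_updated = cur.rowcount
```

The caller has to check the row count itself; the database will not complain on its behalf.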

For a quick fix, I just added the lines below right above the EXCEPTION section, and it did the job.

IF (SQL%ROWCOUNT = 0) THEN
RAISE NO_DATA_FOUND;
END IF;

But in reality, I would add a lot more checks, including validating the types of the fields, etc. For an application, instead of a bare RAISE, I would use Raise_Application_Error, which lets us use -20000 .. -20999 as user error codes.
(I saw one program that used -30000, which caused a brand new error to be generated. I will reserve that for another day!!) I may even add the lines below to be complete:

SHOW ERRORS PROCEDURE upd_employee
EXIT

These are particularly helpful in SQL*Plus. The first line makes sure compilation errors are printed, and the EXIT leaves no ambiguity. (If you don’t have it, you will have to find ways to exit at the OS level. See my earlier post for this.)

And from a standards point of view, I would name the procedure sp_upd_employee to indicate that it’s a stored procedure. I have a full standards document I wrote for a client years ago; I will post it some other time.

Before closing, I would like to mention a couple of pet peeves of mine. One: a PL/SQL block must have the below structure:

BEGIN
<body>
EXCEPTION …
END;
/

COMMIT should be part of the <body> and ROLLBACK should be in the EXCEPTION section. Having a COMMIT after the exception handler is accepted syntactically, but it is WRONG! Because there is a RAISE statement above it, the COMMIT is never reached when an error occurs. And the actual UPDATE is not explicitly COMMITted either!! (Not exactly true: it will actually COMMIT, but we are now leaving it to the mercy of SQL*Plus or whatever tool is used, which COMMITs when exiting the script, by default.)

The other pet peeve of mine is blank lines inside SQL statements. I agree it looks more readable, but it is risky. PL/SQL ignores the blank line, but if you run the same SQL by itself, SQL*Plus will throw an error. I try to keep my SQL and PL/SQL consistent to avoid surprises. If I really need a spacer, I add a comment line:

UPDATE <table>
-- this is a comment
<rest of the SQL>;

And coming back to the original issue: the procedure “failed” silently because he was passing a number into the VARCHAR parameter, and in doing so he dropped the leading zero in employee_nbr, so the UPDATE couldn’t find the row!! So watch your types too!
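The type mismatch is easy to see in miniature. In this sketch (values made up), sending the employee number as a number instead of a string drops the leading zero, so it no longer matches the stored VARCHAR2 key:

```python
stored_key = '019189097'    # the VARCHAR2 value in the employee table
sent_as_number = 19189097   # what arrives when the caller binds a number

# The string forms no longer agree, so the WHERE clause finds nothing
mismatch = (str(sent_as_number) != stored_key)

# Zero-padding (or binding a string in the first place) restores the match
repadded = f"{sent_as_number:09d}"
```

Here `mismatch` is True and `repadded` equals the stored key again.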

Oracle Sequences contd…


We are currently in the process of upgrading our Oracle database from version 11g to 12c. There are a lot of challenges. One of the challenges, of course, is space. We have one test (Unix) machine that is running several Oracle instances. For the 12c upgrade, the DBA decided to upgrade only one of the instances. This meant that whenever they refresh the database, the automatic refresh from a production standby would no longer work, because of the version differences. They would have to manually export and import from production (or standby) using Data Pump (expdp on the source and impdp on the target database).

While testing this new database (DB12c), we ran into a strange problem: users started getting a lot of “Primary key violation” errors. After some checking, we found that the sequences were out of whack. Essentially, the sequences were not part of the export or import, so they retained the values they had before the refresh. This seems to be a problem Oracle is aware of. See here.

We were in the middle of comparison testing (DB11g is a test instance on Oracle 11g and DB12c is on 12c) and we needed a quick solution. I decided to attack the issue as a programming problem. A bit of Googling gave solutions to reset the sequences to 0. That wouldn’t work for us, as we had live data in the database on which we were running tests. I considered different solutions.

  1. What if we simply bump up every sequence in the database by some big number, so they will not run into trouble? Say +100,000 to each sequence?

This seemed to work OK, until someone ran a huge batch program on DB11g and 100,000 wasn’t big enough; we started seeing primary key violations again. So what number can we increase it by?

  2. The second solution I considered was to simply bump up the sequences to match those in the DB11g instance, using a dblink.

This worked OK too, but we hit the same snag as above: it worked only as long as DB11g itself was not refreshed through Data Pump (in which case its sequences were not correct either!!).

  3. The third option was simply a programmer’s way of thinking:

Why not just create a table, insert each sequence name and its value into that table, bring the table over to the target database instance, and then run a script to update the sequences with the values from the table we just brought in? This works in any situation, and it is what we currently use.

Here is the script for the first approach:

DECLARE
  v_seq_val      NUMBER;
  v_max_id       NUMBER;
  v_adjust_value NUMBER;
  v_sql          VARCHAR2(4000);
BEGIN
  Dbms_Output.put_line('spool sequences_fixes.log');

  -- Increment by constant; change as needed
  v_adjust_value := 200000;

  FOR seq IN (
    SELECT sequence_name, last_number FROM all_sequences WHERE sequence_owner = 'SCHEMA1'
  )
  LOOP
    BEGIN
      Dbms_Output.put_line('-- ' || seq.sequence_name);

      v_sql := 'ALTER SEQUENCE ' || seq.sequence_name || ' INCREMENT BY ' || v_adjust_value || ';';
      Dbms_Output.put_line(v_sql);

      v_sql := 'SELECT ' || seq.sequence_name || '.nextval FROM dual;';
      Dbms_Output.put_line(v_sql);

      -- reset "increment_by" back to 1
      v_sql := 'ALTER SEQUENCE ' || seq.sequence_name || ' INCREMENT BY 1;';
      Dbms_Output.put_line(v_sql);
      Dbms_Output.put_line('--');
    EXCEPTION WHEN NO_DATA_FOUND THEN
      NULL;
    END;
  END LOOP;
  Dbms_Output.put_line('spool off');
END;
/

That approach didn’t work out for us, especially when the production, source and target databases were all in flux.

The second approach using dblink was a little more reliable, as long as the source itself was not refreshed using datapump.

DECLARE
 v_seq_val NUMBER;
 v_diff    NUMBER;
 v_sql     VARCHAR2(4000);
BEGIN
FOR seq IN (
 SELECT sequence_name, last_number FROM all_sequences@LTXSYT15 WHERE sequence_owner = 'SCHEMA1'
)
LOOP

  BEGIN
   Dbms_Output.put_line('-- ' || seq.sequence_name);

   SELECT last_number
   INTO v_seq_val
   FROM all_sequences
   WHERE sequence_owner = 'SCHEMA1'
   AND sequence_name = seq.sequence_name;

   v_diff := seq.last_number - v_seq_val;

   IF (v_diff > 0) THEN
    v_sql := 'ALTER SEQUENCE ' || seq.sequence_name || ' INCREMENT BY ' || v_diff || ';';
    Dbms_Output.put_line(v_sql);

    v_sql := 'SELECT ' || seq.sequence_name || '.nextval FROM dual;';
    Dbms_Output.put_line(v_sql);

    -- reset "increment_by" back to 1
    v_sql := 'ALTER SEQUENCE ' || seq.sequence_name || ' INCREMENT BY 1;';
    Dbms_Output.put_line(v_sql);
  END IF;

    Dbms_Output.put_line('--');

  EXCEPTION WHEN NO_DATA_FOUND THEN
  NULL;
  END;
END LOOP;
Dbms_Output.put_line('spool off');

END;
/

And finally, the approach that worked: bring the latest sequence values from the production instance (the original source from which both DB11g and DB12c were refreshed):

On the production database (or source):

-- Run this on the source database (production or standby)
DROP TABLE schema1.latax_sequence_values;
CREATE TABLE schema1.latax_sequence_values (sequence_name, seq_value) AS
 SELECT sequence_name, last_number FROM all_sequences WHERE sequence_owner = 'SCHEMA1';

-- export this to a file, so it can be imported to the target database.

Then export the table just created and import into target database.

DECLARE
v_seq_new_value NUMBER;
v_max_id NUMBER;
v_diff NUMBER;
v_sql VARCHAR2(4000);

BEGIN
Dbms_Output.put_line('spool sequences_fixes_using_another_db.log');

FOR seq IN (
 SELECT sequence_name, last_number FROM all_sequences WHERE sequence_owner = 'SCHEMA1'
)
LOOP

  BEGIN
   --Dbms_Output.put_line('-- ' || seq.sequence_name);

   SELECT seq_value
   INTO v_seq_new_value
   FROM schema1.latax_sequence_values
   WHERE sequence_name = seq.sequence_name;

   v_diff := v_seq_new_value - seq.last_number;
   Dbms_Output.put_line('-- ' || seq.sequence_name || ', curr val=' || seq.last_number || ', new val=' || v_seq_new_value || ', diff=' || v_diff);

   IF (v_diff > 0) THEN
    v_sql := 'ALTER SEQUENCE ' || seq.sequence_name || ' INCREMENT BY ' || v_diff || ';';
    Dbms_Output.put_line(v_sql);

    v_sql := 'SELECT ' || seq.sequence_name || '.nextval FROM dual;';
    Dbms_Output.put_line(v_sql);

    -- reset "increment_by" back to 1
    v_sql := 'ALTER SEQUENCE ' || seq.sequence_name || ' INCREMENT BY 1;';
    Dbms_Output.put_line(v_sql);
  END IF;

    Dbms_Output.put_line('--');

  EXCEPTION WHEN NO_DATA_FOUND THEN
  NULL;
  END;
END LOOP;
Dbms_Output.put_line('spool off');

END;
/

Notes:

  • In all the above scripts, I have left out the SQL*Plus portion (spooling etc.) and show only the meat of the solution in PL/SQL.
  • The core logic is to use ALTER SEQUENCE to bump up the sequence. Remember, a sequence only moves forward, typically by an increment of 1. In these scripts, we change the increment value to be the difference between what the sequence should be and what it currently is (diff = new value of sequence - old value of sequence).

But don’t forget to reset the “increment by” value back to 1. Otherwise, you will be surprised by the next value you get in the database!! See my earlier post here for more about resetting sequences.

  • Also, note that the values I got (either through the dblink or through a table) are approximate. I am reading the data dictionary (all_sequences) to get the last value. The value I got may be somewhat larger than the actual Sequence.nextval, probably because of caching (see the cache_size column in all_sequences), but that was OK for us. I just wanted to get sequence.nextval above the “primary key” danger zone.
  • Like I mentioned above, this was a programmer’s solution. The DBA solution would have been the one mentioned at the bottom of the link I cited earlier, where a DBA suggests dropping all the sequences and recreating them using the DDL package. That may be more efficient from a DBA point of view, but mine is a poor man’s DBA solution!
  • And finally, this is one case where a proper naming convention would have alleviated the problem. Suppose you have tables named A through Z. If we named every sequence A_primary_seq, B_primary_seq, etc. to mean “the sequence for the primary key of table A” and so on, then we could have addressed it easily.

In fact, this was my first attempt at fixing the issue. I soon found out that our sequence naming was all over the place.

Since the primary key violation happens because the sequence value is below the max(id) in the table, we could have simply set:

sequence new value = max(id) + 1

For example,

A_primary_seq value = Max(primary_key_field) from Table A;

To get the primary key field for the table, I used something like:

SELECT i.table_name, c.column_name FROM dba_indexes i, dba_ind_columns c
WHERE i.owner = 'SCHEMA1'
AND c.table_owner = 'SCHEMA1'
AND i.uniqueness = 'UNIQUE'
AND i.index_name = c.index_name
AND i.table_name = c.table_name
AND c.char_length = 0;
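The bump-and-reset logic the scripts above generate can be sketched in plain Python (sequence names and numbers are made up). Given each sequence’s current value and the value it should have, it emits the same three statements per lagging sequence, skipping any that are already at or past the target, since sequences only move forward:

```python
def sequence_fix_script(current, target):
    """Generate SQL*Plus statements that advance each sequence from
    its current value to the target value, then reset the increment
    back to 1. Sequences at or past their target are skipped."""
    stmts = []
    for name, curr in sorted(current.items()):
        diff = target.get(name, 0) - curr
        if diff > 0:
            stmts.append(f"ALTER SEQUENCE {name} INCREMENT BY {diff};")
            stmts.append(f"SELECT {name}.nextval FROM dual;")
            stmts.append(f"ALTER SEQUENCE {name} INCREMENT BY 1;")
    return stmts

# EMP_SEQ lags by 150 and gets fixed; DEPT_SEQ is already ahead and is skipped
script = sequence_fix_script({'EMP_SEQ': 100, 'DEPT_SEQ': 500},
                             {'EMP_SEQ': 250, 'DEPT_SEQ': 400})
```

The PL/SQL versions do exactly this, only they read the values from all_sequences (directly, over a dblink, or from the transported table).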

Running scripts across shells in *nix


A friend asked me about running csh scripts on Linux. Apparently, they were porting from Solaris to Linux (how did a great operating system like Solaris end up here? Another of those grave mistakes Sun Microsystems made!!). The Linux box had the Korn shell as the default shell (1). When they ran the script, it gave a lot of errors. They were thinking of rewriting it in ksh syntax. That looked like a monumental task. I knew you can run one shell from another. (Any doubt? Simply type csh when you are in the Korn shell. Now you are in the C shell, as long as it’s installed.) Then why was the script failing with a lot of errors? We sat down to troubleshoot. In the end, it turned out to be a simple PATH issue! If you are facing similar issues with your old shell scripts, read on!

If your machine doesn’t have the target shell, you need to install it first (2). There is a Linux port of the C shell called tcsh. Download and install it; see here for instructions. It’s pretty straightforward. Though the program is tcsh, the installation creates symbolic links named csh in /usr/bin and /bin, so you can run it as just csh.

If you try to simply run your csh script in another shell (bash or ksh), it will fail. There are many differences between the two shells. For example, to set environment variables in csh you use setenv, while in ksh (and bash) you use export. So a csh script will not run inside a ksh shell and vice versa. You have to either rewrite it or force it to run under the right shell.

To force a script to run in a specific shell, you typically add a shebang (e.g., #!/bin/csh) as the first line of the script; when you execute the script directly, it runs with the right interpreter automatically (instead of associating file types with programs, as we do in Windows). But that doesn’t help if the script is fed to the wrong shell, for instance by sourcing it into your current shell. The reliable way is to invoke the target shell explicitly with the script as its argument; csh’s -f (fast) option also skips startup files such as ~/.cshrc:

csh -f <script>

Now back to the problem my friend had. She was indeed using the above syntax to run her C shell script. Still it failed with several “command not found” errors, particularly on the date command. Hmm! The script used to work in the C shell on Solaris, and date is a common Unix utility; it must exist everywhere! I went into the C shell (just type csh at the command prompt) and tried the date command. It wasn’t found there either! So now we had a more specific issue: find out where the date program is.

To do this, I typed “which date” in the Korn shell, and the mystery was resolved. The command used to be in the /usr/bin directory on Solaris, but on the Linux box it was in /bin. The PATH variable set in the script included /usr/bin, but not /bin. That was why the date command wasn’t working. Once we fixed the PATH, everything was fine again.

The lesson is, you don’t always have to rewrite your script when you are changing machines/shells. You *can* invoke any type of shell scripts, from any other type of shells. Chances are there is a port for your favorite shell on your new *nix machine.

Also, when you are getting “command not found” type errors, use the which command to find out where a program lives and make sure its directory is in the $PATH environment variable. You can also look at the man pages for the command you cannot find; if the command is installed, man will list the path information as well.
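The which lookup itself is nothing magic: scan each directory on the colon-separated PATH for an executable with that name. Here is a POSIX-flavored Python sketch (the directory layout is fabricated to mimic the Solaris-to-Linux move):

```python
import os
import tempfile

def which(cmd, path):
    """Return the first executable named cmd on the colon-separated
    path, or None; roughly what the `which` command does."""
    for d in path.split(os.pathsep):
        candidate = os.path.join(d, cmd)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

# Simulate the move: date lives in <root>/bin, but the script's
# PATH only lists <root>/usr/bin
root = tempfile.mkdtemp()
usr_bin = os.path.join(root, "usr", "bin")
bin_dir = os.path.join(root, "bin")
os.makedirs(usr_bin)
os.makedirs(bin_dir)
date_cmd = os.path.join(bin_dir, "date")
with open(date_cmd, "w") as f:
    f.write("#!/bin/sh\necho fake date\n")
os.chmod(date_cmd, 0o755)

missing = which("date", usr_bin)                       # PATH without /bin
found = which("date", usr_bin + os.pathsep + bin_dir)  # PATH fixed
```

With /bin absent from the search path the lookup fails, exactly as it did for my friend; adding /bin makes it resolve.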

And finally, never assume anything in the *nix world!

Note:
(1) SUSE Linux typically comes with bash as the default shell. Since our standard is ksh, I think they must have installed it and changed the default to ksh. See here for how to switch the shell associated with a login.

(2) The C shell is an older shell that was very popular in the early days of Unix. Several places have phased it out because of its limitations. Ksh and bash are more modern, and bash is typically the default on Linux boxes. So unless there is a real need for it, or you are a C shell junkie, you don’t really want to switch to the C shell. See here and here for some notes on why you shouldn’t be using it.

Oracle Pivot Operator


While working on complex reports, we occasionally need to transform columns to rows and vice versa. Converting rows to columns is typically called Pivot, and the opposite, converting columns to rows, is called Unpivot. The concept of pivoting, and the need for it, has been around for a long time. Excel, for example, has a Pivot table to do these types of data transformations, and it’s been available in SQL Server for a while. Pivoting is sometimes called Crosstab; PowerBuilder has a DataWindow style like that. These are very useful in building data warehouses, reports, etc.

Coming back to the topic, Oracle introduced these operators relatively recently (in 11g). Before that, you had to do it in a roundabout way. There are several ways to do this; here is a simple one, for when the number of columns is small.

CREATE TABLE payment_list(pay_type VARCHAR2(3), description VARCHAR2(30), amount NUMBER);
INSERT INTO payment_list VALUES('CHK', 'Check', 200);
INSERT INTO payment_list VALUES('CRE', 'Credit Card', 500);
INSERT INTO payment_list VALUES('DEB', 'Debit Card', 210);
INSERT INTO payment_list VALUES('ACH', 'ACH payments', 2000);
INSERT INTO payment_list VALUES('CHK', 'Check', 700);

Here is what we get, with a normal SELECT:

PAY_TYPE  DESCRIPTION     AMOUNT
CHK       Check              200
CRE       Credit Card        500
DEB       Debit Card         210
ACH       ACH payments      2000
CHK       Check              700

Each row has a payment type and the amount paid. Sometimes, you will want to see these in a column:

SELECT Decode(pay_type, 'CHK', amount, NULL) AS check_pmt,
        Decode(pay_type, 'CRE', amount, NULL) AS cc_pmt,
        Decode(pay_type, 'DEB', amount, NULL) AS debit_pmt,
        Decode(pay_type, 'ACH', amount, NULL) AS ach_pmt
    FROM payment_list;
CHECK_PMT  CC_PMT  DEBIT_PMT  ACH_PMT
      200
              500
                         210
                                   2000
      700

By using Oracle Decode function, we separated the amount for each type into separate columns. Of course, in reality, we will have lot more rows for each type and we may want to see the Total for each type:

SELECT Decode(pay_type, 'CHK', Sum(amount), NULL) AS check_pmt,
       Decode(pay_type, 'CRE', Sum(amount), NULL) AS cc_pmt,
       Decode(pay_type, 'DEB', Sum(amount), NULL) AS debit_pmt,
       Decode(pay_type, 'ACH', Sum(amount), NULL) AS ach_pmt
FROM payment_list
GROUP BY pay_type;

CHECK_PMT  CC_PMT  DEBIT_PMT  ACH_PMT
      900
                         210
                                   2000
              500

We can do the same thing with the new Pivot operator:

SELECT * FROM (
	SELECT pay_type, amount
	FROM payment_list)
		PIVOT (Sum(amount)
		FOR pay_type IN ('CHK', 'CRE', 'DEB', 'ACH')
		);
'CHK'  'CRE'  'DEB'  'ACH'
  900    500    210   2000
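If it helps to see the transformation outside SQL, here is the same SUM-per-pay-type pivot over the sample rows, sketched in Python:

```python
def pivot_sum(rows, columns):
    """SUM(amount) FOR pay_type IN columns: each pay_type value in
    `columns` becomes a key whose value is the total for that type."""
    totals = {c: 0 for c in columns}
    for pay_type, amount in rows:
        if pay_type in totals:        # values outside the IN list are dropped
            totals[pay_type] += amount
    return totals

rows = [('CHK', 200), ('CRE', 500), ('DEB', 210), ('ACH', 2000), ('CHK', 700)]
result = pivot_sum(rows, ['CHK', 'CRE', 'DEB', 'ACH'])
```

The totals match the PIVOT output: 900 for CHK, 500 for CRE, 210 for DEB and 2000 for ACH.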


By the way, if you wanted to collapse your DECODE SQL to return single row result like above, you would use:

SELECT Sum(check_pmt) CHK, Sum(cc_pmt) CC, Sum(debit_pmt) DEB, Sum(ach_pmt) ACH FROM (
SELECT Decode(pay_type, 'CHK', Sum(amount), NULL) AS check_pmt,
Decode(pay_type, 'CRE', Sum(amount), NULL) AS cc_pmt,
Decode(pay_type, 'DEB', Sum(amount), NULL) AS debit_pmt,
Decode(pay_type, 'ACH', Sum(amount), NULL) AS ach_pmt
FROM payment_list
GROUP BY pay_type
);


The Pivot operator is available in many modern databases (and Excel too). I couldn’t find a site that shows the Oracle syntax for the PIVOT operation in a simple format yet. However, Oracle’s site has great content on the operator (and unpivot) here. For the sake of completeness, I am including the syntax of the PIVOT operator in SQL Server. The only difference I could see is SQL Server’s use of square brackets instead of quotes.

SELECT <non-pivoted column>,
	[ pivoted column] AS <column name>,
	[ pivoted column] AS <column name>,
	...
	[ pivoted column] AS <column name>
FROM (<SELECT query that produces the data>) AS <alias for the source query>
PIVOT
(
    <aggregation function>(column)
    FOR
    [<column that contains the values that will become column headers>]
    IN ( [pivoted column], [pivoted column],
    ... [pivoted column])
) AS <alias for the pivot table>
[ORDER BY clause];

That’s all, folks. I know it has a weird syntax, but the innermost SQL (the SELECT query that produces the data) is what you are pivoting. The row(s) for each value in the FOR .. IN list get converted to a column. So when the PIVOT finishes, you get the columns CHK, CRE, DEB and ACH. The SELECT * in the outer SQL makes sure all these columns are picked up.

To be contd…

PBGrep – A simple Search Tool for PowerBuilder Code – in Perl


I wrote this script a few years ago. It comes in handy if you have a lot of PowerBuilder export files and would like to search for specific patterns, sort of like grep does. To use it, all your PowerBuilder code must be exported to files (.SR? files). (We always keep the source files.)

The simplest usage is to provide a pattern with the -p option and a directory to search with the -d option.

For e.g.,

To look for all the PB source files that have “dropdownlistbox” in them, I use:

pbgrep.pl -p "dropdownlistbox" -d z:

This results in the listing below. Each entry shows the source file name and line number, the event or function name with the line number within that function/event, and finally the line containing the pattern we are looking for.

Z:/Core/CLT/ITS_ClientAdHocQry/w_adhoc_results.srw(1020):
selectionchanged;(2):
type ddlb_printlayout from dropdownlistbox within w_adhoc_results

Z:/Core/CLT/ITS_ClientCase/w_cse_caseflow_rule_defn.srw(1205):
selectionchanged;(54):
type ddlb_resulttype from dropdownlistbox within tabpage_resultdetails

Z:/Core/CLT/ITS_ClientMain/w_login.srw(560):
clicked;closewithreturn(w_login,(2):
type ddlb_serverselection from dropdownlistbox within w_login

Z:/Core/CLT/ITS_ClientReporting/w_search_correspondence.srw(1282):
clicked;n_cst_correspondence(2):
type ddlb_printqueue from dropdownlistbox within w_search_correspondenc

Z:/Core/DES/ExpertSystem/Client/ITS_ClientEpsDesigner/w_login.srw(473):
clicked;closewithreturn(w_login,(2):
type ddlb_serverselection from dropdownlistbox within w_login

I could use the -FCSV format, to generate an Excel file. All the entries above for each file, will be collapsed into a single row.

Here is the complete usage from the script:

usage: $0 [-hv] [-d dir] [-p pattern] [-F TAB|CSV|REPORT|LIST*] [-o<filename>|-oa]

-h : this (help) message
-d dir : Directory to search in
-p <pattern>: Pattern to search

-F <format>: Output can be TAB or CSV delimited or LIST or Report (default)
LIST outputs object name and function name where pattern was found.

-o <filename>: Optional, if missing, the output will be sent screen (STDOUT)
-oa : Optional, when present, pattern is used for name of the file (with special chars removed).

example: $0 -v -d Z:\LA -p <pattern>
If <pattern> has spaces or is a regular expression, enclose it in double quotes

Script: http://pastebin.com/kszxR1ar

Here is the full Perl script:

# pbgrep.pl - Grep-like utility for a PowerBuilder Application
# Usage: pbgrep.pl ....

# Author: Sam Varadarajan
# Date: 12/01/2009
# Purpose: Script to grep PB export files to find patterns; Each find will show the function name and line #s.

# Modification History:
#
# Copyright, Sam Varadarajan, 2009 - 2014

#
# Globals
#
use vars qw/ %opt /;

init();
my $ptrn = $opt{'p'};
my $dir = '';
my $verbos = 0;
$verbos = 1 if $opt{'v'};
my $file_name = '';

if ($opt{'o'})
{
	$file_name = $opt{'o'};

	#oa for automatic file name
	if (!$file_name || $file_name eq "a")
	{
		$file_name = "search_".$ptrn.".txt" ;
	}
	$file_name =~ s/ /_/g;
	$file_name =~ s/[^a-zA-Z0-9_.\-]*//g; #derive the name from the pattern
}
else {
	$file_name = '-'; #stdout
}

open FILE, ">$file_name" or die "$0: Cannot open $file_name: $!\n";

#my $bare = 0;
# see below $bare = 1 if $opt{'b'};

my $format = "REPORT"; # default Report
$format = uc($opt{'F'}) if $opt{'F'};

my $delim = "\t";
if ($format eq "CSV")
{
	$delim = ",";
}
elsif ($format eq "TAB")
{
	$delim = "\t" ;
}
elsif ($format eq "LIST")
{
	$delim = "\t" ;
}

usage() if (!$ptrn);

print $ptrn."\n";

$dir = $opt{'d'} if $opt{'d'};
if (!$dir)
{
	$dir = 'Z:\\';
}

if ($ptrn)
{
		process_dir($dir, \&process_file, $ptrn);
}

close(FILE);
#end of main

sub process_dir {
    my ($path, $code_ref, $ptrn) = @_;
    my ($DH, $file);
	#debug print $path;
    if (-d $path) {
        unless (opendir $DH, $path) {
            warn "Couldn't open $path.\n$!\nSkipping!";
            return;
        }
        while ($file = readdir $DH) {
				#DEBUG print "processing $file\n";
            next if $file eq '.' || $file eq '..';
			#recursion.
            process_dir("$path/$file", $code_ref, $ptrn);
        }
    }
	elsif (-f $path && $path =~ /\.sr[uwf]/)
	{
#DEBUG		print $path." ".$ptrn."\n";
		$code_ref->($path, $ptrn);
	}
}

sub process_file
{
	my ($path, $ptrn) = @_;
	my $proto_found = 0;
	my $curr_func = "";
	my @words = ();
	my $end_proto = 0;
	my $func_line_nbr = 0;
	my $file_line_nbr = 0;
	my @lines = ();
	my $in_proto = 0;
	my $cnt = 0;

	open (SRFILE, $path)
    or die "$0: Cannot open $path: $!\n";
	#DEBUG 	print "debug*** ".$path." ".$ptrn."\n";

	@lines = ();

	# print "processing $path\n";

	# this assumes every file will have forward prototypes
	while (<SRFILE>) {
		$file_line_nbr++;
		next unless ($_ =~ /forward prototypes/);
		$proto_found = 1;
		last;
	}
	if ($proto_found)
	{
		while (<SRFILE>) {
			$file_line_nbr++;
			next unless ($_ =~ /end prototypes/);
			last;
		}
	}

	LOOP1: while (<SRFILE>)
	{
		chomp;
		s/\t+/   /g;
		s/^[ ]*//g;
		$file_line_nbr++;

		@words = split(/\s+/);

#		if (((/public/i || /protected/i || /private/i) &&  /function/i || /subroutine/i) || (/event/i))
# SV, 3/17, a line with 	inv_log.Event ue_closelogfile() caused it to think it was inside an event. Added ^ to event check.
# REDTAG, may be we will have to treat them separately and also, what if there was a function statement or event that
# was commented out?

		if ((($words[0] =~ /public/i || $words[0] =~ /protected/i || $words[0] =~ /private/i) &&
				($words[1] =~ /function/i || $words[1] =~ /subroutine/i)) ||
			($words[0] =~ /^event/i   && ($words[1] !~ /Post/i && $words[1] !~ /Trigger/i)))
		{
			if ($words[0] =~ /^event/i)
			{
				$curr_func = $words[1];
			}
			else {
				$curr_func = $words[3];
			}
#			print "found func header@words[3]\n";
			$func_line_nbr = 0;
			next LOOP1;
		}
		if ($_ =~ /^[ |\t]*end function/i || $_ =~ /^[ |\t]*end event/i)
		{
			if ($cnt && ($format eq "LIST"))
			{
					$path =~ s/sr.[ ]*$//g;
					print FILE $path.$curr_func."\n";
			} else {
				if (@lines)
				{
					my $count = scalar(@lines);
					foreach (my $i=0; $i < $count; $i++)
					{
						my $li = $lines[$i];
						#DEBUG print "*****".{$lines[i]}."\n";
						if ($format eq "REPORT")
						{
							if ($ptrn eq $curr_func)
							{
							print FILE "<<Function definition>>\n";
							}
							print FILE $path."(".$li->{'FILE_LINE'}."):\n   ".$curr_func."(".$li->{'FUNC_LINE'}."):\n      ".$li->{'LINE'}."\n\n";
						}
						else {
							print FILE $path.$delim.$curr_func.$delim.$li->{'FILE_LINE'}.$delim.$li->{'FUNC_LINE'}.$delim.$li->{'LINE'}."\n";
						}
					}
					#VERBOSE print "****# of items found = ".scalar(@lines)."\n\n";
				}
				else {
					# VERBOSE print $path."\t".$curr_func." No match found*******\n\n";
				}
			}
			$curr_func = "";
			$end_proto = 0;
			$func_line_nbr = 0;
			@lines = ();
			$cnt = 0;
			next LOOP1;
		}
		# all other lines
		$func_line_nbr++;
		if ($_ =~ /$ptrn/ig) {
			$cnt++;
			my $line_info = {};
			#if (!$bare)
			if ($format ne "LIST")
			{
				# bare format - list of functions (obj.func)
				$line_info->{'FILE_LINE'} = $file_line_nbr;
				$line_info->{'FUNC_LINE'} = $func_line_nbr;
				$line_info->{'LINE'} = $_;
			}
			push(@lines, $line_info);
		}
		next LOOP1;
	}
	close(SRFILE);
}

#
# Command line options processing
#
sub init()
{
	use Getopt::Std;
	my $opt_string = 'hvbd:p:F:o:';
	getopts( "$opt_string", \%opt ) or usage();
#	use Getopt::Long;
#	GetOptions("h"=>\$opt{'h'},
#		"v"=>\$opt{'v'},
#		"b"=>\$opt{'b'},
#		"optional:o"=>\$opt{'o'},
#		"d"=> \$opt{'d'},
#		"p"=> \$opt{'p'},
#		"F"=> \$opt{'F'}
#	); 	

	usage() if $opt{'h'};
}

#
# Message about this program and how to use it
#
sub usage()
{
  print STDERR << "EOF";

	This program does...

	usage: $0 [-hv] [-d dir] [-p pattern] [-F TAB|CSV|REPORT|LIST*] [-o<filename>|-oa]

	-h        	: 	this (help) message
	-d dir    	: 	Directory to search in
	-p <pattern>: 	Pattern to search

	-F <format>: 	Output can be TAB or CSV delimited or LIST or Report (default)
						LIST outputs object name and function name where pattern was found.

	-o <filename>: Optional, if missing, the output will be sent screen (STDOUT)
	-oa 			:	Optional, when present, pattern is used for name of the file (with special chars removed).

	example: $0 -v -d Z:\LA -p <pattern>
	If <pattern> has spaces or is a regular expression, enclose it in double quotes
EOF
	exit;
}

PowerBuilder DB Trace – SQL Extraction Utility


If you ever have to debug database scripts in PowerBuilder, the DB Trace option comes in handy. As I mentioned in my earlier post on enabling DB Trace, you just need to prefix the DBMS name with Trace. For example, if your application works with Oracle, you will have something like:

SQLCA.DBMS = "Trace ORA"

A typical DB Trace file gets created in C:\Windows and contains a lot of noise. Some of this can be configured away, but I was trying to capture all the SQLs while the user was doing data entry, sort of reverse engineering the SQLs. The file grew really large, and extracting the SQLs manually was not easy.

Here is an excerpt with 2 SQLs from the original dbtrace.log:

/*---------------------------------------------------*/
/*                 10/08/2010  16:17                 */
/*---------------------------------------------------*/
(ad2ca34): LOGIN: (90.315 MS / 90.315 MS)
(ad2ca34): CONNECT TO TRA O10:
(ad2ca34): LOGID=latax
(ad2ca34): DBPARM=UseContextObject='Yes',Disablebind=1,CacheName='DEV2-ITS' (0.000 MS / 90.315 MS)
(ad2ca34): PREPARE: (0.000 MS / 90.315 MS)
(ad2ca34): PREPARE:
(ad2ca34): SELECT COUNT ( *) FROM SYSPARM WHERE PARM_NAME ='ENFORCE_CLIENT_VERSION_MATCH'  (0.123 MS / 90.438 MS)
(ad2ca34): GET AFFECTED ROWS: (0.003 MS / 90.441 MS)
(ad2ca34): ^ 0 Rows Affected
(ad2ca34): GET NUMBER OF COLUMNS SELECTED: (7.282 MS / 97.723 MS)
(ad2ca34): ^ 1 Columns
(ad2ca34): DESCRIBE SELECT: (0.000 MS / 97.723 MS)
(ad2ca34): 1 Bind Columns
(ad2ca34): DECIMAL Length=80
(ad2ca34): EXECUTE: (0.000 MS / 97.723 MS)
(ad2ca34): BIND SELECT OUTPUT BUFFER (PowerScript): (0.014 MS / 97.737 MS)
(ad2ca34): 1 Bind Columns
(ad2ca34): INT Length=80
(ad2ca34): FETCH NEXT: (1.186 MS / 98.923 MS)
(ad2ca34): GET AFFECTED ROWS: (0.002 MS / 98.925 MS)
(ad2ca34): ^ 1 Rows Affected
(ad2ca34): FETCH NEXT: (0.024 MS / 98.949 MS)
(ad2ca34): *** DBI_FETCHEND *** (rc 100)
(ad2ca34): CANCEL: (0.001 MS / 98.950 MS)
(ad2ca34): PREPARE:
(ad2ca34): SELECT PARM_VALUE FROM SYSPARM WHERE PARM_NAME =’ENFORCE_CLIENT_VERSION_MATCH’  (0.109 MS / 99.059 MS)
(ad2ca34): GET AFFECTED ROWS: (0.002 MS / 99.061 MS)
(ad2ca34): ^ 1 Rows Affected
(ad2ca34): GET NUMBER OF COLUMNS SELECTED: (2.139 MS / 101.200 MS)
(ad2ca34): ^ 1 Columns
(ad2ca34): DESCRIBE SELECT: (0.000 MS / 101.200 MS)
(ad2ca34): 1 Bind Columns
(ad2ca34): CHAR Length=302
(ad2ca34): EXECUTE: (0.000 MS / 101.200 MS)
(ad2ca34): BIND SELECT OUTPUT BUFFER (PowerScript): (0.020 MS / 101.220 MS)
(ad2ca34): 1 Bind Columns
(ad2ca34): CHAR Length=302
(ad2ca34): FETCH NEXT: (1.218 MS / 102.438 MS)
(ad2ca34): GET AFFECTED ROWS: (0.002 MS / 102.440 MS)
(ad2ca34): ^ 1 Rows Affected
(ad2ca34): FETCH NEXT: (0.039 MS / 102.479 MS)
(ad2ca34): *** DBI_FETCHEND *** (rc 100)
(ad2ca34): CANCEL: (0.000 MS / 102.479 MS)
(ad2ca34): DISCONNECT: (0.031 MS / 102.510 MS)
(ad2ca34): SHUTDOWN DATABASE INTERFACE: (0.000 MS / 102.510 MS)
/*—————————————————*/
/*                 10/08/2010  16:17                 */
/*—————————————————*/

In comes Perl to the rescue. The script I wrote (back in 2010) to do the extraction is linked below on Pastebin. When I ran it against the trace file, it extracted the SQLs, as shown below:
--dbtrace.1008.log
-- 10/08/2010  16:17
SELECT COUNT ( *)
FROM SYSPARM
WHERE PARM_NAME ='ENFORCE_CLIENT_VERSION_MATCH' ;
SELECT PARM_VALUE
FROM SYSPARM
WHERE PARM_NAME ='ENFORCE_CLIENT_VERSION_MATCH' ;
To run it, you simply pass in the trace file and redirect the output to a file:

pb_sqls.pl -f dbtrace.1008.log > dbtrace.1008.sql
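The heart of the extraction is simple enough to sketch. Below is a minimal Python sketch of the idea (not the actual Pastebin script): in the trace, each SQL sits on the line right after a bare PREPARE: marker, wrapped in a connection-handle prefix and a timing suffix.

```python
import re

def extract_sqls(lines):
    """Extract SQL statements from a PowerBuilder DB Trace log.

    Heuristic: each SQL appears on the line immediately after a bare
    'PREPARE:' marker, prefixed by a '(handle):' tag and suffixed by
    timing info like '(0.123 MS / 90.438 MS)'.
    """
    sqls = []
    expect_sql = False
    for line in lines:
        # Drop the leading '(ad2ca34): ' style connection-handle tag
        text = re.sub(r"^\([0-9a-f]+\):\s*", "", line.strip())
        if expect_sql:
            # Drop the trailing '(x.xxx MS / y.yyy MS)' timing suffix
            sql = re.sub(r"\(\s*[\d.]+ MS / [\d.]+ MS\)\s*$", "", text).strip()
            if sql:
                sqls.append(sql + " ;")
            expect_sql = False
        elif text == "PREPARE:":
            expect_sql = True
    return sqls
```

Note that "PREPARE: (0.000 MS / ...)" lines are timing entries, not statement markers, which is why the sketch only triggers on a bare "PREPARE:".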
Years later, I also wrote a Python version of the same script.

Script:

Perl Version:

http://pastebin.com/FZW1TRcp

Python Version:

http://pastebin.com/bHf2NfdV

Oracle SQL*Plus: How to silence SQL*Plus


While researching Oracle command-line parameters, I stumbled on the following question:
https://forums.oracle.com/message/2449066

The OP is asking how to silence SQL*Plus, so he/she could turn off all the prompts coming out in the log. The simple answer is to use the command-line option -S, like this:

sqlplus -s user/password@db @script

Of course, there are more command-line options available. See here for the details.

I noticed a couple of things in this post. The question is simple, but the answer is not so intuitive. SQL*Plus has a SET for everything else; why not SET SILENT ON or something? Instead, they have a command-line option -S to silence the tool. Even HTML mode is allowed both ways: with the -M command-line option or with SET MARKUP! Talk about being consistent (or not!).

SET ECHO OFF seems relevant, but it has a different meaning: it only turns off the echoing of SQL statements as they are read from a script. The output still comes out with the SQL> prompt. The effect is more visible if you run SQL*Plus in batch mode.

There is another setting called FEEDBACK. This also sounds like a logical candidate for silencing SQL*Plus, but it only turns off the feedback line in the result, like "1 row selected".

So, there is no way to completely silence SQL*Plus using SETs alone.
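The closest you can get with SETs alone is a sketch like the following; even with all of these, an interactive session still shows the banner and prompt, which is why the -S option exists:

-- A SET-only approximation (still not truly silent):
SET ECHO OFF       -- don't echo commands as a script runs
SET FEEDBACK OFF   -- no "1 row selected" style messages
SET VERIFY OFF     -- no old/new lines for substitution variables
SET HEADING OFF    -- no column headings
SET TERMOUT OFF    -- suppress on-screen output (only for @-scripts)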

Coming back to the post again, I find a few things worth mentioning here:

Notice that the script has a "/" at the end. This makes the SQL above run again, because "/" re-executes whatever statement is in the buffer. In fact, Sybrand Bakker mentions this in his answers, but it was completely ignored. The OP seems knowledgeable, so it could be a slip; I hope it was noticed and corrected. It does no real harm here, but the SELECT is executed twice. What if it was a long-running query? Imagine if it were a DML or DDL statement; the effect would be more profound. Do you see the resources being wasted in the duplicate run?
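To see the duplicate run for yourself, consider a hypothetical script like this (the table name is illustrative):

-- hypothetical script.sql: the ";" runs the SELECT once,
-- then the "/" re-executes the statement still in the buffer
SELECT COUNT(*) FROM employee;
/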

The other thing that bothered me with this is the suggestion to turn off SQLPROMPT, thus:

set sqlprompt ' '

Though it seems logical, this only removes the "SQL>" prompt! I hope the poster is not using it to somehow "clean" the output; it definitely won't remove the banner and other messages. I also hope the poster did not write a script to strip those out. The reason I am pointing this out is that I have seen such scripts at work before! Try to use the language/tool feature as much as possible.