Power to Build


Quick Tip: Line Continuation Characters in Code


In English, we write sentences made up of words, typically separated by spaces (" ") and tabs. We use commas and semicolons to create pauses, and we keep reading until we see a period. If a sentence is too long to fit on a line (typically 72 – 80 characters on a page), we continue it on the next line. Often we even break in the middle of a word to keep the right margin neat; for this we use a hyphen ("-").

Programming languages and scripts are not very different. After all, they are made by us, humans. Each language has a set of punctuation marks, like English. As for wrapping a statement onto the next line, each language may have its own character and conventions. In any case, you cannot break in the middle of a word.

Generally, Microsoft programs and scripts use a hyphen ("-") to indicate that a statement continues on the next line. The Unix world typically uses a backslash ("\").

 

Language                   Character
--------                   ---------
English                    - (hyphen)
DOS/Windows batch file     ^, (
C, C++                     \
FORTRAN                    &
SQL*Plus                   - (hyphen)
Python                     \
VB                         _

 

 

 

See this wiki post for more on this and other punctuation in programs.
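One notable omission from the list is PowerScript itself: PowerBuilder continues a statement with an ampersand ("&"). A quick sketch (the variable and SQL are made up):

```
// PowerScript: "&" at the end of a line continues the statement
string ls_sql
ls_sql = "SELECT emp_id, emp_name " + &
         "FROM employee " + &
         "WHERE dept_id = 100"
```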

Sybase EAServer: Taking Control – Changing JVM in Jaguar Manager


EAServer 5.5 comes with a manager application called Jaguar Manager (also called Sybase Central). It is really a Java product, but in this version it's wrapped in a program called scjview.exe, available in %SYBASE%\Shared\Sybase Central 4.3\win32.

I found out it uses an old JRE, 1.4. (I use Process Explorer from Sysinternals – it's a great tool!) I wanted to switch EAServer to Java 1.5, the latest version of Java supported in EAS 5.5. With the server itself, this is easy: just pass -jdk15 to the serverstart batch file.

With Jaguar Manager, it is not as straightforward. There is a batch file called jagmgr.bat, but it only calls the scjview.exe mentioned above. The Java runtime is probably picked up through a DLL named jsyblib142.dll, in %SYBASE%\Shared\win32.

After googling for a bit, I found a solution to this problem. Apparently, scjview.exe has a command-line argument, -batch.

 

scjview -batch

When you run it like that, the program generates a batch file called sybasecentral.bat in the same directory as scjview.

Now, this batch file runs Java with a bunch of jar files. You can change the path of the Java command to change the Java version it runs in. Such a simple solution, completely hidden! And with this available, why did they have to create an EXE like that?? Beats me!

Of course, once you switch to the batch file, you can also tweak the Java parameters, such as memory settings, to make it run better.
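Since the generated file is plain batch, the change amounts to editing the path of the java command in sybasecentral.bat. A sketch with a hypothetical JDK path (MY_JAVA is my own name; the jar list and main class stay exactly as generated):

```
rem sybasecentral.bat (generated by "scjview -batch") - sketch only
rem Point the java command at the JDK you want; leave the rest as generated.
set MY_JAVA=C:\jdk1.5.0\bin\java
"%MY_JAVA%" -Xmx256m -cp <generated jar list> <generated main class>
```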


Powerbuilder: Datawindow – what tier?


In continuation of my recent post, PowerBuilder – what tier?, I would like to explore the technical front a little more. In any book or article on the power of PowerBuilder, you will always encounter the Datawindow. The Datawindow is the heart of GUI development in PowerBuilder.

By design, a datawindow is truly 2-tier. You design a Datawindow by adding a SQL statement – that's the database tier, tier 2. Then you design the GUI in which the data will be displayed on the client – tier 1. That is a datawindow. If you follow the MVC pattern, the Model is there, the View is there, and the Controller is sort of built into the GUI itself (validation, etc.). It was a great innovation for its time.


Fig 1. How a (2-tier) Datawindow works – courtesy: Hyderabad College

To use a datawindow in such an environment, all you need to do is the following:

dw_employee.DataObject = "d_employee"
dw_employee.SetTransObject(SQLCA)
dw_employee.Retrieve()

You are done! (I left out details of the datawindow object, connection parameters, and retrieval arguments for simplicity.) In this case, you connect to the database directly from the client, and the data flows straight into the datawindow control. You can even share the data in one datawindow with another (using the ShareData function). This allows you to retrieve once and show the same data in different formats.
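Sharing the result set with a second control is a one-liner. A sketch, where dw_employee_graph is a hypothetical second DataWindow control with a different presentation style:

```
// Retrieve once, then share the same rows with a second control
dw_employee.SetTransObject(SQLCA)
dw_employee.Retrieve()
dw_employee.ShareData(dw_employee_graph)  // both controls now show the same data
```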

Distributed Datawindows/Datastores

Then came the n-tier (distributed architecture) revolution. With the Datawindow at the heart of PowerBuilder design, how would you bring the middle tier in – particularly when the middle tier is the one that's supposed to connect to the database?

Before going all in on a middle tier in the form of an application server, PowerBuilder floated the idea of Distributed PB as early as ver 5.0. They introduced the concept of HTML datawindows. They also introduced the idea of the Datastore, a non-GUI datawindow object. (Before the datastore, we used to create a hidden datawindow, which still painted the datawindow behind the scenes and thus still carried the overhead.) These were steps in the right direction for distributing an application. (I have a gripe, though, that the Datawindow control and the Datastore don't share the same object hierarchy!! So you constantly have to test for the type of the object. How Sybase missed such a basic thing beats me!)

Then came the application server, called EAServer. To retain the datawindow and still be able to follow an MVC pattern, Sybase introduced the distributed Datawindow. Well, it wasn't exactly an attribute of the data model, but a mechanism in the datawindow object to synchronize a Datawindow's (or Datastore's) structure and data across the tiers. To do this, you serialize/de-serialize a datawindow using a couple of new methods – GetFullState and SetFullState. See here for more detail.

Fig 2. 3-tier architecture – courtesy: Simcrest

With the distributed Datawindow, you create a Datastore in the Application Server (EAServer) and synchronize it with a client side Datawindow (control) with the same structure.

With the Datawindow control being on the client, where you can no longer connect to the database directly, the above example requires some code adjustment:
Client

dw_employee.DataObject = "d_employee"
// Get the blob from the server
Connection.CreateInstance(iuo_employee)
lblb_dwblob = iuo_employee.RetrieveData()
dw_employee.SetFullState(lblb_dwblob)
...

Where iuo_employee is a non-visual object created on the server.

Server

Function iuo_employee.RetrieveData()

...

ds_employee.DataObject = "d_employee"
ds_employee.SetTransObject(SQLCA)
ds_employee.Retrieve()
lblb_dsblob = ds_employee.GetFullState()

// Somehow send it to the client.

 

Fig 3. Simple implementation of a distributed Datawindow

There you have it. PowerBuilder did somehow manage to distribute its GUI object, the datawindow. You essentially create a non-visual Datastore that has the same attributes as the Datawindow control, retrieve into it, and send the result as a blob to the client to be shown in the Datawindow control.

The only problem is that you are sharing both the data and the structure of the Datawindow object between client and server. This is slightly in contrast with how other platforms would implement this. In Java, for example, your JTable stays on the client, while its TableModel can be made to use a middle-tier object to fetch the data. The middle-tier object then connects to the database to get the actual data.

I would have preferred it if they had made the Datastore the data/model portion of the Datawindow, instead of a new object that is just a Datawindow minus the GUI. That way, we could have reused the ShareData function in a distributed capacity and hidden the blob handling behind the scenes. But that's crying over spilled milk. EAServer and the associated architecture are going away, and the idea of the distributed datawindow may disappear from PowerBuilder altogether. (However, Appeon may carry the concept forward. See here.)

Powerbuilder – what tier left?


First it was 2-tier

Client/Server (2-tier) technology was the leading application development methodology in the early 1990s. This was in total contrast to the centralized development methodologies applied to mainframe architecture before it. Suddenly, you could have localized programs made to cooperate to get major business functions done. Companies were feverishly working on replacing their old (mainframe) green screens with shiny new Windows programs. People who wrote spaghetti code in COBOL suddenly found this new thing called an IDE, where you design a GUI, write a few lines of code, and bam! a database application is done. I worked on a couple of pilot projects in PowerBuilder where I brought mainframe programmers and users along to the shiny new Windows (Windows 3.1!!) machines!

PowerBuilder started in the early 1990s and showed a lot of promise. It gave similar tools – Gupta SQL, Oracle CDE, Visual Basic, etc. – a run for their money. Visual Basic (through 6.0) was a good competitor, but their domains were different. PowerBuilder beat VB as the tool of choice for business (database) application development. It was all great.

Then came trouble in those 2-tiers

Sybase bought Powersoft in 1994! I heard that Powersoft wanted to sell themselves to Sybase to keep another corporate giant, Oracle, from gobbling them up. Trouble had already started for PowerBuilder, as Sybase, a database company, seemed least interested in developing the cash cow any further.

Trouble was also brewing for Client/Server technologies in general. As client/server programs grew in size, we ran into performance issues. Flashy windows didn't matter if they took 5 minutes to load. People looked for ways to split the program logic into multiple parts and improve performance by running those parts on different machines. I myself worked on a project where I refactored a PowerBuilder client/server application so users in Hong Kong didn't have to connect to our databases all the time. I used a store-and-forward technique for this: data was transferred behind the scenes by C programs on Unix (what we would call ETL programs now).

Later, I found out that part of the blame lay with the same companies that were trying to move off mainframes: they kept one foot in the door. The global shipping company where I worked still dedicated 60% of network traffic to the SNA (mainframe) network, leaving the rest for everything else you do – email, your PB applications, the Oracle database, etc. It was bound to fail!

PowerBuilder meets n-tier world

In the latter part of the 1990s, with the advent of the internet as a viable public medium, the dynamic web became a necessity. Back then this was mainly achieved with Perl CGI. Programming environments everywhere added access to and from the web, and PowerBuilder was no exception. As of PB 5.0, there was "Distributed PB", which was not unlike CGI – running and accessing a PowerBuilder application on a remote node through sockets. PB provided a new Transport object that helped with the connectivity, but data exchange had to be handled by the programs.

Then came Java J2EE, which changed* the way server-side programming was done. CGI had a lot of overhead: a runtime environment (VM) had to be invoked each time the program was called. What if there were a server that provided this environment all the time and let you access the programs running inside it? A Java J2EE server is much more than that: it bundles several software components, including an application server, a transaction monitor/processor, a security module, etc.

EAServer** (more popularly referred to as Jaguar) was Sybase's answer for bringing PowerBuilder to middle-tier and n-tier architecture. It was actually an application server (and more) built over a J2EE Java server – many containers in one. It is a J2EE server, a COM server (to be able to run Windows components), and a PowerBuilder application server rolled into one. It incorporates a C++ application server, a Java container (actually a full-blown J2EE server; EAS 5.5 supports JDK 1.2 – 1.5), a basic Apache web server (so you can serve HTML pages inside EAServer), and a servlet container (a version of Tomcat in releases through 5.5, and Jetty in 6.x). It also includes a Transaction Processing (TP) monitor and a database interface. Components running in EAServer can be accessed through CORBA IIOP or Microsoft ActiveX/COM interfaces. It was all cutting edge, but alas! only briefly!

So one could actually develop multilingual (PB, Java, VB, C/C++, etc.) applications talking to each other through EAServer. The only criterion was that the components had to be CORBA- or COM-compliant. You could thus run .Net (C#, VB.Net) components inside EAServer, as long as they were COM-compliant. Imagine that: a server that could connect PB, Java, and .Net components.

Sybase EAServer seemed to deliver through ver 5.5. If they had left it at that, things would at least have continued status quo. Briefly, before leaning towards Microsoft .Net, Sybase was seemingly heading towards the Java world. So they came up with ver 6.3, which is essentially a full-blown Java EE server. What happened to server-side PowerBuilder? Well, PB was still supported, but now PB objects were wrapped up and presented as J2EE objects. This made the PB objects suddenly slow (almost twice as slow) and complex.

The idea was good, and the technique led Sybase to create another product called the Application Server Plug-in, essentially Java-wrapped PB objects that could run in any Java container. Great idea, right? Even before it was adopted, Sybase killed it. Why they didn't go fully down that route beats me. Sometimes companies kill products by ignoring them, and sometimes by paying too much attention to them! Either way, we developers pay for it.

Appeon, a company involved with PowerBuilder for over a decade, has recently been asked by SAP to maintain the PB code base. They have vowed to revive PowerBuilder, but more as a .Net tool. Appeon, being a young company with a vested interest in PB, may be able to give PB a new life. Hopefully SAP doesn't interfere, and another company doesn't buy the PB product out just to kill it!

Unfortunately, EAServer does not share the same fate as PowerBuilder: it has been end-of-life'd by Sybase (and now SAP) and is no longer supported. With that, PowerBuilder is back to being the 2-tier product it used to be. In these times of open source technologies for the web and mobile, who is going to pay any attention to it? I hope Appeon does something quick!

History repeats (at least for me)

This is the second time I have watched a good product go away through no fault of its own, leaving its customers in limbo. In the early 2000s, we were trying to migrate out of a great product called Forte into something else. Not everyone could move to Java. Sun made a lot of money providing extended support for Forte over the next decade, and you can still find information on Forte. History repeats: now, in 2016, my client is looking to migrate off the PowerBuilder/EAServer platform to Java or .Net.

Forte was a great n-tier development IDE, ahead of its time in the 90s, but its life was short. I remember when I went to a Forte training class (in Oakland, CA), the instructor announced, "Welcome to Forte... er, welcome to Sun." That was in 1999. Sun bought it to kill the product, as Forte was a precursor to, and a direct threat to, Java J2EE. Though Java EE was powerful, it wouldn't stand a chance against Forte in terms of simplicity. I would take Forte any day over Java, .Net, or PB.

Sun systematically replaced the information on Forte on the web with their own half-baked product called Forte for Java (I believe this was the precursor to NetBeans, but don't quote me on that). It was an insult to any decent programmer. They seemed to have succeeded, but Sun itself was bought by Oracle, which is a database company. History seems to repeat itself. Whether Java will suffer the same fate at the hands of a database company, Oracle, that PB suffered at the hands of Sybase is something for us to see!

It's a shame. It seems corporate America decides not only what toothpaste you use and what it is going to be called, but also what software tool/language is good for you. They ripped up all the railroad tracks in the past so the airplane and automobile industries could thrive. Now, in the 21st century, several cities are scrambling to put those tracks back to run their light rail and subways! Within 50+ years of its inception, the computer software industry is in the same situation. Thank God for open source!!



Notes:
* Actually, Microsoft's server-side scripting, ASP, was a precursor to Java's J2EE technology.
** It's probably more appropriate to write it as EA Server, for Enterprise Application Server. I write it as one word to avoid getting hits for EA.com, a gaming site, or for servers in general. For the same reason, I write the Sybase GUI-based IDE as one word, PowerBuilder. My capitalization of the words is just a personal preference.

Powerbuilder and Source Code Control (SCC)


I posted about Source Code Control in general earlier. Various source code control software exists, under names like Version Control System (VCS), Source Control System (SCS), or Source Code Control System (SCCS)*. If you noticed, these tools work at the file level – the smallest unit that can be handled in any operating system. Typically, you open your version control system, check out the necessary files, work with them, and check them back in.

*PB refers to the SCCS simply as a Source Control System (SCS). In this post we will refer to it as SCS or as an SCC tool, as it has to implement the Microsoft SCC API.

Source Control in an IDE

If you are working in an IDE environment, this poses a problem. You edit, compile, and test from within the IDE; why would you want to go out to the OS just to work with files from the version control system, then come back into the IDE to complete the development cycle, and go back out to the OS and the SCC tool again to check the work back in? (Trust me, I've done this. Typically, you may do this with Java or C/C++ files, depending on the IDE/editor you use.) To alleviate this, the IDE/editor may provide toolbar buttons to execute OS commands, so you can run them from within the IDE/editor. But it's not tightly coupled.

Seriously, wouldn't it be nice to have the SCC integrated into the IDE? But then again, considering how many different SCC tools exist, an IDE builder can't add support for every possible tool. This would be easy if there were only one type of SCC interface – or if all the tools followed a certain standard!

Actually, several such standards exist; on Windows it's pretty much based on Microsoft's Source Code Control (SCC) API**. By implementing this API as the interface to their own SCS (VCS), vendors can "standardize" their products. Internally they may do totally different things, but as long as they provide the standard set of interface functions, they all look and work the same to the systems using them. If IDE tools build their SCC operations around this API, they can "talk" to the SCSs (or VCSs) from various vendors easily. Both Visual Basic and PowerBuilder offer such connectivity.

** This is an older Microsoft standard. See here for more details about newer MS standards. Since classic PB doesn't support the newer standards out of the box, we will have to stick with the legacy SCC API.

PowerBuilder and Source Control

But PB poses a further challenge. Whereas Visual Basic saves its objects and programs as files at the operating system level, PB saves its objects and source code in a library file called a PBL ("pibble"). The smallest unit the OS sees in a PB application is the PBL file, not individual object files. So we cannot really track changes in a PB application correctly if we check in/check out at the PBL level. What we need is a way to track at the individual object level, the smallest unit of source code in PB.

PB offers a plugin approach for third-party SCC tools to work with it. It also provides a wrapper around the SCC API mentioned above, so the source control systems themselves don't need to know about PB objects: PB exports and imports objects to files before passing control to the SCC plugin software.

In this post, we will see how PowerBuilder can be set up to work with different SCC tools. In particular we will explore PB Native and PVCS, and later we will see how PB can be made to work with SVN. Only an SCS (or VCS) that implements the Microsoft SCC API will work with PB.

Setting up Workspace to use SCC provider (SCS or VCS)

First of all, to enable source code control in a PB application, you need to pick your SCC provider. You do this by right-clicking on the project workspace and choosing the Source Control System (SCS) tab. Here you enter the user credentials for connecting to the SCC provider. Once connected, the SCC status of every object shows up as a small icon before the object name in the System Explorer. (A + sign marks a new object not yet in SCC, a small circle marks objects that exist in the repository, and a check mark marks objects that are checked out – either by you or by others.)


Fig 1. Connecting to a SCC Provider at the Workspace level

As shown, there is a drop-down list to choose the SCC provider to use for the current PowerBuilder project. The SCC wrapper inside PB takes care of translating to and from the SCC tool.

Where is this information stored?

I really wanted to find out where all this information is stored. The workspace file (.pbw) would be a logical place, but it is not stored there; the .pbw file seems to be just a list of the targets in the project workspace. Then I looked around and found that it is stored in the Windows registry. My friendly fetcher, Registry Scanner, came to the rescue: I searched for "Merant Version Manager" (another incarnation of good old PVCS) and found it, stored neatly in the registry.


Fig 2. List of SCC Providers available to us inside PowerBuilder is stored in Registry


Fig 3. A specific SCC Provider chosen at the Workspace level is stored in Registry

When you choose a source code control provider in the PB workspace properties, PB actually stores the details in a registry key named SourceControl. If this key is missing, it means you didn't set up SCC in the workspace properties. So, if you have a workspace that has problems with SCC on open, you may never be able to open the workspace. In that case, if you rename or delete the SourceControl key, PB will no longer try to connect the workspace to the SCCS.

Notice that PB has replaced all the "\" characters with "$". PB has quirky ways with escape characters: tilde (~), for example, is used as an escape character a lot in PB.
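For instance, PowerScript strings use the tilde where most languages would use a backslash. A quick sketch (variable names are made up):

```
// PowerScript string escapes use tilde, not backslash
string ls_quoted, ls_multi
ls_quoted = "He said ~"hello~""       // ~" = embedded double quote
ls_multi = "Line 1~r~nLine 2~tend"    // ~r~n = CRLF, ~t = tab
```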

Incidentally, the above image shows other keys like Connection, MRUList, Painters, etc. These hold the "Recent Objects", "Recent Connections", and "Recent Workspaces" lists under the File menu.

Object level Tracking in SCCS

Remember, the PB code is now tracked at the individual object level, and an SCCS only knows how to store files. So how does PVCS know about objects in PB? It doesn't!! This is where PB export files come in handy. Every type of object can be exported to a text file with a specific extension. (See here for a nice post on exporting PB objects as files. If that link ever breaks, see here for my take on it.)

When you check out or get a copy of a file to your working directory, PB (the SCC wrapper) exports the object to a file on the file system. The file is made writable if it's being checked out; in that case, the object is also locked in the PB IDE. Anything that can be exported as a text file can be stored in the SCC repository. So a PB target (a file with the .pbt extension) has an SCC status, and so do all PB objects such as windows, applications, Datawindows, etc. Right-clicking on such objects in the System Explorer shows additional SCC-related options in the popup menu. A PBL itself cannot be checked into the repository, at least not from within PB, so right-clicking on a PBL will not show SCC options in the menu. In general, the SCC options show up only if the workspace is set up to use source control and the object is "SCC-able".


Fig 4. Objects that can be stored in SCC Repository

What is a PBG file and why do we need it?

To keep track of the SCC status of the objects in a library, PowerBuilder uses a .PBG file*. Imagine: once all the objects are exported and checked into PVCS, how will you know which object goes in which PBL if you ever have to recreate the PBL file from sources (bootstrapping, for example)? This is where the PBG file comes into the picture:

Any components you add to the libraries are then added to the PBG. A PBG file is created for each PBL and is a laundry list of the objects in that library, for PVCS to use. Anything you add, remove, or change is reflected in the PBG file, and this is what is used to sync up with PVCS.

It is essentially a mapping of each object to the library it's supposed to be in. The file contains a list of all the objects in a PowerBuilder library (PBL) file – or, conversely, it tells us which PBLs these objects (really, export files) need to be loaded into the moment they come out of the SCCS.

When objects are added or deleted in the library and checked into version control, the PBG file is automatically checked out, updated and checked back in, by PowerBuilder.

Not having the PBG file in sync with the PBL may cause unexpected results. For example, if you added a new object to a PBL (say, while NOT connected to the SCCS) and the PBG file wasn't updated, that object will be missing if the PBL is ever bootstrapped from the existing object files in the SCCS – which could result in compile or runtime errors.


 Fig 5. PBG files

* See here for a nice description of why PB needs a PBG file.

Troubleshooting

Removing SCC from a Workspace having issues

Sometimes the connection between the PB IDE and the SCC system doesn't work. If you have set it up to always connect to the SCC system, PB won't even come up. In such a case, edit the registry and "turn off" the SCC option. After this, PB will open fine again. If you need to connect to SCC, you will have to reconnect to it, as shown in Fig. 1 above.

Keeping the PBG file in sync manually

If you are done checking in new objects for a library/target, you can open the PBG file (in the same folder as the PBL) in any text editor to check whether it's up to date. If, for some reason, your PBG is out of sync with the one in SCC, you can lock it in SCC and check it in directly from the source directory.
Update, 05/30/2017
Noticed that Agent SVN added a link to this post in their description of setting up their SVN client for PowerBuilder. See their page for more details.

Some Useful Links

http://pbdj.sys-con.com/node/42500

PBDJ article that explains it all – PB with SCC.

http://codicesoftware.blogspot.com/2009/05/plastic-scm-powerbuilder-installation.html

The author describes using PowerBuilder with Plastic SCM as SCC tool.

http://anvil-of-time.com/wordpress/powerbuilder/powerbuilder-simple-source-control/

Agent SVN:

http://www.zeusedit.com/agent/ide/pb/classic.html

 

PBGrep – A simple Search Tool for PowerBuilder Code – in Perl


I wrote this script a few years ago. It comes in handy if you have a lot of PowerBuilder export files and would like to search them for specific patterns, sort of like grep does. To use it, all your PowerBuilder code must be exported to files (.sr? files). (We always keep the source files.)

The simplest usage is to provide a pattern with the -p option and a directory to search with the -d option.

For example, to find all the PB source files that have "dropdownlistbox" in them, I use:

pbgrep.pl -p "dropdownlistbox" -d z:

This results in the listing below. Each entry shows the source file name, the line #, the event or function name with the line # within that function/event, and finally the line with the pattern we are looking for.

Z:/Core/CLT/ITS_ClientAdHocQry/w_adhoc_results.srw(1020):
selectionchanged;(2):
type ddlb_printlayout from dropdownlistbox within w_adhoc_results

Z:/Core/CLT/ITS_ClientCase/w_cse_caseflow_rule_defn.srw(1205):
selectionchanged;(54):
type ddlb_resulttype from dropdownlistbox within tabpage_resultdetails

Z:/Core/CLT/ITS_ClientMain/w_login.srw(560):
clicked;closewithreturn(w_login,(2):
type ddlb_serverselection from dropdownlistbox within w_login

Z:/Core/CLT/ITS_ClientReporting/w_search_correspondence.srw(1282):
clicked;n_cst_correspondence(2):
type ddlb_printqueue from dropdownlistbox within w_search_correspondenc

Z:/Core/DES/ExpertSystem/Client/ITS_ClientEpsDesigner/w_login.srw(473):
clicked;closewithreturn(w_login,(2):
type ddlb_serverselection from dropdownlistbox within w_login

I could use the -F CSV option to generate a CSV file that opens in Excel. All the entries above for each file will then be collapsed into a single row.

Here is the complete usage from the script:

usage: $0 [-hv] [-d dir] [-p pattern] [-F TAB|CSV|REPORT|LIST*] [-o<filename>|-oa]

-h : this (help) message
-d dir : Directory to search in
-p <pattern>: Pattern to search

-F <format>: Output can be TAB or CSV delimited or LIST or Report (default)
LIST outputs object name and function name where pattern was found.

-o <filename>: Optional, if missing, the output will be sent screen (STDOUT)
-oa : Optional, when present, pattern is used for name of the file (with special chars removed).

example: $0 -v -d Z:\LA -p <pattern>
If <pattern> has spaces or is a regular expression, enclose it in double quotes

Script: http://pastebin.com/kszxR1ar

Here is the full Perl script:

# pbgrep.pl - Grep-like utility for a PowerBuilder application
# Usage: pbgrep.pl ....

# Author: Sam Varadarajan
# Date: 12/01/2009
# Purpose: Script to grep PB export files to find patterns; Each find will show the function name and line #s.

# Modification History:
#
# Copyright, Sam Varadarajan, 2009 - 2014

#
# Globals
#
use Getopt::Std;    # assumed: init() calls getopts() to populate %opt
use vars qw/ %opt /;

init();
my $ptrn = $opt{'p'};
my $dir = '';
my $verbos = 0;
$verbos = 1 if $opt{'v'};
my $file_name = '';

if ($opt{'o'})
{
	$file_name = $opt{'o'};

	#oa for automatic file name
	if (!$file_name || $file_name eq "a")
	{
		$file_name = "search_".$ptrn.".txt" ;
	}
	$file_name =~ s/ /_/g;
	$file_name =~ s/[^a-zA-Z0-9_.\-]*//g; #derive the name from the pattern
}
else {
	$file_name = '-'; #stdout
}

open FILE, ">$file_name" or die "Cannot open $file_name: $!\n";

#my $bare = 0;
# see below $bare = 1 if $opt{'b'};

my $format = "REPORT"; # default Report
$format = uc($opt{'F'}) if $opt{'F'};

my $delim = "\t";
if ($format eq "CSV")
{
	$delim = ",";
}
elsif ($format eq "TAB")
{
	$delim = "\t" ;
}
elsif ($format eq "LIST")
{
	$delim = "\t" ;
}

usage() if (!$ptrn);

print $ptrn."\n";

$dir = $opt{'d'} if $opt{'d'};
if (!$dir)
{
	$dir = 'Z:\\';
}

if ($ptrn)
{
		process_dir($dir, \&process_file, $ptrn);
}

close(FILE);
#end of main

sub process_dir {
    my ($path, $code_ref, $ptrn) = @_;
    my ($DH, $file);
	#debug print $path;
    if (-d $path) {
        unless (opendir $DH, $path) {
            warn "Couldn't open $path.\n$!\nSkipping!";
            return;
        }
        while ($file = readdir $DH) {
				#DEBUG print "processing $file\n";
            next if $file eq '.' || $file eq '..';
			#recursion.
            process_dir("$path/$file", $code_ref, $ptrn);
        }
    }
	elsif (-f $path && $path =~ /\.sr[uwf]/)   # PB export files: .sru, .srw, .srf
	{
#DEBUG		print $path." ".$ptrn."\n";
		$code_ref->($path, $ptrn);
	}
}

sub process_file
{
	my ($path, $ptrn) = @_;
	my $proto_found = 0;
	my $curr_func = "";
	my @words = ();
	my $end_proto = 0;
	my $func_line_nbr = 0;
	my $file_line_nbr = 0;
	my @lines = ();
	my $in_proto = 0;
	my $cnt = 0;

	open (SRFILE, $path)
    or die "$0: Cannot open $path: $!\n";
	#DEBUG 	print "debug*** ".$path." ".$ptrn."\n";

	@lines = ();

	# print "processing $path\n";

	# this assumes every file will have forward prototypes
	while (<SRFILE>) {
		$file_line_nbr++;
		next unless ($_ =~ /forward prototypes/);
		$proto_found = 1;
		last;
	}
	if ($proto_found)
	{
		while (<SRFILE>) {
			$file_line_nbr++;
			next unless ($_ =~ /end prototypes/);
			last;
		}
	}

	LOOP1: while (<SRFILE>)
	{
		chomp;
		s/\t+/   /g;
		s/^[ ]*//g;
		$file_line_nbr++;

		@words = split(/\s+/);

#		if (((/public/i || /protected/i || /private/i) &&  /function/i || /subroutine/i) || (/event/i))
# SV, 3/17, a line with 	inv_log.Event ue_closelogfile() caused it to think it was inside an event. Added ^ to event check.
# REDTAG, may be we will have to treat them separately and also, what if there was a function statement or event that
# was commented out?

		if ((($words[0] =~ /public/i || $words[0] =~ /protected/i || $words[0] =~ /private/i) &&
				($words[1] =~ /function/i || $words[1] =~ /subroutine/i)) ||
			($words[0] =~ /^event/i   && ($words[1] !~ /Post/i && $words[1] !~ /Trigger/i)))
		{
			if ($words[0] =~ /^event/i)
			{
				$curr_func = $words[1];
			}
			else {
				$curr_func = $words[3];
			}
#			print "found func header@words[3]\n";
			$func_line_nbr = 0;
			next LOOP1;
		}
		if ($_ =~ /^[ |\t]*end function/i || $_ =~ /^[ |\t]*end event/i)
		{
			if ($cnt && ($format eq "LIST"))
			{
					$path =~ s/sr.[ ]*$//g;
					print FILE $path.$curr_func."\n";
			} else {
				if (@lines)
				{
					my $count = scalar(@lines);
					foreach (my $i=0; $i < $count; $i++)
					{
						my $li = $lines[$i];
						#DEBUG print "*****".{$lines[i]}."\n";
						if ($format eq "REPORT")
						{
							if ($ptrn eq $curr_func)
							{
							print FILE "<<Function definition>>\n";
							}
							print FILE $path."(".$li->{'FILE_LINE'}."):\n   ".$curr_func."(".$li->{'FUNC_LINE'}."):\n      ".$li->{'LINE'}."\n\n";
						}
						else {
							print FILE $path.$delim.$curr_func.$delim.$li->{'FILE_LINE'}.$delim.$li->{'FUNC_LINE'}.$delim.$li->{'LINE'}."\n";
						}
					}
					#VERBOSE print "****# of items found = ".scalar(@lines)."\n\n";
				}
				else {
					# VERBOSE print $path."\t".$curr_func." No match found*******\n\n";
				}
			}
			$curr_func = "";
			$end_proto = 0;
			$func_line_nbr = 0;
			@lines = ();
			$cnt = 0;
			next LOOP1;
		}
		# all other lines
		$func_line_nbr++;
		if ($_ =~ /$ptrn/i) {
			$cnt++;
			my $line_info = {};
			#if (!$bare)
			if ($format ne "LIST")
			{
				# bare format - list of functions (obj.func)
				$line_info->{'FILE_LINE'} = $file_line_nbr;
				$line_info->{'FUNC_LINE'} = $func_line_nbr;
				$line_info->{'LINE'} = $_;
			}
			push(@lines, $line_info);
		}
		next LOOP1;
	}
	close(SRFILE);
}

#
# Command line options processing
#
sub init()
{
	use Getopt::Std;
	my $opt_string = 'hvbd:p:F:o:';
	getopts( "$opt_string", \%opt ) or usage();
#	use Getopt::Long;
#	GetOptions("h"=>\$opt{'h'},
#		"v"=>\$opt{'v'},
#		"b"=>\$opt{'b'},
#		"optional:o"=>\$opt{'o'},
#		"d"=> \$opt{'d'},
#		"p"=> \$opt{'p'},
#		"F"=> \$opt{'F'}
#	); 	

	usage() if $opt{'h'};
}

#
# Message about this program and how to use it
#
sub usage()
{
  print STDERR << "EOF";

	This program does...

	usage: $0 [-hv] [-d dir] [-p pattern] [-F TAB|CSV|REPORT|LIST*] [-o<filename>|-oa]

	-h        	: 	this (help) message
	-d dir    	: 	Directory to search in
	-p <pattern>: 	Pattern to search

	-F <format>: 	Output can be TAB or CSV delimited or LIST or Report (default)
						LIST outputs object name and function name where pattern was found.

	-o <filename>: Optional, if missing, the output will be sent screen (STDOUT)
	-oa 			:	Optional, when present, pattern is used for name of the file (with special chars removed).

	example: $0 -v -d Z:\\LA -p <pattern>
	If <pattern> has spaces or is a regular expression, enclose it in double quotes
EOF
	exit;
}
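For comparison, the core of the scan above (track the current function or event header, count lines within it, and report each pattern hit with both the file and function line numbers) can be sketched in Python. This is a simplified, illustrative re-implementation of the main loop, not the full script; it handles only the common header forms:

```python
import re

# Header of a function/subroutine or event in a PB export file (simplified:
# ignores the Post/Trigger exclusions the full script applies to events).
HEADER = re.compile(
    r'^(?:(?:public|protected|private)\s+(?:function|subroutine)\s+\S+\s+(\w+)'
    r'|event\s+(\w+))', re.IGNORECASE)
END = re.compile(r'^end (?:function|event)\b', re.IGNORECASE)

def find_in_export(text, pattern):
    """Return (function, file_line, func_line, line) for each match."""
    results, curr_func, func_line = [], "", 0
    pat = re.compile(pattern, re.IGNORECASE)
    for file_line, raw in enumerate(text.splitlines(), start=1):
        line = raw.strip()
        m = HEADER.match(line)
        if m:
            # function name is in group 1, event name in group 2
            curr_func = m.group(1) or m.group(2)
            func_line = 0
            continue
        if END.match(line):
            curr_func = ""
            continue
        func_line += 1
        if curr_func and pat.search(line):
            results.append((curr_func, file_line, func_line, line))
    return results
```

A directory walk with os.walk, filtering for .sru/.srw/.srf files, would take the place of the Perl process_dir recursion.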

PowerBuilder DB Trace – SQL Extraction Utility


If you ever have to debug database scripts in PowerBuilder, the DB Trace option comes in handy. As I mentioned in my earlier post on enabling DB Trace, you just need to prefix the DBMS name with "Trace". For example, if your application works with Oracle, you would have:

SQLCA.DBMS = "Trace ORA"

A typical DB trace file gets created in C:\Windows and contains a lot of noise. Some of that can be tuned, but I was trying to capture all the SQLs while the user was doing data entry, sort of reverse engineering the SQLs. The file grew really large, and extracting the SQLs manually was not practical.

Here is an excerpt with 2 SQLs from the original dbtrace.log:

/*---------------------------------------------------*/
/*                 10/08/2010  16:17                 */
/*---------------------------------------------------*/
(ad2ca34): LOGIN: (90.315 MS / 90.315 MS)
(ad2ca34): CONNECT TO TRA O10:
(ad2ca34): LOGID=latax
(ad2ca34): DBPARM=UseContextObject='Yes',Disablebind=1,CacheName='DEV2-ITS' (0.000 MS / 90.315 MS)
(ad2ca34): PREPARE: (0.000 MS / 90.315 MS)
(ad2ca34): PREPARE:
(ad2ca34): SELECT COUNT ( *) FROM SYSPARM WHERE PARM_NAME ='ENFORCE_CLIENT_VERSION_MATCH'  (0.123 MS / 90.438 MS)
(ad2ca34): GET AFFECTED ROWS: (0.003 MS / 90.441 MS)
(ad2ca34): ^ 0 Rows Affected
(ad2ca34): GET NUMBER OF COLUMNS SELECTED: (7.282 MS / 97.723 MS)
(ad2ca34): ^ 1 Columns
(ad2ca34): DESCRIBE SELECT: (0.000 MS / 97.723 MS)
(ad2ca34): 1 Bind Columns
(ad2ca34): DECIMAL Length=80
(ad2ca34): EXECUTE: (0.000 MS / 97.723 MS)
(ad2ca34): BIND SELECT OUTPUT BUFFER (PowerScript): (0.014 MS / 97.737 MS)
(ad2ca34): 1 Bind Columns
(ad2ca34): INT Length=80
(ad2ca34): FETCH NEXT: (1.186 MS / 98.923 MS)
(ad2ca34): GET AFFECTED ROWS: (0.002 MS / 98.925 MS)
(ad2ca34): ^ 1 Rows Affected
(ad2ca34): FETCH NEXT: (0.024 MS / 98.949 MS)
(ad2ca34): *** DBI_FETCHEND *** (rc 100)
(ad2ca34): CANCEL: (0.001 MS / 98.950 MS)
(ad2ca34): PREPARE:
(ad2ca34): SELECT PARM_VALUE FROM SYSPARM WHERE PARM_NAME ='ENFORCE_CLIENT_VERSION_MATCH'  (0.109 MS / 99.059 MS)
(ad2ca34): GET AFFECTED ROWS: (0.002 MS / 99.061 MS)
(ad2ca34): ^ 1 Rows Affected
(ad2ca34): GET NUMBER OF COLUMNS SELECTED: (2.139 MS / 101.200 MS)
(ad2ca34): ^ 1 Columns
(ad2ca34): DESCRIBE SELECT: (0.000 MS / 101.200 MS)
(ad2ca34): 1 Bind Columns
(ad2ca34): CHAR Length=302
(ad2ca34): EXECUTE: (0.000 MS / 101.200 MS)
(ad2ca34): BIND SELECT OUTPUT BUFFER (PowerScript): (0.020 MS / 101.220 MS)
(ad2ca34): 1 Bind Columns
(ad2ca34): CHAR Length=302
(ad2ca34): FETCH NEXT: (1.218 MS / 102.438 MS)
(ad2ca34): GET AFFECTED ROWS: (0.002 MS / 102.440 MS)
(ad2ca34): ^ 1 Rows Affected
(ad2ca34): FETCH NEXT: (0.039 MS / 102.479 MS)
(ad2ca34): *** DBI_FETCHEND *** (rc 100)
(ad2ca34): CANCEL: (0.000 MS / 102.479 MS)
(ad2ca34): DISCONNECT: (0.031 MS / 102.510 MS)
(ad2ca34): SHUTDOWN DATABASE INTERFACE: (0.000 MS / 102.510 MS)
/*---------------------------------------------------*/
/*                 10/08/2010  16:17                 */
/*---------------------------------------------------*/

In comes Perl to the rescue. The script linked below (on pastebin) is the one I wrote in 2010 to do the extraction. When I run it against the trace file, it extracts the SQLs, as shown below:

--dbtrace.1008.log
-- 10/08/2010  16:17
SELECT COUNT ( *)
FROM SYSPARM
WHERE PARM_NAME ='ENFORCE_CLIENT_VERSION_MATCH'  ;
SELECT PARM_VALUE
FROM SYSPARM
WHERE PARM_NAME ='ENFORCE_CLIENT_VERSION_MATCH'  ;

To run it, simply pass in the trace file and redirect the output to a file:

pb_sqls.pl -f dbtrace.1008.log > dbtrace.1008.sql

Years later, I added a Python version as well.

Script:

Perl Version:

http://pastebin.com/FZW1TRcp

Python Version:

http://pastebin.com/bHf2NfdV
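Both versions boil down to the same idea: watch for a bare PREPARE: marker, then take the statement on the following line, stripping the connection-handle prefix and the trailing timing figures. Here is a minimal Python sketch of that idea; it is illustrative only (the real scripts are at the links above) and assumes single-line statements, as in the excerpt:

```python
import re

PREFIX = re.compile(r'^\([0-9a-f]+\):\s*')                # "(ad2ca34): " handle
TIMING = re.compile(r'\s*\([\d.]+ MS / [\d.]+ MS\)\s*$')  # trailing timings

def extract_sqls(lines):
    """Pull SQL statements out of a PB DB trace (simplified sketch)."""
    sqls, grab = [], False
    for raw in lines:
        line = PREFIX.sub('', raw.rstrip())
        if line == 'PREPARE:':       # bare marker: SQL follows on the next line
            grab = True
            continue
        if grab:
            sql = TIMING.sub('', line).strip()
            if sql:
                sqls.append(sql + ' ;')
            grab = False
    return sqls
```

Note that "PREPARE: (0.000 MS / ...)" lines (with timings but no SQL) are not matched by the exact comparison, which is what we want.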

Sybase EAServer: Taking Control – Troubleshooting Memory Leaks


Background
I recently posted about the object life cycle in Sybase EAServer (Jaguar). EAServer provides object pools (caches) where objects are cached when a client lets go of them. Since object creation and destruction are so costly, such an object pool helps performance immensely.

The issue
Last year, there was a memory leak in one of our EAServers. After debugging the issue, I narrowed it down to a couple of methods. Whenever the methods were called, the memory usage increased. This kept happening until the server ran out of memory and crashed. During our peak season, our web users ended up calling these methods more frequently, thus crashing the server sooner.

The real problem was that each time the user called these methods, a new instance of an object (nvo_rtn_processor) was created inside the function and was never let go.

When we looked at the code, nvo_rtn_processor was being destroyed as expected, yet it was not being released for some reason. After analyzing the issue, we found that the DESTROY itself was the problem. We changed the code to call of_deactivate instead, and that worked.

A little PB Background
When a new instance of an EA Server component is needed in PB, it is created by calling the CreateInstance method on the TransactionServer object. This creates an instance on the EA Server side and passes back a reference to it in PB.

For example, to create an instance of nvo_rtn_processor, we issued:
l_transaction_service.CreateInstance(lnv_rtn_processor, "its_returns/nvo_rtn_processor")

where lnv_rtn_processor is the local reference in PB.

When we are done with the object thus created, we need to let go of it so it can be returned to the pool or garbage collected (see my post on the object life cycle). To do this, we could either DESTROY the object or deactivate it.

Typically, DESTROY is used for objects that were created using the CREATE statement. For objects created with CreateInstance, the better approach is to call SetComplete on the TransactionServer. (Unless the object's Automatic Deactivation flag is set, in which case PB takes care of this.)

EAServer is a distributed environment where components participate in distributed transactions. The components don't commit or roll back themselves; they merely indicate that they can be committed or rolled back, using the SetComplete or SetAbort methods. When the Jaguar transaction processor sees that, it commits or aborts the component's changes. This lets the transaction processor commit or roll back all objects participating in a single distributed transaction together.

Solving the mystery

The original developer used DESTROY to get rid of nvo_rtn_processor. This should have worked. Our deactivate and destructor events for this object (see note 1) were almost identical, so the issue was not in whether it was destroyed or deactivated. The problem was that the component was never deactivated. The key is that we are actually calling a function called of_deactivate, which in turn calls SetComplete on the TransactionServer. SetComplete did the trick: it released the component, which was then returned to the pool as expected.

GetContextService("TransactionServer", lts)
// SetComplete to allow instance pooling/destruction
li_result = lts.SetComplete()

The key lesson is that DESTROY only destroys the handle (local variable) to the component, not the component itself. When nvo_rtn_processor was destroyed, the PB reference and the container reference probably went away, but the actual component was never released by the TransactionServer object, because we never told it to; hence the memory leak. (If Automatic Demarcation/Deactivation had been set for the component, we might not have seen this issue.)
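The distinction can be modeled with a small Python analogy. This is purely illustrative: the class and method names only mimic the EAServer concepts and are not the actual EAServer API. Dropping the client-side handle does nothing to the server-side instance; the deactivation call is what returns it to the pool.

```python
class Component:
    """Stand-in for a server-side component instance."""

class TransactionServer:
    """Toy model of the lifecycle described above."""
    def __init__(self):
        self.pool = []      # deactivated instances, ready for reuse
        self.active = []    # instances the server is still holding

    def create_instance(self):
        inst = self.pool.pop() if self.pool else Component()
        self.active.append(inst)
        return inst         # the client only gets a handle (reference)

    def set_complete(self):
        # deactivation: release active instances back to the pool
        while self.active:
            self.pool.append(self.active.pop())

# Destroying only the handle: the server still holds the instance (the leak).
ts = TransactionServer()
handle = ts.create_instance()
del handle
leaked = len(ts.active)     # still 1: nothing told the server to let go

# Deactivating via set_complete: the instance is returned to the pool.
ts2 = TransactionServer()
h2 = ts2.create_instance()
ts2.set_complete()
```

In PB terms, del handle plays the role of DESTROY, while set_complete plays the role of of_deactivate calling SetComplete.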

Conclusion

For any component created through the TransactionServer, we need to call SetComplete (or SetAbort in case of a failure) to complete the transaction and deactivate the component. For a lot of components in our application, this is done automatically by enabling the component's Automatic Demarcation/Deactivation.

Note:

1. The deactivate event for an object is called when it is deactivated (after SetComplete is called, in this case). The destructor event is called when the object is being destroyed.

2. nvo_rtn_processor itself is not declared transactional, but the above discussion holds irrespective of the transaction type of the component. Automatic Demarcation can be set for non-transactional components as well. If it is not set, then we must use SetComplete/SetAbort to release the object explicitly.

eas_component_settings

Fig 1 EA Server Component settings for nvo_rtn_processor.

Sybase EAServer: Taking Control – Component Lifecycle


This post talks about the life cycle of a component deployed in EA Server at run time. The component is instantiated on the server by a CreateInstance call from client (PB) code. The picture below should be self-explanatory.

eas_component_life_cycle

To debug/trace the life cycle, EAServer provides some log/trace flags. By turning these on, we can see the events of an EA Server component's life cycle.

Sybase EAServer: Taking Control – Object Pooling


Memory/Object Pooling
Memory allocation is an expensive task in any system. Whether it's a C program doing a malloc or a Java program creating objects on the fly, the system has to constantly go to the memory handler and request more memory. Releasing used memory is also a problem: it takes resources, and not releasing properly causes memory leaks. Java came up with garbage collection (GC), but even this uses a lot of resources if not set up properly.

In a heavily used system, this means the system will spend a considerable amount of time in object creation and destruction (the latter done by garbage collection). To alleviate this problem, some type of pooling (or caching) is often used. Simply put, memory that is released is held in a pool and reused when the next request comes in. This reduces the number of times objects are created or destroyed.

EA Server offers object pooling for components deployed to it. When this option is turned on for a component, object instances are returned to the pool instead of being destroyed.

EA Server/PB settings that affect Pooling
Our software uses the object pooling option available in EA Server. This option is external to the PB language and is part of EA Server administration. The settings are available in the component's Properties -> Instances tab in Jaguar Manager.

The setting below, if checked, enables pooling for a component in EA Server.

component_properties1

Fig 1: This defines if pooling needs to be used for a particular object.

The settings below define how and when pooling of the object takes place for the component in EA Server. As mentioned in the note, an object instance is kept in the pool only if doing so doesn't exceed the maximum setting. In this case, the pool will have at least 3 instances at any time. By adjusting these settings we can optimize memory handling for the object. Too low a minimum will cause more frequent garbage collection; too high a minimum will keep too many instances in the pool, wasting memory.

On the other hand, the "Maximum Pooled Instances" setting affects how many instances can be kept in the pool. In a scenario where this object is constantly requested, increasing the maximum will help. Then again, it also means too many instances may sit in the pool during idle times.
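The policy these two settings describe can be sketched as a minimal pool in Python. This is an illustration of the pattern, not EAServer's actual implementation: pre-fill up to the minimum, and on release keep an instance only while the pool is under the maximum.

```python
class ObjectPool:
    """Minimal min/max object pool, illustrating the settings above."""
    def __init__(self, factory, minimum=3, maximum=10):
        self.factory = factory
        self.maximum = maximum
        # pre-fill to the minimum so early requests skip allocation
        self.pool = [factory() for _ in range(minimum)]

    def acquire(self):
        # reuse a pooled instance if one exists, else create a new one
        return self.pool.pop() if self.pool else self.factory()

    def release(self, obj):
        if len(self.pool) < self.maximum:
            self.pool.append(obj)   # keep for reuse
        # else: drop it and let it be garbage-collected
```

With a low minimum, idle memory stays small but more requests pay the allocation cost; with a high maximum, bursts are absorbed but idle instances accumulate, which is exactly the trade-off described above.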

component_properties2
Fig 2:
These settings define how instances of the object are pooled.

PB settings for Pooling
The PB settings below only decide whether the release of the object reference on the PB side is automatic or manual. These settings are set at the component level in the EAServer Component Generator (Project painter).

The actual pooling is controlled by the settings on the EA Server side, as mentioned above.

 

component_properties3

Fig 3: This shows the settings on the PB side.

 

EA Server object instance life cycle

The image below is a capture of a Visio diagram (attached) that attempts to show how pooling plays a role when objects are created from PowerBuilder code. As mentioned, Auto Demarcation affects how the object references are released (automatically or manually) on the PB side.

When pooling is not enabled for the object on the EA Server side (see Fig 1), the object instance typically qualifies for garbage collection once the references are released.

If pooling is enabled, the object instance is instead returned to the object pool in EA Server, provided the other conditions are met (see Fig 2).

latax_easerver_object_pooling

Fig 4: Object life cycle during a typical method call

References:

http://en.wikipedia.org/wiki/Memory_pool
http://en.wikipedia.org/wiki/Object_pool_pattern
http://www.javaworld.com/jw-06-1998/jw-06-object-pool.html
http://www.codeproject.com/KB/cpp/MemoryPoolIntroduction.aspx
http://mikebarlotta.sys-con.com/node/42163/

http://manuals.sybase.com/onlinebooks/group-eag/eag0500e/easpg/@Generic__BookTextView/1308;nh=1;pt=1260?DwebQuery=pooling&DwebSearchAll=1
http://manuals.sybase.com/onlinebooks/group-pb/pbg0900e/apptech/@Generic__BookTextView/31253;pt=28986/*