Power to Build


Category Archives: Operating Systems


Unix: sudo, su etc

At work, on unix, we always use,

sudo su - {application_adminid}

command to gain access to the Application admin’s files and home.

Sometimes we get confused and type it the wrong way, and it doesn't work as expected. So, I decided to dissect the command, so we get it right each time.

First there is sudo – a command to execute another command with superuser permissions:


This is the command that lets you run commands/programs executable by another id, typically an admin or root id, in Unix.

Once you sudo with a root or an admin id, you are running the command with elevated permissions. This is the Unix way of letting even normal users run some special programs that they wouldn't have access to otherwise. sudo permissions last only for the duration of the command's execution.

What if you want it to stick around longer? You could keep typing sudo this, sudo that. Instead, Unix's answer is su – substitute user. You just become that other user for a session, during which you can run any command. Windows has the RunAs command and the "Run As Administrator" option; we will look at those in another post.

su also stands for Super User or switch user, depending on who you ask. With this, you are actually switching to another user's (usually an admin id's) shell.

Now, when you do su, you are substituting for another user. Your environment, path, etc. remain the same as your own. So, if you didn't already have certain permission(s), you still won't be able to access those paths/files. What we really want is to switch to the other (super) user's environment completely, as if she herself were logging in. This is where you use su with - (minus sign).

su - {superuserid}

The above command switches your session to the other user’s logon environment.

So far so good. When you want to run a super command, use sudo; when you want to act "super" or root, use su -.

But, wait! You have surely seen,

sudo su - {superuserid}

Huh? What is that? Why do you need both? There lies the power of Unix.

su switches you to the other (super) user. But when you do su, don't you need the other user's password? Then where is the security in that? This is where sudo plays a role. Remember, sudo gives you root permission for the command you are currently executing.

sudo executes the command you are trying to run, as long as you are in the sudoers list. Once you sudo, the command you execute assumes root privileges. When you run su with that privilege, you are logging into the other user's id with root privilege; thus, you don't need any password to log in to the superuser's account!! See that? So,

sudo su - {superuserid}

means that you are logging in with the super id, without actually knowing her password – but only if you are given that type of access. So, there you have it: sudo lets you assume the privileges of a super user without ever knowing her password.

Security behind the commands

In case you are worried: no, you cannot use this to log in to just anyone's account. Only "allowed" ids can be sudo'ed/su'ed into. This is where the sudoers file comes into the picture. Here is a nice picture that gives you an idea about sudo.


Courtesy: Guillermo Garron‘s post

Read Guillermo's very well-written post for more details on the sudoers file. Essentially, if your id is not listed there, you cannot sudo.
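Guillermo's post covers the syntax in depth; just to give a flavor, here is a hypothetical sudoers entry (the group and id names are made up) that would allow exactly the sudo su - usage from the top of this post:

```
# /etc/sudoers fragment – always edit this file with visudo!
# Let members of group "appteam" become appadmin via su,
# without being asked for a password again:
%appteam  ALL = (root) NOPASSWD: /bin/su - appadmin
```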

su has a similar restriction too – on some installations of *nix, you need to be in a group called wheel to be able to su – substitute user.



Tools: WinDirStat – Disk Space Statistics Viewer/Cleaner

I ran into a problem at work this morning – my hard disk was almost full, yet again. Couldn’t do any work as the machine was crawling. I have been backing up and cleaning up, but with so many different things I do (parsing huge log files is one culprit), I just cannot keep up. Over the last 3 years, I have been working on project after project, I have too many “important” files sitting on my disk – logs, tests, versions of programs and of course screenshots and documents.

Today, I really wanted to find those and archive them and cleanup my disk. But, where do I start and where exactly are those big files hiding? If you have used Windows Explorer and Search that comes with it, you will see what I mean. So, I went on a mission to find a better tool! While researching this, I stumbled upon a nice blog about cleaning up on Windows 7.

From there, I got onto the tool WinDirStat! Great tool. I see a lot of programs every day; this one stands out. The program is very nicely written and visually appealing. It scans the drive(s) and lists the folders and files, sorta like Windows Explorer does. But it also visualizes them, which is where its strength is (it looks like scandisk, but this shows files).

That visualization really helps you get to the extreme corners of your hard disk easily and find those unnecessary files that may be hiding there. I was able to find several GB of Windows memory dump files that even the Windows disk cleaner didn't find. This should have been part of Windows!

[Screenshot: WinDirStat scanning the C: drive]

There, I just included a screenshot of the tool in action, so you can see what I am talking about. For starters you get a full list of folders with sizes in the top grid, nicely sorted with the biggest offender on top. Those colored cells/blocks in the bottom grid are your files and folders, perfectly color-coded by file type shown on the top right. You can click these file types to “identify” them in the bottom maze. You can click your way through that maze to find those files that you want to get to. Wow!

As a programmer, I am even more intrigued by this utility, as it puts TreeMaps to a great (ca)use! If you are interested, see here for a full overload on TreeMaps!

If you are in a similar situation and in need of a tool to clean up your hard disk, this is definitely a must-have. If you are looking for similar tools on Linux, apparently KDirStat has been superseded by K4DirStat and QDirStat. And on Unix you have du. Happy (file) hunting! But before you toss those files, please make sure to back up.
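Speaking of du: as a quick sketch (the directory and file names below are made up for the demo), a couple of du flags plus sort give you a poor man's WinDirStat on any *nix box:

```shell
# Build two throwaway directories of different sizes to scan
mkdir -p /tmp/duscan/big /tmp/duscan/small
head -c 200000 /dev/zero > /tmp/duscan/big/blob.bin
head -c 100 /dev/zero > /tmp/duscan/small/tiny.bin
# -s summarizes each argument, -k reports KB; sort biggest-first
biggest=$(du -sk /tmp/duscan/*/ | sort -rn | head -1)
echo "$biggest"
rm -rf /tmp/duscan
```

The first line of the sorted output is your biggest offender, just like the top row of WinDirStat's grid.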

Running scripts across shells in *nix

A friend asked me about running csh scripts on Linux. Apparently, they were porting from Solaris to Linux (how did a great operating system like Solaris end up here? Another of those grave mistakes Sun Microsystems made!!). The Linux box had Korn shell as the default shell(1). When they ran the script, it gave a lot of errors. They were thinking of rewriting it in ksh syntax. That definitely looked like a monumental task. I knew you can run one shell from another. (Any doubt? Simply type csh when you are in K-Shell. Now you are in C-Shell – as long as it's installed.) Then why was the script failing with a lot of errors? We sat down to troubleshoot. In the end it turned out to be a simple PATH issue! If you are facing similar issues with your old shell scripts, read on!

If your machine doesn't have the target shell, you need to install it first (2). There is a Linux port of the C-Shell called tcsh. Download and install it. See here for instructions on how to. It's pretty straightforward. Though the program is tcsh, the installation creates symbolic links named csh in /usr/bin and /bin, so you can run it as just csh.

If you try to simply run your csh script in another shell (bash or ksh), it will fail. There are many differences between the two shells. For example, to define environment variables in csh, you use setenv. In ksh (and in bash) you have to use export. So a csh script will not run inside a ksh shell and vice versa. You have to either rewrite it or force it to run inside the corresponding shell.
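For instance, the variable-definition difference alone will break a script. A minimal side-by-side (the variable name is made up):

```shell
# csh/tcsh way (fails in ksh/bash with "setenv: command not found"):
#   setenv APP_HOME /opt/app
# ksh/bash way:
APP_HOME=/opt/app
export APP_HOME
echo "$APP_HOME"
```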

To force a script to run in a specific shell, you typically add a shebang as the first line of the script. This lets the script run with the right interpreter automatically (instead of associating file types with programs, as we do in Windows). But this doesn't help when you are running the script from inside a different shell – the current shell tries to interpret it itself. To run a script inside a target shell, you can invoke the target shell and hand it the script in one step, using the fast (-f) option:

csh -f <script>
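The shebang mechanism itself is easy to see with a tiny, throwaway script (using /bin/sh here so it runs everywhere; a csh script would start with #!/bin/csh -f instead):

```shell
# Write a one-line script whose shebang names its interpreter,
# then run it directly; the kernel reads the first line and
# launches /bin/sh for us.
cat > /tmp/hello.sh <<'EOF'
#!/bin/sh
echo "hello from sh"
EOF
chmod +x /tmp/hello.sh
result=$(/tmp/hello.sh)
echo "$result"
rm -f /tmp/hello.sh
```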

Now back to the problem my friend had. She was indeed using the above syntax to run her C-Shell script. Still it failed with several "command not found" errors, particularly on the date command. Hmm! This script used to work in C-Shell on Solaris, and date is a common Unix utility; it must exist everywhere! I went into C-Shell (just type csh at the command prompt and it will switch to C-Shell) and tried the date command. It wasn't found! So, now we had a more specific issue: find out where the date program is.

To do this, I typed "which date" in K-Shell, and the mystery was resolved. This command used to be in the /usr/bin directory on Solaris, and on the Linux box it was in /bin. The PATH variable used in the script included /usr/bin, but not /bin. This was why the date command wasn't working. Once we fixed the PATH, everything was fine again.

The lesson is, you don’t always have to rewrite your script when you are changing machines/shells. You *can* invoke any type of shell scripts, from any other type of shells. Chances are there is a port for your favorite shell on your new *nix machine.

Also, when you get "command not found" type errors, use the which command to find out where a program lives, and include its path in the environment variable $PATH. You can also look at the man pages for the command you cannot find; if the command is installed, man will list the path information as well.
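For example, the POSIX-portable equivalent of which, command -v, prints the same information:

```shell
# Print the full path the shell will use for `date`;
# the directory shown is the one that must be in $PATH
datepath=$(command -v date)
echo "$datepath"
```

On the Solaris box this would have printed /usr/bin/date; on the Linux box, /bin/date – exactly the mismatch that caused the errors.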

And finally, never assume anything in the *nix world!

(1) SUSE Linux typically comes with bash as the default shell. Since our standard is ksh, I think they must have installed ksh and changed the default. See here for how to switch the shell associated with a login.

(2) C-Shell is an older shell that was very popular in the early days of Unix. Several places have phased it out because of its limitations. ksh and bash are more modern, and bash is typically the default on Linux boxes. So, unless there is a real need for it, or you are a C-Shell junkie, you don't really want to switch to C-Shell. See here and here for some notes on why you shouldn't be using C-Shell.

Check if a process is running (Windows)

While making the script to switch the desktop between two versions of PB (see here), I needed a command expression to check if a task (PB and EAServer in this case) is already running. There are many ways to do it on *nix. But on Windows, it's a bit convoluted. After researching on the net, this is what I came up with.

tasklist /nh /fi "imagename eq notepad.exe" | findstr /i "notepad.exe" >nul && (echo Notepad is running) || (Echo Notepad is not running)

That's all in one line. If you need to break it up, use the continuation character, ^.

I use the above expression in a batch file I created to switch between two different versions of PowerBuilder (PB). I wanted to make sure one version of PB is not running before letting the user switch, hence this check. (Replace notepad.exe with PB125.exe above.)


/nh means no header, /fi means filter. We are filtering for notepad.exe only. Then we pipe that to a findstr command to look for notepad.exe. It would still work without those two flags, but they make finding the right program quicker.

&& and || are conditional operators (see here). We use these to print the IF…ELSE outcomes.
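These two operators behave the same way in a POSIX shell, which makes for an easy place to experiment with them:

```shell
# cmd1 && cmd2 : cmd2 runs only if cmd1 succeeded (exit status 0)
# cmd1 || cmd2 : cmd2 runs only if cmd1 failed (non-zero status)
ok=$(true && echo "success branch")
alt=$(false || echo "failure branch")
echo "$ok / $alt"
```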

>nul is the equivalent of Unix /dev/null

The rest is self-explanatory, I think.
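For comparison, on *nix the same check is nearly a one-liner. Here is a sketch using pgrep (assuming the usual procps pgrep is installed), mirroring the && / || structure of the batch expression; it checks for a throwaway sleep process we start ourselves:

```shell
# Start a disposable process, then test for it by exact name
sleep 30 &
pid=$!
pgrep -x sleep > /dev/null && status="running" || status="not running"
echo "sleep is $status"
kill "$pid"
```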

I’ve also posted this scriptlet on the commandfu.com site.


Symbolic Links Contd…

Ever since I found out symbolic links are available on Windows, I've used them in various places: to shorten PATH entries so the variable doesn't overflow, to work around program idiosyncrasies, etc. In this post I will tell you what I did with some DLLs so I could run different versions of a piece of software.

Both tools are in the context of PowerBuilder, a development IDE from Sybase (now SAP). PowerBuilder comes with a scripting interface called ORCA, and each version of PowerBuilder ships a tool that implements OrcaScript. Most things you do inside the PB IDE can be done through Orca script – for example, you can build and deploy entire workspaces or targets. Yes, it's typically used to automate build processes. Though PB comes with its own Orca tool (called orcascr125.exe in PB 12.5), I had already been using another tool, PBOrca. I really like this tool, so I wanted to continue using it. But it is an older tool that may not work with/understand PB 12.5.

In a PBOrca script, to work with a particular version of PB, you use the Session Statement:

session begin pbOrc100.dll

The other use case came when I did some beta testing with PowerBuilder 15 recently. To build and deploy our PB application, we use Powergen. This is a build tool developed by E. Crane Computing for building complex PB applications. It resolves circular references in PB libraries and builds applications smoothly. Powergen apparently uses ORCA to interface with PowerBuilder.

Symbolic Links to the rescue

When I first tried using PBOrca to build code in PB 12.5, I wasn't sure if it would work, since 12.5 wasn't in the supported list. I decided to try posing the PB 12.5 DLL as the 10.0 DLL with the help of, you guessed it, symbolic links. I created a symbolic link named pborc100.dll pointing to pborc125.dll. Bingo, I was able to connect to and build PB 12.5 code. Get it? The software thinks it's still loading the PB 10.0 DLL (the link pborc100.dll, which indirectly invoked pborc125.dll!). You could do this magic with directory renames as well. I mentioned the Repository pointing to two different versions in my other post.
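Here is the same trick sketched with Unix ln -s on throwaway files (on Windows I used mklink instead; the file contents below are just for illustration):

```shell
# Create a "new version" file, then a link carrying the old name.
# Any program that opens the 10.0 name actually reads the 12.5 file.
echo "pb 12.5 payload" > /tmp/pborc125.dll
ln -sf /tmp/pborc125.dll /tmp/pborc100.dll
contents=$(cat /tmp/pborc100.dll)
echo "$contents"
rm -f /tmp/pborc125.dll /tmp/pborc100.dll
```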

(Though, in all fairness to PBOrca, it actually worked fine with the PB 12.5 DLLs; I just hadn't tried that initially. All I had to do was change the DLL name in the session statement.)

The next time a symbolic link came to the rescue was when I wanted to test the Powergen build process with PB 15. When I did my beta testing, I had scripts set up to switch from 10.2 to 12.5 (see here). I wanted to reuse these scripts without many changes. Unfortunately, Powergen did not know PB 15 existed yet. After some thinking, I realized a symbolic link could help me here too. I created a symbolic link for a PB DLL used by Powergen, posing the PB 15 DLL as a PB 12.5 DLL. This worked!!! I was able to build a PB 15 version of the application using Powergen 8.5, which doesn't naturally support PB 15 yet.

In both cases, posing one version of a DLL as another worked only because there weren't major changes to the DLLs themselves across versions. Otherwise, I would have gotten some compile or linker error.

Note: We have paid the full license fee for Powergen upgrades. The hack I did was for testing only. Please do not construe this as a way to avoid licensed upgrades.

How Windows 64-bit Supports 32-bit Applications

A sharp and clear explanation of how Windows 64-bit supports 32-bit applications. You will get the full picture after reading this article.

via How Windows 64-bit Supports 32-bit Applications.

Moving to Windows 7 – First Impression

At work, we are upgrading to Windows 7 (64-bit). We were on Windows XP for a while. After an initial adjustment, I am beginning to like it. I will be posting my experiences here as I go. But here are my first impression(s):

On the old machine, I had been using Office 2000 for the last 4 years. Office 2010 is fancy!! I could create fancier documents in a jiffy. Yes, the ribbon interface takes some learning, but once you get it, it's nice!

Back to Windows 7: its look and feel is way better than my old XP's. It seems equally stable as well. (Apparently, it's built on the same kernel as Windows Vista, which was a disaster. I use Vista at home – lots of problems, but still going.)

I always wonder why they change something that works. Previously, you could right-click on the Start Menu and see an Explore option. Now, it's called "Open Windows Explorer". Makes you stumble a bit.

But Windows Explorer itself is better designed. I feel they finally got the Search option right: you search in place within a folder, and when you clear the search, it returns to the folder.

Control Panel is rearranged too. It starts in Category view, which kind of bundles various options. Luckily, you can change the view to "Small Icons" or "Large Icons"; these bring back the full listing like XP had. But even here, they renamed a few items. What used to be called "Printers and Faxes", for example, is now called "Devices and Printers". If you are used to scanning for options alphabetically, it takes a minute longer. Go figure.

The Network icon in the tray has changed. It opens the "Network and Sharing Center". You will have to get used to it, but it's definitely much better than the old way of drilling down into Network properties.

Speaking of changed names, our good old "My Documents" is simply "Documents", and "My Computer" is just "Computer". All other folders with "My" lost the prefix too. Pictures and Videos used to be inside "My Documents"; now they are outside and easier to get to. I read that they used symbolic (or is it hard?) links (more on this later) to keep the old names pointing to the same folders.

All users are now stacked under the easier-to-reach C:\Users – no longer "Documents and Settings" (what was that?). If you open the C:\Users\<your user name> folder, you will see the old "My Documents" etc., which essentially point to the Documents etc. folders in Windows 7.

These were just my first impressions. Generally good. But there are problems though.

UAC (User Account Control)

Windows 7 (like Vista) uses UAC (User Account Control) – a newer security model. Essentially, Windows is moving more and more towards Unix here. (I read that Microsoft hired ex-Unix gurus to build Windows 7.) If you are coming from a free-for-all Windows XP, it's a nightmare to deal with.

Normal users cannot access a lot of things: the Registry, the Windows system directory, Program Files, etc. This means that if you are installing software, you need to log in as Administrator. This was always the case, but it's more pronounced in Windows 7. Read on. Unlike before, it's not enough to log in as Administrator; you need to "run the program as Administrator". This really got me. Each time I tried to install something that touched the system, it asked me for an Admin ID and password. This puzzled me for a bit, until I read about the "Run as Administrator" option. Apparently, Windows 7 always runs with the lowest permissions possible, until you elevate a program's permissions to the maximum available for the user id.

32-Bit Applications on Windows 7 (64-bit)

Not only that. They have extended this security model further in Windows 7 64-bit. If you are running a 32-bit application on the 64-bit OS, things change drastically. For one, a 32-bit application is now installed in "Program Files (x86)" (64-bit applications go into C:\Program Files!). What kind of naming is that? (x86)? We had tons of problems with "C:\Program Files" having a space in it. Now, another special character? This definitely caused enough grief with some 32-bit software. More later.

Also, your 32-bit applications no longer access C:\Windows\System32. They access another folder named SysWOW64. (In case you are wondering what WOW is, like I did, it's Windows on Windows.) Contrary to what the 64 may lead you to believe, it's actually the System32 directory for 32-bit applications!!!! Talk about naming issues!!? Why couldn't they leave System32 alone and go for a logical System64? Anyway, per the Wikipedia post,

The operating system uses the %SystemRoot%\system32 directory for its 64-bit library and executable files. This is done for backward compatibility reasons, as many legacy applications are hardcoded to use that path. When executing 32-bit applications, WoW64 transparently redirects 32-bit DLLs to %SystemRoot%\SysWoW64, which contains 32-bit libraries and executables. 32-bit applications are generally not aware that they are running on a 64-bit operating system. 32-bit applications can access %SystemRoot%\System32 through the pseudo directory %SystemRoot%\sysnative.

Virtual What?

Another big topic in Windows 7 is the Virtual Store. When users are not allowed to access some registry key or file location (e.g., C:\Program Files or C:\Windows), they are rerouted to a Virtual Store. It has caused a rage in some forums, and caused us enough grief trying to debug our programs. For example, when our users recently moved to Windows 7 (yes, they did so before us – talk about budget issues!!), our 32-bit PowerBuilder application couldn't connect to the database. After debugging the issue, we found that a normal user cannot access the Windows Registry location where the software installation saved the connection information. The Virtual Store came in between: the user was reading the Virtual Store location corresponding to the original location, which was empty – thus the issue. We resorted to using an INI file for now!

There are several compatibility issues: some programs need to run in Windows XP mode, some need to run as Administrator, some need to be installed outside C:\Program Files (x86). I will go over those in the next post.

So, overall I like the look and feel and even the performance. I am not happy with Microsoft ditching backward compatibility every time (though they claim backward compatibility in every version, there is always a device or piece of software that stops working, forcing us to buy more!).


Here are some links for further reading: