Power to Build


Unix: sudo, su etc


At work, on unix, we always use,

sudo su - {application_adminid}

command to gain access to the Application admin’s files and home.

Sometimes we get confused and type it the wrong way, and it doesn’t work correctly. So I decided to dissect it, so we get it right every time.

First, there is sudo – a command to execute another command with superuser permissions:

sudo

This is the command that lets you run commands/programs as another id, typically an admin or root id, in Unix.

Once you sudo with a root or an admin id, you are running the command with elevated permissions. This is the Unix way of letting even normal users run some special programs that they wouldn’t have access to otherwise. sudo permissions last only for the duration of the command’s execution.

What if you want it to stick around longer? You could keep typing sudo this, sudo that. For that, Unix’s answer is su – substitute user. You just become that other person for a session, during which you can run any command. Windows has the RunAs command and the “Run As Administrator” option for this; we will see about those in another post.

su is also expanded as Super User or switch user, depending on who you ask. With this, you are actually switching to another person’s (usually an admin id’s) shell.

Now, when you do su, you are substituting for another user. The environment, path etc. remain the same as yours. So, if you weren’t granted certain permission(s), you still won’t be able to access those paths/files. What we really want is to switch to the other (super) user’s environment completely, as if she herself were logging in. This is where you use su with – (minus sign).

su - {superuserid}

The above command switches your session to the other user’s logon environment.

So far so good. When you want to run a super command, use sudo, and when you want to act “super” or root, use su –.
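You can see the environment difference without root. This is only a rough simulation (env -i stands in for the clean environment a login shell builds; MYSETTING is a made-up variable), not what su does internally:

```shell
# `su user` keeps your current environment; `su - user` starts fresh,
# as if the user logged in herself. Simulate with an exported variable:
export MYSETTING="from-my-env"

# Like plain `su`: a child shell inherits your environment.
sh -c 'echo "inherited:   ${MYSETTING:-unset}"'

# Like `su -`: env -i wipes the environment, as a fresh login would.
env -i sh -c 'echo "fresh login: ${MYSETTING:-unset}"'
```

The first line prints the inherited value; the second prints “unset”, because the clean environment never saw the variable.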

But, wait! You have surely seen,

sudo su - {superuserid}

Huh? What is that? Why do you need both? There lies the power of Unix.

su switches your user to the other (super) user. But when you do su, don’t you need the other person’s password? Then where is the security in that? This is where sudo plays a role. Remember, sudo gives you root permissions for the command you are currently executing.

sudo executes the command you are trying to run, as long as you are in the sudoers list. Once you sudo, the command you execute assumes root privileges. When you run su with that privilege, you are logging into the other person’s id with root privilege; thus, you don’t need any password to get into the superuser’s logon!! See that? So,

sudo su - {superuserid}

means that you are logging in with the super id, without actually knowing her password. But only if you are given that type of access. So, there you have it: sudo to assume the privileges of a super user without even knowing her password.

Security behind the commands

In case you are worried: no, you cannot use this to log in to just anyone’s id. Only “allowed” ids can be sudo’d/su’d into. This is where the sudoers file comes into the picture. Here is a nice picture that gives you an idea about sudo.

[Image: sudo/sudoers “make me a sandwich” comic]

Courtesy: Guillermo Garron‘s post

Read Guillermo’s very well written post for more details on sudoers file. Essentially, if your id is not listed there, you cannot sudo.

su has a similar restriction too – on some installations of *nix, you need to be in a group called wheel to be able to su – substitute user.
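To make the idea concrete, here is what entries in the sudoers file can look like. These lines are purely illustrative (the user and group names are made up); on a real system, always edit the file with visudo:

```
# Members of group admin may run any command as any user
%admin   ALL=(ALL) ALL

# alice may run only this one command, as root: su - appadmin
alice    ALL=(root) /bin/su - appadmin
```

The second form is how an admin can hand out exactly the “sudo su - {application_adminid}” access described above, and nothing more.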

 

Running scripts across shells in *nix


A friend asked me about running csh scripts on Linux. Apparently, they were porting from Solaris to Linux (how did a great operating system like Solaris end up here? Another of those grave mistakes Sun Microsystems made!!). The Linux box had Korn shell as the default shell (1). When they ran the script, it gave a lot of errors. They were thinking of rewriting it in ksh syntax. This definitely looked like a daunting task. I knew you can run one shell from another. (Any doubt? Simply type csh when you are in K-Shell. Now you are in C-Shell – as long as it’s installed.) Then why was the script failing with a lot of errors? We sat down to troubleshoot. In the end it turned out to be a simple PATH issue! If you are facing similar issues with your old shell scripts, read on!

If your machine doesn’t have the target shell, you need to install it first (2). There is a Linux port of the C-Shell called tcsh. Download and install it. See here for instructions on how to; it’s pretty straightforward. Though the program is named tcsh, the installation creates symbolic links named csh in /usr/bin and /bin, so you can run it as just csh.

If you try to simply run your csh script in another shell (bash or ksh), it will fail. There are many differences between the two shells. For example, to define environment variables in csh, you use setenv. To do the same in ksh (and in bash) you have to use export. So a csh script will not run inside a ksh shell and vice versa. You will have to either rewrite it or force it to run inside the corresponding shell.
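The variable-definition difference is easy to demonstrate. A small sketch (the csh lines are shown as comments, since this block runs under sh/ksh/bash; the path is just an example):

```shell
# In csh you would write:
#   setenv ORACLE_HOME /opt/oracle    (environment variable)
#   set count = 3                     (shell variable)

# The ksh/bash equivalents:
export ORACLE_HOME=/opt/oracle        # csh: setenv
count=3                               # csh: set

echo "$ORACLE_HOME"                   # /opt/oracle
echo "$count"                         # 3
```

Feed the csh lines to ksh and you get “setenv: not found” – exactly the kind of error my friend was seeing.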

To force a script to run in a specific shell, you typically add a shebang as the first line inside the script. This lets the script run with the right interpreter automatically (instead of associating file types with programs, as we do in Windows). But this doesn’t help when you run the script from inside a different shell; the current shell tries to interpret it. To run a script inside a target shell, you get into the target shell and then type the script name. You can do this in one step by invoking the target shell with the fast (-f) option (which skips its startup files), as,

csh -f <script>
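Here is a quick sketch of the shebang mechanism. The runnable part uses /bin/sh so it works anywhere; for a real C-Shell script the first line would be #!/bin/csh -f instead (the file name and message are made up):

```shell
# Create a tiny script whose first line names its interpreter.
cat > /tmp/hello.sh <<'EOF'
#!/bin/sh
echo "hello from my own interpreter"
EOF
chmod +x /tmp/hello.sh

# Running it directly lets the shebang pick the interpreter:
/tmp/hello.sh                 # prints: hello from my own interpreter

# Alternatively, force a target shell explicitly, as in the text:
# csh -f myscript.csh
```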

Now back to the problem my friend had. She was indeed using the above syntax to run her C-Shell script. Still it failed with several “command not found” errors, particularly on the date command. Hmm! This script used to work in C-Shell on Solaris, and date is a common Unix utility; it must exist everywhere! I went into C-Shell (just type csh on the command prompt; it switches you to C-Shell) and tried the date command. It wasn’t there! So now we had a more specific issue: find out where the date program is.

To do this, I typed “which date” in K-Shell, and the mystery was resolved. The command lives in /usr/bin on Solaris, but on the Linux box it was in /bin. The PATH variable used in the script included /usr/bin, but not /bin. This was why the date command wasn’t working. Once we fixed the PATH, everything was fine again.
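A sketch of the diagnosis and the fix. which (or the portable command -v) tells you where a program lives; if its directory is missing from PATH, prepend it:

```shell
# Where does date live? Prints e.g. /bin/date or /usr/bin/date.
command -v date
which date

# If the directory (here /bin) is missing from PATH, add it:
PATH=/bin:$PATH
export PATH

date > /dev/null && echo "date is reachable"
```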

The lesson is, you don’t always have to rewrite your script when you are changing machines/shells. You *can* invoke any type of shell scripts, from any other type of shells. Chances are there is a port for your favorite shell on your new *nix machine.

Also, when you get “command not found” type errors, use the which command to find out where a program is and include its path in the environment variable $PATH. You can also look at the man pages for the command you cannot find; if the command is installed, man will list the path information as well.

And finally, never assume anything in the *nix world!

Note:
(1) SUSE Linux typically comes with bash as the default shell. Since our standard is ksh, I think they must have installed ksh and changed the default to it. See here for how to switch the shell associated with a login.

(2) C-shell is an older shell that was very popular in the early days of Unix. Several places have phased it out because of its limitations. Ksh and bash are more modern, and bash is typically the default on Linux boxes. So, unless there is a real need for it, or you are a C-Shell junkie, you don’t really want to switch to C-Shell. See here and here for some notes on why you shouldn’t be using C-shell.

Linux: Netdisk on Ubuntu


Update 05/12/2016:

For anyone coming to this post looking for solutions on IOCELL Net Disk, I am sorry. The company seems to have folded and is no longer hosting any of the links I mentioned here. It’s a great product and mine worked on Ubuntu with the instructions here, but my disk failed (hardware) and thus I am not able to experiment with it anymore. If you find any updates outside of this blog, please feel free to post in comments below. Thanks. -Sam

—–

Last week, my Windows PC stopped working. (I have to look into it later.) At the moment, I am working on my taxes and I desperately needed some files from the old machine. Luckily, I’ve been backing up. For this, I use the IOCell Networks (also Ximeta)1 NetDisk. NetDisk is a Network Direct Attached Storage (NDAS) device. This seems to be a combination of Network Attached Storage (NAS) and Direct Attached Storage (DAS). As a device, NetDisk is really an external hard disk enclosure that is network enabled; you add a hard disk (I added a 1TB drive to it). The difference from other external drive enclosures is that this one is network enabled, so once plugged into Ethernet, it can be accessed from any PC on the network. And Direct Attached because you plug the device directly into Ethernet, not through any computer or server. To add security, they have added a software layer (IOCell/Ximeta NDAS) that you need to install on each PC to access the drive. You have to register the software with the specific serial # of the NetDisk. Thus, only specific PCs with the right software and serial # registered can access the drive. I had installed this software on my Vista machine and it worked like a charm. (This post relates only to the Linux version of this software. If you need help with the Windows version, please refer to their user manual.)

Now, at this critical moment, when I needed to access the backup, I couldn’t. My main (Vista) PC is dead. My wife’s laptop runs Windows 7 and is not capable of running the Ximeta NDAS software (it freezes – even version 2.72, which they suggest to fix the issue). And my other PC, where I am typing this from, runs Ubuntu 12.04, for which Ximeta does not have a driver or programs for NetDisk. At least, not any installation binaries!

But there was hope. Luckily, Ximeta provides (open source) source code at https://github.com/iocellnetworks/ndas4linux/tree/master/ for Linux. I downloaded the source code. (If you go to the zip tab on the above page, it downloads a file named ndas4linux-master.zip.) The source directory has several versions; which one to use depends on your kernel version. (To find your kernel’s version, use uname -r in a Linux terminal.) For Ubuntu 12.04, I used version 3.2.0. Though I work with large systems at work, I never venture to compile big programs at home. This time it was different. Desperate situations require desperate measures. I ventured on compiling the driver for Linux myself. I was amazed. Huge sets of files got compiled while I watched, without major issues. Once the source code is compiled, we need to install and load it. Here is the list of things to do from their documents:

  1. Download a source tarball.
  2. Unpack
  3. CD into the right folder and run make
  4. Change to root or sudo make install
  5. Start ndas as root
  6. As root, or by sudo, register and enable ndas devices with ndasadmin

Apart from the documents provided, each directory inside the main folder has a README. Read these for instructions on how and where to compile. Here are slightly better instructions on how to build it. Because their instructions are spread across different documents, I’ve gathered and summarized the steps here for the newcomer.
Building the software

This is really confusing, as there are many directories and many make files. To do this correctly, you need to be in the right directory. Change to ndas4linux-master/3.2.0/doc (I use 3.2.0; change to your version) and read how_to_export.txt first. Here is a summary of how to compile:

cd ndas4linux-master/3.2.0
make linux-rel

This creates a new directory, ndas4linux-master/3.2.0/build_x86_linux. linux-rel is just one of the options – it builds the release version. You can also build dev or debug versions. Take a look at the doc files.
Installing the software

Once you have compiled, you need to install the driver and start it. The commands below do that. This step actually builds the ndasadmin program and installs it.

cd ndas4linux-master/3.2.0/build_x86_linux/ndas-3.2.0.x86
make

The make command compiles many files, actually builds ndasadmin, and “installs” it in /usr/sbin. This is the command we will use to work with (mount, dismount etc.) NetDisk.

ndasadmin has several options. To run this command, you need to be root or use sudo. If you just type sudo ndasadmin at the command prompt, it displays the various options available.

Starting ndas Driver

After building and installing the driver software, you need to start the driver before you can use the device.

sudo ndasadmin start
 
Mounting the Device

You also need to register the device serial # before you can mount the device volumes. The readme file how_to_use_ndasadmin.txt inside the version folder (3.2.0 in my case) lists the steps to install and run the NDAS software. To register the device, you need the serial number (and write key) of the NetDisk device. These are available at the bottom of the NetDisk box.

sudo ndasadmin register <SERIAL_NUMBER> -n NetDisk1
sudo ndasadmin enable -s 1 -o r
brings up the NetDisk volumes. NetDisk1 is just a name I gave to the device; it can be anything. Also, the register option has a slight variation with the Serial # and Write key. See the readme file mentioned above.

At first, I didn’t see them come up. Then I found them listed under the “Places” option in the Gnome desktop menu. To see the device listing, you can use the following command:

cat /proc/ndas/devs

To disable/unmount the device, use the following commands:

sudo ndasadmin unregister  -n NetDisk1
sudo ndasadmin disable -s 1 -o r
Further Help

Like I said earlier, the doc folder inside the version you are working with (ndas4linux-master/3.2.0/doc) contains all the instructions. Here are some of the files in that directory:
how_to_build.txt
how_to_export.txt
how_to_use_ndasadmin.txt
Note that the software seems to be constantly evolving, so it may not support all the functionality the actual device can support. And if you need more help, here are some links if you want to learn more about IOCell/Ximeta on Linux.
  1. http://ndas4linux.iocellnetworks.com/
  2. https://github.com/iocellnetworks/ndas4linux/wiki/How-to-export-NDAS-source-for-different-architectures
  3. http://ndas4linux.iocellnetworks.com/trac/index.cgi/browser/ndas4linux/release
  4. http://ndas4linux.iocellnetworks.com/kermit/index.cgi/wiki/Usage
  5. http://ndas4linux.iocellnetworks.com/kermit/index.cgi/wiki/HowToBuildDEB

1 IOCell acquired Ximeta’s NDAS, so you will see the names used interchangeably here and on the web. See here

Ubuntu: File Managers


I am continuing to use Ubuntu as my primary desktop and I am loving it (I still use a Windows desktop too; it has its advantages). As I get used to the environment, I am always looking to mimic familiar options from Windows. This time around, I was looking into adding applications to the context menu in Ubuntu. Like always, I stumbled on something bigger – File Manager(s).

Earlier I posted about Linux architecture and how different layers of the operating system can be replaced/added. If you read that post, you will see how I struggled with the Unity desktop that came with Ubuntu 12.04 and eventually learned to replace it with the Gnome desktop. Apparently, each desktop has its own flavor of File Manager, and then some.

If you are coming from the Windows world (like I did), here is some terminology for you. What we call Windows in the Windows operating system (like Windows XP) is really the Desktop. We saw earlier that there are many different Desktops, including Gnome, KDE, Unity etc. A Desktop has several parts, and one of them is a File Manager. In Windows this is Windows Explorer (it used to be called File Manager in versions before Windows 95). There are alternatives to Explorer, but I’ve not seen many people replace it.

In the Ubuntu (for that matter, the entire *nix [Unix, Linux]) world, everything is customizable and/or replaceable. I was able to replace my troubled Unity desktop with the Gnome desktop and overcome some issues earlier. When I started researching context menus, I stumbled on the Nautilus package, which is the default File Manager in Gnome. Since Ubuntu 12.04 didn’t support the Gnome desktop natively, it didn’t install Nautilus either; it had a File Manager called Thunar instead. And when I installed Gnome, unknowingly, I had also opted for Nautilus, the default file manager for Gnome. Apparently, there are at least 20 different File Managers available. See this blog.

This site has a nice picture of the Linux Architecture.

In Gnome Desktop, you get to File Manager by clicking on Places menu on top.

Workspace 1_057Fig 1: Menu to reach File Manager

Screenshot from 2013-01-06 19:06:37Fig 2: Nautilus File Manager

So what about Context Menu?
Coming back to adding context menu items: it’s done using Nautilus Actions. I had to install it first. (You can do this in the Ubuntu Software Center or with the apt-get command.) Once it’s installed, you can add context menu items as shown below:

Nautilus Actions ConfigurationFig 3: Nautilus Actions Configuration

And of course, after changing the context menu you need to reload the Nautilus File Manager. To do this, you can type the below command:

killall nautilus

This kills all processes associated with the particular program (here nautilus). For Windows users, this is like going to Windows Task Manager and doing “End Program”. See here for more on killall. Incidentally, they also talk about killing nautilus!!

Running KDiff3 from Context Menu
The program I was trying to add to the context menu was KDiff3 (it’s a great diff utility; try it if you haven’t already. I use it on Windows as well). Nautilus Actions allows you to add programs to the context menu. I select two files to be compared, right click to open KDiff3, and voilà! KDiff3 opens and diffs the files automatically!

Nautilus Context MenuFig 4: Nautilus Context Menu

kdiff3_074Fig 5: Kdiff3 From Context Menu

References

  1. http://www.tuxarena.com/2011/06/20-file-managers-for-ubuntu/
  2. http://www.techdrivein.com/2010/05/what-is-nautilus-elementary-and-how-to.html
  3. https://live.gnome.org/Nautilus/Screenshots
  4. http://www.tellmeaboutlinux.com/content/linux-architecture
  5. http://en.wikipedia.org/wiki/File_manager
  6. http://askubuntu.com/questions/88480/adding-extra-options-to-right-click-menu
  7. http://www.omgubuntu.co.uk/2011/12/how-to-add-actions-emblem-support-back-to-nautilus-in-ubuntu-11-10/
  8. http://www.linfo.org/killall.html

Quick Tip: How to exit from SQL*Plus on command line


Updated, 06/05/2017

This is about running SQL*Plus in batch (script) mode. Suppose you have a script (shell or batch file) that calls SQL*Plus to execute a script file. I am talking about running it from Command line, thus:

$ sqlplus <user_id/password>@SID @<sql_file_name>

Chances are, you will want to exit SQL*Plus as soon as the script is done (EOF is reached in the SQL file), so you can continue your processing.

Typically, we add an EXIT statement at the end of the SQL file itself, and this forces SQL*Plus to quit. But what if you forgot, or couldn’t add EXIT (in case you use the same script in a different scenario – say, this script gets called by another sql script, a wrapper sql)? If you don’t have an EXIT or QUIT statement at the end of your SQL file, you will end up seeing the SQL prompt:

SQL>

And it waits for the user to type the next statement (or, if called from within a wrapper, it may produce unpredictable results depending on the next statement the wrapper sql has).

In such scenarios, you can add the EXIT statement externally, by typing

echo EXIT | sqlplus <user_id/password>@SID @<sql_file_name>

This works in both Unix shell scripts and Windows batch files.

This essentially types EXIT into the SQL prompt. In case you are wondering whether it will exit immediately: the EXIT statement is really queued after everything else in SQL*Plus’s input, in this case what’s in the script file. Thus, it correctly executes EXIT after SQL*Plus has finished executing the rest of the statements – that is, when it reaches the EOF.
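You don’t need SQL*Plus handy to convince yourself of the ordering. cat can stand in for it here (a loose analogy only – cat reads its file argument first and then “-”, its standard input, just as SQL*Plus consumes the script before the piped EXIT; the file name is made up):

```shell
# A fake two-statement script file:
printf 'statement 1\nstatement 2\n' > /tmp/fake.sql

# Like: echo EXIT | sqlplus ... @fake.sql
# The file is consumed first; the piped EXIT arrives last.
echo EXIT | cat /tmp/fake.sql -
```

This prints statement 1, statement 2, and then EXIT – the piped-in text is queued behind the script contents.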

Here is another way to achieve the same result:

exit | sqlplus <user_id/password>@SID @<sql_file_name>

That’s it – essentially piping exit into the sqlplus command! When the end of the current script file is reached, SQL*Plus returns to the shell, and your shell script (or the wrapper sql) can go on!

This also works on both DOS (Windows command prompt) and *nix systems.

Update: 06/05/2017

I shared the original tip in 2012. After many years of being there, I see that this post is one of the popular ones! Who would have thought?!!

I have updated the post, to include “echo EXIT”, based on a comment below.

EXIT and QUIT are SQL*Plus commands (case doesn’t really matter, just wanted to distinguish them from OS commands). By echoing them into SQL Prompt, you force SQL*Plus to exit.

My suggestion to use exit | sqlplus relies on SQL*Plus seeing the end of its input (this is like pressing Ctrl-D on Unix). Here exit is an OS-level command that produces no output, so the pipe simply closes and SQL*Plus runs out of input when the script is done.

The difference in the two approaches: when you use echo EXIT, you are literally “typing” into the sqlplus prompt, whereas in the other case you are signaling the end of input. Both work, but using the exit OS command may be OS dependent. Test on the command line before use.

Quick Tip – *nix: Back Quote


While researching an issue with a Java stored procedure at work recently, I had to go to the Oracle udump† directory to find out if there were any errors. To do this, we grep‡ like,

grep -i -l "nullpointer" *

to list the names of all log files that may contain NullPointer exceptions. -i = ignore case, -l = list only file names.

This tip is about using that list in another command with the back-quote (`). This is the character (also called grave accent) before 1 on PC keyboards, if you haven’t used it. Back-quote is used for command substitution in *nix (Unix, Linux) systems. It’s available in all the common shells.

For example, to list those files with a full directory listing:

ls -lt `grep -i -l "nullpointer" *`

Or to open the trace files found by grep, all at once in vi:

vi `grep -i -l "nullpointer" *`

(and then use :n in vi to go through the files)

The line below copies the found files out to another directory,

cp `grep -i -l "nullpointer" *` ~/nullsod

(where nullsod is a directory in my home directory.)
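Here is the whole pattern end to end, in a scratch directory so it’s safe to try (the file names and target directory are made up for the demo):

```shell
# Set up a scratch area: two files mention nullpointer, one doesn't.
mkdir -p /tmp/bq_demo/found
cd /tmp/bq_demo
echo "got a NullPointer here" > a.trc
echo "all clean"              > b.trc
echo "NULLPOINTER again"      > c.trc

# The back-quoted grep runs first; its file list feeds cp:
cp `grep -i -l "nullpointer" *.trc` found/

ls found    # a.trc and c.trc, but not b.trc
```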

Command Substitution (using Back quote)

This process of executing commands inside back-quotes is called command substitution. The command inside the back-quotes is executed first, and its output is plugged into the command line that contains it. For example,

echo `date`

prints the system date. Notice a back-quoted command is mostly embedded inside another. If you omit the echo there, the date string is returned to the shell, which tries to execute it as a command, resulting in a “Command not found” error.

Back-quote is often used in scripts to set the result into a variable. For example, we can save the date value in a variable called dt. Then we can use this variable in the script.

$> export dt=`date +"%m%d%C%y%H%M%S"`
$> echo "Date is $dt"
$> cp output.log output.log."$dt"

#Where, $> is the shell prompt

In the above list, date is executed with a specific format (mmddyyyyhhmiss). The result is stored in a variable called dt. First we echo dt; then we use dt to rename a file with a date suffix – very handy when you are running a script on a regular basis and want to save the output in timestamped logs.

Incidentally, the statements above show another kind of substitution: string or variable substitution using double quotes (“). Any variable inside double quotes is evaluated first and substituted into the command that uses it. Notice the difference: a double quote simply evaluates a string/variable and plugs in the value. A back-quote, on the other hand, actually executes the string inside as a command and returns the output to the enclosing command.
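One line each makes the contrast obvious:

```shell
echo "date"      # double quotes: just the literal string  ->  date
echo `date`      # back-quotes: runs date, echoes its output
```

The first prints the word date itself; the second prints the current date and time.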

You can combine many of these expressions, and that’s what makes it all powerful. If you haven’t tried back-quote yet, try it. You will be glad you did!

Useful links

  1. Command Substitution in Unix
  2. Unix Shell Substitutions
  3. Advanced Bash Scripting

† udump – user dump – contains user trace files in Oracle database; all print statements from stored programs in Oracle (Java, PL/SQL) end up here.
‡ udump directory may have a large # of files depending on activity, in which case, grep may not work with wild card (*). You may have to refine the grep parms.

Quick Tip – *nix: ulimit


Last week one of our DBAs called me. She was trying to install Oracle 11g and had obtained the zip files for the installation from a co-worker. There were many zip files, probably one per installation disk. She was trying to unzip the files and got an error.

Like always, I got down to the business of checking and googling. When I tried to unzip the first file, it gave an error about missing zip file headers and suggested that the file could be part of a multi-part zip file. Good old Google suggested how to combine a multi-part zip file: just cat the files into one big file, then unzip it:

cat file1.zip file2.zip file3.zip > fullfile.zip

Unzip failed on fullfile.zip. When I checked the file size, it had the same size as the first file, the largest file. Strangely, they both had a size of about 1GB. More googling revealed the role of ulimit!

Resource limits on *NIX systems (ulimit)

All *nix systems (all flavors of Unix and Linux) have limits on system resources per user. The limits include CPU time per process, number of processes, largest file size, memory size, number of file handles etc. (file size is defined in number of blocks). The root user gets unlimited resources, but all others are assigned finite settings as defined in /etc/security/limits on AIX (/etc/security/limits.conf in Ubuntu). Here is a good post on why you want to limit resources for users. To check the limits, use the ulimit command.

ulimit -a

There are hard limits (ulimit -Ha), set by root, and soft limits (ulimit -Sa), which a user can set herself. See this blog for details on using ulimit.
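Here is a small sketch of reading and (temporarily) lowering a soft limit. The file-size limit (-f, counted in 512-byte blocks on most systems) is the one that bit us in this story; doing the change in a subshell leaves the current shell untouched (the 2048-block value is just an example):

```shell
# Current soft and hard file-size limits (often "unlimited"):
ulimit -S -f
ulimit -H -f

# Lower the soft limit inside a subshell only (2048 blocks = ~1MB):
( ulimit -S -f 2048; echo "subshell file-size limit: `ulimit -S -f`" )

# The parent shell is unaffected:
echo "parent file-size limit: `ulimit -S -f`"
```

A user can lower a soft limit freely, but raising it back above the hard limit is refused; only root can raise hard limits.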

Role of ulimit in our problem

Continuing our story about the failed Oracle install above: when we did ulimit -a, one particular entry caught our attention. File size was set to 1GB (approximately – remember, it’s set in blocks?). Bingo! This is why, when we tried to create fullfile.zip, it got truncated at 1GB, the limit for that user. Apparently, the DBAs are supposed to have unlimited file size. She got onto calling the Unix admin.

To cut the story short, apparently ulimit played a role in getting the original zip files as well. When she downloaded the files, the biggest ones got truncated to 1GB each. Our first file happened to be one of them, and that’s why it looked incomplete. To make matters worse, unzip and gzip ended up pointing to a multi-part zip file, which the files weren’t.

Like I said, this is true for all *nix systems. It is very powerful in keeping runaway processes from sucking system resources away. But there may be a legitimate situation like the one above, where you need to override the settings. So, next time your files seem corrupted or truncated, remember to check ulimit.

References

  1. ulimit in IBM AIX
  2. ulimit on HP-UX
  3. ulimit on Linux – blog
  4. ulimit on Unix – blog

Depending on the *nix you are using, these may be variously described as limits per user, or per shell and its descendants. Some of the settings, like CPU time, apply to individual processes the user/shell is running.