Apache-DBD::Informix Howto



v0.5d, Dec 8, 1998; original draft by Marco Greco, with notable contributions from Jonathan Leffler

From this point forward, anything in Italic needs to be formatted/adjusted/expanded/filled in

Overview

This howto covers the use of Perl and the DBI / DBD::Informix Perl modules to allow Apache to access data in Informix databases.

Disclaimer

I'll keep it short: don't let the fact that I am currently employed by Informix UK trick you into believing that the contents of this document are in any way endorsed or supported by Informix. Should anything go wrong, don't call tech support, and don't blame the company, or me, for that matter: I reserve the right to be wrong. Also, feel free to use this material in whichever way you want, however, nobody likes a plagiarist, so quote your source, or don't use it at all.

Prerequisites

Perl, Apache, HTTP, CGI, HTML, SQL and Informix are the buzzwords here. However, for several reasons, we will touch these areas at different depths: Perl and SQL wisdom, for instance, is far too massive to be packed in here, and is best pontificated upon by somebody else, while everything you ever wanted to know about installing, managing and tweaking Informix products is very well covered elsewhere in this section, elsewhere on this site, and of course on the Informix site; duplicating it is definitely beyond the scope of this howto.
In spite of the assumptions I've made about your knowledge in all these areas, you should find every applicable gotcha described here. If I missed any, please drop me a line.

Ingredients

Note that you should really consider using the latest versions of the DBI and DBD::Informix modules, which as of this writing are 1.02 and 0.60 respectively. Also note that the mod_ssl module, which you need if you plan to serve secure pages, requires Apache 1.3.x or later.

Installing Perl

Chances are that you will want to skip this step: all Linux distributions carry a fully functional Perl interpreter. However, should you have anything but the latest stable Perl version, you should consider removing it and proceeding with a fresh Perl installation all the same, since the latest DBI and DBD::Informix require a minimum Perl version, at present 5.004.

If, in spite of the free advice you are getting here, you decide to stick with something like Perl 5.003, be aware that anything newer than DBI 0.85 and DBD::Informix 0.56 will almost certainly fail some tests, if it compiles at all.

The entire installation process is detailed within the INSTALL file (which I encourage you to read), and, briefly, goes as follows:

  • if you haven't done so already, extract the source
  • if, on the other hand, you have installed before, do an

    rm config.sh
    make realclean
    to start again from scratch

  • next, run the configuration script

    sh Configure

    and answer all the questions.
    Be aware that Configure accepts a plethora of useful options (Configure -h lists them all). You might be interested in -des, which forces it to accept all the defaults and in general take care of everything without bothering you too much, and in -Dprefix=/your/preferred/installation/location, which lets you specify the location in which to ultimately install perl (the default is /usr/local)

    If you'd rather interact with a GNU style ./configure instead, just go for it!

  • now it is time to

    make
    make test
    make install

  • finally, and optionally, remove the sources

Installing Apache

Chances are that you want to skip this step: all distributions carry a fully functional Apache server. On the other hand, if you are going to install mod_perl (which is a good idea — see the section on installing mod_perl) you must have the Apache source available to you, and you must know where it is located.

Here's a brief summary of how to compile and install Apache:

  • Extract the source into some directory (eg /work/apache).
  • Change into the version-specific sub-directory (eg cd /work/apache/apache_1.3.1).
  • Configure Apache (assuming Apache 1.3.0 or later, this is simply ./configure). If you want to specify where Apache will be installed, run:

    ./configure --prefix=/usr/apache

  • If you are going to install mod_perl too, skip to the section on installing mod_perl.
  • Build Apache (eg make).
  • Install Apache (eg make install)
  • Configure Apache; see the section on configuring Apache.
  • Clean up the sources.

Optional: Installing mod_perl

Installing mod_perl will, amongst other things, allow Apache to execute Perl CGI scripts within the httpd daemon itself. It also maintains a cache of CGI scripts, making server responses faster. If you really are into performance, mod_perl will allow you to have persistent database connections, which, put simply, means that past the first execution, CGI scripts do not have to open new connections to the database, and can execute SQL statements straight away. (Persistent connections can be pretty tricky, so never use them without the supervision of an adult!)

Different releases require different minimal versions of Perl and Apache. As an example, the mod_perl version 1.15 installation requires Apache 1.2.6 or later, or Apache 1.3.0 or later, and Perl 5.004 or later.

To install mod_perl, you must have the Apache source code on the machine. With mod_perl 1.15, you will need to supply the path if the installation process below does not find it automatically. Other versions are probably similar.

  • Configure but do not compile Apache.
  • Extract the mod_perl source into a directory (eg /work/apache/apache_1.3.1/mod_perl-1.15).
  • Change directory into the directory containing the mod_perl source.
  • Configure mod_perl:

    perl Makefile.PL

    This will attempt to locate the Apache source; if you have followed the suggestions above, it will ask:

    Configure mod_perl with ../src ? [y]

    Answer yes, and it will ask:
    Shall I build httpd in ../apache_x.xx/src for you? [y]

    Again, answer yes. The configuration will continue, and may suggest a variety of extra Perl modules (eg CGI 2.39 or later if you are using an older version, LWP::UserAgent, HTML::HeadParser, and libwww-perl). These are not mandatory, but will make life easier. You can obtain them from CPAN.

  • There are a large number of configurable options for mod_perl, most of which are turned off by default. It is probably simplest to leave the defaults alone until you know what the non-default settings will do for you.
  • Build Apache with mod_perl (eg make). This will do an astonishing amount of work, but it is completely automatic unless there is something seriously wrong with your configuration.
  • If you have the modules necessary for testing, test the result (eg make test).
  • Install the Apache you have built and tested (eg make install). Note that you should do this from the mod_perl directory, not the Apache directory.

Configuring Apache

Chances are that if you now decide to try your newly installed Apache server, it will respond to your browser's requests without a hitch (normally with some kind of 'It worked' page). However, some configuration tweaking is needed before your fab Perl scripts will be able to access an Informix engine: we need to tell Apache where CGI scripts are located, what security options should be used, and finally what environmental variables to pass to them.

To start with, depending on your Linux distribution, or on where you have installed Apache, the configuration files will be found in places like /usr/apache/etc, /var/lib/httpd/conf, /usr/local/httpd/conf or /etc/httpd/conf.
The files that might need modification are access.conf (directory permissions on a service, host, and user basis), httpd.conf (the main configuration file), and srm.conf (namespace settings). Even though each file addresses a certain area, in most cases it doesn't really matter in which one you put a particular directive.

But let's proceed in order:

CGI script locations

In your Apache directory, you should find a subdirectory named cgi-bin which contains a couple of sample CGI scripts. Note that it is not placed within the DocumentRoot tree: although it is entirely possible to mix documents and CGI scripts in the same directory, this is in general not a good idea. If nothing else, keeping documents and CGI scripts apart reduces the chance that Apache mistakenly displays a CGI script to the user rather than executing it (very possible in case of misconfiguration).

Note also that it is entirely possible to have more than one CGI or document tree configured, and that you are not forced to stay with the Apache default configuration. Creating new CGI locations is described next.

To make a directory outside the DocumentRoot look like one of its subdirectories, use the Alias directive, like this:

Alias /url-path/ /actual/directory/

Directory attributes

We will use the <Directory /actual/directory> ...</Directory> directive to specify what actions we want allowed on a particular directory.

<Files> and <Location> offer similar functionality, but are file type and URL oriented respectively. <Files> is peculiar in the sense that it can also be used from within the .htaccess file (or whichever other file name you specified with the AccessFileName directive) in each directory, allowing you to tailor file types at a much finer granularity.

Within those directives, a number of others can be specified to determine the behaviour of a particular directory, file type, or URL tree. Those of interest to us are Options and SetHandler. The first determines what kinds of actions are allowed; in particular, ExecCGI specifies that it is possible to execute CGI scripts. The second specifies what handler should be used (ie what action is to be performed) on the files there: cgi-script specifies that they should be treated as scripts.

Other interesting directives are allow, deny and order, which let you specify which hosts have access to a particular directory, and AllowOverride, which specifies whether and which options can be overridden on a directory by directory basis by settings in a file named .htaccess.

In short, the following is the bare minimum you should have in your configuration:
#assuming you installed apache in /opt/apache

Alias /cgi-bin/ /opt/apache/cgi-bin/

<Directory /opt/apache/cgi-bin>
Options ExecCGI
SetHandler cgi-script
allow from all
</Directory>

Finally,
ScriptAlias /url-path/ /actual/directory/

is in essence a quick shorthand for

Alias /url-path/ /actual/directory/

<Directory /actual/directory>
Options ExecCGI
SetHandler cgi-script
</Directory>

Environmental Variables

Use the

SetEnv variablename value

directive to set and pass environmental variables like INFORMIXDIR, INFORMIXSERVER, DBPATH, INFORMIXSQLHOSTS, or LD_LIBRARY_PATH (an extensive list of the environmental variables used by Informix products can be found in the Informix Guide to SQL: Reference).
As an alternative, all but LD_LIBRARY_PATH can be set from within the Perl script itself, via an

$ENV{'<environmental variable name>'}=<appropriate value>;

Note that only INFORMIXDIR and LD_LIBRARY_PATH (which should be set to INFORMIXDIR/lib:INFORMIXDIR/lib/esql, with INFORMIXDIR being the actual value of $INFORMIXDIR!) are mandatory. DBPATH should be used if the engine is SE and you don't want to put your databases in your cgi directory (which I wouldn't recommend). As an alternative to both DBPATH and INFORMIXSERVER, you can use the /path/to/your/database@server syntax within DBI->connect. INFORMIXSQLHOSTS needs to be used only if you are not using the standard sqlhosts file.
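For instance, assuming an Informix installation in /opt/informix and a server called ol_myserver (both values are examples only; substitute your own), the Apache configuration might contain:

SetEnv INFORMIXDIR /opt/informix
SetEnv LD_LIBRARY_PATH /opt/informix/lib:/opt/informix/lib/esql
SetEnv INFORMIXSERVER ol_myserver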

If using the mod_perl module...

things are a bit different in various respects: in particular, note that it is not possible to use the #!/usr/bin/perl (or wherever else your perl executable is located) line at the top of the file to identify Perl scripts, and that the ScriptAlias directive does not work. Instead, use

SetHandler perl-script
PerlHandler Apache::Registry
Options ExecCGI

Within the appropriate <Directory>, <Files>, or <Location> directives.

As far as environmental variables go, LD_LIBRARY_PATH needs to be defined when the Apache server is started, and you use PerlSetEnv rather than SetEnv to set environmental variables (see perldoc mod_perl.pod for a discussion of this). I would also refrain from playing with environmental variables from within Perl scripts, since this can inadvertently affect the behaviour of other Perl scripts.
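As a sketch (with example paths again), that means exporting LD_LIBRARY_PATH in the shell that starts the server, and setting the rest via PerlSetEnv in the configuration:

LD_LIBRARY_PATH=/opt/informix/lib:/opt/informix/lib/esql
export LD_LIBRARY_PATH
apachectl start

PerlSetEnv INFORMIXDIR /opt/informix
PerlSetEnv INFORMIXSERVER ol_myserver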

Installing ESQL/C & the engine

This is really beyond the scope of this document. You will find all the necessary info in the IIUG Linux Technical Corner.

Installing DBI

Oh, my! This is getting repetitive... Really, not much new stuff here. The only catches are that the DBI module needs Perl installed, and that you shouldn't try building DBI in the Perl source tree (but you knew all that, because you read the README file).
In essence:

  • Extract files into some directory (eg /work/DBI/DBI-1.02).
  • Change into that directory.
  • Configure DBI by running perl Makefile.PL.
  • Build DBI (eg make).
  • Test DBI (eg make test).
  • Install DBI (eg make install).
  • Optionally, remove the DBI source tree.

Installing DBD::Informix

Yes, we are getting there! This is the last installation on the schedule, I promise. There are, however, a number of catches, as told by Jonathan Leffler:

The DBD::Informix module needs Perl and DBI installed. It also requires a working Informix ESQL/C compilation system — it verifies that it can compile and execute an ESQL/C program before it writes the Makefile necessary to build DBD::Informix.

It also requires that you have access to at least one database, ideally one on which you have DBA privileges (you must have RESOURCE privileges). It can be under OnLine or SE; it can be on the local machine or on a remote machine; it can be logged, or unlogged, and if it is logged, it can also be a MODE ANSI database if desired. The test program verifies that you can access this database. The multi-connection tests will work with a second database if you set the environment correctly; read the README file.

At the risk of sounding boring, here's what you have to do:

  • Extract files into some directory (eg /work/DBD/DBD-Informix-0.60).
  • Change into that directory.
  • Set your Informix environment if you have not already done so. You will need to set INFORMIXSERVER, INFORMIXDIR, PATH, LD_LIBRARY_PATH. You might want to set DBPATH. Additionally, you can do:

    dbaccessdemo7 stores      # default database name
    export DBD_INFORMIX_DATABASE=stores       # optional (defaults to stores)
    export DBD_INFORMIX_DATABASE2=otherdb       # optional

    If you need to specify a user name and password to connect to the database you are using (which will not be the case if you created it with dbaccessdemo7 as above), then you also need to set:

    export DBD_INFORMIX_USERNAME=<username>
    export DBD_INFORMIX_PASSWORD=<password>

  • Configure DBD::Informix by running perl Makefile.PL.
  • Build DBD::Informix (eg make).
  • Test DBD::Informix (eg make test).
  • Install DBD::Informix (eg make install).
  • Optionally, remove the DBD::Informix source tree.
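Once everything is installed, a one-line smoke test (assuming your Informix environment is still set, and the stores database created by dbaccessdemo7 above exists) could be:

    perl -MDBI -e 'DBI->connect("dbi:Informix:stores")->disconnect and print "connect ok\n"'

If the environment is incomplete, the script dies with the engine's error message instead of printing 'connect ok'.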

A sample CGI script

A very basic script accessing the customer table of the stores database, with explanation, and info on where to put it, etc

#!/usr/bin/perl
#   @(#)/var/lib/httpd/cgi-bin/dbitest1.pl 0.0
#   dbitest1.pl - Demo CGI script for the Apache-DBD::Informix howto
#
#  Author: Marco Greco, <marco@informix.com>
#
#    Initial release: Oct 98
#   Current release: Oct 98
#
#   Absolutely no warranty -- use at your own risk 

use strict;

use DBI;

#
#   output http headers and head of html
#
print "Content-Type: text/html

<HTML>
<HEAD>
<TITLE>Apache-DBI demo #1</TITLE>
</HEAD>
<BODY>
<H1>Apache-DBI demo #1</H1>\n";

#
# connect and see how the engine reacts to this
# [have all the needed environmental variables in place...]
#
my $dbh=DBI->connect('dbi:Informix:stores', '', '');
if ($DBI::err)
{
     print "Engine replied: $DBI::errstr\n</BODY></HTML>";
     exit;
}

#
# prepare select statement and open cursor
#
my $sql="SELECT * FROM customer";
my $sth=$dbh->prepare($sql);
$sth->execute();

#
# bind columns to vars
#
my($customer_num, $fname, $lname, $company, $address1, $address2,
     $city, $state, $zipcode, $phone);
$sth->bind_columns(undef, \$customer_num, \$fname, \$lname, \$company,
     \$address1, \$address2, \$city, \$state, \$zipcode, \$phone);

#
# chop blanks from char columns
#
$sth->{ChopBlanks}=1;

#
# fetch records off the database, and print them
#
while ($sth->fetch())
{
#
# avoid blank address2 line
#
     if ($address2 ne '')
     {
     $address2="$address2<BR>";
     }
   print "<P>$customer_num:<BR> $lname, $fname<BR>
$company<BR> $address1<BR>
   $address2 $city<BR> $state $zipcode<BR> $phone</P>\n";
}
#
# free statement and disconnect from engine
#
$sth->finish();
$dbh->disconnect();
#
# finish html
#
print "</BODY></HTML>";

Using Forms

Discussion on form evaluation, REQUEST_METHOD (POST/GET), QUERY_STRING, extracting fields from QUERY_STRING, warning against eval

#!/usr/bin/perl
#  @(#)/var/lib/httpd/cgi-bin/dbitest2.pl 0.0
#  dbitest2.pl - Demo CGI script for the Apache-DBD::Informix howto
#
#  Author: Marco Greco, <marco@informix.com>
#
#  Initial release: Oct 98
#  Current release: Oct 98
#
#  Absolutely no warranty -- use at your own risk

use strict;
use DBI;

#
# output http headers and head of html
#
print "Content-Type: text/html

<HTML>
<HEAD>
<TITLE>Apache-DBI demo #2</TITLE>
</HEAD>
<BODY>
<H1>Apache-DBI demo #2</H1>\n"; 

#
# Peek at the standard input
#
my $search='';
if ($ENV{'REQUEST_METHOD'} eq 'POST')
{
     my $query;
     read(STDIN, $query, $ENV{'CONTENT_LENGTH'});  # the body is CONTENT_LENGTH bytes, not newline terminated
     my $r;
     my @var;

#
# user input is returned as a list of var=value, separated by '&'
#
     foreach $r (split "&", $query)
     {
    @var=split "=", $r;
    if ($var[0] eq 'search')
    {
#
# be warned, fields have spaces turned into '+', and might contain
# %hh encoded chars (not dealing with those here)
#
      ($search=$var[1])=~ tr/+/ /;
    }
     }
}

#
# Ha! we do have something to search
#
if ($search ne '')
{
#
# connect and see how the engine reacts to this
# [have all the needed environmental variables in place...]
#
    my $dbh=DBI->connect('dbi:Informix:stores', '', '');
    if ($DBI::err)
    {
   print "Engine replied: $DBI::errstr\n</BODY></HTML>";
   exit;
    }

#
# prepare select statement and open cursor
# (beware: splicing raw user input into SQL like this is risky;
# placeholders, mentioned later, are the safer technique)
#
     my $sql="SELECT * FROM customer where lname=\"$search\"";
     my $sth=$dbh->prepare($sql);
     $sth->execute();

#
# bind columns to vars
#
     my($customer_num, $fname, $lname, $company, $address1, $address2,
    $city, $state, $zipcode, $phone);
      $sth->bind_columns(undef, \$customer_num, \$fname, \$lname, \$company,
    \$address1, \$address2, \$city, \$state, \$zipcode, \$phone);

#
# chop blanks from char columns
#
     $sth->{ChopBlanks}=1;

#
# fetch records off the database, and print them
#
    my $rowcount=0;
    while ($sth->fetch())
    {
#
# avoid blank address2 line
#
   if ($address2 ne '')
   {
    $address2="$address2<BR>";
   }
     print "<P>$customer_num:<BR> $lname, $fname<BR>
$company<BR> $address1<BR>
   $address2 $city<BR> $state $zipcode<BR> $phone</P>\n";
    $rowcount++;
     }
#
# the row count just to say that we did search for something
#
    print "<P>$rowcount rows found for \'$search\'</P>\n";

#
# free statement and disconnect from engine
#
   $sth->finish();
   $dbh->disconnect();
}

#
# whether we searched for something or not, output a form
#
print <<EOF;
<FORM METHOD="POST" ACTION="dbitest2.pl">
surname <INPUT NAME="search">
<INPUT TYPE="submit" VALUE="Go!">
</FORM>
</BODY>
</HTML>
EOF
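The demo above deliberately skips %hh escapes; a fuller field decoder (a small sketch, not part of the original script) would turn '+' into spaces and decode the escapes too:

#
# decode one form field: '+' becomes a space, %hh becomes the byte 0xhh
#
sub url_decode
{
    my $s=shift;
    $s=~ tr/+/ /;
    $s=~ s/%([0-9A-Fa-f]{2})/chr(hex($1))/eg;
    return $s;
}

With this in place, ($search=$var[1])=~ tr/+/ / in the script becomes $search=url_decode($var[1]);, and names like O'Brien (sent by the browser as O%27Brien) survive the round trip.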

Discussion on how to prepare and execute a statement that modifies data
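Pending that discussion, the gist is that statements which modify data need no cursor: DBI's do method prepares and executes them in one call. A sketch only (assuming a $dbh connected as in the demos, and hypothetical $new_phone and $cust variables):

#
# update one customer's phone number; $rows is the number of rows affected
#
my $rows=$dbh->do('UPDATE customer SET phone = ? WHERE customer_num = ?',
    undef, $new_phone, $cust);

On an unlogged database each statement is final; on a logged one you can disable AutoCommit and commit or rollback explicitly.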
Discussion on other techniques: eg using placeholders
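As a teaser, here is the dbitest2.pl SELECT redone with a placeholder (a sketch only): the ? is filled in at execute time, the driver quotes the value itself, and user input such as O'Brien, or something far more malicious, can no longer break out of the statement:

my $sth=$dbh->prepare('SELECT * FROM customer WHERE lname = ?');
$sth->execute($search);

A statement prepared with placeholders can also be executed many times with different values, saving the engine the cost of re-parsing the SQL.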

Cookies, or how to keep state information

Discussion on the Set-Cookie: header, and how to retrieve cookies from $ENV{'HTTP_COOKIE'}
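In the meantime, a minimal sketch (the cookie name and path are arbitrary examples): the Set-Cookie: header goes out with the other HTTP headers, before the blank line, and comes back on later requests in $ENV{'HTTP_COOKIE'}:

#
# send a cookie along with the other headers...
#
print "Set-Cookie: session=$key; path=/cgi-bin/\n";
print "Content-Type: text/html\n\n<HTML>...";

#
# ...and on the next request, parse HTTP_COOKIE back into a hash
#
my %cookie=map { split /=/, $_, 2 } split /;\s*/, ($ENV{'HTTP_COOKIE'} || '');
my $session=$cookie{'session'};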

The CGI module

CGI modules fundamentals
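As a taste of it, the hand-rolled header printing and form parsing of dbitest2.pl shrink to a few calls (a sketch; CGI.pm handles GET and POST, '+' signs and %hh escapes for you):

use CGI;

my $q=CGI->new;
print $q->header('text/html');      # Content-Type plus the blank line
my $search=$q->param('search') || '';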

Validating users and Security issues

So you want to have control over who uses your fab CGI scripts, and better still over who sees (or, gulp!, does) what in your precious databases? Very well, here goes...

Before deciding whether accessing the database as the actual user (as opposed to www, the user under which apache is run) meets your needs, it might be worth going over the following points:

  • Think again: can't you grant minimal access to the database and selected tables to user www (or whichever other user the http server is running as; of course the http server is not running as root, is it?)? You can still make relatively sure that only selected users access the database by enforcing access restrictions at the http server level.
  • On the other hand, consider that http servers and clients in general offer pretty weak user validation, and do not report user accounts under attack.
  • Persistent connections, because of their very nature, cannot make use of engine side validation.
  • In any case, limit as much as you can the set of hosts that can run the CGI scripts accessing the database. This is not entirely secure, as it is possible to use fake host names or IPs, but it's a start.
  • Write the scripts so as to access only non-sensitive data. Data manipulation is out of the question.
  • Avoid security holes: take particular care in writing sh or Perl CGI scripts (one is described later), and don't spawn child processes from the CGI script. The discussion that follows assumes that you won't; if you do, at least don't pass them anything in any way derived from form field values, and limit the environmental variables exported to a bare minimum.

Httpd side user validation

Pointer to .htaccess, AuthUserFile, htpasswd

Note that DBI->connect ignores the username if a password is not supplied alongside it, so the only way to access a database as a particular user with server side validation is to use suexec to execute the Perl script under a different user id. Specifically, the name of the remote user is found in $ENV{'REMOTE_USER'}.

However, this is probably not the route you want to follow. For starters, clients cache user names and passwords relative to a certain document tree for the duration of the entire session, opening potential security holes. Next, http servers don't track, report or try to halt possible attacks.

Furthermore, the user id and password are usually sent in the clear, meaning that they are open to third parties peeking along the way. Even with Digest Authentication, malicious clients can force the server to use a low security challenge, making the whole thing pretty much useless.

On top of that, both clients and servers might be tricked into believing they are dealing with a legitimate host rather than the hostile one that is in fact dealing with them, making users hand names and passwords in the clear to malicious servers, which can later be used to break into the legitimate server.

In this respect, it cannot be stressed enough that every means available should be used to restrict access to a minimal set of clients, and to make sure that both clients and servers are in fact who they say they are.

On the bright side, HTTP servers generally use their own password files, the rationale being an attempt not to lower security in other areas of the system. One of the points of separate password files is to have users employ passwords differing from the ones used for normal login, so that, even if the HTTP passwords have been cracked (remember, http servers do not normally report logins under attack, nor do they try to block them), the attacker is still prevented from logging in and wreaking havoc in users' areas, or God knows what else. That said, whether you'll be able to force your users to use more than one password is a completely different story...

Engine side validation

Embedding the user validation part in CGI scripts, on the other hand, allows better tailoring of client challenging (user validation can be requested at each new page, or, using cookies, after some interval has expired since the last login, or at every nth page request, etc) and the introduction of account attack countermeasures (on Linux, it is relatively easy to arrange for an update of the appropriate entry in /var/adm/faillog (5) whenever a user validation fails, and even easier to check whether a preset number of failures has been reached). Used with secure servers, it also greatly diminishes the risk of passwords being illegally peeked at or downright handed to malicious hosts. But it opens an entire new can of security risks.

Perl or shell scripts that check the user's identity are of necessity much more complex than simple "set the environment" wrappers, and therefore pose substantial security risks. Extreme care should be exercised when writing such scripts, in particular when using constructs like sh's or Perl's eval, or spawning children to parse field contents. The net is full of examples of how to exploit innocent-looking fields to execute malicious commands on the server.

On the dark side, engine side validation can only employ systemwide password files, which means that in addition to all the nasty things that can be done to your machine, this approach opens yet another set of security risks: if not properly hidden, user-validating CGI scripts can be used by malicious users to try and guess other users' passwords.

The second risky area pertains to using cookies to allow users to access subsequent pages without revalidation for a predetermined amount of time, or until a logout. There are two approaches:

  1. having successfully validated a user, create a cookie that contains the user id and the password, both crippled. The CGI script checks for the cookie's presence and, if found, attempts a connection to the database using them. If not, or if the connection fails, a form prompting for user id and password is returned. Risk: a malicious user could lift the validation cookie from an unattended client or an inexpert user (Netscape, for instance, can warn the user before accepting cookies, showing the cookie values...), and extract the password from it by brute force.
  2. at each successful validation, create a cookie containing a random session key, and store it, coupled with the user id just validated, in a database accessible only by the www user. Allow access only to those cookies that are found in the database and are not older than a preset interval. Have a daemon remove expired session keys in a timely fashion. So much for the statelessness of HTTP...

The entire issue boils down to how confident you are that your server is screened from the unauthorized world, how good your users are at cracking mildly crippled strings, and how happy you are with cookies residing on clients for a relatively short time interval. My personal rating: 1. is adequate for most purposes, 2. is for the paranoid. YMMV.

This discussion assumes that cookies expire in a relatively short time span (say 10 minutes), so as not to leave the back door open after having armoured the front one. A still better precaution would be to make the cookie uncrippling fail after the same time span, which means that a cookie value becomes unusable after the cookie expires. But be warned: this means making the crippling seed system clock dependent, and it isn't all that hard to guess the server's system clock, even from a remote client (try finger...), so don't count on this making the crippling process much harder to crack for the bad guys.

Sample validating code or pointer to tool on my web site, here.

Miscellanea

iBCS related issues: you cannot mix and match Perl for Linux and esql/c for SCO; to use esql/c for SCO, a SCO Perl binary needs to be used. You can use a native Linux DBD::Informix against a SCO engine, though (untested)

Shared memory connections: there is no support for simultaneous connections (a shm protocol limitation). If necessary, use tcp connections instead.

CGI scripting & mod_perl: remember to free resources at the end of the script, as the perl interpreter is part of the Apache server and is not run as a separate child at each request. In particular, remember to disconnect from the engine! On a related note, mod_perl requires stream pipe or tcp connections: shm cannot be used, due to the limitation outlined earlier

Limitations: because LD_LIBRARY_PATH has to be set from within the Apache server, and embedded tools have to reside in the same directory as a locally installed engine, there is no way you can access servers of different versions on the local machine. If you wish to use IDS/UDO and IDS at the same time, for instance, one of them needs to be installed on a remote machine.

Further reading
