
IDS Forum

Re: Insert into an indexed table is really SLOW!

Posted By: Art Kagel
Date: Tuesday, 7 July 2009, at 7:09 p.m.

In Response To: Re: Insert into an indexed table is really SLOW! (Richard Kofler)

You are right. That's twice my nice MS Calculator got the wrong answer to
the same problem. It probably wasn't 71,000 pages but 710,000 pages... checking
with a REAL calculator... yes, 712,872 pages. <sigh>
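For anyone rechecking the arithmetic, the 712,872 figure falls out of dividing the raw data volume by the usable bytes per 2 KB page. A minimal sketch; the 28-byte page header below is an assumption about the per-page overhead, not something stated in the thread:

```python
import math

rows = 6_000_000
row_size = 240            # bytes per row, per the OP
page_size = 2 * 1024      # 2 KB pages
page_header = 28          # assumed per-page overhead

usable = page_size - page_header
pages = math.ceil(rows * row_size / usable)
print(pages)              # 712872
```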

Creating the table with a large extent size or adjusting the next size would
certainly help. The OP doesn't have the memory for 500,000 buffers; he
has about 50MB of free memory, so that's roughly 25,000 buffers. Lots
of CLEANERS wouldn't help, and here's why:

- Only a single chunk, so chunk writes will only use one cleaner anyway.

- If the OP increases the number of LRUs it might help LRU writes, but
with only one session writing there's no contention for LRU queues, so even
though the OP has AUTO_LRU_TUNING enabled the engine will not increase the
number of LRUs beyond the 8 that are configured, and so 8 CLEANERS are
sufficient.
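As a rough sketch of the buffer math above, using the 51,644 KB free shown later in the OP's "top" output:

```python
free_kb = 51_644          # free memory from the OP's "top" output
page_kb = 2               # 2 KB pages
buffers = free_kb // page_kb
print(buffers)            # 25822 -- i.e. "25,000 buffers roughly"

pages_to_load = 712_872   # total data pages the load will write
print(f"{buffers / pages_to_load:.1%}")  # 3.6% of the load fits in cache
```

So even with every free byte given to the bufferpool, only a few percent of the pages being written could be cached at once, which is why extra cleaners buy nothing here.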

Art

Art S. Kagel
Oninit (www.oninit.com)
IIUG Board of Directors (art@iiug.org)

Disclaimer: Please keep in mind that my own opinions are my own opinions and
do not reflect on my employer, Oninit, the IIUG, nor any other organization
with which I am associated either explicitly or implicitly. Neither do
those opinions reflect those of other individuals affiliated with any entity
with which I am affiliated nor those of the entities themselves.

On Tue, Jul 7, 2009 at 6:44 PM, Richard Kofler <richard.kofler@chello.at> wrote:

> Hi Art,
>
> Sorry, I must put this straight.
> 6 million recs at 10 per page (which the OP doesn't have, it is only 8 per
> page) - if pages are 2 KB - is 600,000 pages, and he needs:
> 1) an extent size more like 1,000,000 than 16, and a NEXT SIZE like 500,000
> 2) a bufferpool in the 500,000 range, plus LRUs ~ 50, plus CLEANERS
>
> and so on and so forth
>
> This table will be > 1 GB, and the ONCONFIG setup is a lab setup used
> to demo the stores demo DB.
>
> Now back to the OP:
> You should either disable the index and re-enable it after the load, or if
> that is not possible, load into a staging table and use fragmentation
> for this table.
> I think you have a reason to start the VIO table. Do you want a
> filtering load to clean the data?
> When you are done with staging, set PDQPRIORITY and merge the rows into
> your table.
>
> In either case, you cannot expect rotational media to be fast enough
> when you use next to zilch buffers. You have already noticed that you
> are more or less waiting for I/O to finish (I/O bound, WIO in sar)
> all the time.
> Disks are slow (~12 ms / write), and Solid State Disks, which would make
> this faster, will cost the price of your machine times 1,500
> (www.tms.com). In your case SSDs are *not* the way to go.
>
> What you also must do is reconfigure the BT Cleaner so that it kicks
> in only after the load is finished. Right now your threshold is at 5,000
> -> make this 100,000 and watch if it kicks in. If so, double to
> 200,000, and so on.
>
> dic_k
>
> Art Kagel wrote:
> > That calculation of the number of pages being added to the table was
> > bothering me, and with good reason. The 6 million rows of 240 bytes each
> > don't take up 3000 pages, they take up a bit over 71,000 pages! All the
> > more reason to max out the buffer cache!
> >
> > Art
> >
> > Art S. Kagel
> > Oninit (www.oninit.com)
> > IIUG Board of Directors (art@iiug.org)
> >
> >
> > On Tue, Jul 7, 2009 at 1:42 PM, Art Kagel <art.kagel@gmail.com> wrote:
> >
> >> I think that you have two problems:
> >>
> >> 1. Asking the engine to do the index IO at the same time as the table
> >> data IO, all to different sections of the same single spindle, is just
> >> impractical.
> >> 2. You only have 1000 buffers configured, so the engine is constantly
> >> having to write those pages to disk and wait for the IOs to complete.
> >>
> >> Suggestions? Obviously I think you need additional spindles, but that
> >> aside, add more buffers, many more buffers. You have 6 million rows to
> >> load at 240 bytes each, so the data will require almost 3000 pages, plus
> >> the indexes, so if you make the cache 4000 pages you may get the entire
> >> load into memory at the same time. Honestly though, even 4000 cache
> >> pages is small; that's not much of a live working set. The machine has
> >> 2GB of memory with over 50MB free. IDS is currently using only 38MB of
> >> memory. If you increase the buffer cache from 1000 pages to 10000 pages
> >> you will add only 20MB to the server's memory footprint and all of your
> >> operations will be faster and smoother.
> >>
> >> Another thing: are you using COOKED (filesystem) chunks or RAW? If
> >> filesystem chunks, then you should enable DIRECT_IO (which is currently
> >> disabled in the ONCONFIG file) so that the engine can use KAIO threads
> >> for IO instead of AIO VPs. KAIO is faster and more efficient in the
> >> system resources it uses. Also, if filesystem chunks, what kind of
> >> filesystem? EXT4 is VERY BAD for database chunks, and EXT3 isn't much
> >> better. If you want to maximize your IO performance you should put the
> >> chunks in RAW space, or use a COOKED (block) device rather than a
> >> filesystem, or at least put them in a non-journaled filesystem like
> >> EXT2.
> >>
> >> FYI: Linus Torvalds says that EXT4 (and EXT3 with write-back enabled) is
> >> not safe, database performance issues aside!
> >>
> >> Art
> >>
> >> Art S. Kagel
> >> Oninit (www.oninit.com)
> >> IIUG Board of Directors (art@iiug.org)
> >>
> >>
> >>
> >>
> >> On Tue, Jul 7, 2009 at 1:06 PM, MIKE DUNHAM-WILKIE <mike@barrodale.com> wrote:
> >>> I am running into performance problems inserting into an indexed table
> >>> using Informix 11.50. The following timings for loading into
> >>> compressed/uncompressed, indexed/non-indexed tables using dbload and
> >>> HPL (Deluxe) illustrate the problem. The number of rows inserted is
> >>> 6 million.
> >>>
> >>> compressed/indexed/DBLoad or HPL/h:mm:ss.d
> >>> Y/Y/DBLoad/3:47:39.9
> >>> Y/N/DBLoad/0:10:58.1
> >>> N/Y/DBLoad/3:31:13.4
> >>> N/N/DBLoad/0:22:46.6
> >>> Y/Y/HPL/3:26:17.6
> >>> Y/N/HPL/0:08:31.7
> >>> N/Y/HPL/3:28:16.1
> >>> N/N/HPL/0:18:45.7
> >>>
> >>> Why are the inserts into the indexed tables so much slower? I would
> >>> expect some performance hit of course, but not this much. I want the
> >>> indexes left intact during the load, hence my use of HPL in Deluxe mode
> >>> rather than Express mode.
> >>>
> >>> I'm using pretty much "out of the box" configuration parameters:
> >>>
> >>>
> >>> ------------------------------------------------------------------------
> >>> onstat -c:
> >>>
> >>> IBM Informix Dynamic Server Version 11.50.UC4 -- On-Line -- Up 4 days
> >>> 17:59:49 -- 38212 Kbytes
> >>> Configuration File:
> >>> /opt/informix/informix.11.50_UC4/etc/onconfig.bcslinuxprod
> >>> ROOTNAME rootdbs
> >>> ROOTOFFSET 0
> >>> MIRROR 0
> >>> MIRRORPATH /opt/IBM/informix/tmp/demo_on.root_mirror
> >>> MIRROROFFSET 0
> >>> PHYSFILE 200000
> >>> PLOG_OVERFLOW_PATH /opt/IBM/informix/tmp
> >>> PHYSBUFF 128
> >>> LOGFILES 49
> >>> LOGSIZE 10000
> >>> DYNAMIC_LOGS 2
> >>> LOGBUFF 64
> >>> LTXHWM 70
> >>> LTXEHWM 80
> >>> CONSOLE /work1/stat/bcslinuxprod_console.log
> >>> TBLTBLFIRST 0
> >>> TBLTBLNEXT 0
> >>> TBLSPACE_STATS 1
> >>> DBSPACETEMP temp1db
> >>> SBSPACETEMP temp1sb
> >>> SBSPACENAME temp1sb
> >>> SYSSBSPACENAME temp1sbifmx
> >>> ONDBSPACEDOWN 2
> >>> NETTYPE ipcshm,1,50,CPU
> >>> LISTEN_TIMEOUT 60
> >>> MAX_INCOMPLETE_CONNECTIONS 1024
> >>> FASTPOLL 1
> >>> MULTIPROCESSOR 0
> >>> VP_MEMORY_CACHE_KB 0
> >>> SINGLE_CPU_VP 0
> >>> CLEANERS 8
> >>> AUTO_AIOVPS 1
> >>> DIRECT_IO 0
> >>> LOCKS 20000
> >>> DEF_TABLE_LOCKMODE page
> >>> RESIDENT 0
> >>> SHMBASE 0x44000000L
> >>> SHMVIRTSIZE 32656
> >>> SHMADD 8192
> >>> EXTSHMADD 8192
> >>> SHMTOTAL 0
> >>> SHMVIRT_ALLOCSEG 0,3
> >>> SHMNOACCESS
> >>> CKPTINTVL 300
> >>> AUTO_CKPTS 1
> >>> RTO_SERVER_RESTART 0
> >>> BLOCKTIMEOUT 3600
> >>> TXTIMEOUT 300
> >>> DEADLOCK_TIMEOUT 60
> >>> HETERO_COMMIT 0
> >>> TAPEBLK 32
> >>> TAPESIZE 0
> >>> LTAPEBLK 32
> >>> LTAPESIZE 0
> >>> BAR_DEBUG 0
> >>> BAR_MAX_BACKUP 0
> >>> BAR_RETRY 1
> >>> BAR_NB_XPORT_COUNT 20
> >>> BAR_XFER_BUF_SIZE 31
> >>> RESTARTABLE_RESTORE ON
> >>> BAR_PROGRESS_FREQ 0
> >>> BAR_BSALIB_PATH
> >>> BACKUP_FILTER
> >>> RESTORE_FILTER
> >>> BAR_PERFORMANCE 0
> >>> ISM_DATA_POOL ISMData
> >>> ISM_LOG_POOL ISMLogs
> >>> DD_HASHSIZE 31
> >>> DD_HASHMAX 10
> >>> DS_HASHSIZE 31
> >>> DS_POOLSIZE 127
> >>> PC_HASHSIZE 31
> >>> PC_POOLSIZE 127
> >>> STMT_CACHE 0
> >>> STMT_CACHE_HITS 0
> >>> STMT_CACHE_SIZE 512
> >>> STMT_CACHE_NOLIMIT 0
> >>> STMT_CACHE_NUMPOOL 1
> >>> USEOSTIME 0
> >>> STACKSIZE 32
> >>> ALLOW_NEWLINE 0
> >>> USELASTCOMMITTED NONE
> >>> FILLFACTOR 90
> >>> MAX_FILL_DATA_PAGES 0
> >>> ONLIDX_MAXMEM 5120
> >>> MAX_PDQPRIORITY 100
> >>> DS_MAX_QUERIES
> >>> DS_TOTAL_MEMORY
> >>> DS_MAX_SCANS 1048576
> >>> DS_NONPDQ_QUERY_MEM 128
> >>> DATASKIP
> >>> OPTCOMPIND 2
> >>> DIRECTIVES 1
> >>> EXT_DIRECTIVES 0
> >>> OPT_GOAL -1
> >>> IFX_FOLDVIEW 0
> >>> AUTO_REPREPARE 1
> >>> RA_PAGES 64
> >>> RA_THRESHOLD 16
> >>> EXPLAIN_STAT 0
> >>> IFX_EXTEND_ROLE 0
> >>> SECURITY_LOCALCONNECTION
> >>> UNSECURE_ONSTAT
> >>> ADMIN_USER_MODE_WITH_DBSA
> >>> ADMIN_MODE_USERS
> >>> SSL_KEYSTORE_LABEL
> >>> PLCY_POOLSIZE 127
> >>> PLCY_HASHSIZE 31
> >>> USRC_POOLSIZE 127
> >>> USRC_HASHSIZE 31
> >>> STAGEBLOB
> >>> OPCACHEMAX 0
> >>> ENCRYPT_HDR
> >>> ENCRYPT_SMX
> >>> ENCRYPT_CDR 0
> >>> ENCRYPT_CIPHERS
> >>> ENCRYPT_MAC
> >>> ENCRYPT_MACFILE
> >>> ENCRYPT_SWITCH
> >>> CDR_EVALTHREADS 1,2
> >>> CDR_DSLOCKWAIT 5
> >>> CDR_QUEUEMEM 4096
> >>> CDR_NIFCOMPRESS 0
> >>> CDR_SERIAL 0
> >>> CDR_DBSPACE
> >>> CDR_QHDR_DBSPACE
> >>> CDR_QDATA_SBSPACE
> >>> CDR_MAX_DYNAMIC_LOGS 0
> >>> CDR_SUPPRESS_ATSRISWARN
> >>> DRAUTO 0
> >>> DRINTERVAL 30
> >>> DRTIMEOUT 30
> >>> HA_ALIAS
> >>> DRIDXAUTO 0
> >>> LOG_INDEX_BUILDS
> >>> SDS_ENABLE
> >>> SDS_TIMEOUT 20
> >>> SDS_TEMPDBS
> >>> SDS_PAGING
> >>> REDIRECTED_WRITES 0
> >>> FAILOVER_CALLBACK
> >>> TEMPTAB_NOLOG 0
> >>> ON_RECVRY_THREADS 1
> >>> OFF_RECVRY_THREADS 10
> >>> DUMPSHMEM 1
> >>> DUMPGCORE 0
> >>> DUMPCORE 0
> >>> DUMPCNT 1
> >>> ALRM_ALL_EVENTS 0
> >>> STORAGE_FULL_ALARM 600,3
> >>> RAS_PLOG_SPEED 14900
> >>> RAS_LLOG_SPEED 742
> >>> EILSEQ_COMPAT_MODE 0
> >>> QSTATS 0
> >>> WSTATS 0
> >>> JVPJAVALIB /bin
> >>> JVPJAVAVM jvm
> >>> AUTO_LRU_TUNING 1
> >>> ROOTPATH /opt/IBM/informix/demo/server/online_root
> >>> MSGPATH /work1/stat/bcslinuxprod_online.log
> >>> TAPEDEV /dev/null
> >>> LTAPEDEV /dev/null
> >>> DBSERVERNAME bcslinuxprod
> >>> DBSERVERALIASES
> >>> SERVERNUM 0
> >>> ALARMPROGRAM /opt/IBM/informix/etc/alarmprogram.sh
> >>> DRLOSTFOUND /opt/IBM/informix/etc/dr.lostfound
> >>> BAR_ACT_LOG /work1/stat/bar_act.log
> >>> BAR_DEBUG_LOG /work1/stat/bar_debug.log
> >>> SYSALARMPROGRAM /opt/IBM/informix/etc/evidence.sh
> >>> DUMPDIR /work1/stat
> >>> JVPJAVAHOME /opt/IBM/informix/extend/krakatoa/jre
> >>> JVPHOME /opt/IBM/informix/extend/krakatoa/
> >>> JVPPROPFILE /opt/IBM/informix/extend/krakatoa/.jvpprops
> >>> JVPLOGFILE /opt/IBM/informix/demo/server/jvp.log
> >>> JVPCLASSPATH
> >>> /opt/IBM/informix/extend/krakatoa/krakatoa.jar:/opt/IBM/informix/extend/krakatoa/jdbc.jar
> >>> ROOTSIZE 200000
> >>> BUFFERPOOL
> >>> size=2K,buffers=1000,lrus=8,lru_min_dirty=50.000000,lru_max_dirty=60.000000
> >>> VPCLASS cpu,num=1,noage
> >>> BTSCANNER num=1,threshold=5000,rangesize=-1,alice=6
> >>> UPDATABLE_SECONDARY 0
> >>> --------------------------------------------------------------------
> >>>
> >>> Here is my table definition:
> >>>
> >>> --------------------------------------------------------------------
> >>> create table "mdwilkie".noaa_sst_conventional_hpl_comp_24000000
> >>> (
> >>>   type_of_observation integer,
> >>>   source_of_observation integer,
> >>>   dt datetime year to second,
> >>>   latitude float,
> >>>   longitude float,
> >>>   sst float,
> >>>   reliability integer,
> >>>   solar_zenith_angle float,
> >>>   satellite_zenith_angle float,
> >>>   analyzed_field_sst float,
> >>>   internal_error float,
> >>>   solar_azimuth_angle float,
> >>>   climatological_sst float,
> >>>   beginning_row integer,
> >>>   beginning_column integer,
> >>>   avhrr_ch_1_average float,
> >>>   avhrr_ch_2_average float,
> >>>   avhrr_ch_3_average float,
> >>>   avhrr_ch_4_average float,
> >>>   avhrr_ch_5_average float,
> >>>   space_view_ch_1 float,
> >>>   space_view_ch_2 float,
> >>>   space_view_ch_3 float,
> >>>   ch_4_blackbody_temperature float,
> >>>   ch_5_blackbody_temperature float,
> >>>   year_of_observation integer
> >>> ) extent size 16 next size 16 lock mode page;
> >>>
> >>> start violations table for
> >>> "mdwilkie".noaa_sst_conventional_hpl_comp_24000000
> >>> using noaa_sst_conventional_hpl_comp_24000000_vio,
> >>> noaa_sst_conventional_hpl_comp_24000000_dia;
> >>>
> >>> create index "mdwilkie".noaa_sst_conventional_hpl_comp_24000000_dt_idx
> >>> on "mdwilkie".noaa_sst_conventional_hpl_comp_24000000 (dt)
> >>> using btree in miscdbind;
> >>>
> >>> create index "mdwilkie".noaa_sst_conventional_hpl_comp_24000000_latitude_idx
> >>> on "mdwilkie".noaa_sst_conventional_hpl_comp_24000000 (latitude)
> >>> using btree in miscdbind;
> >>>
> >>> create index "mdwilkie".noaa_sst_conventional_hpl_comp_24000000_longitude_idx
> >>> on "mdwilkie".noaa_sst_conventional_hpl_comp_24000000 (longitude)
> >>> using btree in miscdbind;
> >>>
> >>> --------------------------------------------------------------------------------
> >>> Note that my default dbspace (miscdbtab) is different from my index
> >>> dbspace (miscdbind), but both are stored on the same disk (the only
> >>> disk on my linux box). Also I'm using just 1 CPU.
> >>>
> >>> Here are the last few lines from the log file:
> >>>
> >>> -----------------------------------------------------------------------------
> >>> 09:41:42 Maximum server connections 4
> >>> 09:41:42 Checkpoint Statistics - Avg. Txn Block Time 0.000, # Txns
> >>> blocked 1, Plog used 26253, Llog used 7
> >>>
> >>> 09:42:56 Checkpoint Completed: duration was 2 seconds.
> >>> 09:42:56 Tue Jul 7 - loguniq 1610, logpos 0x26466a4, timestamp:
> >>> 0x7c24d7c1 Interval: 12113
> >>>
> >>> 09:42:56 Maximum server connections 5
> >>> 09:42:56 Checkpoint Statistics - Avg. Txn Block Time 0.000, # Txns
> >>> blocked 2, Plog used 26253, Llog used 7
> >>>
> >>> 09:44:09 Checkpoint Completed: duration was 3 seconds.
> >>> 09:44:09 Tue Jul 7 - loguniq 1610, logpos 0x264e4b0, timestamp:
> >>> 0x7c2697f5 Interval: 12114
> >>>
> >>> 09:44:09 Maximum server connections 5
> >>> 09:44:09 Checkpoint Statistics - Avg. Txn Block Time 0.000, # Txns
> >>> blocked 1, Plog used 26253, Llog used 8
> >>>
> >>> 09:45:34 Checkpoint Completed: duration was 0 seconds.
> >>> 09:45:34 Tue Jul 7 - loguniq 1610, logpos 0x2655560, timestamp:
> >>> 0x7c285148 Interval: 12115
> >>>
> >>> 09:45:34 Maximum server connections 5
> >>> 09:45:34 Checkpoint Statistics - Avg. Txn Block Time 0.000, # Txns
> >>> blocked 1, Plog used 26253, Llog used 7
> >>>
> >>> -----------------------------------------------------------------------
> >>> My linux box has the following description:
> >>>
> >>> CPU: Intel Pentium D, 3.0 GHz, 2MB Cache, 32 bit
> >>> RAM: 2 GB
> >>> OS: Fedora Core 7
> >>>
> >>> Here are the first few lines of output from "top":
> >>>
> >>> -----------------------------------------------------------------------
> >>> top - 09:50:39 up 11 days, 23:10, 12 users, load average: 21.83,
> >>> 21.41, 21.35
> >>> Tasks: 230 total, 1 running, 229 sleeping, 0 stopped, 0 zombie
> >>> Cpu(s): 2.8%us, 2.3%sy, 0.0%ni, 40.5%id, 54.2%wa, 0.0%hi, 0.2%si, 0.0%st
> >>> Mem: 2066880k total, 2015236k used, 51644k free, 14972k buffers
> >>> Swap: 2031608k total, 196k used, 2031412k free, 1559480k cached
> >>>
> >>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> >>> 3982 informix 5 -10 59888 45m 43m S 5 2.2 39:00.35 oninit
> >>> 3996 root 6 -10 59868 6464 5124 D 1 0.3 1:21.52 oninit
> >>> 9939 informix 15 0 2340 1088 788 R 1 0.1 0:00.06 top
> >>> 3988 root 6 -10 59876 7440 6096 D 0 0.4 1:49.16 oninit
> >>> 3991 root 6 -10 59876 6916 5572 D 0 0.3 1:38.22 oninit
> >>> 3992 root 6 -10 59876 6680 5336 D 0 0.3 1:33.69 oninit
> >>> 3994 root 5 -10 59868 6508 5168 D 0 0.3 1:26.88 oninit
> >>> 3995 root 5 -10 59876 6772 5428 D 0 0.3 1:24.17 oninit
> >>> 3997 root 6 -10 59868 6276 4936 D 0 0.3 1:18.85 oninit
> >>> 4001 root 5 -10 59868 6312 4972 D 0 0.3 1:10.82 oninit
> >>> 4003 root 5 -10 59876 6388 5044 D 0 0.3 1:08.10 oninit
> >>> 4004 root 6 -10 59868 6048 4708 D 0 0.3 1:06.35 oninit
> >>> 4006 root 6 -10 59876 6356 5012 D 0 0.3 1:03.41 oninit
> >>> 4007 root 6 -10 59868 6320 4980 D 0 0.3 1:01.70 oninit
> >>>
> >>> ------------------------------------------------------------------------
> >>>
> >>> The machine appears to be heavily I/O bound.
> >>>
> >>> I don't have the option of splitting table and index onto different
> >>> disks, disabling the index during the load, or adding more CPUs (I'm
> >>> not sure if this would make a difference anyway). Do I have any other
> >>> options? Are there some specific configuration parameters that I
> >>> should try adjusting?
> >>>
> >>> Many thanks,
> >>>
> >>> Mike
> >>>
> >>>
> >>> *******************************************************************************
> >>> Forum Note: Use "Reply" to post a response in the discussion forum.
>
> --
> Richard Kofler
> SOLID STATE EDV
> Dienstleistungen GmbH
> Vienna/Austria/Europe
>
>
>
> *******************************************************************************
> Forum Note: Use "Reply" to post a response in the discussion forum.
>
>



IDS Forum is maintained by Administrator with WebBBS 5.12.