Marcus: apparently we did our job, but probably not as fast as the
situation required.
Please request an update on the PMR. I believe the conditions are met to
request a fix. The bug was identified and, to the best of my knowledge,
was just fixed. If that's the case, a fix could be created relatively
quickly if you explain the urgency.
And for the record, your case was handled by an excellent resource from
tech support. I'd say too much time was spent between the PMR opening and
the bug creation, probably due to the slowness of the reproduction case
(which was sped up by the engineer in charge). After that, the bug fix
should ideally be faster, but I've seen worse cases.
Bear in mind you should not consider this an official IBM response. Go
through the proper channels, please. I can't and won't interfere with
normal tech support proceedings.
Hopefully with a fix you can go back to the initial situation, knowing
that the problem was diagnosed and fixed. It should also be included in
the next fixpacks of 11.70 and 12.10.
Regards.
On Thu, Aug 28, 2014 at 4:12 PM, Fernando Nunes <domusonline@gmail.com>
wrote:
> Two notes:
>
> 1- With the PUT clause for smart blob spaces you can include several
> sbspaces and the engine will allocate the SLOBs in a round-robin fashion
> (a sketch follows after these notes). It's transparent to the application.
> 2- I can look up the information, but I'd prefer if you could send me the
> PMR. "No answer" is not an option. I'm in a difficult position here because
> I don't want to look like I don't believe you, but the alternative is to
> think IBM technical support didn't do what you pay them (us) to do. So
> assuming you provided all the information and even, apparently, a test
> case, "no answer" would be a reason to complain.
>
> The way you describe it makes us think that you're in deep trouble, going
> through a lot of work and problems just because we didn't do our job. If
> that's the case, that's not acceptable.
> To be fair, many times there are misunderstandings in the PMRs. And
> personally I have very good things to say about technical support (and
> before you say "yeah, but you're an IBMer", let me just be clear that I've
> had my share of disagreements with technical support guys, and some of them
> monitor this list and can confirm...). But in the end, problems must be
> solved as much as possible, and this doesn't seem an impossible thing to
> solve (the impossible ones are the one-time issues where no evidence is
> collected, or where what is collected isn't enough to generate a test
> case).
>
> With all this in mind, I'm not your best bet for an answer... BLOBs, SLOBs
> etc. are not my specialty (if I can say I have one...). I have a customer
> with more than 6TB of smart blobs in their instance... not hitting any
> limits... It just scares me that it keeps growing... forever... so
> everything becomes a challenge... backups, HDR setup, upgrades etc. They
> use the PUT with 6 or more sbspaces... most of the smart blobs are stored
> in a single table.
>
> Regards.
>
> On Thu, Aug 28, 2014 at 3:03 PM, Art Kagel <art.kagel@gmail.com> wrote:
>
> > Marcus: All of the BLOB/CLOB records in a table do not have to go to the
> > same sbspace! You could create multiple sbspaces and have your
> > applications place the individual documents into multiple sbspaces using
> > some scheme of your own, just like fragmentation (see the sketch below).
> > That will eliminate the problem you ran into originally!
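> >
> > One hypothetical reading of that scheme (all names invented): one table
> > per sbspace, with the application picking a table by hashing the
> > document id, much like manual fragmentation.
> >
> >     -- the application stores document N in docs_(N mod 3)
> >     CREATE TABLE docs_0 (id INT, doc BLOB) PUT doc IN (sbspace_a);
> >     CREATE TABLE docs_1 (id INT, doc BLOB) PUT doc IN (sbspace_b);
> >     CREATE TABLE docs_2 (id INT, doc BLOB) PUT doc IN (sbspace_c);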
> >
> > Art
> >
> > Art S. Kagel, Principal Consultant
> > ASK Database Management
> >
> > Blog: http://informix-myview.blogspot.com/
> >
> > Disclaimer: Please keep in mind that my own opinions are my own opinions
> > and do not reflect on the IIUG, nor any other organization with which I am
> > associated either explicitly, implicitly, or by inference. Neither do
> > those opinions reflect those of other individuals affiliated with any
> > entity with which I am affiliated nor those of the entities themselves.
> >
> > On Thu, Aug 28, 2014 at 8:43 AM, Marcus Haarmann <marcus.haarmann@midoco.de> wrote:
> >
> > > Fernando,
> > >
> > > Search for "informix.LO_hdr_partn" in IIUG and you will find my post
> > > there.
> > > We opened a bug with IBM through our distributor and sent tons of data
> > > (oncheck took a day to produce 200GB of output), but no answer.
> > > The initial setup ran for two years, and suddenly we got SQL error -136
> > > (no more extents).
> > > The only place to see extent problems was in the sblob metadata, which
> > > reached the maximum number of pages for some chunks.
> > > We reproduced the error situation and generated an af file with the
> > > stack trace, triggered by detection of the -136 SQL error.
> > > The instance was alive the whole time, not crashing, but inserts were
> > > no longer possible.
> > > We suspect that the error occurred because some individual documents,
> > > which might have been located in a full chunk, had been deleted before.
> > > The resulting free space was about to be re-allocated, but the metadata
> > > could not be extended since the maximum number of pages was reached,
> > > which resulted in the error.
> > > We have also seen that deleting data does not free pages in the
> > > metadata partition (at least onstat -d does not show that).
> > > But nobody could give us a clue what we had done wrong (to make sure we
> > > would not run into the same situation when setting up a new instance).
> > > So we decided to go "back to the roots" and use BYTE instead of BLOB.
> > > Now we are stuck again ...
> > >
> > > Marcus Haarmann
> > > Managing Director
> > >
> > > Midoco GmbH
> > > Otto-Hahn-Str. 12
> > > 40721 Hilden
> > >
> > > Tel. +49 (2103) 28 74 0
> > > Fax. +49 (2103) 28 74 28
> > > www.midoco.de
> > >
> > > Member of Pisano Holding GmbH
> > > Better Travel Technology
> > > www.pisano-holding.com
> > >
> > > Amtsgericht Düsseldorf - HRB 51420 - VAT ID DE814319276 -
> > > Managing Directors: Steffen Faradi, Marcus Haarmann, Jörg Hauschild -
> > >
> > > ----- Original Message -----
> > >
> > > From: "Fernando Nunes" <domusonline@gmail.com>
> > > To: ids@iiug.org
> > > Sent: Thursday, August 28, 2014 14:19:50
> > > Subject: Re: Performance problem during log backup [33629]
> > >
> > > I'd go back to the beginning... An error inserting data... from what I
> > > understand it caused an AF.
> > > What did IBM have to say about that? You mean the instance was crashing
> > > frequently during INSERTs?
> > > I find it hard to believe that you have to go through so much trouble
> > > to get a bug solved (an AF is almost certainly a bug, unless there are
> > > hardware errors or severe administration errors, like messing with
> > > storage etc.).
> > >
> > > Regards.
> > >
> > > On Thu, Aug 28, 2014 at 2:08 PM, Art Kagel <art.kagel@gmail.com> wrote:
> > >
> > > > Hmm, using JSON/BSON in Informix MIGHT be a workaround. If you put the
> > > > document data into a BSON column in a relational table (so v12.10,
> > > > preferably .xC4 or later for ease of use), the engine will shift the
> > > > BSON column to smartblob space automagically. Because the engine is
> > > > the only one dealing with the smartblob space, maybe it will work
> > > > without getting the errors you were getting with BLOB columns.
> > > > JSON/BSON in Informix can be accessed via SQL and the Informix client
> > > > protocols, or via MongoDB's wire listener protocol (so any development
> > > > tool that can access MongoDB). Just a thought.
> > > >
> > > > Note that in MongoDB, JSON documents are limited to 16MB; JSON columns
> > > > in Informix can be up to 4GB each.
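> > > >
> > > > A minimal sketch of that approach, assuming 12.10.xC4 or later; the
> > > > table and column names are made up:
> > > >
> > > >     CREATE TABLE documents (
> > > >         id   SERIAL,
> > > >         body BSON
> > > >     );
> > > >     -- the JSON value is implicitly cast to BSON for storage
> > > >     INSERT INTO documents (body)
> > > >         VALUES ('{"name":"invoice-42","type":"pdf"}'::JSON);
> > > >     -- cast back to JSON to read it through plain SQL
> > > >     SELECT body::JSON FROM documents;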
> > > >
> > > > Art
> > > >
> > > > Art S. Kagel, Principal Consultant
> > > > ASK Database Management
> > > >
> > > > Blog: http://informix-myview.blogspot.com/
> > > >
> > > > Disclaimer: Please keep in mind that my own opinions are my own
> > > > opinions and do not reflect on the IIUG, nor any other organization
> > > > with which I am associated either explicitly, implicitly, or by
> > > > inference. Neither do those opinions reflect those of other
> > > > individuals affiliated with any entity with which I am affiliated nor
> > > > those of the entities themselves.
> > > >
> > > > On Thu, Aug 28, 2014 at 7:46 AM, Marcus Haarmann <marcus.haarmann@midoco.de> wrote:
> > > >
> > > > > Hi Art,
> > > > >
> > > > > Sounds good, but we run into other problems:
> > > > > 1) The maximum number of pages will be reached too soon, depending
> > > > > on page size (2k ~ 2 million rows). We want to store > 200 million
> > > > > rows.
> > > > > 2) Fragmentation cannot be used since this is Growth Edition. We
> > > > > would have to buy an Ultimate license just for the document stuff.
> > > > >
> > > > > So: sblob not possible, blob not possible, blob in tblspace not
> > > > > possible. Any other ideas to store such an amount of binary data
> > > > > within the DB?
> > > > >
> > > > > A more complex solution could be a cluster filesystem, with just the
> > > > > file path remembered in the DB, which would obviously not create any
> > > > > problem ...
> > > > >
> > > > > I think we have to evaluate other products because of the
> > > > > limitations in blob storage.
> > > > > Maybe we will investigate document-oriented databases like Mongo or
> > > > > similar.
> > > > > But the conversion to a non-SQL DB is complex, and the application
> > > > > will need significant changes.
> > > > >
> > > > > Marcus Haarmann
> > > > >
> > > > > ----- Original Message -----
> > > > >
> > > > > From: "Art Kagel" <art.kagel@gmail.com>
> > > > > To: ids@iiug.org
> > > > > Sent: Wednesday, August 27, 2014 23:10:25
> > > > > Subject: Re: Performance problem during log backup [33606]
> > > > >
> > > > > Marcus:
> > > > >
> > > > > The fix is to move the BYTE and TEXT columns from a blobspace to IN
> > > > > TABLE and place the tables into a dbspace with a pagesize that is
> > > > > appropriate to the average blob sizes, so little space is wasted.
> > > > > Then the logs will contain the blob pages, you will be able to use
> > > > > HDR, and the logs will back up faster because the backup will not
> > > > > have to go find the blob pages. You will have to increase the
> > > > > logical log file sizes and the number of logs because the blobs
> > > > > will now be logged, but that's a small price.
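> > > > >
> > > > > A minimal sketch of that layout; the dbspace name is made up, and
> > > > > the larger-page dbspace would have to be created with onspaces
> > > > > beforehand:
> > > > >
> > > > >     -- BYTE ... IN TABLE keeps the blob pages in the table's
> > > > >     -- dbspace, so they are logged and travel with the log backups
> > > > >     CREATE TABLE documents (
> > > > >         id  SERIAL,
> > > > >         doc BYTE IN TABLE
> > > > >     ) IN dbs16k;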
> > > > >
> > > > > Art
> > > > >
> > > > > Art S. Kagel, Principal Consultant
> > > > > ASK Database Management
> > > > >
> > > > > Blog: http://informix-myview.blogspot.com/
> > > > >
> > > > > Disclaimer: Please keep in mind that my own opinions are my own
> > > > > opinions and do not reflect on the IIUG, nor any other organization
> > > > > with which I am associated either explicitly, implicitly, or by
> > > > > inference. Neither do those opinions reflect those of other
> > > > > individuals affiliated with any entity with which I am affiliated
> > > > > nor those of the entities themselves.
> > > > >
> > > > > On Wed, Aug 27, 2014 at 4:41 PM, Marcus Haarmann <marcus.haarmann@midoco.de> wrote:
> > > > >
> > > > > > Hi experts,
> > > > > >
> > > > > > we were forced to create a new instance for our document storage
> > > > > > (originally set up as a table with a BLOB column, using smart blob
> > > > > > storage). The new instance uses BYTE columns (yes, I know we
> > > > > > cannot use HDR ...).
> > > > > >
> > > > > > The old instance was stuck because of an undefined error on
> > > > > > inserting new data, and nobody was able to resolve the situation.
> > > > > > Looking at the af file, the error was related to the metadata.
> > > > > > Now we have converted everything to BYTE content, because we do
> > > > > > not really need smartblob functionality (other than for HDR; we
> > > > > > are using mirroring now to cope with hardware errors). It took a
> > > > > > while to copy the data from BLOB to BYTE.
> > > > > > While the instance was being copied, we had logging turned off,
> > > > > > to copy the data with higher throughput.
> > > > > > The data was copied with a Java program, which inserted the
> > > > > > records with 10 parallel threads on the target machine (since no
> > > > > > internal cast from BLOB to BYTE is available).
> > > > > > Now the machine has taken over the load and we have switched
> > > > > > logging on.
> > > > > >
> > > > > > The new situation is that whenever a logfile gets full (about
> > > > > > every 15 minutes), the backup (ontape) starts writing the header
> > > > > > of the logfile and then stalls. About 20 minutes later the
> > > > > > logfile is finally written.
> > > > > > By then the next logfile is already full and the backup starts
> > > > > > again.
> > > > > > Normally, on our other instances, writing a logfile is a process
> > > > > > that runs for a few seconds, not several minutes.
> > > > > > Inserting new data in this state is very, very slow; it can take
> > > > > > minutes for one row.
> > > > > >
> > > > > > We have now turned logging off for the moment and added a
> > > > > > separate instance to collect the traffic, because the instance is
> > > > > > basically unusable in this state.
> > > > > >
> > > > > > Also, the smaller instance with a comparable setup (same table,
> > > > > > also a BYTE column for the document content) tends to show the
> > > > > > same behaviour (though we can write a logfile there in about 3-4
> > > > > > minutes at the moment).
> > > > > >
> > > > > > The hardware is not very fast, because normally the documents
> > > > > > (mostly compressed PDF) are written once, never changed, and
> > > > > > retrieved again by primary key on demand (roughly 100 documents
> > > > > > written for every 10 read).
> > > > > > Initially we had the instance running with smart blobs, and
> > > > > > memory usage was 500MB. Storage size with smart blobs was around
> > > > > > 2.5TB.
> > > > > > We ran into several problems:
> > > > > > - A level 1 backup blocked the instance for 1.5h at night (a bug
> > > > > > fixed in FC8W...).
> > > > > > - Finally, an SQL error on insert of new rows, an unclear
> > > > > > situation which IBM has not been able to reproduce so far.
> > > > > > We changed the application to be able to address two instances.
> > > > > > Since there was no hint from IBM on how to set up the database so
> > > > > > as not to run into the situation again, we decided to use "old"
> > > > > > BYTE columns instead of BLOB columns.
> > > > > >
> > > > > > The setup now is as follows:
> > > > > > IDS 11.70FC7GE on Linux, dual-processor Opteron, 8Gbit FC to SAN,
> > > > > > raw devices on LVM partitions.
> > > > > > Three tables in the database, the biggest around 200 million
> > > > > > rows; in total around 350 million records.
> > > > > > Chunk layout: 1x rootdbs 100GB (for the tables), 6x blobdbs 500GB
> > > > > > (for the BYTE data).
> > > > > > Allocated memory 5.5GB (SHM 256MB, add 64MB, buffers 80000x2k).
> > > > > >
> > > > > > What we have found out so far:
> > > > > > Written logfiles are significantly bigger than the log size
> > > > > > (probably because the BYTE content is not in the logs and is read
> > > > > > separately).
> > > > > > While logfiles are backed up, we see extreme read activity on the
> > > > > > blob chunks (onstat -g iof).
> > > > > > OK, this is not a very common setup for a relational database;
> > > > > > we are currently evaluating other databases which might do a
> > > > > > better job with this amount of documents.
> > > > > > Personally I do not like that, since our administration is used
> > > > > > to doing basically everything with Informix.
> > > > > > But the questions are:
> > > > > > What is generally wrong with the setup?
> > > > > > Why is logfile writing taking that long?
> > > > > > Why does the big instance block without any error message when
> > > > > > inserting new rows?
> > > > > > What can we tune to make it work?
> > > > > >
> > > > > > Maybe somebody can give us a hint. The intended setup for us
> > > > > > would be an HDR instance pair, using smart blobs, a level 0
> > > > > > backup on Sundays, and level 1 backups + log backups during the
> > > > > > week.
> > > > > > Since there are only a couple of tables involved, maybe an ER
> > > > > > setup with BYTE columns could be a solution, but we are sort of
> > > > > > desperate because every attempt to store such an amount of
> > > > > > documents in an Informix instance has not been very successful so
> > > > > > far.
> > > > > >
> > > > > > Thanks in advance for your input.
> > > > > >
> > > > > > Marcus Haarmann
> > > > > >
> > > --
> > > Fernando Nunes
> > > Portugal
> > >
> > > http://informix-technology.blogspot.com
> > > My email works... but I don't check it frequently...
> > >
> --
> Fernando Nunes
> Portugal
>
> http://informix-technology.blogspot.com
> My email works... but I don't check it frequently...
>
--
Fernando Nunes
Portugal
http://informix-technology.blogspot.com
My email works... but I don't check it frequently...