Welcome to the DB2 for z/OS best practices webinar. My name is Sheryl Larsen, and I'm going to be going through How DB2 Performance Structures Improve Performance.
<Slide 2, 00:13> I have a couple of decades of experience playing with customers' SQL workloads.
In fact, I have probably designed a few thousand performance structures in my past life.
My current title: I'm now with IBM, as their new worldwide DB2 for z/OS evangelist. And I am very happy to be here today to try to update you on all the performance options that you have. These are all the bells and whistles and knobs that you get to turn in order to make something perform better. I'm still the president of the Midwest user group based in Chicago. I participated heavily in the first decade of IDUG as a board member and a planning committee member, and at one point became a Hall of Fame speaker and Hall of Fame volunteer. I'm also on the board of my alma mater Northern Illinois University's Computer Science Alumni Council. I do that to keep those professors up to date on what IBM is doing in terms of technology, so they can better incorporate it into their curriculum.
<Slide 3, 01:24> Oh, disclaimer-wise: everything here is based on customer experience; your mileage may vary.
So on slide 4, since I only have a short little window, here is what I am not going to talk about in terms of performance. I am not going to cover schema design, that is, are you in third normal form, are you pre-joining, are you rolling things up. I'm not going to cover partitioning, how you are going to slice that data. And I'm not going to cover clustering, because sometimes you don't have a clustering decision to make based on your partitioning scheme, unless you are using DPSIs. All of those could each be an hour talk.
So what I will cover are these items called performance structures. First, base table indexes; you are allowed 225. Then index on expression: anywhere you are transforming columns, especially aggregating them, and you need to filter on that aggregated column, the solution is index on expression. And materialized query tables, also known as MQTs. These are the answers to queries: materialized result sets that get poured into a real lockable, loggable table, at which point you can put 225 indexes on it. Then there is a subcategory called appliances. These are things that you actually plug in, network attached. zAAPs have actually been moved into zIIPs, so it is really just one appliance called a zIIP, but I am showing both here in case you have heard of them or seen literature on zAAPs. And then, new as of a few years ago with IBM's acquisition of Netezza, there is another appliance that you plug in. It is network attached, and it basically gives the optimizer another access path to choose from. It is based on moving your data, or copying your data, over to the appliance, where it gets highly compressed and situated nicely with its own storage and memory. When you move or copy your data there, it forms an accelerated query table. And once it becomes an AQT, it is a performance structure available to the optimizer.
So let's go to slide number 5 and take a look at what a typical customer may start with, for example. On your right-hand side, third normal form is typically the schema for OLTP. There you have a high update percentage, 5, 10 percent, where for web it is well below 5 percent. So it is a lot less update, but it still needs to be third normal form. Now, when customers start needing to answer business questions that are different, not customer based but instead product based, that should change some things in your structure. But the picture that I am showing has this high overlap in terms of its schema. And that is because some customers were in a hurry. I had a client who had eight 50-million-row tables, and they were getting pummeled by these end users and decided, you know, we are just going to make a copy of it in third normal form and we are going to give them their own subsystem, and there they can just go crazy. And so they had to separate it. And they didn't take the time to figure out what those new questions were going to be, so they just copied the indexes too. So they really didn't make a performance decision; they just copied what they had. And that is a fine place to start, but certainly not where you want to end up.
So if we look at the right-hand side, that is going to be where you should be. If we start with the schema: instead of highly overlapping, it is wildly different. Why? You are going to make different decisions. You are going to possibly de-normalize, possibly pre-aggregate. And when we go up to the top and look at the indexes, they should be radically different. Why? Because the business questions are radically different. And that means the lead column in the index should be different: I start with product, I don't start with customer, as my lead column.
If I move over to the middle of the screen you see MQTs, and I'm showing them in different shapes and sizes. One can be a 7-table join with the 300 columns that you need from those 7 tables. Another can be scrunched down to a 3-table join and the 15 different aggregates that you want from it. It is really any result set that you want to dream up and solidify in its own table. If we go all the way to the far right you'll see a really skinny MQT. The reason it is a skinny one in the web and OLTP environment is that it needs to be real time. What makes an MQT high maintenance is all the change that has to be applied to it, either in near real time or on some refresh cycle. Many production OLTP environments just cannot afford that.
But here is what mitigates that amount of change. Let's say you do need the 7-table join. When DB2 does a 7-table join, it has to choose a join technique; let's say it is nested loop. It has to walk through all 7 tables to get that join done. What would be better? Let's get the join done already, but let's not bring all the necessary columns along just yet. Let's simply use the keys, the primary keys and the foreign keys; that is what is going to be in the MQT. That is what makes the skinny MQT. That way, guess what, how do you make a join go faster? Get it already done. DB2 can go to that MQT, which knows where all those intersections are, and then go get the detail data from the base tables. So MQTs: I'm showing more of them on one side than the other because they are harder to maintain.
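To make that concrete, here is a minimal sketch of a keys-only skinny MQT; the table and key names are assumptions, not from the slides:

    -- Keys-only MQT: the three-way join is pre-done, but only the
    -- primary and foreign keys are stored, which keeps maintenance cheap.
    CREATE TABLE ORDER_JOIN_KEYS (CUST_ID, ORDER_ID, ITEM_ID) AS
      (SELECT C.CUST_ID, O.ORDER_ID, I.ITEM_ID
         FROM CUSTOMER C
         JOIN ORDERS   O ON O.CUST_ID  = C.CUST_ID
         JOIN ITEMS    I ON I.ORDER_ID = O.ORDER_ID)
      DATA INITIALLY DEFERRED
      REFRESH DEFERRED
      MAINTAINED BY SYSTEM
      ENABLE QUERY OPTIMIZATION;

A query that joins those tables can probe this MQT for the intersections and then fetch the detail columns from the base tables.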
New as of version 9 is index on expression. So any of those WHERE clauses you have where you are transforming columns somehow, maybe using some scalar-type function, you can now make indexable stage 1 by coding an index on expression. I should also mention that there are a few special cases that IBM, as of DB2 11, is making indexable without an index on expression. So as you migrate to DB2 11, there may be some that you don't need anymore.
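As a minimal sketch, with hypothetical names, this is the shape of it; without the index, the function-wrapped predicate below would not be indexable:

    -- DB2 9 and later: index the expression itself.
    CREATE INDEX IX_CUST_UPNAME
        ON CUSTOMER (UPPER(LAST_NAME));

    -- This filter can now be indexable stage 1:
    SELECT CUST_ID, LAST_NAME
      FROM CUSTOMER
     WHERE UPPER(LAST_NAME) = 'LARSEN';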
And then the final performance structure: the accelerated query table. It really is as simple as plugging it in, especially if you talk to sales people for the IBM DB2 Analytics Accelerator. It is pretty darn easy and you're up in hours. Which is a salesperson talking. But when it does come up live, and you copy or move your data over there, you really can take a step back and look more closely at the performance structures you were using. For example, what if you had a ton of MQTs and a ton of indexes, but you still were not getting your questions answered very fast? It could be that once you copy your data over there, you can get rid of a lot of those performance structures. Because one thing the accelerator does not have is indexes. It has only one access path, and that is a table space scan. But it has hardware and software optimized to the point where queries are highly accelerated, anywhere from a factor of ten up to 2,000 times faster. And that is a game changer. So in terms of performance structures, you have all these choices.
<Slide 6, 09:54> So let's look at designing each and every one of these: what goes into base table indexes, for example, MQTs, and AQTs.
<Slide 7, 10:05> So if we look at slide number 7: one query at a time. If you had all day to come up with an index for each of your 3,000 queries running in a workload, this is what you would go through.
So I picked an interesting one. I said SELECT C7, C8, C12, C2, C3, and then I purposely sequenced the WHERE clauses in the order in which I was going to propose an index, just so you could correlate the two. But it really doesn't matter how you type those in. For the walkthrough I've got C4 equal 'L', C5 equal '99', an equal on C6, an IN on C1, a NOT LIKE on C2, a BETWEEN on C3, and C9 equal to a case expression. It is kind of a pseudo-code case expression; I didn't have room to type the whole thing in. So let's walk through the four points of filtering, and I have them noted here. The first is where roughly 32 conditions can be applied. If I look it up in the manual, the DB2 11 documentation would say that equals and INs can be used at the first point of filtering. So the first four conditions get applied first. Well, doesn't it make sense that those should be the lead columns of your index? Because you know what those are. Absolutely, and when designing an index for a particular query, that is where you start. And you want to sequence them most restrictive to least restrictive if you can. Then the next thing that happens is stage 2. Not stage 2, step number 2. That is where the NOT LIKE and the BETWEEN will be picked up. There are roughly 32 conditions that can be used at the first point, and then it fans out to 43 conditions at the second point. So that is where the other two are picked up. C9 equal to a case expression, in DB2 10, will have to wait for the fourth point of filtering. But in DB2 11 there is a special type of CASE, documented in the manuals, that can be moved up to the second point of filtering. So that is pretty nice.
So when I look at my index design, I am seeing column 4, column 5, column 6. My first step when building the index is the equal predicates, and then you see my second step, the sequencing columns, next. So I've got an ORDER BY, and I think that is blacked out, but let's say it is ORDER BY columns 1, 2, 3. Then in my index I would put columns 1, 2, 3. Then why not move my column 9 condition in as well; let's put column 9 in the index. Now everything that I am filtering on lives in the first part of that index. And then when I look back at the select clause, I already have C1, C2, and C3, but what I don't have is C7 and C8. There are only two. Why not add those two to the index? So C7 and C8 go in. This is a wide index. I am a huge fan of fat indexes.
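Put together, the walkthrough looks roughly like this; the literals, and the C10 inside the CASE, are stand-ins, since the slide abbreviates them:

    SELECT C7, C8, C12, C2, C3
      FROM T1
     WHERE C4 = 'L'
       AND C5 = '99'
       AND C6 = 'A'                       -- equal on C6, literal assumed
       AND C1 IN ('X', 'Y', 'Z')          -- IN-list values assumed
       AND C2 NOT LIKE 'AB%'              -- pattern assumed
       AND C3 BETWEEN 100 AND 200         -- range assumed
       AND C9 = CASE WHEN C10 > 0         -- stand-in for the slide's case expression
                     THEN 'Y' ELSE 'N' END
     ORDER BY C1, C2, C3;

    -- Equal predicates first, most to least restrictive, then the
    -- ORDER BY columns, then C9, then the leftover SELECT columns:
    CREATE INDEX IX_T1_FAT
        ON T1 (C4, C5, C6, C1, C2, C3, C9, C7, C8);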
In fact, the average number of indexes I've seen on a lot of systems is usually in the double digits. At an insurance company, on a claims table, I've seen 33 indexes. Some are thin and some are extremely wide. Each index represents a different way to see the data. That is what base table indexes are for: how do you want to see the data, how do you want to display it, how do you want to sequence it, how do you want it filtered. That is what these index structures do.
<Slide 8, 14:22> So on slide number 8, let's talk about another structure: materialized query tables. They are, like I said, a lockable, loggable DB2 table. What can you put into it? You can insert the results of a query. Well, this is technology we actually had in version 1, because in version 1 we could say: insert into this table the results from this select statement. But now it is a create statement, and what you get with it is a whole lot more technology than what you had in version 1. What you get is that the optimizer can be aware of that answer living and breathing. Because it is a real lockable, loggable table, there is metadata in the catalog about it. Now, there are all kinds of ways to maintain it. You can do a REFRESH TABLE. But most sites out there are doing user-managed MQTs, and they're using replication, because that is really fast: a couple of times a second you can keep those MQTs up to date. And in DB2 LUW they have the additional option of 'immediate', and that's not there for z/OS just yet.
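For the shape of the DDL, here is a minimal sketch with assumed names; ENABLE QUERY OPTIMIZATION is what lets the optimizer consider the MQT when rewriting queries:

    CREATE TABLE SALES_SUMMARY (CUST_ID, ORD_MONTH, TOTAL_AMT) AS
      (SELECT CUST_ID, MONTH(ORDER_DATE), SUM(AMOUNT)
         FROM ORDERS
        GROUP BY CUST_ID, MONTH(ORDER_DATE))
      DATA INITIALLY DEFERRED
      REFRESH DEFERRED            -- on z/OS, MQTs are deferred; no 'immediate' option
      MAINTAINED BY SYSTEM        -- user-maintained plus replication is the common alternative
      ENABLE QUERY OPTIMIZATION;

    -- Populate or re-populate it on your own cycle:
    REFRESH TABLE SALES_SUMMARY;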
<Slide 9, 15:39> So then the question is: all right, I know I can build this structure. Then we go to slide number 9. How do I decide what to put in it? What query should I use? Should I make a huge one, a 225-table join, with every aggregate that I may need, and put a pile of indexes on that one MQT? Well, that might make some sense. But you don't want to overdo the MQTs; every MQT that could provide a result set would have to be tested, so you want to add them like you add base table indexes, not wildly. So how about the opposite? Let's say you are feeling very wealthy, you have a data warehouse, you're Bill Gates. You are going to have an MQT for every query executing, and an index to go with it. Well, not everyone has that budget. So what is typical is a more optimal MQT strategy. What is most popular?
What do you do a lot of times that is expensive, let's put it that way? It is expensive to do. I will give you some examples going forward; in the one I am going to show you, the expensive thing is recursive SQL. Another thing: what filters are you applying? Again, it is a popularity contest, and when it is not a popularity contest, it is really a mission-critical contest. If something needs to be sped up, and an MQT can do that, then you build it. And then data translations. A lot of times your companies are putting very interesting amounts of information into memory, into that buffer pool, and before it comes out it has a lot of massaging to be done. That tends to make it more expensive. If you can get those translations put into an MQT, then all that translation work is already done and you just scoop up your result. So that should give you some ideas.
<Slide 10, 17:43> So let's go to slide 10 and go through an example. Let's use an org chart. Recursive SQL is good for imploding and exploding hierarchical information stored in a relational database, and let's use IBM as the example. We have, as of a certain time, 427,000 employees, I can proudly say. And our CEO Ginni Rometty I'm depicting in the graphic on the right. And believe it or not, we are not such a steep organization. I think when I count on my org chart, I'm maybe 10 or 11 steps from Ginni. But it is very wide and spread out, as you can imagine with that many people.
So, in case you have never seen recursive syntax before: the WITH statement is basically a common table expression, and what that means is that it is a definition. Nothing happens until you go to the bottom of the statement and hit the select. Once you are executing the statement, it executes the top query block first, and that one executes only once. You will see there I'm saying boss equals 'Ginni', and I'm selecting the boss and who they manage, and then also the number 1 in a column called 'N'. You do that a lot in recursion; it does a couple of things. It gives you metadata, telling you, for instance, what level of the org chart you are on. And it also gives you a way to stop something from going into an endless loop. So that executes once. That information is gathered and thrown, behind the scenes, into a temp table called 'ORG_CHART'. And if we go to the second query block, you'll see that I reference 'ORG_CHART'. Sure enough, everything that was just scooped up and put in there is now going to be joined back to its original table, with an interesting join condition: where a person that I manage is also a boss, where the boss also has managers. This is the children's children, basically. And my 'N' less than 500,000: I know I shouldn't go beyond that, in case I have any data integrity problem. Like someone has a person that they manage, and that person in turn manages someone who is above them. That would put you in an endless loop. So that is why that 'N' comes in handy: to make sure I don't go into an endless loop. So when that bottom query block executes, it really is getting the children's children. It goes down the hierarchy: who reports to whom, who reports to that person, who reports to each one of those rungs on your org chart. And as you can imagine, executing this is going to be kind of expensive.
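In SQL, the statement being described looks roughly like this; the table and column names are assumptions:

    WITH ORG_CHART (BOSS, EMP, N) AS
      (SELECT BOSS, EMP, 1                -- top query block: runs exactly once
         FROM REPORTS
        WHERE BOSS = 'Ginni'
       UNION ALL
       SELECT R.BOSS, R.EMP, O.N + 1      -- the children's children, one level deeper
         FROM ORG_CHART O
         JOIN REPORTS R
           ON R.BOSS = O.EMP              -- someone already reached is also a boss
        WHERE O.N < 500000)               -- guard against an endless loop
    SELECT BOSS, EMP, N
      FROM ORG_CHART;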
<Slide 11, 20:35> So if we look at slide 11, you'll see there is a way to make this go faster. And don't quote me on this, but we don't have that many org changes by the minute. So there is a good chance that we could run this once and keep it, and really it is not stale until there is a change we need to make. So this is the create statement for the MQT. You see it is that same query that I put in there, and it just says deferred. I'm going to refresh this once a day, that being soon enough; it may even be once a quarter, as I have noticed in my first year with IBM that they seem to do things at about that pace. So nothing really happens when I do the create statement. When I do the refresh statement, that is when the query executes and loads the MQT, and if I like, I can also put an index on it. So I put an index on BOSS.
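Following the slide, it looks roughly like this as a sketch, with the refresh and the index on BOSS afterward; names are assumed:

    CREATE TABLE ORG_LIST (BOSS, EMP, N) AS
      (WITH ORG_CHART (BOSS, EMP, N) AS
         (SELECT BOSS, EMP, 1
            FROM REPORTS
           WHERE BOSS = 'Ginni'
          UNION ALL
          SELECT R.BOSS, R.EMP, O.N + 1
            FROM ORG_CHART O
            JOIN REPORTS R ON R.BOSS = O.EMP
           WHERE O.N < 500000)
       SELECT BOSS, EMP, N FROM ORG_CHART)
      DATA INITIALLY DEFERRED
      REFRESH DEFERRED
      MAINTAINED BY SYSTEM
      ENABLE QUERY OPTIMIZATION;

    REFRESH TABLE ORG_LIST;    -- this is when the recursive query actually runs
    CREATE INDEX IX_ORG_LIST_BOSS ON ORG_LIST (BOSS);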
So let's go to slide number 12, and you'll see why. Not only can the general query that builds the entire hierarchy go fast; any portion of that hierarchy can be made faster. And that is because the DB2 optimizer is really smart, and this is where the new technology is. It takes a look at your query, and it sees that the WHERE clause is not where boss equals 'Ginni'; it is where boss equals 'Beth'. That is Beth Smith, our new general manager over information management. And sure enough, her hierarchy is not going to be as big as Ginni's hierarchy. So what the optimizer does is a rewrite. It says: you know what, I can see (I'm a dynamic application, right; it has to be dynamic) that the MQT over there has my result in it. So I'm going to skip the work that is already done, which is that recursive SQL, and I'm just going to apply the filter that I need. Sure enough, it replaces that whole statement with select from ORG_LIST where boss equals 'Beth'. It optimizes that query, throws the other one away, looks for an index structure, finds BOSS, and probes right to 'Beth', which points straight down to everyone that reports to Beth. And as the slide depicts, that is 20 to 100 times faster.
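So the statement the optimizer effectively runs, in place of the recursion, is just:

    SELECT BOSS, EMP, N
      FROM ORG_LIST
     WHERE BOSS = 'Beth';    -- index probe on BOSS; no recursion at run time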
Any query that can be done in advance is going to be faster. So that is how this magic works. And coming up with that sweet spot of MQTs is about as difficult as coming up with your base table index design for an entire workload. So let's go through base table indexes, and you will get an idea of the other steps you'll need to take for the MQTs as well. Now we are getting into designing indexes for a workload. There are a lot of applications out there that are packages, and those packages have the most indexes on the planet. Some of those packages I have seen with as many as 120 indexes on one table. And that is a lot of different ways to look at a table. But it is because of those packages that there is a plethora of tools out there now to help you with this very time-intensive index design process. And here is why. If you were to tackle it from just A, B, C, what would you do?
<Slide 14, 24:13> On slide 14 I'm showing you the cheat sheet. This is the cheat sheet behind all those precious tools. They are looking at column usage. If you were to try to map this out in a conference room, it would take up all the paper, and you wouldn't be able to see anything in the conference room other than the paper. So, for each table and every column: how is that column being used? I'm showing things like ORDER BY, GROUP BY, whether it is a local filter, whether it is a join condition, what that filter is, and certainly whether I am selecting that column, in addition to the WHERE clause. And this is what you have to do for every single transaction. And what are transactions? They could have multiple queries. So you can see how fast this could just explode, and that is why you need the tools. So let's walk through what you really would need to do with this, or what the tools do behind the scenes.
<Slide 15, 25:18> First of all, on slide 15, what I am showing you is that the tools allow you to put a weighting factor on your transactions, because the most popular is not necessarily the most critical. You really need to bubble the most critical to the top within that workload. And what is most critical? Well, if you look at your workload manager rules, you'll see what you have favored to get most of the resources. That is going to be your most critical part.
<Slide 16, 25:50> The way I've instructed a lot of tool vendors, as I'm showing on slide 16, is basically how to optimize what you are loading into your index design tool. Copy your frequency report, and now build a different sequence, not by frequency: peel out, using your workload manager goals, what is most important, and then weight those by an order of magnitude, so multiply their frequency by 10. That is going to bubble all those critical queries up to the top. Now you've got a better sequence, and that is what you are going to pour into your index design tool.
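As a sketch of that weighting step, assuming the frequency report has been loaded into a hypothetical QUERY_WORKLOAD table:

    -- Bump WLM-critical queries by an order of magnitude before
    -- pouring the workload into the index design tool.
    SELECT QUERY_ID,
           CASE WHEN WLM_CRITICAL = 'Y'
                THEN FREQUENCY * 10
                ELSE FREQUENCY
           END AS WEIGHTED_FREQ
      FROM QUERY_WORKLOAD
     ORDER BY WEIGHTED_FREQ DESC;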
<Slide 17, 26:31> Now if we turn to slide 17, you'll see that a lot of tools not only give you the metadata about what they generate, but also generate the full DDL and ALTER statements to get you to that sweet spot of your index design. And that is because, as I'm showing with the circles, it is pretty obvious when you see larger numbers that those columns belong. They're screaming 'pick me, pick me, I belong in an index.' You may have had an index on columns 1-2-3 for years, but you really didn't know that you were doing 23,000 filters on column 5 and 7,000 filters on column 7. That workload was new, new as of a couple of years ago, and it is not being served by any index structure right now. So this is how those decisions can be made across a workload. It is not as easy; you don't tackle it one query at a time. But this is what it takes, not only for base table indexes but also for MQTs. What's most critical gets the MQT; what's most critical gets the base table indexes. And that is where you start.
<Slide 18, 27:44> Now we move on to AQTs. The reason IBM invested a huge amount of money in the acquisition of the company called Netezza was to acquire another performance structure. And this was an appliance, a pluggable appliance. And it had grown up for 10 years, by the time IBM bought it, by satisfying data warehouse customers. So it was a data warehouse machine, and its sweet spot is complex queries. When IBM picked it up, it was already well on its way. It had a pretty good SQL base in its portfolio. It didn't match DB2's portfolio exactly, but every version, and we are on version 4 right now, brings more and more SQL compatibility with DB2.
<Slide 19, 28:38> So if we go to slide 19, I'm going to walk you through a particular company's use of an AQT and the sweet spot it found within its operational data store workload. I can't say the name of the company; it was a credit card company. You can imagine that during Christmas time, which is when they brought me in to do the analysis, they are pretty busy. On an average day they do 10 million inserts to what I call this downstream data warehouse. It is also called an operational data store. Behind this are real DB2 tables with referential integrity, and there are also feeds from another relational DBMS, and that is what makes this an ODS. What they had was all kinds of web requests coming in. They had gifted developers coding beautiful SQL statements, and some of those would get put into production with none of their filters in a single index structure. So what I'm showing is fat indexes; like I said, I'm a big fan of big fat indexes. Load up the columns, load up the columns. Any filter you are doing should be on an index structure somewhere. Now, it can be repeated multiple times too; it just depends on what you need. But at least it should be someplace once. And the problem they were having, as you can see with the squiggly lines, is that some of those transactions, and they were all mobile transactions, were not getting to the mobile device at all. Who knows what day they were going to come back without a performance structure. Because underneath, there were 3 billion rows it had to comb through to actually answer those web requests. When I did some analysis, I found that they could certainly make some indexes fatter. But replication was such a CPU hog, because of the 10 million rows a day it had to replicate, that adding one more index was not an option. It was not an option. But these web developers, they were answering use-case scenarios, saying: I can code that, I can make that happen, I can list it that way, and everything works really well in test. So here is what it came down to.
<Slide 20, 31:21> So on slide number 20: they had a certain web page, for example, and every company has something like this. It's a web page, and it has a certain number of column values on it. They are parameters, they are entry points, they're input. And your end user can input any one of them at a time, any portion of one at any time, in any combination, at any time. So sure enough, they had 10 indexes on their billions of rows, and many, if not the majority, of the questions being asked could not be satisfied by those 10 indexes. What they had was a search problem. They couldn't afford another structure, they couldn't afford anything, not even index on expression.
So I did some math for them. I said: here is what you have, if you turn to page 21. You have 5 attributes, and there are 2 dates, 2 different dates; say one is a begin date and one is an end date, or an expiration date, however you want to do that. Let's multiply the combinations. If I really have 5 different things, and I can choose one date or the other: 5 times 5 is 25, times 2 is 50. To answer just the questions from this screen alone would take 50, 5-0, indexes. And those 50 were not an option; not even one more was an option. So this is what I call unlimited search. They were searching on a huge number of combinations, and the reality was they wanted to add attributes. They were not happy with just 5; they want this up to 25. And that puts them well over the 225 base table indexes. So, every site has this problem. Every site has this scenario in their workload somewhere. And this is the appliance that is the savior.
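To see why the combinations explode, imagine two of the roughly 50 shapes this one screen can generate (column names are assumed); each wants a different lead index column:

    SELECT CARD_NO, TXN_DATE, MERCHANT, AMOUNT
      FROM TRANSACTIONS
     WHERE MERCHANT = ?                -- user filled in one attribute
       AND EXPIRE_DATE >= ?            -- plus one of the two dates
     ORDER BY TXN_DATE DESC;

    SELECT CARD_NO, TXN_DATE, MERCHANT, AMOUNT
      FROM TRANSACTIONS
     WHERE POSTAL_CODE = ?             -- a different attribute entirely
       AND BEGIN_DATE <= ?             -- and the other date
     ORDER BY TXN_DATE DESC;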
So if you go to slide 22, you see I'm still showing my fat indexes there. Well, once I copy my 3 billion rows over, I have the option of maybe getting rid of every index except the referential integrity indexes. Because, as I found out, they filter out a good million bad credit card numbers a day that get entered by hand, by humans. Right, that is what RI does for you. So I'm not only making my searches finally finish; I could also be saving some DASD space on top of it. So that is one scenario that I like to show you. There are many, many more scenarios for what you put on there. Like I said, it grew up from data warehousing, and that type of processing is its sweet spot.
<Slide 23, 34:26> On slide number 23 I'm showing that accelerator, and here is where I like to put it in perspective. DB2 has grown up and gotten better and better at workloads. It gets a smarter optimizer every year. It gets more metadata to look at. It gets code refreshes which make things go faster. As with DB2 10, DB2 11 brings amazing CPU reductions for query workloads, and one of the sweet spots is right in the middle of this chart: the semi-complex query. In DB2 10 it may be swiftly off-loaded; it may qualify for the accelerator's sweet spot, following the rules of: is my data over there, is it going to go far, is it going to look at a lot, and is it going to need a lot of processing and aggregation? Then I should be shipping it over there. With DB2 11, what could possibly change is not the decision-making itself but where queries go: DB2 11, being faster, may keep a little bit more on your box and off-load a little bit less. Or you start putting workloads over there that weren't already there but that fit the sweet spot, and take them off of other platforms, let's say. Either way, the optimizer is deciding what part of your workload goes where, and you've got sweet spots in both places, fortunately. So this is the mix of my operational information with my analytical information, and DB2 decides which appliance to actually run it on.
When I go to the next slide, I'm showing you a little picture of the hardware. There are many other presentations out there that can go through all of that; I just wanted to give you an idea of where these AQTs, accelerated query tables, live. They live on hardware, and they are very spoiled there. They have their own memory, their own CPU; in fact, everything is highly redundant, and the data is highly compressed. You can read about that in many other places.
<Slide 25, 36:47> If we go to the next slide, you'll see what I'm really talking about in terms of what the optimizer accepts and off-loads to the actual appliance, to that hardware. Say I have put, for example, my 3-billion-row monthly table over there, the whole thing, and I write a query, for instance one doing a DISTINCT, and I've got WHERE clauses and I'm doing my GROUP BY. So I've got some local filtering and it is a single table; we are keeping it simple. The first thing that happens is the opposite of what happens when you bring data into the buffer pool. When you bring data into the buffer pool, the first thing you do, before you bring it in, is filtering. Well, that is not how the appliance works. The appliance instead initiates a high-speed table scan. It grabs all your compressed data, and the first thing it does is uncompress it. The next thing it does, and this is all in hardware, is squeeze the rows down to exactly the columns you need; right, a 100 percent table scan is what is sitting there, so it squeezes it down to your select list. And in the third step, it finally does your filtering. So much, much later than the optimizer would, but it can afford to do that because this is all done in hardware. Then the data moves over to the CPU, and that is where the CPU gets to crunch and crunch and do your joins and aggregates; the GROUP BYs and SUMs and ORDER BYs are handled there. And then you get your result: ten times faster, two thousand times faster, somewhere in between. So that is where the magic is, and there are a lot more details behind that.
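So the kind of statement being shipped over is, as a sketch with assumed names, something like:

    -- Single table, local filtering, DISTINCT and GROUP BY: the
    -- accelerator scans, decompresses, projects, filters, then
    -- aggregates in the CPU.
    SELECT REGION, COUNT(DISTINCT CARD_NO) AS CARDS, SUM(AMOUNT) AS TOTAL
      FROM MONTHLY_TXN
     WHERE TXN_DATE BETWEEN '2013-12-01' AND '2013-12-31'
     GROUP BY REGION
     ORDER BY TOTAL DESC;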
But I'll wrap it up on slide 26 and say this client is very, very happy now, with an ODS 100 percent migrated to this sweet spot of using AQTs. And if you think about unlimited fetching: the need for unlimited fetching comes from 'hey, I want to look at my data so many ways', beyond 225 base table indexes and beyond any number of MQTs that I can dream up. That is when you can justify an appliance. Most clients out there have tools to assist, like the query builders. I've been using IBM's Data Studio for as long as I can remember, and it has been called different things in the past. I use it; it has been free for a long time. But the tools where you actually pour the workload in and they kick out index suggestions, or ALTER statements and DDL, those usually cost money. Still, it is a good idea to learn how to do one query at a time, so that you can make suggestions on indexes. A lot of shops should be able to justify the analytics accelerator, and my motto is: every DB2 subsystem should have at least 2, one for test, one for production. I know we are a hardware company, so I have to mention this. And that is because what company isn't doing, somewhere in their workload, analytical questions on transactional data? I will just end you with that thought. Another reason to be able to justify this, and this is in terms of best practices, is the production environment story. Some sites have 100 percent of their production replicated, mirrored into test, and kept alive. It doesn't have to be that way; I go to bigger shops and they say there is no way that could possibly happen here. So I say: there is this technology called profiling. You could at least make it appear that test is identical to production, so that you can play with your performance structures. Because that is where you need to make sense of it. That is where you need to turn all the knobs that you possibly can and optimize your environment, using all the performance structures that can make your life better, and use them to the best of DB2's capability. Thanks for listening, and happy development.
<41:33>