Igor Kromin |   Consultant. Coder. Blogger. Tinkerer. Gamer.

We use partitioning in at least one of the databases at work; changing to partitioned tables has allowed us to keep the system running within our imposed SLAs. Recently, however, I've started to wonder what happens when we start hammering individual partitions more than others. The data distribution would become skewed and the benefits of partitioning lost. I decided to put together a bit of SQL and then use gnuplot to show me how well our data is distributed.

These results are from a development environment where I was testing out my scripts, but they clearly show how some partitions can fill at a much quicker rate than others. Each bar is an individual partition; the height of the bar represents the number of rows of data in that partition.

The SQL behind this graph is very simple...adjust the LIKE clause to suit. In my case I get data for multiple tables and then filter it later in a shell script.
select up.table_name, up.partition_name, up.num_rows
from user_tab_partitions up
where up.table_name like 'MY_TABLES_%'
order by up.table_name, up.partition_name asc;

I export the result of this query to a file called export.d. This has to be a tab-delimited file that doesn't use quotes around each of the data values. The data looks something like this...
MY_TABLES_TBL1 SYS_P42665 822568
MY_TABLES_TBL1 SYS_P42666 394797
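Whatever tool you export with, it's worth sanity-checking the file before plotting. This is a small sketch (the sample rows and the export.d name are taken from above) that verifies every line has exactly three tab-separated fields:

```shell
# Write a couple of sample rows in the expected format so the check
# is self-contained; in practice export.d comes from the SQL query above.
printf 'MY_TABLES_TBL1\tSYS_P42665\t822568\n' > export.d
printf 'MY_TABLES_TBL1\tSYS_P42666\t394797\n' >> export.d

# Fail if any line does not have exactly three tab-delimited fields.
awk -F'\t' 'NF != 3 { bad = 1 } END { exit bad }' export.d && echo "export.d looks OK"
```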

This is then processed by a shell script.

To make the graph, I used gnuplot with the following shell script to generate the image...
#!/bin/bash
# Locate the gnuplot executable.
gnuplot=`which gnuplot`

function plot {
    gp_table=$1
    gp_file=/tmp/gnuplot_data.d

    # Filter the export file down to just the requested table.
    grep "$1" "$2" > "$gp_file"

    # Calculate the 95th and 99th percentile row counts (nearest-rank method).
    gp_95pct=`cat $gp_file | awk '{print $3}' | sort -n | awk 'BEGIN{i=0} {s[i]=$1; i++;} END{print s[int(NR*0.95-0.5)]}'`
    gp_99pct=`cat $gp_file | awk '{print $3}' | sort -n | awk 'BEGIN{i=0} {s[i]=$1; i++;} END{print s[int(NR*0.99-0.5)]}'`

    # Feed the plotting commands to gnuplot via a heredoc so the shell
    # variables above are substituted in.
    $gnuplot <<EOF
set terminal svg size 800,400 noenhanced font "Verdana,10"
set output "export_$gp_table.svg"
set title "Table Partitions Row Sizing: $gp_table"
set ylabel "Data Rows"
set grid y
set format x ''
set style fill solid 1.0
set palette defined (0 "red", 1 "#FFA500", 2 "#555577")
unset colorbox
plot "$gp_file" using 0:3:(\$3 > $gp_99pct ? 0 : (\$3 > $gp_95pct ? 1 : 2)) with boxes palette notitle
EOF

    rm "$gp_file"
}

plot MY_TABLES_TBL1 export.d

This bash script defines a function called plot, locates the gnuplot executable, and then calls the plot function, passing in the name of the table to filter by and the name of the data file.

Inside the plot function, I use grep to get the data for the table that's been passed in. Then awk is used to calculate the 95th and 99th percentile values for the data points in the filtered file. These are used for colouring the bars: bars above the 95th percentile are orange, those above the 99th are red, and the rest are blue-grey.
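The percentile calculation uses the nearest-rank method: sort the values, then index the sorted array at int(NR*p - 0.5). Run on its own with the numbers 1 to 100 as toy data (instead of real row counts), it picks out the values you'd expect:

```shell
# Toy demonstration of the nearest-rank percentile pipeline used above,
# fed with the numbers 1..100 instead of real partition row counts.
seq 1 100 > counts.txt

p95=`sort -n counts.txt | awk 'BEGIN{i=0} {s[i]=$1; i++;} END{print s[int(NR*0.95-0.5)]}'`
p99=`sort -n counts.txt | awk 'BEGIN{i=0} {s[i]=$1; i++;} END{print s[int(NR*0.99-0.5)]}'`

echo "95th: $p95, 99th: $p99"
# prints "95th: 95, 99th: 99"
```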

The function then calls gnuplot to generate the bar graph, and finally cleans up the filtered file.

It's all quite simple, though it did take me a long time to get the syntax right for gnuplot. In my actual script I also have a loop that processes all of the tables in the exported file in one go.
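That loop isn't shown above, but a sketch of it could pull the distinct table names from the first column of export.d and plot each in turn. Here the real plot function is replaced by an echo stub so the sketch runs on its own; in the actual script the function defined earlier would be used:

```shell
# Stand-in for the real plot function so this sketch is self-contained.
plot() { echo "would plot $1 from $2"; }

# Sample export data covering two tables.
printf 'MY_TABLES_TBL1\tSYS_P1\t10\nMY_TABLES_TBL2\tSYS_P2\t20\n' > export.d

# Extract each distinct table name from column 1 and plot it.
for gp_table in `awk '{print $1}' export.d | sort -u`; do
    plot "$gp_table" export.d
done
```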

