Channel: SQL Server Reporting Services, Power View forum

A General Question About Best Approach to Creating a Report W/A Large Number of Records


I'm gonna try to be specific, but I'm afraid this is kind of a generalized question to begin with.  I'd rate my SSRS proficiency as somewhere between upper novice and lower proficient.  I have a pretty good grasp of most of the basics and can even manage a trick or two when developing reporting solutions.  Currently, however, I'm stumped.

I need to create a report that will summarize student suspensions for each school in my district (approximately 160 schools) on the same page.  Our schools are grouped into six areas, so basically I have the six areas as column groups and the schools as row groups.  The initial version of the report worked great, as they just wanted to see suspensions per school.  I was able to group the MDX data coming in so that only 160 rows of data were being processed (one for each school), and the report came back in under a second.
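The original per-school rollup can be sketched like this (a minimal Python illustration, not the actual MDX; the school names and data are made up):

```python
from collections import Counter

# One record per suspension event: (school, area). Sample data only.
suspensions = [
    ("Lincoln HS", "Area 1"),
    ("Lincoln HS", "Area 1"),
    ("Washington MS", "Area 2"),
]

# Collapse to one summary row per school, so the report processes
# ~160 rows instead of one row per student or per event.
per_school = Counter(school for school, _ in suspensions)
```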

After seeing this, everybody got excited, and now they want filters on the report so they can see the suspensions broken down by Race, Gender, and other labels (Is the kid on Free/Reduced Lunch? Limited English? etc.).  So I have multiple filters to incorporate.  All these labels are (obviously) already attached to each student as attributes in the Student dimension, so I thought, no big deal, I'll just load in each student with the respective attribute flags -- about 100,000 records total.  Unfortunately, this has really put a hurtin' on the report: it now takes about 75 seconds to run, and roughly 30 seconds of that seems to come from the MDX query itself.  I also noticed that each new attribute field I added from the Student dimension tacked on an extra 4-5 seconds of processing time.
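The detail-level approach amounts to something like this (a hedged Python sketch; the field names `gender`, `race`, `frl`, `lep` are illustrative stand-ins for the real dimension attributes):

```python
# One row per student, carrying the attribute flags needed for filtering.
# At district scale this is ~100,000 rows instead of 160 summary rows.
students = [
    {"school": "Lincoln HS", "gender": "F", "race": "Asian", "frl": True,  "lep": False, "suspensions": 1},
    {"school": "Lincoln HS", "gender": "M", "race": "White", "frl": False, "lep": False, "suspensions": 2},
]

def count_suspensions(rows, **filters):
    """Sum suspensions for students matching every supplied filter.
    Because each student is a single row, any combination of filters
    still counts each student exactly once."""
    return sum(r["suspensions"] for r in rows
               if all(r[k] == v for k, v in filters.items()))
```

The upside is correct counts under any filter combination; the downside, as described above, is dragging all 100,000 rows through the report.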

Next, I tried creating calculated members in the cube itself for each student attribute: total number of males/females suspended, total number of Asians/Hispanics/Whites suspended, total number on Free/Reduced Lunch suspended, etc.  This got the number of rows back down to a manageable size, and the report sped back up to about 2-3 seconds.  I thought I was home free, but then I started testing and discovered another issue.  If somebody filters to see, say, Asian females on Free/Reduced Lunch, it's possible for a student to show up three times, because she is counted once as Asian, once as female, and once as Free/Reduced Lunch -- and with pre-summed totals there's no way to filter that out, as far as I can tell.
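The double-counting problem with pre-summed totals can be shown in a few lines (a hypothetical Python illustration with made-up flag names):

```python
# One student who is Asian, female, AND on Free/Reduced Lunch.
student = {"race": "Asian", "gender": "F", "frl": True, "suspensions": 1}

# Pre-summed "calculated member" style totals, one per attribute value:
asian_total  = student["suspensions"] if student["race"] == "Asian" else 0
female_total = student["suspensions"] if student["gender"] == "F" else 0
frl_total    = student["suspensions"] if student["frl"] else 0

# Combining the totals counts her once per matching attribute...
summed = asian_total + female_total + frl_total

# ...whereas the correct per-student count is just 1.
distinct = student["suspensions"]
```

Once the per-student identity is summed away, the separate totals can no longer tell "one student matching three filters" apart from "three students matching one filter each".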

I really don't see any way around pulling in each individual kid, which takes me back to about 100,000 rows.  Am I missing something? Is there a better way to do this?  Is there a trick to organizing my cube or the dimension attributes that I don't know about?  Why would adding an attribute from a record that is already "accounted for" add so much extra processing time?  Is it simply a known limitation that processing this many records won't perform well?
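For what it's worth, there is a middle ground between one row per student and one pre-summed total per attribute: aggregate by the *combination* of attribute values.  This is not from the original post, just a hedged sketch of the idea in Python with made-up data.  With, say, 2 genders, a handful of races, and a few yes/no flags, each school produces at most a few dozen combination rows rather than one row per student, and any filter combination still counts each student exactly once:

```python
from collections import Counter

# One entry per student: (school, gender, race, frl). Sample data only.
students = [
    ("Lincoln HS", "F", "Asian", True),
    ("Lincoln HS", "F", "Asian", True),
    ("Lincoln HS", "M", "White", False),
]

# Collapse students into one row per distinct attribute combination.
combo_counts = Counter(students)

# Filtering to Asian females on Free/Reduced Lunch: sum the matching
# combination rows -- no double-counting, and far fewer rows to process.
total = sum(n for (school, g, race, frl), n in combo_counts.items()
            if g == "F" and race == "Asian" and frl)
```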

I've included a screenshot below to give you an idea of what my cube looks like, and the ridiculous number of calculated summary measures I've created trying to fix this...

