SSAS Processing Problems

By now, some of you may have come across this error when processing your dimensions:

File system error: a FileStore error from WriteFile occurred. Physical file: \\?\some path to a file. Logical file:….

At the core of this error is the fact that one of the asstore, bstore, kstore, or string.data files for your dimension is hitting the 4GB limit and can no longer be written to by SSAS.

[image]
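If you are not sure which store is the offender, a quick way to check is to scan the dimension's folder under the SSAS Data directory for store files approaching the 4GB mark. Below is a minimal sketch; the DATA_DIR path is an assumption for a default instance and will differ on your server.

```python
import os

# Assumption: default Data directory for the instance; adjust to your install.
DATA_DIR = r"C:\Program Files\Microsoft SQL Server\MSAS13.MSSQLSERVER\OLAP\Data"
LIMIT = 4 * 1024 ** 3  # the 4GB string store limit

# Walk the Data directory and report any store files close to the limit.
for root, _dirs, files in os.walk(DATA_DIR):
    for name in files:
        if name.endswith((".asstore", ".bstore", ".kstore", ".string.data")):
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            if size > 0.9 * LIMIT:  # within 10% of the limit
                print(f"{size / LIMIT:.0%} of limit: {path}")
```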

This is a limit that has to be worked with, and there are several ways to manage it.  To understand how to work within this limit, it is important to know how you may be running into it, and that happens in one of two ways:

  1. Exceeding the limit via one of your temporary string stores during processing of your dimension object
  2. Exceeding the limit via one of the final string stores of the dimension object

It is important to understand which of these two scenarios is causing your issue, as the resolutions for each are different.  In the first scenario, where it is a temporary store problem, there are several options to streamline the processing of your dimension and get around this error.

First Option: Check the value of the ProcessingGroup property on your dimension (as shown in the image below).  ByTable loads the entire dimension table into memory via a table scan query – if you have a large table, then this could be your issue. Changing to ByAttribute will allow the dimension processor to issue distinct queries by attribute and, so long as you do not have a single attribute that goes over 4GB (large string values, etc.), this is your potential resolution.

[image]
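If you prefer to flip this setting outside of the designer, the ProcessingGroup property is also reachable through AMO. Here is a rough sketch, assuming the pythonnet package and the AMO client library (Microsoft.AnalysisServices) are installed; the server, database, and dimension names are placeholders for your own.

```python
import clr

# Assumption: pythonnet is installed and the AMO assembly is available.
clr.AddReference("Microsoft.AnalysisServices")
from Microsoft.AnalysisServices import Server, ProcessingGroup  # import after AddReference

srv = Server()
srv.Connect("Data Source=localhost")  # placeholder server name

# Placeholder database/dimension names - substitute your own.
db = srv.Databases.FindByName("MyOlapDb")
dim = db.Dimensions.FindByName("MyLargeDimension")

print("Current ProcessingGroup:", dim.ProcessingGroup)
dim.ProcessingGroup = ProcessingGroup.ByAttribute  # per-attribute distinct queries instead of a table scan
dim.Update()  # push the change back to the server

srv.Disconnect()
```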

Third Option: There are a couple of threads out there and one or two posts that mention modifying PropertyBufferRecordLimitEnabled in the msmdsrv.ini file.  This is one potential way of resolving your issue, particularly if you are hitting the limit because of string keys or the way your attribute hierarchies are built (or, for shame, not built).  Changing this setting to 1 will push down the size of the temporary records buffer, making it less likely that you will hit the 4GB limit for temporary files during processing.

[image]
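For what it's worth, msmdsrv.ini is just an XML file, so the change can be scripted. The sketch below assumes a default config path (yours will differ) and searches the whole file for the element rather than assuming where it lives; keep a backup and restart the Analysis Services service afterwards.

```python
import shutil
import xml.etree.ElementTree as ET

# Assumption: default config path for the instance; adjust to your install.
INI_PATH = r"C:\Program Files\Microsoft SQL Server\MSAS13.MSSQLSERVER\OLAP\Config\msmdsrv.ini"

tree = ET.parse(INI_PATH)  # msmdsrv.ini is XML under the covers
root = tree.getroot()

# Look for the setting anywhere in the file rather than assuming its exact location.
nodes = list(root.iter("PropertyBufferRecordLimitEnabled"))
if not nodes:
    print("Setting not found - it may need to be added manually.")
else:
    for node in nodes:
        print("Current value:", node.text)
        node.text = "1"
    shutil.copy2(INI_PATH, INI_PATH + ".bak")  # keep a backup of the original
    tree.write(INI_PATH, encoding="utf-8", xml_declaration=True)
    # Restart the Analysis Services service for the change to take effect.
```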

Fourth Option: Check your hierarchies – this is probably the most important!  Processing time and memory usage during dimension processing depend on the number of members and the number of attribute relationships.  During processing of a particular attribute, all dependent attributes are held in memory – if you have no attribute relationships, then all members must be kept in memory.  If you can build cascading relationships, then each dependent pair is processed in turn, and processing can therefore keep moving incrementally and efficiently.
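To get a quick read on how flat the dimension currently is, you can enumerate each attribute and count its attribute relationships via AMO. This sketch makes the same assumptions as the earlier one (pythonnet plus the AMO assembly, placeholder server and object names).

```python
import clr

clr.AddReference("Microsoft.AnalysisServices")  # assumes the AMO assembly is installed
from Microsoft.AnalysisServices import Server  # import after AddReference

srv = Server()
srv.Connect("Data Source=localhost")  # placeholder server name

dim = srv.Databases.FindByName("MyOlapDb").Dimensions.FindByName("MyLargeDimension")

# In a flat design, all relationships hang off the key attribute and the other
# attributes show zero; cascading designs spread the counts out.
for attr in dim.Attributes:
    print(f"{attr.Name}: {attr.AttributeRelationships.Count} attribute relationship(s)")

srv.Disconnect()
```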

Now, if you are facing the second scenario where it is a physical limit of the final output, not the temp processing object, your options are, unfortunately, few.

In this case, only one comes to mind (for an easy resolution):

Your One Option: If you have been processing your dimension using Process Update, occasionally process your dimension using Process Full to clear out old data in any of the offending files. In a Process Update, the string data files are appended to rather than overwritten in order to improve processing time, and thus they can grow out of hand if the dimension is frequently processed this way.  You can also handle this by doing a Process Clear and forcing a clean slate for the dimension files.
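If you want to fold this into a scheduled maintenance job rather than doing it by hand in Management Studio, the same AMO pattern works here too (same assumptions as the sketches above: pythonnet, the AMO assembly, placeholder names). Keep in mind that a Process Full on a dimension leaves dependent measure group partitions unprocessed, so plan to reprocess the cube afterwards.

```python
import clr

clr.AddReference("Microsoft.AnalysisServices")  # assumes the AMO assembly is installed
from Microsoft.AnalysisServices import Server, ProcessType  # import after AddReference

srv = Server()
srv.Connect("Data Source=localhost")  # placeholder server name

dim = srv.Databases.FindByName("MyOlapDb").Dimensions.FindByName("MyLargeDimension")

# Process Full rebuilds the dimension stores from scratch, reclaiming the space
# that repeated Process Updates have appended to the string files.
dim.Process(ProcessType.ProcessFull)

# Alternatively, clear the stores first and then rebuild explicitly:
# dim.Process(ProcessType.ProcessClear)
# dim.Process(ProcessType.ProcessFull)

srv.Disconnect()
```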

If that is not your issue, then unfortunately you will have to look at several things:

  1. Your chosen data type for the offending member – unicode types are easy targets, and, in some cases, may have easy resolutions
  2. Is this a necessary member?  You may find that you do have the option to remove this member from the cube and make it available via drillthrough or report actions on the cube that pull the data from the relational source
  3. ROLAP – not the most appealing, but until the 4GB limit is removed, and if that member is necessary, this may be your only choice

 
