BIM is one of the biggest catchphrases in the Australian construction industry, with many players claiming to have BIM capabilities. Over the last couple of months I have been thinking about this quite intensively. There are organisations that have created systems to rate BIM capability, but I don't believe some of the areas in their rating systems are really a good measure of it. How is the ability to set up IT infrastructure a rated item? I have been asking myself: how do you confirm the quality of your BIM deliverables? And how can I set up systems that enable Fulton Trotter to meet current and future BIM deliverables?

The quality of your model becomes more critical the more you depend on it to generate your deliverables, and more critical still when you start collaborating with others using OPENBIM standards. We currently use our model to produce the majority of our drawings with little 2D intervention. We also use applied data to create the majority of the notes on our drawings and to form the basis of our specification. Even this 'Lonely BIM' solution requires a high-quality model just to meet our internal standards. We currently have a manual process for checking the quality of the data and reviewing the drawings, which remain the deliverable. This process is time consuming and can still miss small notes that have the potential to be very costly during construction if they contain a mistake.

A few months ago I started testing 5D BIM processes with a couple of Quantity Surveyors. One QS in particular used Solibri Model Checker to determine the quality of the model before loading the file into CostX. Although the test file had been put together in haste, SMC came back with some interesting results from its rule-based Model Quality Checking. There were a number of issues with my model, and in essence it demonstrated the potential for IFC files that have only been checked manually to go out the door not up to standard. I have spoken at length with the QS about this process, and my opinion is that they shouldn't have to undertake that test. Shouldn't it be the responsibility of the sender to run the checks and confirm that the model is up to standard? Where does the line of BIM quality lie? Does a sender of information, as part of their capability, have a process in place to reduce the number of non-conformances in a model? How many errors in a model are acceptable? Throughout the collaboration process a model will never really be in a finished state, so you would expect some errors, and these would progressively be resolved as the model is further developed. The biggest challenge is drawing the line in the sand: at what point in the process is an error acceptable?

Over the last couple of years we have used Solibri Model Viewer to check that the models leaving the office are a reasonable representation of the geometry created in our authoring software, ArchiCAD. We handled coordination within ArchiCAD, as the projects on which we collaborated with consultants in IFC were small and manual coordination in ArchiCAD was achievable. This year we began working on a significantly larger project, and it made sense to undertake the coordination in dedicated clash-detection software. We chose Solibri Model Checker for this task, as it can perform not only clash detection but Model Quality Checking as well.
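At its core, clash detection asks whether two elements occupy the same space. Tools like SMC work on real geometry with tolerances and rulesets, but the essential idea can be sketched with nothing more than axis-aligned bounding boxes. This is a conceptual illustration only, not how Solibri actually implements it:

```python
# Conceptual sketch of clash detection: two elements "clash" when their
# axis-aligned bounding boxes overlap on all three axes. Real clash
# detection uses actual geometry and tolerances; this only shows the idea.

def boxes_clash(a, b):
    """a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)) in metres."""
    (amin, amax), (bmin, bmax) = a, b
    # Overlap must hold on every axis simultaneously.
    return all(amin[i] < bmax[i] and bmin[i] < amax[i] for i in range(3))

# Hypothetical elements: a services duct running through a beam zone.
duct = ((0.0, 0.0, 2.4), (5.0, 0.4, 2.8))
beam = ((2.0, -1.0, 2.6), (2.3, 3.0, 3.0))
print(boxes_clash(duct, beam))  # True: the duct passes through the beam
```

Real checkers refine this with precise geometry, severity categories and per-discipline rules, but every one of them starts by filtering candidate pairs with a cheap overlap test like this.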

Fulton Trotter now relies heavily on the data and geometry in the model as a deliverable. We can use customised Rules within SMC to perform all of our data audits, meaning that not only will the models behind our documentation deliverables be of a higher quality, but the models we use to collaborate with others will also be of a higher quality and more reliable.
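The principle behind a rule-based data audit is simple: every rule is a check applied to every element, and anything that fails is reported. The sketch below illustrates that principle with hypothetical element records and rules; it is not Solibri's rule engine or an IFC parser, just the shape of the idea:

```python
# Minimal sketch of rule-based model data auditing, in the spirit of
# SMC's Model Quality Checking. Element records and property names here
# are hypothetical stand-ins, not Solibri's actual rules or IFC data.

def require_property(prop):
    """Rule factory: the element must carry a non-empty value for `prop`."""
    def rule(element):
        if not element.get(prop):
            return f"{element.get('name', '?')}: missing '{prop}'"
        return None  # no issue
    return rule

RULES = [
    require_property("fire_rating"),
    require_property("material"),
]

def audit(elements, rules=RULES):
    """Run every rule over every element; return the list of issues found."""
    issues = []
    for element in elements:
        for rule in rules:
            issue = rule(element)
            if issue:
                issues.append(issue)
    return issues

walls = [
    {"name": "Wall-01", "material": "Concrete", "fire_rating": "FRL 90/90/90"},
    {"name": "Wall-02", "material": "Concrete"},  # fire_rating not applied
]
print(audit(walls))  # -> ["Wall-02: missing 'fire_rating'"]
```

The value of encoding audits this way is that the same checks run identically every time, so the manual review described earlier becomes a review of a short issue list rather than of every note on every drawing.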

The timing of the purchase of SMC is perfect for us as a practice, as one of the new features in ArchiCAD 18 is built-in support for BCF files through enhancements to the Mark-Up Tool. This means ArchiCAD users can undertake model audits in Solibri and bring the results directly into a Graphisoft-supported feature rather than relying on a 'plug-in' solution.
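Under the hood a BCF file is just a zip container of small XML documents, one folder per issue, which is why it travels so easily between checking and authoring tools. The sketch below writes a one-topic container using only the Python standard library; the layout (a `bcf.version` file at the root and a `markup.bcf` inside a folder named by the topic GUID) is my reading of the BCF 2.0 format, so verify the output against the buildingSMART schema before relying on it:

```python
# Hedged sketch: a one-topic BCF 2.0 container built with the standard
# library. The layout assumed here (bcf.version at the root, one folder
# per topic GUID holding markup.bcf) follows the BCF 2.0 format as I
# understand it; validate against the buildingSMART schema before use.
import uuid
import zipfile

def write_bcf(path, title, description):
    """Write a minimal single-topic BCF zip; return the topic GUID."""
    guid = str(uuid.uuid4())
    version_xml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
                   '<Version VersionId="2.0"/>')
    markup_xml = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        f'<Markup><Topic Guid="{guid}">'
        f'<Title>{title}</Title>'
        f'<Description>{description}</Description>'
        '</Topic></Markup>'
    )
    with zipfile.ZipFile(path, "w") as bcf:
        bcf.writestr("bcf.version", version_xml)
        bcf.writestr(f"{guid}/markup.bcf", markup_xml)
    return guid

# Hypothetical issue raised by a model audit:
guid = write_bcf("audit_issues.bcfzip", "Missing fire rating",
                 "Wall-02 has no FireRating property applied.")
```

The point of the sketch is the shape of the exchange: because the container is this simple and openly specified, an audit finding in Solibri can arrive in the ArchiCAD Mark-Up Tool as a native issue rather than a screenshot in an email.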

The purchase of SMC in the last month has reinforced my thinking that a quality BIM process can't be achieved without a solution like Solibri Model Checker to validate the BIM. All parties that issue files should have a quality-checking component in their BIM process, especially as collaborating partners are going to rely on the information you send out. The challenge will be determining and informing your team which areas do not yet conform, or having a system in place that explains which areas should be expected to comply at particular stages of the project. Once such a standard is developed, 'BIM competence' could be determined by checking the quality of the deliverable. I am convinced that Solibri Model Checker is the best solution for this quality-checking task. I would even suggest Autodesk users obtain a licence of SMC to perform this IFC-based checking.

I hope to write in more detail in the coming months as I spend more time pulling the software apart and finding ways to get more from this incredible product. 
