Objects’ unique coding – challenges and how to overcome them

Raising the digitalisation level requires changes.

No change comes without additional hassle and effort.

Objects’ unique coding gives many possibilities but also has requirements to fulfil.

In my last article, I presented the benefits of lifting your model to a higher level and assigning each object a unique classification code.

Today, I want to focus on the flip side – the challenges and why it does not come seamlessly. That way, you’ll have the full picture for evaluation.

Challenges occur in each project phase, so I’ll cover them in order. In the end, I’ll give some tips on how to manage them and my personal opinion about the whole process.


Challenges during design

The first challenge is creating valuable classification data.

Detailed design is the time when we enrich models with most of the data and when changes occur constantly. This is also a stage when the designers create a full classification code – just before model development is ready for construction.

It is also the stage where design software shows its absolute lack of support for users creating quality data.

Harsh words. Let me elaborate on that.

I know two approaches to creating classification codes:

  1. Create a full classification string as one object property.
  2. Split the code into multiple logical pieces (defined by object class, system, type, instance, etc.) and create a composite property.
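To make the second approach concrete, here is a minimal sketch of a script that assembles a full code from its logical pieces. The field names and the `=system-type/instance` pattern are hypothetical, loosely mirroring the TFM-style example shown later in this article, not any particular standard:

```python
# Sketch: assemble a full classification code from its logical pieces.
# The "=system-type/instance" pattern is a hypothetical example
# modelled on the TFM-style code shown later in this article.

def build_code(system: str, object_type: str, instance: int) -> str:
    """Compose the full classification string from its parts."""
    return f"={system}-{object_type}/{instance:03d}"

code = build_code("360.001", "RFA.470T", 1)
print(code)  # =360.001-RFA.470T/001
```

In practice, such a function would run inside the modelling software’s scripting environment and write the result into the object’s property field.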

Both of them fail miserably when employed in the process.

Not because the methods are flawed. But because it is too easy to make an error.

The values of these properties must be filled in either manually (error-prone) or via a script (complicated). Auto-generating the values is not possible inside the software stack.

In the end, designers – many of whom are working with objects’ unique coding for the first time – have to either write long property codes by hand or follow complicated script routines developed by somebody who might be only partially involved in their project.

All of that results in error-ridden designs and IFC exports. Which leads us to the next challenge.

Challenges during coordination

IFC models are exported and await clash control and data validation before they move on to the construction site. The BIM coordinators’ job is to align the model with the requirements set in the EIR. Objects’ unique codes and classifications are yet another data set that has to be checked and managed. Hence the challenges.

More data validation

Having more required data fields means incorporating them into the BIM coordination process. Validating data might also not be as straightforward as it seems. Many popular BIM coordination tools are great at checking clashes but lack basic functionality for validating data (yes, I’m talking about you, Navisworks).

Since classification values are neither numerical nor boolean but text, it is not straightforward to check whether they are correct. The easiest way would be to create a clever regex rule, but I haven’t yet seen data validation software that supports it.
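For illustration, such a regex check could be scripted outside the coordination tool. The pattern below is an assumption modelled on the TFM-style code shown later in this article; a real rule would follow the project’s EIR:

```python
import re

# Sketch: validate a TFM-style code of the form "=381.502-RFA.470T/001".
# The exact pattern is an assumption based on the example shown later
# in this article; adapt it to your project's requirements.
CODE_PATTERN = re.compile(r"^=\d{3}\.\d{3}-[A-Z]{3}\.\d{3}[A-Z]?/\d{3}$")

def is_valid_code(code: str) -> bool:
    """Return True only if the code matches the expected syntax exactly."""
    return CODE_PATTERN.fullmatch(code) is not None

print(is_valid_code("=381.502-RFA.470T/001"))  # True
print(is_valid_code("381.502-RFA.470T"))       # False (missing "=" and "/nnn")
```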

Therefore, you have to create a set of data validation rules that covers every possibility.

Maintaining data quality

One successful data check won’t suffice. Codes are complex, and because of that, it is hard to maintain objects’ unique coding throughout the long design and construction process.

I see three data quality attributes (learn what they are) that may be especially challenging to maintain in the long run: format, consistency and uniqueness.

Format

Unique codes are usually long and complicated. The table below shows some examples for a type element – an air handling unit. Bear in mind that this is only for type objects – not instances – and it is already pretty complicated. Remember also that BIM modelling software offers property fields as free-text fields with no spelling or value control. Just think how many mistakes can be made in the value format.

| Classification name | Classification number |
| --- | --- |
| Uniclass 2015 | Ss_65_80_37: Air handling units |
| Omniclass | 23 37 13 13: Modular indoor air handling units |
| Norwegian TFM | 360.001: Ventilation system. IVZ.001: Air handling units (Components) |
| Swedish Coclass | VVS.30.AC: Air handling units (Functional System). KP.CD.AC: Air handling units (Components) |
| CCI Classification | M-31.60.10: Air handling units (Mechanical Systems → HVAC Systems → Air Distribution Equipment) |

Example classification values for an air handling unit. As I don’t know all of them, ChatGPT helped me with examples, so errors may occur.

Consistency

Data consistency means that the same value represents the same information across systems. Let’s take the example classification value M-31.60.10 from the table above. It describes an air handling unit, and this code has to be exactly the same in every model, FM software, database, system schema and maintenance document. The same number is also written on the element’s nameplate fixed to the unit.

And since we have so many different and separated silos of information, it is difficult to manage them and have everything in sync all the time.
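As a sketch, keeping silos in sync starts with simply diffing the code sets each system knows about. The source names and values below are hypothetical stand-ins for real exports:

```python
# Sketch: compare the codes known to each information silo and report
# mismatches. The source names and code values are hypothetical.
model_codes = {"M-31.60.10", "M-31.60.20"}  # codes found in the IFC models
fm_codes = {"M-31.60.10", "M-31.60.30"}     # codes registered in the FM system

missing_in_fm = model_codes - fm_codes      # modelled but never registered
missing_in_model = fm_codes - model_codes   # registered but not modelled

print(sorted(missing_in_fm))     # ['M-31.60.20']
print(sorted(missing_in_model))  # ['M-31.60.30']
```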

This is consistent data - the same value represents the object in the model, in a database and on-site.

Uniqueness

Thousands of elements across multiple models have to keep unique instance codes throughout years of the construction process. Unfortunately, modelling software can’t check whether each element has a unique classification value, so we have to create separate processes and solutions to check and fix that. Even so, errors happen. It’s a lot of objects, in many models, with hundreds of stakeholders.
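A separate uniqueness check can be as simple as counting code occurrences across all model exports. This is only a sketch with hypothetical file names and codes; in a real pipeline the lists would be extracted from the IFC files:

```python
from collections import Counter

# Sketch: find duplicate classification codes across several model
# exports. File names and code lists are hypothetical stand-ins.
codes_per_model = {
    "ventilation.ifc": ["=360.001-RFA.470T/001", "=360.001-RFA.470T/002"],
    "plumbing.ifc": ["=360.001-RFA.470T/001"],  # duplicate across models
}

# Count every code once per occurrence, regardless of which model it is in.
counts = Counter(code for codes in codes_per_model.values() for code in codes)
duplicates = [code for code, n in counts.items() if n > 1]
print(duplicates)  # ['=360.001-RFA.470T/001']
```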

Challenges during construction

In the construction phase, the codes are already created and validated, so contractors use them for multiple purposes:

  1. Ordering materials
  2. Checking element requirements
  3. Delivering MOM documentation (one set of documents can be assigned to various object codes)
  4. Creating nameplates to stick on the element on-site

I see two main challenges with using unique object coding on the construction site:

  1. Misspelling or misplacing nameplate with code
  2. Late design changes

The first one is an outcome of a lack of understanding of the coding. Walking around construction sites, I have found nameplates installed on the wrong element, nameplates with a shortened version of the correct code, or nameplates missing altogether.

The building owner requires the correct nameplate on specified elements, so in the end, this is yet another point on the checklist for the Site Manager to validate while accepting the scope of work from the contractor.

However, by far the biggest challenge in this phase is late design changes. This influences each step of the construction process.

  1. Orders are placed by classification number – if a number is deleted or changed, incorrect materials can arrive on site.
  2. Products are assigned to the codes, so product acceptance happens via classification codes. If an element’s code changes, it drops out of the loop, causing delays.
  3. Documentation is connected to the location, type and instance codes – changing or deleting a code results in missing MOM documentation.
  4. Lastly, the most cumbersome: if the change happens so late that the element is already mounted on-site with a nameplate, another one has to be produced and installed one more time. And this is a lot of additional work!
Incorrect syntax of the Norwegian TFM classification used on a construction site (it should be =381.502-RFA.470T/001). Since the syntax is very complicated, errors can easily happen.

Existing solutions

I have complained a lot now. These are real challenges, but the industry is trying to tackle them. By employing the right tools and setting up good processes, we can minimise the burden.

Design

We can’t do much about the modelling software itself, but we can try to support or enhance its capabilities. Employing third-party solutions can make up for its inability to control the quality of classification properties.

The main challenge is to create valid and unique classification codes. I see two viable solutions:

  1. Creating a script that codes types according to their category and then gives each instance within the type a unique number.
  2. Using separate database software to create and manage the codes (with uniqueness constraints) and send them to the modelling software via API or other imports (depending on the chosen software).
Employing a database solution in your design workflow can help with keeping values unique.
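As a minimal sketch of such a database solution, a uniqueness constraint lets the database itself reject duplicate codes. Table and column names here are hypothetical:

```python
import sqlite3

# Sketch: a code registry with a database-enforced uniqueness constraint.
# Table and column names are hypothetical; PRIMARY KEY guarantees that
# no two rows can share the same classification code.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE codes (code TEXT PRIMARY KEY, element_guid TEXT)")

db.execute("INSERT INTO codes VALUES (?, ?)", ("=360.001-RFA.470T/001", "guid-a"))
try:
    # Attempting to register the same code for another element fails.
    db.execute("INSERT INTO codes VALUES (?, ?)", ("=360.001-RFA.470T/001", "guid-b"))
except sqlite3.IntegrityError:
    print("duplicate code rejected")
```

The point of the design is that uniqueness is enforced centrally, at write time, instead of being discovered later during coordination checks.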

Coordination

To minimise the additional time spent on checking object coding, I would recommend using as many templates as possible and setting up automated checking processes.

There are also ways to monitor data consistency by connecting the coding database and the IFC models into one dashboard. A live connection with all sources might be challenging, but feeding Power BI with regularly generated Excel exports should be possible.
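A minimal sketch of that merge step, assuming the two exports have already been read into data frames (names and values here are hypothetical; in practice they would come from `pd.read_excel`):

```python
import pandas as pd

# Sketch: merge regular exports from the coding database and the IFC
# models into one table a dashboard can consume. Column names and
# values are hypothetical; real data would come from pd.read_excel.
db_export = pd.DataFrame({"code": ["M-31.60.10", "M-31.60.20"]})
ifc_export = pd.DataFrame({"code": ["M-31.60.10", "M-31.60.30"]})

# indicator=True adds a "_merge" column flagging where each code exists:
# "both", "left_only" (database only) or "right_only" (models only).
status = db_export.merge(ifc_export, on="code", how="outer", indicator=True)
print(status)
```

Anything flagged as `left_only` or `right_only` is an inconsistency to investigate before it reaches the construction site.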

Construction

The biggest fear and disruptor is change. We have to regain control over the change process for critical values.

With that in mind, I would suggest a process change: after the design and codes are done, set up a rigid change regime and allow only a few people in the organisation to change what has been established.

I am aware that this would be a bottleneck for implementing important and justified changes. But with that constraint, we could narrow the changes down to the bare necessities. And I reckon that late changes are the biggest threat to a project’s schedule and bottom line.

My opinion

Having said all that about the possibilities (previous entry), but also the challenges and existing solutions, I allow myself some personal reflections.

I think that both the available software solutions and the understanding of the matter are not mature enough to reap the benefits unique coding gives us in facility management.

To go into more detail:

Software solutions

To manage the construction process, a team needs 50 or more different software tools, and half of them often overlap with one another. To manage objects’ coding, you have to ensure that all the pieces are in sync all the time.

That is impossible.

Most of these tools don’t talk to one another – you are stuck with Excel exports and imports. That requires a heap of scripts, or a full-time position that does nothing but manual syncing.

In my opinion, the software we use in our industry is not good enough when it comes to data management and data interoperability.

Human factor

Unique coding is a process that has been required by some building owners for only a couple of years. It is not yet the industry standard. And if it’s not a required standard, many don’t bother to be on the technological edge and simply are not accustomed to unique coding and the strict processes it requires.

It may also be that I just don’t know a better, more seamless way to manage all that. If you do – let me know by commenting or sending us an email!
