So, you’ve got a great project, likely something in the construction domain, and you’ve reached the point where your customers or your boss want “BIM somehow integrated”. You’ve done your research and found this great IFC standard, which all the BIM tools are supposed to export. Great: you won’t have to handle hundreds of different file formats! Thank God for standards! IFC, despite being approved and published as an ISO standard, is also free and public. That’s what we developers like!
You probably found the documentation for the IFC standard and realised that it defines over a thousand entities (which you might think of as classes) and hundreds of defined types (which you would see as data structs), and that it uses the STEP (ISO 10303-21) serialisation format (not widely used, but interesting and efficient). Now what? You don’t know where to start implementing something like this when it should just be one of many features in your great product. As a .NET developer you started to look for a library which could do this for you, and maybe you found the xbim Toolkit. It doesn’t have the best documentation, but judging from a few code examples, it seems to do the job. Congratulations! You’ve just saved yourself several years of development!
Now, let’s look at the path you’ve likely gone through if your product is cloud-based. You probably started in your web application and put in the 10 lines from the examples page that convert an IFC file into a wexBIM file. You put another 10 lines of JavaScript or TypeScript into your front end, and you had a proof of concept within the hour! Time for a coffee before you show this nice 3D view to the rest of the team and the boss. Lovely! So, BIM is integrated, right?
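For the front-end half, those “10 lines” might look something like the sketch below, using the open-source xbim web viewer (`@xbim/viewer` on npm). The canvas id and the model URL are placeholders for your own page and API, not part of any real deployment:

```typescript
// Browser-side sketch: render a wexBIM file produced by your server.
// "viewer-canvas" and the model URL are illustrative placeholders.
import { Viewer } from "@xbim/viewer";

const viewer = new Viewer("viewer-canvas");      // id of a <canvas> element on your page
viewer.on("loaded", () => viewer.start());       // start the render loop once geometry is in
viewer.load("/models/office-building.wexbim");   // the converted model served by your API
```

That really is the whole proof of concept on the client side, which is exactly why it is so tempting to bolt the conversion straight into the web app.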
Maybe, until you try it with a 500MB model. You tried it on your powerful dev machine and it all worked… but you noticed the fans spinning up. You ran it again and saw that processing the IFC geometry consumes pretty much all the CPU. That’s great, right? It gets there quickly. But hold on: your current deployment only has 1 core and 1.75GB of RAM, so it will clearly not run as smoothly as on your dev beast. You deploy your code, and the cracks start to appear. You get client-side timeout exceptions because the endpoint doesn’t return the result quickly enough. Calls to other endpoints in your application return 503 (Service Unavailable) because the web application has no CPU left to handle the requests. You’ve hit a problem.
Assuming you are using a managed platform for your application (MS Azure, AWS, Google Cloud etc.), you just configure your web app to scale up (more resources), so that processing a single model is faster, and to scale out (more instances serving the same app), so that when one instance is busy processing an IFC model, the other instances can handle regular API requests. This is fine for a demo, but you can already see that you’ll be paying a lot more for cloud resources while only using them for a fraction of the time. You may configure (or implement) some form of auto-scaling, but you can see that processing models inline in your main API web application just doesn’t fit very nicely. You don’t want all this complexity right in the core of your project just for the sake of BIM. It must be offloaded.
Now you need to choose from a plethora of computing services, ranging from plain managed virtual machines (VMs) through AWS Lambda or Azure Functions to a managed Kubernetes service. You’ll need a queuing mechanism (AWS SQS, Azure Queue Storage, RabbitMQ, Apache Kafka etc.) and you’ll need to develop some kind of state persistence to keep track of progress. You may want an event mechanism for continuous progress reporting, and you’ll end up developing some kind of envelope to handle long-running and re-entrant executions as well as system failures. You’ll probably want to plug in centralised logging and alerting to see what is happening (Elasticsearch, Graylog, Seq, App Insights etc.). You may end up with something like this:
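Stripped of the infrastructure, the core of that worker is a queue, a persisted status per job, and a retry envelope. A minimal in-memory sketch (a real system would use SQS/RabbitMQ and a database instead of these structures, and the names here are illustrative):

```typescript
// Sketch of the offloaded-worker pattern: a job queue, a status store,
// and bounded retries. Everything is in-memory and illustrative only.

type JobStatus = "queued" | "processing" | "done" | "failed";

interface Job {
  id: string;
  attempts: number;
  status: JobStatus;
}

const MAX_ATTEMPTS = 3;

// Stands in for a database row per job so the API can report progress.
const stateStore = new Map<string, JobStatus>();

function processQueue(queue: Job[], convert: (id: string) => boolean): void {
  while (queue.length > 0) {
    const job = queue.shift()!;
    job.status = "processing";
    stateStore.set(job.id, job.status);
    job.attempts += 1;

    const ok = convert(job.id);          // stands in for the actual IFC -> wexBIM work
    if (ok) {
      job.status = "done";
    } else if (job.attempts < MAX_ATTEMPTS) {
      job.status = "queued";
      queue.push(job);                   // re-enqueue: the re-entrant envelope
    } else {
      job.status = "failed";             // give up; surface via logging/alerts
    }
    stateStore.set(job.id, job.status);
  }
}
```

Even this toy version hints at the edge cases the text lists: transient failures, re-entrancy, and the need to persist state somewhere the API tier can read it.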
While still running the same simple 10 lines of code at its core, this has evolved into a rather complex service in its own right. It requires maintenance, documentation, infrastructure scaffolding, monitoring… the list goes on.
Going through this process (more than once, in fact) led us to build Flex Flow. You go back to the simplicity of your original proof-of-concept code, and we take the maintenance burden off you. You can let us worry about model processing failures and system efficiency. Set up a few simple calls to our REST API and forget the complexities. You will have more time to concentrate on your business, and when something fails, we’re on hand to fix it. So your new, simpler process might be something like this:
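To make the “few simple calls” concrete, here is a hedged sketch of the client side of such a flow. The base URL, endpoint path and payload fields below are invented for illustration; they are not the documented Flex Flow API:

```typescript
// Illustrative only: the URL, path and field names are assumptions,
// not the real Flex Flow API surface.
const API_BASE = "https://api.example.com/flow"; // hypothetical base URL

interface ConversionRequest {
  method: string;
  url: string;
  body: { inputFormat: string; outputFormat: string; modelName: string };
}

// Build the request that would start an IFC -> wexBIM conversion job.
function buildConversionRequest(modelName: string): ConversionRequest {
  return {
    method: "POST",
    url: `${API_BASE}/jobs`,
    body: { inputFormat: "ifc", outputFormat: "wexbim", modelName },
  };
}

// You would then send it and poll for completion, e.g.:
//   const req = buildConversionRequest("office.ifc");
//   const res = await fetch(req.url, { method: req.method, body: JSON.stringify(req.body) });
//   ...poll the returned job id until its status is "completed"...
```

The point is the shape of the interaction: submit a job, poll (or subscribe to) its status, and download the result, with everything between those steps handled by the service.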
Flex Flow can actually perform many other automated operations and tasks. It’s heavily inspired by the continuous integration (CI) processes and tooling well known from software development. You can define the pipeline, inputs, custom scripts etc. to run automated validation, classification, data extraction, clash detection and other useful tasks. But, as discussed in this article, you can easily get started with a simple conversion from IFC to 3D online visuals.
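Purely as an illustration of the CI analogy (the shape and field names below are invented, not Flex Flow’s actual pipeline schema), such a pipeline definition could be modelled like this:

```typescript
// Illustrative only: an invented shape mirroring the CI-pipeline idea,
// not Flex Flow's real configuration format.
interface PipelineStep {
  task: string;                       // e.g. "validate", "convert", "extract-data"
  params?: Record<string, string>;
}

interface Pipeline {
  name: string;
  input: string;                      // the uploaded IFC model
  steps: PipelineStep[];
}

const pipeline: Pipeline = {
  name: "ifc-to-web",
  input: "office-building.ifc",
  steps: [
    { task: "validate" },
    { task: "convert", params: { target: "wexbim" } },
    { task: "extract-data", params: { format: "csv" } },
  ],
};
```

As with a CI configuration, the value is that the sequence of tasks is declared once and executed reliably on every model you push through it.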
So, if you think you might want to give our Flow API a try, or chat to us about an integration with your current software, get in touch with martin.cerny@xbim.net.