There are a lot of moving parts to robotic additive manufacturing (AM), literally and figuratively. Path planning, motion control, and the complex dynamics of material deposition all have to come together to make a part. As you might imagine, bringing all of these elements into sync requires expertise and, often, a lot of trial and error. I know this all too well from personal experience…
Ohh, and that’s usually when it happens. A singularity, and not the one where the robot becomes sentient, but the one where the kinematics break down and the robot gets stuck. If you work in AI safety, I apologize for the misleading title. Anyway, I digress. My point is that robotic AM is complex, and it is easy to miss something.
To solve this challenge, we have spent the last year developing new artificial intelligence (AI) based tools to automate and augment the often tedious task of robotic programming. Today we are starting to roll out the first embedded models in AdaOne, trained on 3D geometry data, to help users identify efficient path planning strategies for additive manufacturing and machining.
Using AI where it makes sense
It’s important to think carefully about where AI can add real value in manufacturing today. As an example, the potential of real-time feedback loops has been discussed since before I started working in additive almost 10 years ago. Still, we’re not quite at a point where we can fully audit and certify real-time AI systems, especially in high-stakes applications such as Wire Arc Additive Manufacturing (WAAM), where certification is critical. Here, there is still a threshold to overcome.
AI-driven decisions need to be fully transparent and auditable. That’s why we believe the best place for AI is in assisting users during the CAM process—helping to identify potential issues with 3D models, tool paths, and process parameters before they become problems in production. The user experience of CAM software has, until recently, largely been designed by experts, for experts. If we think about the increasingly complex machines of the future, it will become impossible to always have an expert on hand to operate them.
Analyzing geometries and optimizing paths
Over the last year, we have trained several diffusion models on 3D geometry data with the aim of assisting user decision-making. We are now excited to put them to the test.
Diffusion models make it possible to learn functions on 3D shapes using neural networks. They are particularly nice because they can handle both different surface representations (e.g. meshes, tool paths, or point clouds) and varying shape resolutions. When we use them together with more traditional computational geometry algorithms, for example curvature analysis or part segmentation, we can automate a significant part of the path planning process.
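To make the traditional side of that pairing a bit more concrete, here is a minimal sketch of one classic computational-geometry building block: a PCA-based “surface variation” measure that flags high-curvature regions of a point cloud. This is a generic textbook technique, not AdaOne’s implementation, and the flat patch and ridge below are synthetic data made up for illustration.

```python
import numpy as np

def surface_variation(points, k=12):
    """PCA-based curvature proxy: smallest eigenvalue of the local
    covariance divided by the eigenvalue sum. Near 0 on flat regions,
    noticeably larger near edges and highly curved areas."""
    points = np.asarray(points, dtype=float)
    out = np.empty(len(points))
    for i in range(len(points)):
        # k nearest neighbors by Euclidean distance (brute force for clarity)
        dists = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        lam = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))
        out[i] = lam[0] / lam.sum()
    return out

# A flat patch vs. the same patch folded into a sharp ridge at x = 0.5.
xs, ys = np.meshgrid(np.linspace(0, 1, 15), np.linspace(0, 1, 15))
flat = np.c_[xs.ravel(), ys.ravel(), np.zeros(xs.size)]
ridge = np.c_[xs.ravel(), ys.ravel(), np.abs(xs.ravel() - 0.5)]
print(surface_variation(flat).max())   # ~0: planar everywhere
print(surface_variation(ridge).max())  # clearly positive near the crease
```

Simple per-point scores like this are what downstream steps (segmentation, slicing-strategy selection) can then consume, whether they come from classical analysis or a learned model.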
To start with, I’d like to mention two new experimental features in AdaOne which I am particularly excited about.
- Automatic part type detection. The first new feature automatically detects different part types directly on import — whether it’s a propeller, a curved pipe, or something else entirely — and then suggests an optimal path planning strategy. It even works on model assemblies.
- Automatic part segmentation and slicing. The second feature is a new experimental slicing method we’ve decided to call Abanico (Spanish for “folding hand fan”). It performs a geometric analysis of the part geometry to automatically split it into multiple pieces and perform optimal path planning on each. Depending on the part geometry, the result will look more or less like a fan, hence the name.
These two features go a long way toward removing friction in the first steps of path planning. Operators can spend their time on final adjustments instead of trial and error during slicing. Combine this with our existing interactive real-time reachability and singularity checks, and it’s possible to cut programming time down to just a few minutes.
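For readers curious what a singularity check looks like at its very simplest, here is a toy example for a planar two-link arm using the Yoshikawa manipulability measure, which drops to zero as the arm approaches a singular pose. A real six-axis check is considerably more involved; this sketch, including the `SINGULARITY_MARGIN` threshold, is purely illustrative and not AdaOne’s implementation.

```python
import numpy as np

def manipulability(q1, q2, l1=0.4, l2=0.3):
    """Yoshikawa manipulability |det J| for a planar 2R arm.
    For this arm it equals l1*l2*|sin(q2)|, so it collapses to zero
    as the elbow straightens (q2 -> 0): a kinematic singularity."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                  [ l1 * c1 + l2 * c12,  l2 * c12]])
    return abs(np.linalg.det(J))

SINGULARITY_MARGIN = 0.01  # made-up threshold for this toy example
for q2 in (1.2, 0.4, 0.01):
    m = manipulability(0.3, q2)
    flag = "  <- near-singular, replan" if m < SINGULARITY_MARGIN else ""
    print(f"q2={q2:.2f}  manipulability={m:.4f}{flag}")
```

Evaluating a measure like this along every waypoint of a planned path is what lets a CAM tool warn about unreachable or degenerate poses before anything is sent to the robot.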
A glimpse into the future
We are obviously excited to see how far we are able to take AI augmentation within AdaOne and offer a more intuitive and seamless user experience. On that path there are many possibilities to explore.
A key challenge for us has always been the sheer variety of hardware and process combinations in robotic additive manufacturing. One user will have a Fanuc with a CEAD extruder, another a KUKA with a Fronius head, and a third might have decided on a dare that regular printing is for rookies and is now printing upside-down. A one-size-fits-all model doesn’t really apply, and that goes for the embedded AI models too. If we can allow continuous model improvement, AdaOne will be able to evolve and adapt to these diverse setups over time. The software could adapt to new geometries, user preferences, and unique manufacturing conditions.
We are currently focusing a lot of effort on in-process monitoring for traceability purposes — mostly for metal AM. Today, it basically means looking at data to make sure everything happened according to plan. As I mentioned earlier, it feels a bit premature to correct processes in real time when we don’t yet fully understand what’s happening. A natural step would be to predict and prevent potential manufacturing errors by analyzing the tool path ahead of time. Once we master the processes themselves, AI-driven real-time supervision could assist in certifying the production process by ensuring full traceability and quality control during manufacturing.
These are just two of many areas we are working on.
What really excites me is not necessarily the “in your face AI” but finding areas where it can quietly elevate the user experience. I think Bilbo Baggins put it nicely: all that is gold does not glitter.
Emil Johansson is the Chief Product Officer of ADAXIS. He has been working with robotic additive manufacturing for the last 9 years and is on a constant pursuit to make industrial robotics easier to use.