The traditional manual separation of risers from large casting parts presents significant challenges in industrial foundries. This labor-intensive process not only exposes workers to hazardous environments involving heat, fumes, and physical strain but also suffers from inconsistent quality, low production efficiency, and suboptimal surface finish on the cut plane. The inherent geometric variability and complex surfaces of sand-cast or investment-cast components make automated handling particularly difficult. To address these issues, this article examines a robotic cutting system guided by 3D machine vision, focusing on the critical technologies of visual detection and trajectory planning tailored to large casting parts.

The core objective is to replace human-dependent operations with an intelligent automated cell capable of identifying, locating, and precisely cutting risers from castings. The proposed methodology leverages three-dimensional point cloud data acquired from a 3D industrial scanner. This data forms the digital twin of the physical casting part, enabling precise geometric reasoning. The process involves sophisticated point cloud pre-processing to isolate the casting part from its background, a novel point cloud registration technique to locate the cutting features, and a spatial arc-based trajectory generation algorithm to command the robotic arm. This integrated approach aims to achieve high-precision cuts, improved surface quality, and complete automation, thereby enhancing safety and productivity in foundry operations.
1. System Architecture for Vision-Guided Casting Cutting
The automated cutting system is an integration of sensing, computation, and actuation modules. Its primary function is to perceive the unstructured environment containing a large casting part, compute the precise cutting path for its riser, and execute the cut with a robotic manipulator. The overall workflow and data flow are encapsulated within a cohesive system architecture.
The physical system comprises several key components: a six-degree-of-freedom (6-DOF) industrial robotic arm, an end-effector-mounted plasma or gas cutting torch, a high-accuracy 3D camera (such as a laser-stripe or structured-light scanner), and a central industrial PC (IPC) for control and processing. The 3D camera is fixed in a position overlooking the work cell (eye-to-hand configuration) to capture the complete scene containing the casting part. The IPC is connected to both the robot controller and the 3D camera.
The logical workflow begins with data acquisition. The 3D camera scans the work area, generating a dense point cloud representing the surfaces of all objects within its field of view, including the target casting part, the riser to be removed, support structures, and the workbench. This raw scene point cloud is transmitted to the IPC. Subsequently, a series of algorithms process this data. First, pre-processing techniques filter out noise and segment the point cloud to isolate the cluster corresponding to the casting part. Next, a critical step involves registering a pre-defined 3D CAD model or a template point cloud of the ideal casting part onto the scanned data. This registration calculates the precise position and orientation (pose) of the real casting part in the camera’s coordinate system. Using pre-defined markers on the template, the 3D coordinates of the cutting points on the riser are thus identified in the camera frame. A pre-calibrated hand-eye transformation matrix then converts these camera-frame coordinates into the robot’s base coordinate frame. Finally, a trajectory planning algorithm uses these transformed cutting points to generate a smooth, spatially accurate arc path for the robot’s end-effector, which is dispatched to the robot controller for execution. The system architecture ensures a seamless flow from perception to action, central to automating the riser removal process for casting parts.
2. Vision Guidance: From Point Cloud to Cutting Pose
The vision guidance pipeline is the “eyes” of the system. Its accuracy directly determines the success of the cutting operation. This process involves transforming raw sensor data into actionable robotic commands through calibration, filtering, segmentation, and model matching.
2.1 Hand-Eye Calibration: Bridging Vision and Robotics
A fundamental prerequisite for any vision-guided robot system is hand-eye calibration. It establishes the precise geometric relationship between the camera’s coordinate frame and the robot’s base coordinate frame. This static transformation allows the translation of any point measured by the camera into a location the robot can understand and reach. For an eye-to-hand setup where the camera is stationary, the calibration solves for the transformation \( \mathbf{X} \) in the following equation:
$$ \mathbf{P}_{robot} = \mathbf{X} \cdot \mathbf{P}_{camera} $$
where \( \mathbf{P}_{camera} \) is the homogeneous coordinate vector of a point in the camera frame, and \( \mathbf{P}_{robot} \) is the corresponding vector in the robot base frame. The transformation matrix \( \mathbf{X} \) is a 4×4 matrix encompassing rotation and translation:
$$ \mathbf{X} = \begin{bmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0} & 1 \end{bmatrix} $$
This matrix is obtained by using a calibration target (e.g., a checkerboard or a sphere) placed in the scene. The robot moves its end-effector (or a calibration pin) to multiple known positions \( \mathbf{P}_{robot}^i \), and the camera observes the target at those positions, calculating \( \mathbf{P}_{camera}^i \). A set of these correspondences is used in algorithms like Tsai-Lenz or a least-squares solver to compute the optimal \( \mathbf{X} \). Accurate calibration is non-negotiable for precise cutting of casting parts.
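As a concrete illustration, the following Python sketch solves for \( \mathbf{X} \) from paired calibration points with an SVD-based least-squares (Kabsch) fit, one of the solver options mentioned above. The variable names and the example usage are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_rigid_transform(P_cam, P_rob):
    """Least-squares fit of X = (R, t) such that P_rob ≈ R @ P_cam + t.

    P_cam, P_rob: (N, 3) arrays of corresponding points (N >= 3, non-collinear).
    Returns a 4x4 homogeneous matrix X mapping camera frame -> robot base frame.
    """
    cc, cr = P_cam.mean(axis=0), P_rob.mean(axis=0)        # centroids
    H = (P_cam - cc).T @ (P_rob - cr)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
    R = Vt.T @ D @ U.T
    t = cr - R @ cc
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R, t
    return X

# Example: map a measured cutting point from the camera frame to the robot base frame.
# X = fit_rigid_transform(points_camera, points_robot)
# q_robot = (X @ np.append(q_camera, 1.0))[:3]
```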
2.2 Casting Part Point Cloud Acquisition and Pre-processing
The raw 3D scan of the work cell is a cluttered point cloud. The first task is to extract the point cloud belonging solely to the casting part. This involves several sequential steps:
- Plane Segmentation & Removal: Algorithms like RANSAC (Random Sample Consensus) are used to identify and remove the dominant planar surface, which is typically the worktable or floor.
- Pass-Through Filtering: A spatial filter is applied to retain only points within a predefined volume of interest (VOI) where the casting part is expected to be placed, discarding distant background points.
- Statistical Outlier Removal: This filter analyzes the local neighborhood of each point. Points with an average distance to their neighbors beyond a standard deviation threshold are classified as noise (e.g., dust, sparks) and removed, smoothing the point cloud.
- Euclidean Clustering: The remaining points are segmented into distinct clusters based on spatial proximity. The largest cluster, corresponding to the main body of the casting part, is selected for further processing. This step successfully isolates the casting part from potential smaller debris or fixtures.
- Adaptive Curvature Downsampling: To improve computational efficiency for subsequent matching without losing critical shape features, the point cloud is downsampled. An octree-based method divides the space, and within each voxel, surface normals are estimated. A curvature-based criterion retains points in regions of high shape variation (like the riser-base junction) while sparsely sampling flat areas.
The result is a clean, feature-preserved, and manageable point cloud \( \mathbf{S} \) of the target casting part, ready for the localization step.
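The pre-processing chain above maps naturally onto an off-the-shelf point cloud library. The sketch below uses Open3D (Python, recent releases assumed) with units taken to be millimetres; all thresholds are placeholders, DBSCAN stands in for Euclidean clustering, and a plain voxel grid approximates the curvature-adaptive downsampling described above.

```python
import numpy as np
import open3d as o3d  # assumption: Open3D >= 0.13; method names differ in older releases

def extract_casting_cloud(scene, voi_min, voi_max):
    """Isolate the casting part from a raw scene scan (sketch of Section 2.2)."""
    # Pass-through filtering: keep only the volume of interest around the fixture.
    voi = o3d.geometry.AxisAlignedBoundingBox(voi_min, voi_max)
    pcd = scene.crop(voi)

    # Plane segmentation & removal (RANSAC): drop the dominant worktable plane.
    _, inliers = pcd.segment_plane(distance_threshold=3.0, ransac_n=3, num_iterations=1000)
    pcd = pcd.select_by_index(inliers, invert=True)

    # Statistical outlier removal: discard sparse noise such as dust or spatter.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Cluster by spatial proximity (DBSCAN as a stand-in for Euclidean clustering)
    # and keep the largest cluster, assumed to be the casting body.
    labels = np.asarray(pcd.cluster_dbscan(eps=8.0, min_points=30))
    largest = np.argmax(np.bincount(labels[labels >= 0]))
    pcd = pcd.select_by_index(np.where(labels == largest)[0])

    # Downsample and estimate normals; a plain voxel grid approximates the
    # curvature-adaptive scheme described in the article.
    pcd = pcd.voxel_down_sample(voxel_size=2.0)
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=10.0, max_nn=30))
    return pcd
```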
2.3 Three-Point Template Point Cloud Registration (TTPCR)
Locating the exact cutting points on the riser requires matching the scanned casting part \( \mathbf{S} \) to a known reference model. This is achieved through point cloud registration. We propose a Three-Point Template Point Cloud Registration (TTPCR) method for robustness and accuracy.
First, an accurate 3D model (template point cloud \( \mathbf{M} \)) of the casting part without the riser (or with the riser in its nominal state) is created offline. Three non-collinear key points \( \mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3 \) are manually selected on this template, precisely defining the desired cutting contour on the riser. For a cylindrical riser, these points would lie on the circular cut line.
The registration process finds the optimal rigid transformation \( \mathbf{T} \) that aligns the template \( \mathbf{M} \) to the scene data \( \mathbf{S} \). This is typically a two-stage process:
- Coarse Registration (Feature-based): Algorithms like FPFH (Fast Point Feature Histograms) are used to extract local geometric features from both \( \mathbf{M} \) and \( \mathbf{S} \). Corresponding features are matched, and a coarse alignment is estimated using methods like RANSAC or SAC-IA (Sample Consensus Initial Alignment).
- Fine Registration (ICP): The Iterative Closest Point (ICP) algorithm refines the coarse alignment. It iteratively minimizes the distance between corresponding points in \( \mathbf{M} \) and \( \mathbf{S} \), converging to a precise transformation \( \mathbf{T} \).
Once \( \mathbf{T} \) is computed, the template is mapped onto the scene: \( \mathbf{M}' = \mathbf{T} \cdot \mathbf{M} \). More importantly, the cutting points on the actual casting part in the camera frame are obtained by applying the same transformation to the predefined template points:
$$ \mathbf{q}_i = \mathbf{T} \cdot \mathbf{p}_i, \quad i = 1,2,3 $$
where \( \mathbf{q}_i \) are the 3D coordinates of the cutting points in the scanned scene point cloud \( \mathbf{S} \). These points \( \mathbf{q}_1, \mathbf{q}_2, \mathbf{q}_3 \) are the foundational inputs for trajectory planning; their positional accuracy is paramount for a clean cut on the casting part.
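A minimal sketch of this two-stage registration and point mapping is given below, using Open3D's pipelines.registration module (argument order of recent releases assumed); all parameter values are illustrative only.

```python
import numpy as np
import open3d as o3d  # assumption: Open3D >= 0.13 pipelines.registration API

def locate_cutting_points(template, scene, template_points, voxel=3.0):
    """TTPCR sketch: FPFH+RANSAC coarse alignment, ICP refinement, then map the
    three predefined template cutting points p_1..p_3 into the scene frame."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    src, src_fpfh = preprocess(template)
    tgt, tgt_fpfh = preprocess(scene)

    # Coarse registration: feature matching + RANSAC (SAC-IA style).
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True, 1.5 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Fine registration: point-to-plane ICP seeded with the coarse estimate.
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())

    T = fine.transformation                                # template frame -> camera frame
    p_h = np.hstack([np.asarray(template_points), np.ones((3, 1))])  # homogeneous p_1..p_3
    q = (T @ p_h.T).T[:, :3]                               # cutting points q_1..q_3
    return T, q
```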
3. Robotic Trajectory Planning for Spatial Arc Cutting
The risers on casting parts are typically cylindrical or have a circular cross-section at the cut line. Therefore, the optimal cutting path is a circular arc in 3D space. The trajectory planning module takes the three identified cutting points \( \mathbf{P}_1, \mathbf{P}_2, \mathbf{P}_3 \) (already transformed to the robot base frame via hand-eye calibration and TTPCR) and generates a sequence of poses (position + orientation) for the robot’s end-effector that follows this arc smoothly and with the correct torch orientation.
3.1 Spatial Arc Generation from Three Points
Given three points \( \mathbf{P}_1, \mathbf{P}_2, \mathbf{P}_3 \) in space that are not collinear, a unique circle (and its containing plane) can be fitted. Let \( \mathbf{O} \) be the center of this circle and \( R \) its radius. The unit normal vector \( \mathbf{n} \) of the plane is given by:
$$ \mathbf{n} = \frac{(\mathbf{P}_2 - \mathbf{P}_1) \times (\mathbf{P}_3 - \mathbf{P}_1)}{\|(\mathbf{P}_2 - \mathbf{P}_1) \times (\mathbf{P}_3 - \mathbf{P}_1)\|} $$
We then define two orthonormal vectors \( \mathbf{a} \) and \( \mathbf{b} \) that span the plane of the circle. The first is taken along the direction from the center \( \mathbf{O} \) to \( \mathbf{P}_1 \), normalized:
$$ \mathbf{a} = \frac{\mathbf{P}_1 - \mathbf{O}}{\|\mathbf{P}_1 - \mathbf{O}\|} $$
The second is obtained via the cross product, ensuring a right-handed coordinate system within the plane:
$$ \mathbf{b} = \mathbf{n} \times \mathbf{a} $$
Any point \( \mathbf{P}(\theta) \) on the circular arc between \( \mathbf{P}_1 \) and \( \mathbf{P}_3 \) passing through \( \mathbf{P}_2 \) can be parameterized by an angle \( \theta \):
$$ \mathbf{P}(\theta) = \mathbf{O} + R \cos(\theta) \mathbf{a} + R \sin(\theta) \mathbf{b} $$
The limits \( \theta_{min} \) and \( \theta_{max} \) corresponding to \( \mathbf{P}_1 \) and \( \mathbf{P}_3 \) are calculated with the two-argument arctangent (atan2) to ensure the correct quadrant. By discretizing \( \theta \) from \( \theta_{min} \) to \( \theta_{max} \), we obtain a sequence of \( N \) via points \( \{ \mathbf{P}_k \}, k = 1, \dots, N \) that defines the positions along the cutting path.
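The construction of Section 3.1 can be written compactly in NumPy. The sketch below computes the circumcenter, radius, plane normal, and a discretized set of via points; it assumes the three input points are non-collinear and already expressed in the robot base frame.

```python
import numpy as np

def plan_arc(P1, P2, P3, n_points=50):
    """Sample the spatial arc through P1 -> P2 -> P3 (sketch of Section 3.1).

    Returns the circle centre O, radius R, plane normal n, and an (n_points, 3)
    array of via points from P1 to P3 passing through P2.
    """
    P1, P2, P3 = map(np.asarray, (P1, P2, P3))
    u, v = P2 - P1, P3 - P1
    w = np.cross(u, v)
    n = w / np.linalg.norm(w)                               # unit plane normal

    # Circumcenter of the three points (standard closed-form expression).
    O = P1 + (np.cross(w, u) * v.dot(v) + np.cross(v, w) * u.dot(u)) / (2.0 * w.dot(w))
    R = np.linalg.norm(P1 - O)

    # In-plane orthonormal basis anchored at P1.
    a = (P1 - O) / R
    b = np.cross(n, a)

    def angle(P):
        d = P - O
        return np.arctan2(d.dot(b), d.dot(a)) % (2.0 * np.pi)

    t2, t3 = angle(P2), angle(P3)
    if t2 > t3:                  # sweep the other way so the arc passes through P2
        t3 -= 2.0 * np.pi
    thetas = np.linspace(0.0, t3, n_points)
    via = O + R * (np.cos(thetas)[:, None] * a + np.sin(thetas)[:, None] * b)
    return O, R, n, via
```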
3.2 End-Effector Orientation (Tool Pose) Planning
For a proper cut, the orientation of the cutting torch relative to the casting part surface is crucial. Typically, the torch axis should be perpendicular to the local tangent of the cut path and may have a predefined tilt angle. For each via point \( \mathbf{P}_k \) on the arc, we construct a local coordinate frame attached to the end-effector.
We define the approach vector \( \mathbf{\hat{a}} \) as the direction from the current point \( \mathbf{P}_k \) towards the circle center \( \mathbf{O} \), pointing inward towards the material to be cut:
$$ \mathbf{\hat{a}} = \frac{\mathbf{O} - \mathbf{P}_k}{\|\mathbf{O} - \mathbf{P}_k\|} $$
The normal vector \( \mathbf{\hat{n}} \) is simply the constant plane normal computed earlier and fixes the orientation of the cutting plane. For a perpendicular cut, the approach vector \( \mathbf{\hat{a}} \) serves as the torch axis (the Z-axis of the tool), directing the jet into the material.
The sliding/orientation vector \( \mathbf{\hat{s}} \) is then given by the cross product, completing the right-handed triad:
$$ \mathbf{\hat{s}} = \mathbf{\hat{n}} \times \mathbf{\hat{a}} $$
Thus, the rotation matrix representing the tool orientation at point \( \mathbf{P}_k \) in the robot base frame is:
$$ \mathbf{R}_{tool}^k = \begin{bmatrix} \mathbf{\hat{s}} & \mathbf{\hat{n}} & \mathbf{\hat{a}} \end{bmatrix} $$
This matrix is often converted to a more compact representation like Euler angles (Roll, Pitch, Yaw) or a quaternion for commanding the robot. The full pose command for the robot at each via point is therefore \( \{\mathbf{P}_k, \mathbf{R}_{tool}^k\} \). This method ensures the torch maintains the correct orientation relative to the curved surface of the riser on the casting part throughout the cut.
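A corresponding sketch for the orientation planning is given below, assuming SciPy is available for the rotation-to-quaternion conversion; the column ordering matches the rotation matrix \( \mathbf{R}_{tool}^k \) defined above.

```python
import numpy as np
from scipy.spatial.transform import Rotation  # assumption: SciPy >= 1.4

def tool_poses(via, O, n):
    """Build one pose (position + quaternion) per via point (sketch of Section 3.2).

    The approach vector (torch axis, tool Z) points from each via point toward
    the arc centre O; n is the constant cutting-plane normal.
    """
    poses = []
    for P in via:
        a_hat = (O - P) / np.linalg.norm(O - P)        # approach: into the material
        s_hat = np.cross(n, a_hat)                     # sliding direction along the path
        R_tool = np.column_stack([s_hat, n, a_hat])    # columns [s, n, a], right-handed
        quat = Rotation.from_matrix(R_tool).as_quat()  # (x, y, z, w) for the controller
        poses.append((P, quat))
    return poses

# Usage (with plan_arc from Section 3.1):
# O, R, n, via = plan_arc(P1, P2, P3)
# pose_list = tool_poses(via, O, n)
```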
4. Experimental Validation and Performance Analysis
The proposed system and algorithms were rigorously tested through simulation and physical experiments to validate their performance in handling the variability and demands of cutting risers from casting parts.
4.1 Simulation Experiments in RoboDK
Prior to physical implementation, the entire workflow was simulated using RoboDK software. A 3D CAD model of a representative large casting part with a riser was imported. The point cloud processing and TTPCR algorithm were emulated to output the three cutting points. These points were fed into the arc trajectory planner, which generated robot motion targets within the simulation environment.
Experiment 1: Trajectory Planning for Risers of Different Radii. To test robustness, risers with diameters of 80 mm, 60 mm, and 40 mm were modeled. For each, 60 repeated simulation runs were conducted. The quality metric was the maximum positional deviation between the planned arc trajectory and the theoretical perfect arc defined by the three points. The results are summarized below:
| Riser Diameter (mm) | Max. Deviation (mm) | Min. Deviation (mm) |
|---|---|---|
| 80 | 1.04 | 0.83 |
| 60 | 1.16 | 0.85 |
| 40 | 1.25 | 0.96 |
Across all riser sizes, the algorithm generated accurate cutting paths, with every deviation below the typical industrial tolerance for rough cutting (<1.5 mm).
Experiment 2: Trajectory Planning for Different Casting Part Orientations. The 80 mm riser model was placed in three distinct poses: a reference pose, rotated -70° about the X-axis, and rotated 60° about X and 20° about Y. Again, 60 runs per pose were performed.
| Casting Part Orientation | Max. Deviation (mm) | Min. Deviation (mm) |
|---|---|---|
| Reference pose | 1.01 | 0.86 |
| Rotated -70° (X) | 1.02 | 0.86 |
| Rotated 60° (X), 20° (Y) | 1.03 | 0.87 |
The vision-based localization (TTPCR) and trajectory planning pipeline proved insensitive to part orientation, maintaining sub-1.3 mm accuracy in every pose.
4.2 Physical Cutting Experiments with ABB Robot
A physical test cell was established with an ABB IRB 6640 robot, a plasma cutter, and a Mech-Mind 3D camera. A large low-alloy steel casting part with a riser diameter of 140 mm was used. Dozens of cutting trials were performed.
Trajectory Accuracy: The executed robot path was logged and compared to the theoretical path. The maximum observed deviation was less than 1.3 mm, confirming the simulation results and the overall accuracy of the hand-eye calibration and TTPCR processes.
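For reference, the deviation metric quoted here can be evaluated as the distance from each logged TCP position to the theoretical cutting circle. The short NumPy sketch below shows one such calculation; the logging format is not specified in the article, so the input is assumed to be an array of 3D points in the robot base frame.

```python
import numpy as np

def max_arc_deviation(logged_points, O, R, n):
    """Maximum distance from logged TCP positions to the theoretical cutting
    circle with centre O, radius R, and unit plane normal n."""
    d = np.asarray(logged_points) - O
    off_plane = d @ n                                  # signed distance out of the cut plane
    in_plane = d - np.outer(off_plane, n)              # projection onto the cut plane
    radial_err = np.linalg.norm(in_plane, axis=1) - R  # deviation from the nominal radius
    return float(np.max(np.hypot(off_plane, radial_err)))
```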
Cut Quality Assessment: The quality of the cut surface was quantitatively evaluated and compared against traditional manual cutting performed by five experienced workers. Key metrics were measured:
| Performance Metric | Proposed Robotic System | Average Manual Cutting | Improvement |
|---|---|---|---|
| Cut Kerf Width/Gap | ≤ 5 mm | ~8 mm | ≥ 37.5% reduction |
| Surface Roughness (Ra) | ~10.2 μm | 35-46 μm | ≥ 70.8% reduction |
| Average Roughness Depth (Rz) | ~55 μm | 160-350 μm | ≥ 65.6% reduction |
The robotic system demonstrated superior and consistent cut quality. The reduced kerf minimizes material loss. The dramatically lower surface roughness (Ra and Rz) signifies a much smoother cut surface on the casting part, which can significantly reduce post-processing time and cost, such as grinding or milling. This level of consistency is unattainable with manual operations.
5. Conclusion
This article presented a comprehensive and practical solution for automating the riser cutting process for large casting parts. The core of the system is a vision-guided robotic cell that integrates 3D point cloud perception with intelligent trajectory planning. The proposed Three-Point Template Point Cloud Registration (TTPCR) method provides a robust mechanism to accurately locate cutting features on variable casting parts despite surface imperfections and geometric deviations inherent in the casting process. Coupled with a spatial arc interpolation algorithm for trajectory generation, the system calculates smooth and precise cutting paths.
Extensive simulation and physical experiments validated the system’s performance. The robotic cutting achieved a positional accuracy better than 1.3 mm. More importantly, it delivered drastic improvements in cut quality over manual methods: a reduction in kerf width by at least 37.5%, a reduction in surface roughness (Ra) by over 70%, and a reduction in average roughness depth (Rz) by over 65%. These improvements translate directly into economic benefits through material savings, reduced post-processing labor, and enhanced consistency.
The methodology satisfies the stringent requirements of foundry cutting operations and possesses significant industrial application value. It effectively addresses the safety, efficiency, and quality limitations of manual riser removal, paving the way for wider adoption of automation in the casting industry for processing large and complex casting parts.
