Optimization of Aerospace Cylindrical Casting Process Using Machine Learning

In the production of aerospace casting parts, thin-walled cylindrical components, such as cabin bodies, are commonly manufactured using aluminum-silicon alloys due to their lightweight and high-strength properties. However, the uneven thickness distribution in these aerospace castings often leads to challenges in solidification feeding, resulting in defects like shrinkage porosity. Low-pressure casting is a widely adopted method for such applications, as it allows for controlled filling and solidification under pressure, minimizing oxidation and improving mechanical properties. Despite its advantages, optimizing process parameters to reduce defects remains complex, traditionally relying on iterative experimental trials or numerical simulations. In this study, we explore the integration of numerical simulation with data-driven machine learning techniques, specifically Gaussian process regression and genetic algorithms, to predict and minimize shrinkage porosity in aerospace casting parts. Our approach aims to enhance the efficiency of process design while ensuring high-quality outcomes for aerospace casting applications.

The aerospace industry demands high-integrity components, where defects like porosity can compromise structural performance. Low-pressure casting involves pressurizing molten metal in a crucible to fill a mold through a riser tube, enabling smooth filling and directional solidification. However, parameters such as pouring temperature, mold temperature, filling time, and holding pressure significantly influence defect formation. Numerical simulation based on mechanistic models, including momentum and energy conservation equations, has been employed to predict temperature fields, fluid flow, and solidification patterns. For instance, the Niyama criterion is often used to assess shrinkage porosity, defined as:

$$ Ny = \frac{G}{\sqrt{\dot{T}}} $$

where \( G \) is the temperature gradient (°C/cm) and \( \dot{T} \) is the cooling rate (°C/min). A lower Niyama value indicates a higher risk of porosity. While numerical simulations provide detailed insights, they are computationally intensive, requiring hours per simulation run. To address this, we developed a machine learning framework using Gaussian process regression as a surrogate model, combined with genetic algorithms for optimization. This data-driven approach leverages a small set of simulation data to predict porosity volumes rapidly, facilitating efficient parameter design for aerospace casting parts.
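The criterion above is straightforward to evaluate; the following short Python sketch computes Niyama values (the numbers are illustrative examples, not data from the study):

```python
import math

def niyama(G, cooling_rate):
    """Niyama criterion Ny = G / sqrt(Tdot).

    G            -- temperature gradient (°C/cm)
    cooling_rate -- cooling rate Tdot (°C/min); must be positive
    """
    if cooling_rate <= 0:
        raise ValueError("cooling rate must be positive")
    return G / math.sqrt(cooling_rate)

# A steeper gradient at the same cooling rate raises Ny, i.e. lowers
# the predicted shrinkage-porosity risk.
print(niyama(2.0, 4.0))  # 2.0 / sqrt(4.0) = 1.0
print(niyama(4.0, 4.0))  # 4.0 / sqrt(4.0) = 2.0
```

In practice the simulation software evaluates this field point by point over the solidifying casting; regions where Ny falls below a material-dependent threshold are flagged as porosity-prone.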

Our study focuses on a cylindrical thin-walled aerospace casting part made of AC-42100 aluminum-silicon alloy, with a height of 730 mm, outer diameter of 380 mm, and average wall thickness of 10 mm. The alloy’s chemical composition and thermophysical properties, such as thermal conductivity, density, specific heat, and solid fraction, are critical inputs for simulations. The low-pressure casting system includes a bottom-fed gating arrangement with multiple sprue channels and chill plates to promote directional solidification. Key process parameters, including mold temperature, pouring temperature, filling time, and holding pressure, were varied within specified ranges to investigate their effects on porosity. We conducted numerical simulations using a volume-of-fluid method to model filling and solidification, with mesh discretization of approximately 1.8 million elements. The heat transfer coefficient between the casting and mold was set to 750 W/(m²·K).

For machine learning, we employed a Gaussian process regression model, which is a non-parametric Bayesian approach, to predict porosity volume based on the input parameters. A Gaussian process is defined by a mean function and a covariance function, and it assumes that any finite set of observations follows a multivariate Gaussian distribution. Given a dataset \( D = \{(\mathbf{x}_i, y_i)\}_{i=1}^n \), where \( \mathbf{x}_i \in \mathbb{R}^4 \) represents the input vector (e.g., pouring temperature, mold temperature, filling time, holding pressure) and \( y_i \) is the porosity volume, the Gaussian process model provides a probabilistic prediction. The covariance matrix \( K \) is constructed using a kernel function, such as the squared exponential kernel:

$$ k(\mathbf{x}_i, \mathbf{x}_j) = \sigma_f^2 \exp\left(-\frac{1}{2l^2} \|\mathbf{x}_i - \mathbf{x}_j\|^2\right) + \sigma_n^2 \delta_{ij} $$

where \( \sigma_f^2 \) is the signal variance, \( l \) is the length scale, \( \sigma_n^2 \) is the noise variance, and \( \delta_{ij} \) is the Kronecker delta. The predictive distribution for a new input \( \mathbf{x}_* \) is given by:

$$ p(y_* | \mathbf{x}_*, D) = \mathcal{N}(\mu_*, \sigma_*^2) $$

with

$$ \mu_* = \mathbf{k}_*^T K^{-1} \mathbf{y} $$

$$ \sigma_*^2 = k(\mathbf{x}_*, \mathbf{x}_*) - \mathbf{k}_*^T K^{-1} \mathbf{k}_* $$

where \( \mathbf{k}_* \) is the vector of covariances between the new input and the training data. To optimize the process parameters, we integrated genetic algorithms, which simulate natural evolution through selection, crossover, and mutation operations. The fitness function was defined as the minimization of predicted porosity volume. The initial population size was set to 50, with 40 generations for evolution.
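The predictive equations above translate directly into a few lines of NumPy. The following is a minimal sketch of the closed-form GP posterior, with hyperparameters fixed by hand rather than fitted, and a toy 4-D training set standing in for the study's nine simulation samples:

```python
import numpy as np

def sq_exp_kernel(A, B, sigma_f=1.0, length=1.0):
    """Squared exponential kernel (noise term handled separately)."""
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return sigma_f**2 * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, X_star, sigma_f=1.0, length=1.0, sigma_n=1e-4):
    """Predictive mean and variance:
    mu*  = k*^T K^{-1} y
    var* = k(x*, x*) - k*^T K^{-1} k*
    """
    K = sq_exp_kernel(X, X, sigma_f, length) + sigma_n**2 * np.eye(len(X))
    k_star = sq_exp_kernel(X, X_star, sigma_f, length)
    alpha = np.linalg.solve(K, y)        # K^{-1} y
    mu = k_star.T @ alpha
    v = np.linalg.solve(K, k_star)       # K^{-1} k*
    var = (np.diag(sq_exp_kernel(X_star, X_star, sigma_f, length))
           - np.sum(k_star * v, axis=0))
    return mu, var

# Toy usage on standardized 4-D inputs (pouring T, mold T, fill time, pressure).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((9, 4))    # 9 samples, as in the L9 array
y_train = np.sin(X_train[:, 0]) + 0.1 * X_train[:, 3]
mu, var = gp_posterior(X_train, y_train, X_train)
# With near-zero noise, the posterior mean nearly interpolates the training data.
```

This hand-rolled version is only to make the equations concrete; a library implementation such as scikit-learn's, which also fits the hyperparameters by maximizing the marginal likelihood, is what the study's pipeline relies on.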

We generated a full factorial design of 81 parameter combinations based on three levels for each of the four factors. From these, an L9 orthogonal array with 9 samples was used as the training dataset for machine learning, while the remaining 72 samples served as the test set. The input parameters were standardized using z-score normalization to improve numerical stability:

$$ x'_{ij} = \frac{x_{ij} - \mu_j}{\sigma_j} $$

where \( \mu_j \) and \( \sigma_j \) are the mean and standard deviation of the j-th parameter. Similarly, the output porosity volumes were normalized. The machine learning program was implemented in Python, leveraging libraries such as scikit-learn for Gaussian process regression and DEAP for genetic algorithms.
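With scikit-learn, the standardization and regression steps chain naturally into one pipeline. The sketch below uses the nine Table 1 samples as training data; the kernel choice and its initial hyperparameters are illustrative assumptions, not the study's exact settings:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# Training inputs from the L9 orthogonal array (Table 1):
# columns = pouring T (°C), mold T (°C), filling time (s), holding pressure (kPa)
X = np.array([
    [700, 250,  8, 180], [700, 275, 10, 200], [700, 300, 12, 220],
    [740, 250, 12, 200], [740, 275,  8, 220], [740, 300, 10, 180],
    [780, 250, 10, 220], [780, 275, 12, 180], [780, 300,  8, 200],
], dtype=float)
y = np.array([2.9752, 2.3011, 1.9563, 2.0792, 2.1323,
              2.1679, 2.6294, 1.9617, 2.5956])  # simulated porosity (cm³)

# Squared exponential kernel plus a noise term, as in the equations above.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-5)
model = make_pipeline(
    StandardScaler(),                    # z-score normalization of the inputs
    GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0),
)
model.fit(X, y)

# Probabilistic prediction: mean and standard deviation per query point.
mean, std = model.predict(X, return_std=True)
```

The `StandardScaler` step implements the z-score transform above, and `normalize_y=True` applies the analogous normalization to the porosity outputs before fitting.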

The numerical simulations revealed that the filling process was stable, with the mold cavity completely filled in approximately 7.1 seconds, and solidification completed after 4193.5 seconds. The temperature distribution and solidification patterns indicated directional solidification from the top and outer regions toward the bottom and inner sections, facilitated by the gating system and chill plates. Porosity defects were primarily observed in the sprue channels and minor areas of the cylindrical body, with volumes ranging from 1.76 to 2.98 cm³ depending on the parameters. The orthogonal experimental analysis showed that holding pressure had the most significant influence on porosity, followed by pouring temperature, mold temperature, and filling time. The optimal parameter combination from orthogonal design was holding pressure of 220 kPa, pouring temperature of 740°C, mold temperature of 275°C, and filling time of 12 seconds.

Table 1: Orthogonal Experimental Design and Simulated Porosity Results

| Sample No. | Pouring Temperature (°C) | Mold Temperature (°C) | Filling Time (s) | Holding Pressure (kPa) | Simulated Porosity Volume (cm³) |
|---|---|---|---|---|---|
| 1 | 700 | 250 | 8 | 180 | 2.9752 |
| 2 | 700 | 275 | 10 | 200 | 2.3011 |
| 3 | 700 | 300 | 12 | 220 | 1.9563 |
| 4 | 740 | 250 | 12 | 200 | 2.0792 |
| 5 | 740 | 275 | 8 | 220 | 2.1323 |
| 6 | 740 | 300 | 10 | 180 | 2.1679 |
| 7 | 780 | 250 | 10 | 220 | 2.6294 |
| 8 | 780 | 275 | 12 | 180 | 1.9617 |
| 9 | 780 | 300 | 8 | 200 | 2.5956 |

The machine learning predictions using Gaussian process regression and genetic algorithms showed strong agreement with numerical simulation results. The surrogate model achieved a prediction standard deviation of approximately 0.001474, with a relative error of 1.9% across the test set. The 95% confidence interval for predictions was within ±0.0029, indicating high reliability. For example, as holding pressure increased, porosity volume decreased, consistent with the physical understanding that higher pressure enhances feeding during solidification. Similarly, lower pouring temperatures and shorter filling times reduced porosity tendencies. The optimization via genetic algorithms identified an improved parameter set: pouring temperature of 762.34°C, mold temperature of 288.04°C, filling time of 10.08 seconds, and holding pressure of 196.57 kPa, which resulted in a porosity volume of 1.7607 cm³, about 10% lower than the best orthogonal design outcome of 1.9563 cm³.
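The evolutionary search itself can be sketched with a minimal hand-rolled GA. The study used DEAP; this dependency-free version keeps the same structure (tournament selection, blend crossover, Gaussian mutation, population 50, 40 generations), with a toy quadratic fitness standing in for the trained GP surrogate, and bounds taken from the Table 1 parameter levels:

```python
import numpy as np

rng = np.random.default_rng(42)

# Bounds: pouring T (°C), mold T (°C), filling time (s), holding pressure (kPa)
BOUNDS = np.array([[700.0, 780.0], [250.0, 300.0], [8.0, 12.0], [180.0, 220.0]])
LO, HI = BOUNDS[:, 0], BOUNDS[:, 1]

def fitness(x):
    """Toy stand-in for the GP surrogate: a quadratic bowl with its minimum
    at an arbitrary interior point. Replace with the model's prediction."""
    target = np.array([762.0, 288.0, 10.0, 197.0])
    return 1.76 + np.sum(((x - target) / (HI - LO)) ** 2)

def run_ga(pop_size=50, n_gen=40, cx_prob=0.7, mut_prob=0.2):
    pop = rng.uniform(LO, HI, size=(pop_size, 4))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection (size 3): lower porosity wins.
        idx = rng.integers(0, pop_size, size=(pop_size, 3))
        winners = idx[np.arange(pop_size), np.argmin(scores[idx], axis=1)]
        pop = pop[winners].copy()
        # Blend crossover between consecutive pairs.
        for i in range(0, pop_size - 1, 2):
            if rng.random() < cx_prob:
                a = rng.random(4)
                pop[i], pop[i + 1] = (a * pop[i] + (1 - a) * pop[i + 1],
                                      a * pop[i + 1] + (1 - a) * pop[i])
        # Gaussian mutation, clipped back into the feasible box.
        mask = rng.random(pop.shape) < mut_prob
        step = rng.normal(0.0, 0.05 * (HI - LO), pop.shape)
        pop = np.clip(pop + mask * step, LO, HI)
    best = pop[np.argmin([fitness(ind) for ind in pop])]
    return best, fitness(best)

best, best_f = run_ga()
```

Because each fitness evaluation is a sub-millisecond surrogate call rather than an hours-long simulation, the full 50 × 40 evolution completes in well under a second.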

Table 2: Comparison of Numerical Simulation and Machine Learning Predictions for Selected Cases

| Sample No. | Pouring Temperature (°C) | Mold Temperature (°C) | Filling Time (s) | Holding Pressure (kPa) | Simulated Porosity Volume (cm³) | Predicted Porosity Volume (cm³) |
|---|---|---|---|---|---|---|
| 1 | 700 | 250 | 8 | 180 | 2.9752 | 2.9752 |
| 2 | 700 | 250 | 8 | 200 | 2.5961 | 2.5956 |
| 3 | 700 | 250 | 8 | 220 | 2.0675 | 2.1323 |
| 4 | 700 | 250 | 10 | 180 | 2.1076 | 2.1679 |
| 5 | 700 | 250 | 10 | 200 | 2.2295 | 2.3011 |
| 6 | 700 | 250 | 10 | 220 | 2.6636 | 2.6294 |
| 7 | 700 | 250 | 12 | 180 | 1.9827 | 1.9617 |
| 8 | 700 | 250 | 12 | 200 | 2.0237 | 2.0792 |
| 9 | 700 | 250 | 12 | 220 | 2.0209 | 1.9563 |

The efficiency of machine learning in predicting porosity was remarkable, with each prediction taking approximately 0.613 milliseconds, compared to 3 hours for a single numerical simulation. This speed advantage enables rapid exploration of the parameter space, making it suitable for real-time optimization in industrial settings for aerospace casting parts. The Gaussian process model effectively captured the nonlinear relationships between process parameters and porosity, as evidenced by the low prediction errors. Moreover, the genetic algorithm efficiently navigated the multi-dimensional space to find optimal solutions, demonstrating the synergy between data-driven and evolutionary approaches.

In the context of aerospace casting applications, the integration of machine learning with numerical simulation offers a pathway toward digital twin technologies, where virtual models continuously update based on real-time data. This can lead to smarter manufacturing processes, reducing defects and improving yield. For instance, the optimized parameters from our study not only minimized porosity but also maintained the structural integrity of the cylindrical aerospace casting part. Further enhancements could involve incorporating additional factors, such as alloy composition variations or mold material properties, into the machine learning model.

In conclusion, our study demonstrates that machine learning, particularly Gaussian process regression and genetic algorithms, can effectively optimize low-pressure casting processes for aerospace casting parts. The data-driven approach provides accurate predictions of shrinkage porosity, with significant efficiency gains over traditional numerical simulations. By identifying optimal parameters that surpass orthogonal design outcomes, this methodology supports the production of high-quality aerospace casting components. Future work will focus on expanding the model to include microstructural predictions and integrating it with real-time monitoring systems for adaptive control in manufacturing environments.
