Description
The integration of generative artificial intelligence (AI) into sustainable process design has gained substantial traction, with AI increasingly employed to generate innovative solution ideas. These AI-generated ideas, however, require rigorous evaluation to establish their usefulness, feasibility, novelty, and sustainability. This study examines the reliability of AI-based evaluations by comparing them with human expert assessments. Advanced generative AI models were used to produce design ideas and to evaluate them with AI-driven metrics aligned with human evaluation criteria; concurrently, a panel of domain experts assessed the same ideas against the predefined criteria. The comparative analysis identifies areas of alignment and divergence between AI and human evaluations, offering insight into the strengths and limitations of AI in early-stage process design. The findings highlight AI's potential to support sustainable innovation while underscoring the need for thorough validation of AI-generated assessments. This research advances AI evaluation methods and provides a framework for integrating AI effectively into sustainable process design.
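To illustrate the kind of comparative analysis described above, the following is a minimal sketch of how AI and human scores for the same set of ideas could be compared. The data values, rating scale, and choice of Spearman rank correlation and mean absolute difference are assumptions for illustration only, not the study's actual data or agreement metrics.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical ratings (1-5 scale) for the same set of design ideas:
# one score per idea from the AI evaluator and from the human expert panel.
ai_scores = np.array([4, 3, 5, 2, 4, 3, 5, 1])
human_scores = np.array([4, 2, 4, 2, 5, 3, 4, 2])

# Rank correlation as a simple measure of alignment between the two raters.
rho, p_value = spearmanr(ai_scores, human_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

# Mean absolute score difference highlights systematic divergence in levels,
# even when the ranking of ideas is similar.
mad = np.mean(np.abs(ai_scores - human_scores))
print(f"Mean absolute score difference = {mad:.2f}")
```

In practice, such an analysis would be run separately for each evaluation criterion (usefulness, feasibility, novelty, sustainability) to locate where AI and expert judgments agree and where they diverge.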