Introduction: The integration of artificial intelligence (AI) into higher education presents a dichotomy: AI may act as a tool that enhances learning or as a risk to the development of fundamental cognitive skills. This study investigates AI's impact on university students' higher-order thinking skills (HOTS) during complex problem-solving. While AI offers personalized and efficient solutions, concerns persist about its potential to hinder critical thought and independent learning.
Methods: A mixed-methods approach was used to explore this issue. A quasi-experimental design compared a control group of university students solving complex problems without AI to an experimental group that used AI tools. Data were collected through pre- and post-test assessments of HOTS, a survey on AI usage, and semi-structured qualitative interviews. Epistemic Network Analysis (ENA) was employed to visualize and quantify the connections among the cognitive elements students drew on during problem-solving.
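To make the quantitative and ENA strands concrete, the minimal sketch below uses hypothetical data and labels (the gain-score arrays, the four cognitive codes, and the stanza segmentation are all illustrative assumptions, not the study's instruments or codebook): a Welch t-test compares pre-to-post HOTS gain scores between the two groups, and a small co-occurrence matrix shows the kind of coded-connection counts that ENA tools subsequently normalize and project.

```python
# Illustrative sketch only: synthetic data standing in for the study's measures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# --- Quantitative strand: gain-score comparison (hypothetical values) ---
# Each array holds post-test minus pre-test HOTS scores for one group.
control_gains = rng.normal(loc=2.0, scale=3.0, size=30)
ai_group_gains = rng.normal(loc=3.5, scale=3.0, size=30)

# Welch's t-test avoids assuming equal variances between groups.
t_stat, p_value = stats.ttest_ind(ai_group_gains, control_gains, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")

# --- ENA-style strand: co-occurrence of coded cognitive elements ---
# Each "stanza" lists the codes observed in one segment of a think-aloud
# transcript; the code names here are hypothetical, not the study's codebook.
codes = ["analyze", "evaluate", "create", "verify_ai_output"]
stanzas = [
    {"analyze", "evaluate"},
    {"analyze", "verify_ai_output"},
    {"evaluate", "create", "verify_ai_output"},
]

# Count how often each pair of codes co-occurs within a stanza; ENA tools
# normalize and dimensionally reduce such matrices, which this sketch omits.
co_occurrence = np.zeros((len(codes), len(codes)), dtype=int)
for stanza in stanzas:
    present = [i for i, c in enumerate(codes) if c in stanza]
    for i in present:
        for j in present:
            if i < j:
                co_occurrence[i, j] += 1
                co_occurrence[j, i] += 1

print(co_occurrence)
```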
Results: Quantitative analysis revealed that the AI-assisted group differed significantly from the control group on several HOTS measures, with the direction and magnitude of the difference varying by task type. Qualitative data from interviews highlighted students' perceptions of both benefits and drawbacks, with many expressing concerns about "AI dependency." The ENA provided visual evidence of differing cognitive approaches, suggesting that AI use may alter the interconnectedness of students' thought processes. Ethical concerns around data privacy, algorithmic bias, and the lack of transparency in AI decision-making were also prominent themes.
Discussion: The findings suggest that AI's role is not a simple binary of "facilitator or hindrance." Its impact is nuanced, potentially enhancing some aspects of problem-solving while hindering others, particularly when students become overly reliant on AI-generated solutions. Effective integration of AI requires a pedagogical shift towards scaffolding its use to support, rather than replace, cognitive effort. The study underscores the urgent need for robust ethical frameworks and participatory design in AI development for education to mitigate risks and ensure a human-centric approach.