October 2024

Generative AI in Engineering Capstone Projects: Challenges & Opportunities

   

By: Qusay H. Mahmoud
Faculty of Engineering and Applied Science
Ontario Tech University
Qusay.mahmoud@ontariotechu.ca

 

The rise of generative AI tools such as ChatGPT has dramatically transformed many aspects of engineering education, including capstone projects. These final-year courses, a cornerstone of the engineering curriculum, test students' abilities to apply their accumulated knowledge to open-ended, real-world challenges. While AI tools offer students new ways to brainstorm, generate content, and enhance their productivity, they also present unique ethical challenges for originality and academic integrity.

In Fall 2023, I coordinated an engineering capstone course for Electrical and Software Engineering students where generative AI played an unexpected role. Some student teams incorporated AI-generated content in their project reports, raising questions about how to responsibly integrate such tools into academic work. This article reports on my experience managing AI usage in a capstone course, explores the ethical dilemmas it presents, and offers a roadmap for addressing these challenges in the future.

Challenges

Generative AI tools assist students with generating content, rephrasing text, brainstorming ideas, and even refining the tone of their writing. But when used without disclosure or transparency, these tools can undermine the core purpose of a capstone project: demonstrating independent and team-based creative problem-solving and critical-thinking skills. Based on my experience dealing with AI-generated content in last year's course, documented in the article "AI-Generated Content in Academic Coursework" (https://drqwrites.medium.com/ai-generated-content-in-academic-coursework-a-case-study-3c42125a9c00), the key challenges include:

  1. Academic Integrity: With tools like ChatGPT offering seamless content generation, it is possible for students to submit work that was significantly assisted by AI without properly crediting the tool. This threatens to erode academic integrity, especially if institutions and their course syllabi lack clear policies and guidelines on how AI should be used and cited.

  2. Ownership: The boundary between human-written and AI-generated content can become ambiguous, leading to concerns about who truly "owns" the work being submitted. The collaborative nature of many capstone projects makes it even harder to track the extent of AI use within team-based efforts. For example, a team in my course submitted a report with approximately 36% of its content flagged as AI-generated by Turnitin. This caused confusion about ownership, as it was unclear whether the AI or the students were responsible for key sections. Clear guidelines on AI use and shared responsibility are essential to maintaining ownership integrity.

  3. Unintended Team Dynamics: Some teams were unaware that members had used AI tools, leading to discrepancies in contributions and a lack of transparency within the team. This raises concerns about fairness, as team grades might not reflect the true effort of all individuals involved. In another case, a team’s report contained approximately 25% AI-generated content, but only one member was aware of it. During a follow-up meeting, it was revealed that the student had used AI to rewrite several sections of the report but hadn’t informed the rest of the team. This led to a deeper discussion about transparency, ethical collaboration, and the need for clear communication among team members.

  4. Detection Complexities: AI content detection tools, such as Turnitin and others, have introduced new layers of complexity in monitoring student submissions. These tools sometimes misidentify original work as AI-generated or fail to flag content cleverly masked through paraphrasing tools. In one instance, a team whose report was flagged with 9% AI-generated content claimed that none of the members had used AI. After investigating the report, I concluded that the flagged content was a false positive. This scenario underscores the limitations of detection tools and the importance of approaching AI-detection results with caution.

Opportunities

Despite these challenges, generative AI presents numerous opportunities for enhancing capstone projects. AI tools can serve as valuable aids for content creation, helping students organize their thoughts, generate ideas, and improve their writing. The key, however, is to establish a framework in which students can use AI without undermining the integrity of their work. The following five opportunities outline how AI can foster creativity, critical thinking, transparency, and institutional progress, all while maintaining the integrity of student work and helping shape the future of academic policies.

  1. Educating Students on Responsible AI Use: Just as we teach students to responsibly use scientific literature or data analysis tools, we must guide them in using AI tools effectively and responsibly. This includes understanding the ethical implications of AI use and knowing when it is appropriate to employ such tools.

  2. Enhancing Creativity and Problem-Solving: When used correctly, AI can foster creativity in capstone projects. One team in my course used ChatGPT to generate multiple design options for their capstone project. The AI helped them brainstorm potential solutions and allowed them to explore various design pathways quickly. The students used these AI-generated ideas only as starting points, ultimately relying on their own expertise to finalize their project. This demonstrated a productive and ethical use of AI that enhanced the team’s creative process. By encouraging students to treat AI as a collaboration tool to support their thinking, rather than a replacement for it, educators can help students maintain ownership over their projects.

  3. Fostering Critical Thinking: Consider a capstone team using AI to analyze a large dataset. The AI tool can help identify patterns and potential correlations that the students may not have considered. By incorporating AI-driven insights into their project, the students refine their analysis and produce more accurate conclusions. AI tools can elevate the technical rigor of capstone projects, provided the students maintain ownership of the final analysis.

  4. Encouraging Transparency: Transparency is key. Students must feel comfortable disclosing when they use AI, knowing that this practice will be seen as responsible and ethical. Clear guidelines, coupled with open discussions on AI’s evolving role in academia, are essential for building trust and promoting integrity.

  5. Shaping Future Policy: The use of generative AI in academia is still relatively new, and institutions must continue to develop adaptive policies that reflect this changing landscape. This includes refining the role of detection tools, updating academic integrity guidelines, and providing ongoing education for both students and faculty.

Guidelines

From the outset of the course, I set clear guidelines regarding the use of generative AI, emphasizing transparency, ethical practices, and proper citation when AI tools were used. The aim was to ensure that students understood the appropriate contexts for AI use while maintaining academic integrity. The guidelines are structured around three core principles: ethical use and citation of AI-generated content, cautious interpretation of detection results, and a proactive approach to handling AI involvement.

  1. Ethical Use and Citation: In one instance, a team used ChatGPT to brainstorm potential solutions for their capstone project. They generated a list of ideas that they refined and adapted to fit their design goals. When submitting their report, the team disclosed their use of AI in the early brainstorming stages and properly cited the generated content. This transparency ensured that their work was seen as an ethical integration of AI, allowing them to benefit from the tool while maintaining academic integrity.

  2. Interpreting Detection Results: In my course, AI-detection results for team reports ranged from 9% to 81% AI-generated content. The issue of false positives underscores the limitations of such tools and the importance of approaching AI-detection results with caution. To handle these inconsistencies, I treated scores of up to 10% AI content as potential false positives. For reports above this threshold, I worked with students to investigate the extent of AI involvement, emphasizing fairness and transparency; a simple sketch of this triage appears after this list. My approach was not about catching students out but rather about educating them on the nuances of AI and its role in their work.

  3. Proactive Approach: Rather than adopting a punitive approach across the board, I categorized the extent of AI usage and applied proportionate responses. Teams that disclosed AI use and cited it properly faced no penalties. Teams with high percentages of undisclosed AI content were required to accept grade deductions, write a reflective report on academic misconduct, or both. The reflective report aimed to reinforce the importance of academic integrity while giving the team a chance to consider the ethical implications of AI use. One team's reflection highlighted how the tool had helped them generate ideas but ultimately led to a deeper understanding of the need for original, independent work.
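
To make the approach in points 2 and 3 concrete, the following is a minimal sketch in Python of the triage logic. It is an illustration only: it assumes a hypothetical report record with a Turnitin-style AI-content percentage and a disclosure flag, and the function, threshold, and outcome labels are my own, not part of Turnitin or any institutional system.

    # Illustrative sketch: triaging AI-detection results per the guidelines above.
    # The threshold, inputs, and outcome labels are hypothetical examples.

    FALSE_POSITIVE_THRESHOLD = 10  # percent; at or below this, treat as a likely false positive

    def triage_report(ai_percent: float, disclosed_and_cited: bool) -> str:
        """Suggest a follow-up action for a team report based on its AI-detection score."""
        if ai_percent <= FALSE_POSITIVE_THRESHOLD:
            return "treat as potential false positive; no action"
        if disclosed_and_cited:
            return "disclosed and cited; no penalty"
        return "meet with the team; grade deduction and/or reflective report"

    # Example usage with illustrative values in the range observed in the course (9% to 81%):
    for percent, disclosed in [(9, False), (25, False), (36, False), (15, True)]:
        print(f"{percent}% flagged, disclosed={disclosed}: {triage_report(percent, disclosed)}")

In practice, any report above the threshold prompted a conversation with the team rather than a mechanical decision; the sketch only captures the starting point for those conversations.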

Balancing Innovation with Integrity

Generative AI is here to stay, and it will continue to influence how students approach their academic work. In capstone projects, which represent the culmination of a student’s engineering education, it is vital to strike a balance between embracing AI’s innovative potential and ensuring that students maintain ownership and integrity in their submissions.

The examples from my capstone course illustrate what striking this balance looks like in practice. By fostering transparency, encouraging ethical AI use, and addressing potential misuse through education rather than punishment, we can ensure that AI becomes a valuable tool in engineering education, enhancing student learning without compromising academic standards.

 

Qusay H. Mahmoud is a professor of software engineering and the associate dean for experiential learning and engineering outreach at Ontario Tech University, Canada. He previously served as the founding chair of the Department of Electrical, Computer, and Software Engineering at the same institution. Dr. Mahmoud has held leadership roles in national and international organizations, including serving as Secretary, Vice Chair, and Chair of the Canadian Heads of Electrical and Computer Engineering (CHECE), and representing CHECE (in 2015) on the Board of the American Electrical and Computer Engineering Department Heads Association (ECEDHA).