MSc Information Technology
Generative AI Governance Framework for the Software Development Lifecycle
Role: Researcher and Framework Designer
Project Overview
This research project explores how Generative Artificial Intelligence can be responsibly integrated into the Software Development Lifecycle (SDLC). While AI-powered development tools are rapidly transforming how software is built, many organisations struggle with governance, quality assurance, compliance, and developer well-being.
The goal of this project was to design and evaluate a governance-centric framework that enables organisations to adopt Generative AI tools while maintaining software quality, regulatory compliance, and healthy developer workflows.
The research combines theoretical analysis with practical experimentation through a simulated development environment where AI-assisted tools were applied across different stages of the software development lifecycle.
Problem
Generative AI tools such as GitHub Copilot and AI-powered assistants are rapidly being integrated into software development environments. These tools can generate code, automate documentation, assist with testing, and accelerate many development tasks.
However, organisations adopting these tools often face several challenges:
• AI-generated code may contain errors, security vulnerabilities, or hallucinated outputs
• Development teams lack clear governance structures for AI-assisted workflows
• Intellectual property and data privacy risks may arise from AI-generated outputs
• Developers must spend additional time validating and reviewing AI suggestions
• Increased reliance on AI tools may introduce cognitive strain and workflow disruption
While AI tools promise increased productivity, the absence of structured governance can create risks related to software quality, compliance, and developer well-being.
The central challenge explored in this research was how organisations can adopt Generative AI responsibly while maintaining strong development governance and human oversight.
Research Aim
The aim of this project was to design and evaluate a governance-centric framework that enables responsible integration of Generative AI tools into the Software Development Lifecycle.
The framework focuses on balancing four key dimensions:
• developer productivity
• software quality
• organisational compliance
• developer well-being
Research Questions
The study explored four key questions:
• What opportunities and risks emerge when Generative AI tools are integrated into the Software Development Lifecycle?
• What governance and quality assurance mechanisms are required to support responsible AI-assisted development?
• How does AI-assisted development influence developer experience and cognitive workload?
• Can a structured governance framework help organisations adopt AI development tools more responsibly?
My Role
Researcher and Framework Designer
In this project, I designed the governance framework, implemented a simulated development environment, and evaluated how Generative AI tools behave within structured development workflows.
My responsibilities included:
• Conducting literature review on AI-assisted development practices
• Designing the Governance-Centric Generative AI Framework (GCGF)
• Developing a lightweight prototype environment for experimentation
• Integrating governance artefacts into development workflows
• Evaluating productivity, code quality, compliance, and developer experience
Research Methodology
The research followed a Design Science Research approach, which focuses on designing and evaluating practical solutions to real-world problems.
The project combined:
• Literature review of Generative AI in software engineering
• Design of governance artefacts and framework components
• Implementation of a simulated development environment
• Analytical evaluation of AI-assisted development outcomes
A lightweight prototype application was created using a Flask-based environment, allowing AI tools to be tested within a controlled development workflow.
The Governance-Centric Generative AI Framework
The core output of this research was the Governance-Centric Generative AI Framework (GCGF). The framework introduces governance mechanisms at multiple stages of the development lifecycle. Key components include:
Governance Policies
Defines rules for how AI tools should be used within development workflows.
Risk Management Framework
Tracks potential risks associated with AI-generated outputs and tool usage.
Quality Assurance Gates
Ensures that AI-generated code is validated through testing and review before deployment.
Role Responsibility Matrix
Defines accountability across developers, reviewers, and project stakeholders.
Evaluation Dashboard
Measures productivity, quality outcomes, compliance adherence, and developer experience.
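As an illustration of how a Quality Assurance Gate might operate in practice, the sketch below runs simple validation checks on an AI-generated snippet before it may proceed to human review. The `QualityGate` class and the specific checks are hypothetical, intended only to make the concept concrete; they are not the actual GCGF artefacts.

```python
from dataclasses import dataclass, field

@dataclass
class CheckResult:
    name: str
    passed: bool

@dataclass
class QualityGate:
    """Runs validation checks on AI-generated code before it enters review."""
    results: list = field(default_factory=list)

    def run_check(self, name: str, passed: bool) -> None:
        self.results.append(CheckResult(name, passed))

    def passed(self) -> bool:
        # The gate passes only if every recorded check passed
        return bool(self.results) and all(r.passed for r in self.results)

def syntax_ok(source: str) -> bool:
    """Check that a snippet at least parses as valid Python."""
    try:
        compile(source, "<ai-snippet>", "exec")
        return True
    except SyntaxError:
        return False

# Gate an AI-generated snippet on two simple checks
snippet = "def add(a, b):\n    return a + b\n"
gate = QualityGate()
gate.run_check("non_empty", bool(snippet.strip()))
gate.run_check("parses", syntax_ok(snippet))
print("gate passed:", gate.passed())
```

In a real pipeline the checks would be richer (unit tests, static analysis, licence scanning), but the gate's role is the same: AI output is treated as untrusted input until it passes.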
Implementation
To evaluate the framework, a simulated development environment was created.
The prototype environment included:
• A lightweight Flask application
• AI-assisted code generation using GitHub Copilot
• Automated CI testing pipelines
• Governance checkpoints embedded in the workflow
Governance artefacts were applied throughout the development tasks to monitor productivity, compliance, and software quality.
This simulation allowed observation of how developers interact with AI tools when governance structures are present.
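A governance checkpoint of the kind embedded in the workflow can be sketched as a decorator that audits each step and blocks unreviewed AI-generated changes. The policy name, change fields, and in-memory audit log below are illustrative assumptions; in the prototype, checkpoints were wired into the Flask routes and CI pipeline rather than this simplified form.

```python
from functools import wraps

audit_log = []  # in the prototype, records like these fed the evaluation dashboard

def governance_checkpoint(policy_name):
    """Wrap a workflow step so every invocation is policy-checked and audited."""
    def decorator(step):
        @wraps(step)
        def wrapped(change):
            # Enforce human oversight: block AI-generated changes with no reviewer
            if change.get("ai_generated") and not change.get("reviewed_by"):
                audit_log.append({"policy": policy_name,
                                  "step": step.__name__,
                                  "outcome": "blocked"})
                raise PermissionError("AI-generated change requires human review")
            audit_log.append({"policy": policy_name,
                              "step": step.__name__,
                              "outcome": "allowed"})
            return step(change)
        return wrapped
    return decorator

@governance_checkpoint("human-review-before-merge")
def merge(change):
    return f"merged {change['id']}"

print(merge({"id": "PR-42", "ai_generated": True, "reviewed_by": "alice"}))
```

The design choice is deliberate: the checkpoint fails closed, so the default for AI-assisted changes is accountability, not convenience.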
Key Findings and Impact
The research produced several important insights into how governance shapes AI-assisted development.
The framework demonstrated that Generative AI can significantly improve development efficiency when supported by structured governance.
Key outcomes included:
• Faster development cycles for routine coding tasks
• Improved traceability and accountability in AI-assisted workflows
• Reduced ambiguity around AI tool usage through governance policies
• Improved balance between productivity and code quality
The research showed that organisations can successfully integrate AI development tools when governance mechanisms are embedded into the development process.
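An evaluation dashboard of the kind used to assess these outcomes could aggregate per-task measurements as sketched below. The metric names, example records, and weights are illustrative assumptions, not the study's actual instruments.

```python
from statistics import mean

# Illustrative per-task records: field names and values are hypothetical
tasks = [
    {"minutes": 30, "tests_passed": True,  "policy_violations": 0},
    {"minutes": 45, "tests_passed": True,  "policy_violations": 1},
    {"minutes": 20, "tests_passed": False, "policy_violations": 0},
]

def dashboard(records):
    """Summarise productivity, quality, and compliance across tasks."""
    return {
        "avg_minutes_per_task": mean(r["minutes"] for r in records),
        "quality_pass_rate": sum(r["tests_passed"] for r in records) / len(records),
        "compliance_rate": sum(r["policy_violations"] == 0 for r in records) / len(records),
    }

summary = dashboard(tasks)
print(summary)
```

Keeping productivity, quality, and compliance in one view reflects the framework's central claim: no single dimension should be optimised in isolation.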
Product Opportunities Identified
Beyond the academic research, the project revealed several potential product opportunities.
Key Learnings
This project reinforced the importance of treating AI adoption as a structured organisational change rather than a simple tool implementation.
Successful integration of Generative AI requires a balance between technological capability, governance structure, and human oversight.
Responsible AI systems must prioritise transparency, accountability, and human well-being alongside productivity improvements.
