Imagine relying on AI to modernize a critical system like Ubuntu's Error Tracker, only to find that some of the generated code was, in the developer's own words, "plain wrong." That's what happened when Microsoft's GitHub Copilot was tasked with updating the Tracker's Cassandra database code to modern standards. The experiment showed promise in places, but even a seemingly straightforward task made clear that AI output isn't foolproof. So does this mean AI is unreliable for code modernization, or is it just a matter of refining the process?
Last week, I shared how AI was being used to breathe new life into Ubuntu’s Error Tracker (https://www.phoronix.com/news/AI-Ubuntu-Error-Tracker-Improve). The goal was clear: leverage AI to update outdated code, remove deprecated practices, and align with modern standards. Many Phoronix readers cheered the idea, seeing it as a game-changer for maintaining legacy systems. Yet, as this experiment shows, AI isn’t a magic wand—it still requires human oversight and expertise.
Canonical engineer Skia recently provided an update on this AI-driven modernization effort in the Ubuntu Foundations Team’s weekly notes (https://discourse.ubuntu.com/t/foundations-team-updates-2025-12-04/73104/6). Skia noted, “We’re now reviewing and testing Copilot’s output. It’s not a total disaster, but it’s far from perfect. For instance, Copilot didn’t have access to a real database, and I didn’t include the schema in my prompt. Some functions were outright incorrect, though thankfully, those were the minority. You can see the details in my latest pull request.”
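The pull request has the real details, but the failure mode Skia describes is easy to picture: a model generating queries without access to the actual database schema can confidently reference tables or columns that don't exist. As a hypothetical illustration only (the `crashes` table, its columns, and the `validate` helper below are invented for this sketch, not taken from the Error Tracker), here is a minimal check of the kind a reviewer might run against AI-generated CQL:

```python
import re

# Invented schema for illustration; the real Error Tracker schema was
# exactly what Copilot did not have access to.
SCHEMA = {
    "crashes": {"signature", "release", "count", "last_seen"},
}

def columns_referenced(cql: str) -> tuple[str, set[str]]:
    """Extract the table name and selected columns from a simple SELECT."""
    m = re.match(r"SELECT\s+(.+?)\s+FROM\s+(\w+)", cql, re.IGNORECASE)
    if not m:
        raise ValueError(f"unsupported query: {cql!r}")
    cols = {c.strip() for c in m.group(1).split(",")}
    return m.group(2), cols

def validate(cql: str, schema: dict[str, set[str]]) -> list[str]:
    """Return a list of problems: an unknown table, or unknown columns."""
    table, cols = columns_referenced(cql)
    if table not in schema:
        return [f"unknown table: {table}"]
    return [f"unknown column: {c}" for c in sorted(cols - schema[table])]

# A plausible AI-generated query that guesses a column name wrongly:
# "occurrences" is not in the schema, so validation flags it.
generated = "SELECT signature, occurrences FROM crashes"
print(validate(generated, SCHEMA))
```

This is a toy version of what reviewing with the schema in hand accomplishes; the broader lesson is that giving the model (or the reviewer) the real schema up front catches this entire class of error.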
This raises a fair question: if AI can't always get things right even with clear instructions, how much can we rely on it for critical tasks? AI can save time and effort, but it's not a replacement for human judgment. In this case, Copilot's output required significant manual review and correction, yet it still streamlined parts of the process.
For those curious about the nitty-gritty—the AI-generated code, the corrections, and the lessons learned—you can explore the GitHub pull request here (https://github.com/ubuntu/error-tracker/pull/4). It’s a fascinating look at the current state of AI in software development.
So, is AI's role in code modernization overhyped, or are we simply in the early stages of unlocking its potential? Will AI ever fully replace human developers, or will it remain a tool that requires our guidance? Share your thoughts in the comments.