The UK government has concluded a trial of Microsoft's M365 Copilot, revealing a mixed picture of its impact on productivity. Users expressed satisfaction with tasks such as summarizing meetings and drafting emails, where the tool showed clear efficiency gains. However, the assessment found that more complex tasks like Excel analysis and PowerPoint creation were often completed more slowly and produced lower-quality output.
The Department for Business and Trade provided 1,000 licenses for the trial, with most allocated to volunteers and the remaining 30% to randomly selected participants. The study found that while 72% of users were satisfied, the time savings for most tasks were minimal. Email writing, for instance, was faster but only marginally so, and Excel data analysis took longer and showed a noticeable drop in accuracy and quality.
Participants noted that routine administrative tasks were handled more efficiently, allowing them to redirect time toward strategic work or personal development. However, the overall assessment concluded that M365 Copilot did not significantly boost productivity, raising questions about the tool's value given the relatively high cost of its subscription. Microsoft has been working with customers to quantify the benefits and better justify the expense, underscoring the need for a more balanced assessment of the tool's impact.
The trial also provided insights into user behavior: participants used Copilot most frequently in Word, Teams, and Outlook, while Loop and OneNote saw much lower usage, suggesting that the tool's integration across Microsoft's wider suite goes largely untapped. The report emphasized that while some users found value in the tool, the overall productivity gains were not substantial enough to justify the investment for many.
Microsoft's continued development of M365 Copilot may be influenced by these findings as the company seeks to refine its offerings to better meet users' needs. The UK government's trial underscores the importance of empirical evaluation in assessing the effectiveness of AI-driven tools in professional settings, highlighting the gap between marketing claims and real-world performance.