Microsoft has announced the imminent release of Microsoft 365 Copilot, its ChatGPT-powered AI assistant, which will be built into its office software.
Starting 1 November, the AI tool will be rolled out to users worldwide. Among its many features, the AI can generate summaries of Microsoft Teams meetings for people who can't or don't attend.
In other words, Copilot can attend meetings for you, a feature that might win over a few customers on its own.
Beyond that, the AI can draft emails, create Word documents, chart data in spreadsheets, and even design PowerPoint presentations from prompts.
Microsoft's vision for Copilot is clear. The company believes it will "eliminate drudgery", taking care of the tasks people would rather not do. As Microsoft states, "Teams users globally are in three times more meetings each week than they were in 2020."
One potential issue is that Copilot may create situations where people interact with AI without realising it. You might attend a meeting where someone else's AI assistant is listening in. Where does that data go, and is it secure?
Microsoft firmly states that all data processed by Copilot is secured and not repurposed for further AI training.
While many tools already provide auto-generated text notes from meetings, AI's direct involvement raises new questions.
Additionally, some regulations, such as Europe's AI Act and China's AI rules, require that people be told when they are interacting with an AI rather than a human.
Addressing this, Collette Stallbaumer, head of Microsoft 365, said, “It is a tool, and people have responsibility to use it responsibly.” Stallbaumer continued, “You only have access to data that you would otherwise be allowed to see. It respects data policies.”
She further elaborated, “I might not be telling you, when I send you that response, that I used an AI assistant to help me generate it. But the human is always in the mix and always in control.”
Questions remain, however, particularly as the EU's AI Act places responsibility for safe use squarely on the developer.
"People have responsibility to use it responsibly" is unlikely to hold up as a defense.
Microsoft's approach to AI misuse has been meandering. For example, the company has promised to cover the legal costs of customers sued for copyright infringement over content generated with Copilot.
That pledge amounts to a tacit admission that Copilot can reproduce copyrighted material that businesses can't safely use.
As tools like Copilot become embedded in people's lives and workflows, it's difficult to predict which functions will be caught by forthcoming regulations and which will slip through the net unchecked.