Microsoft has published its updated services agreement which now includes five rules relating to artificial intelligence.
The Microsoft Services Agreement governs every interaction you have with any of its “consumer online products and services.”
The inclusion of the AI rules gives insight into how Microsoft plans to use your data, but it also highlights the concerns the company has around AI.
The five new AI Services rules state:
- Reverse Engineering. You may not use the AI services to discover any underlying components of the models, algorithms, and systems. For example, you may not try to determine and remove the weights of models.
- Extracting Data. Unless explicitly permitted, you may not use web scraping, web harvesting, or web data extraction methods to extract data from the AI services.
- Limits on use of data from the AI Services. You may not use the AI services, or data from the AI services, to create, train, or improve (directly or indirectly) any other AI service.
- Use of Your Content. As part of providing the AI services, Microsoft will process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses or outputs of the service.
- Third party claims. You are solely responsible for responding to any third-party claims regarding Your use of the AI services in compliance with applicable laws (including, but not limited to, copyright infringement or other claims relating to content output during Your use of the AI services).
Taken broadly, the first three rules show that Microsoft doesn't want you using its AI services to reverse engineer its models, scrape their data, or train competing AI systems.
It’s interesting that the first rule uses the example of determining the weights of Microsoft’s models.
Weights are the parameters that a machine learning model learns during training to make predictions based on input data.
Microsoft spends a lot of computing resources on training its models and doesn’t want anyone to have access to the nuts and bolts of the end result.
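To make "weights" concrete, here's a toy sketch (nothing like Microsoft's models, of course): a one-variable linear model learns its two weights by gradient descent, and whoever holds those learned numbers effectively holds the trained model.

```python
# Toy illustration of model "weights": a linear model y = w*x + b
# learns w and b (its weights) from data via gradient descent.

def train(xs, ys, lr=0.01, epochs=10000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated from y = 3x + 1; training recovers weights close to 3 and 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 4, 7, 10, 13]
w, b = train(xs, ys)
```

A production language model works the same way in principle, just with billions of weights instead of two, learned at enormous compute cost, which is exactly why the agreement forbids trying to extract them.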
The fourth rule says that Microsoft will be keeping an eye on how you use its AI products to prevent “abusive or harmful uses or output” without defining what it considers abusive or harmful.
Interestingly, that rule is light on detail and doesn't spell out how your data will be used, the way Zoom recently attempted to do in its own terms.
The fifth rule about third-party claims basically says that if you use Microsoft’s AI to create content that breaks copyright law then you’re on your own. Which seems fair enough.
Whether you use OneDrive to store files or Xbox to play your favorite game, you're essentially agreeing to these new terms.
Microsoft joins a number of companies using AI, like Zoom and Google, that are trying to keep up with changing laws and concerns from users about how their data is used.
As publishers of AI tools try to cover themselves legally, content creators like the New York Times are changing their terms to block the use of their data to train AI models. The New York Times now specifically says that non-commercial use of its content does not include training AI models.
Expect some fancy legal footwork and lawsuits as the lawyers on both sides try to understand how we can use AI and still keep everyone happy.