Businesses are rushing to take advantage of generative AI, but surveys reveal that this comes with risks.
According to recent data from BlackBerry, a staggering 75% of businesses are either banning or contemplating restricting the use of tools like ChatGPT and other generative AI platforms.
The research indicated that 61% of the organizations contemplating such a move believe the restrictions might be permanent or long-lasting.
The survey polled 2,000 IT decision-makers in the US, UK, Canada, Germany, France, the Netherlands, Japan, and Australia.
Executives cited concerns about data security, privacy, and potential damage to brand reputation.
A clear majority (80%) felt it was an organization’s prerogative to control which applications employees use for work, though 74% also felt that outright bans could be overreaching.
There were positive findings, too. About 55% acknowledged AI’s efficiency gains, 52% believed in its power to foster innovation, and 51% felt it could augment creativity. Meanwhile, 81% saw potential for the technology to fortify cybersecurity defenses.
The US government recently announced a cybersecurity challenge to design AI applications that bolster national security.
BlackBerry’s Chief Technology Officer, Shishir Singh, remarked on the findings, “Banning generative AI applications in the workplace can mean a wealth of potential business benefits are quashed.”
He added, “As platforms mature and regulations take effect, flexibility could be introduced into organizational policies. The key will be in having the right tools in place for visibility, monitoring, and management of applications used in the workplace.”
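As a purely illustrative sketch of the kind of application management Singh describes (the policy contents, domain names, and function below are hypothetical and not tied to any BlackBerry product), an allow/deny check for generative AI tools might look like this:

```python
# Hypothetical sketch of an application allow/deny policy check for
# generative AI tools. Values and names are illustrative only.
GENAI_APP_POLICY = {
    "chat.openai.com": "deny",      # tool banned under current policy
    "copilot.internal": "allow",    # approved internal deployment
}
DEFAULT_ACTION = "review"           # unknown tools get flagged for review

def check_app(domain: str) -> str:
    """Return and log the policy action for a given generative AI tool."""
    action = GENAI_APP_POLICY.get(domain, DEFAULT_ACTION)
    print(f"policy decision for {domain}: {action}")
    return action

if __name__ == "__main__":
    for app in ("chat.openai.com", "copilot.internal", "new-ai-tool.example"):
        check_app(app)
```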
Gartner research highlights similar AI-related risks to enterprises
A recent Gartner survey of 249 senior executives revealed similar findings to BlackBerry’s research, highlighting concerns around security, intellectual property, and over-reliance on third-party technology. The most frequently cited emerging risks were:
- Third-party viability (67%): Executives cited concerns about the stability or reliability of external AI partners or vendors. If these third parties face challenges, it could have cascading effects on the organizations that depend on them.
- Mass generative AI availability (66%): The proliferation of generative AI tools prompts concerns about intellectual property, privacy, and cybersecurity.
- Financial planning uncertainty (62%): Organizations may face challenges in financial planning driven by changing economic conditions, regulatory shifts, or market volatility.
- Cloud concentration risk (62%): As businesses increasingly rely on cloud-based solutions, there’s a growing risk associated with being too dependent on a single cloud provider.
- China trade tensions (56%): Ongoing trade tensions with China hint at potential disruptions in international trade, affecting supply chains, tariffs, and market stability.
Ran Xu, director of research in Gartner’s risk and audit practice, highlighted how quickly generative AI has climbed the rankings in the firm’s quarterly risk survey.
Xu stated, “This reflects both the rapid growth of public awareness and usage of generative AI tools, as well as the breadth of potential use cases and, therefore, potential risks that these tools engender.”
AI tools are still predominantly developed by the same handful of companies, meaning businesses building workflows around them are vulnerable to performance fluctuations in a low-competition market. This is evident in the ongoing debate surrounding GPT-4’s output quality, which has reportedly declined in recent months.
OpenAI admitted that performance had dropped on some tasks but has yet to disclose the technical details of why.
Fluctuations in AI performance are bad news for companies building business-critical workflows around models they don’t control.
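For teams that nonetheless depend on hosted models, one common mitigation is to pin requests to a dated model snapshot rather than a moving alias, so that upstream updates do not silently change behavior. The sketch below is illustrative only: it assumes the openai Python package (pre-1.0 interface), and the snapshot name and helper function are examples rather than recommendations.

```python
# Illustrative sketch: pinning a dated model snapshot so that upstream model
# updates do not silently change a business-critical workflow.
# Assumes the openai Python package (pre-1.0 interface); helper and prompt
# are hypothetical.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# A dated snapshot such as "gpt-4-0613" keeps behaving the same until it is
# retired, whereas the bare "gpt-4" alias may be re-pointed to newer revisions.
PINNED_MODEL = "gpt-4-0613"

def summarise_ticket(ticket_text: str) -> str:
    """Summarise a support ticket using the pinned model snapshot."""
    response = openai.ChatCompletion.create(
        model=PINNED_MODEL,
        temperature=0,  # keep output as stable as possible for regression checks
        messages=[
            {"role": "system", "content": "Summarise the ticket in two sentences."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response["choices"][0]["message"]["content"]
```

Pinning does not stop a provider from eventually retiring a snapshot, so it can be paired with periodic checks of model output against a fixed set of prompts to catch drift early.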