On January 17, 2024, the Canadian Centre for Cyber Security (the Centre) published an assessment titled “The Threat from Large Language Model Text Generators” (the Assessment). Drawing on resources available to the Centre through its unique role in defending the Government of Canada’s information systems, the Assessment provides insight into adversary behaviour in cyberspace and is based on information available as of June 26, 2023.

Large language models (LLMs) are a form of generative artificial intelligence (AI) that can produce complete sentences or entire documents from user-supplied prompts. The Assessment states that LLMs represent a growing and evolving threat to Canada’s information ecosystem, as generative AI has become increasingly accessible to the public, cyber threat actors, and state-sponsored actors. For the current threat landscape, the Centre identifies the following as the “most likely threats” from LLMs:

  1. Online Influence Campaigns. Before LLMs, online influence campaigns required human writers to produce content. LLMs can now replace those writers and amplify misinformation and disinformation campaigns, to which Canadians are predicted to be especially vulnerable given their high intake of social media content.
  2. Email Phishing Campaigns. Text generated by LLMs is often nearly indistinguishable from human-written text, allowing cyber threat actors to automate the drafting of targeted phishing emails designed to steal sensitive information.
  3. Detection of Human vs. Machine Content. Synthetic content is increasingly difficult to detect and remove, owing to the current lack of effective detection tools and the growing availability of LLM text generators.

Conversely, the Centre assessed that the use of LLM text generators to create sophisticated malicious code capable of enabling a zero-day attack is an “unlikely” threat. Similarly, while threat actors could inject or alter data used to train newer versions of LLMs in order to undermine the accuracy and quality of their output (i.e., poisoning datasets), the Centre deems such threats “very unlikely” given the large size and proprietary nature of the training datasets.

Lastly, for organizations using LLM text generators, the Assessment identifies certain associated risks to keep in mind, including: (i) data governance breaches, where unauthorized use of online tools may expose information to third parties and breach organizational data governance requirements; and (ii) leaks of protected information, where individuals entering prompts into LLM text generators may unknowingly disclose sensitive information outside approved organizational security frameworks.

Summary By: Steffi Tran

E-TIPS® ISSUE

24 02 07

Disclaimer: This Newsletter is intended to provide readers with general information on legal developments in the areas of e-commerce, information technology and intellectual property. It is not intended to be a complete statement of the law, nor is it intended to provide legal advice. No person should act or rely upon the information contained in this newsletter without seeking legal advice.

E-TIPS is a registered trade-mark of Deeth Williams Wall LLP.