On July 14, 2023, the Canadian Centre for Cyber Security published a guidance document (the Guidance) on the potential risks associated with generative artificial intelligence (AI) and possible mitigation measures for organizations and individuals.

The Guidance describes generative AI as a type of AI tool that is trained on large datasets and models patterns in that data to create new content, such as text, images, audio, or software code.  The Guidance warns that, while generative AI may have a significant impact across several industries, the technology also brings its own dangers by enabling threat actors to conduct more effective cyber attacks.  Notably, the Guidance references the following risks:

  1. Phishing. Generative AI may improve targeted spear phishing attacks by allowing threat actors to craft attacks more frequently, more automatically, and with a higher level of sophistication.
  2. Privacy of Data. Users of generative AI models may provide sensitive corporate data or personally identifiable information during their interactions with these systems, which threat actors can harvest to impersonate individuals or spread false information.
  3. Malicious Code. Technically skilled threat actors may overcome restrictions within generative AI tools and use them to create malware.  Similarly, those with less technical capability may still use generative AI to help write functional malware that can be deployed against businesses or organizations.
  4. Loss of Intellectual Property. Sophisticated threat actors may use generative AI tools to more efficiently steal corporate data.

To mitigate these risks, the Guidance suggests that organizations take precautionary steps, such as implementing authentication mechanisms (e.g., multi-factor authentication) to prevent unauthorized access to their data, keeping IT equipment and software patches up to date, and training employees on how to handle social engineering attacks in the workplace.  The Guidance also advises individuals seeking to protect their personal data from phishing attacks to carefully review online content to verify its source, practise proper cyber security hygiene (e.g., use strong passwords), and limit exposure to possible compromise by reducing the amount of personal information posted online.

For organizations that are considering using generative AI, the Guidance lists key security protections that should be implemented in daily practice to generate trusted content, including establishing generative AI usage policies, ensuring that the datasets used to train their AI systems come from trusted sources, and choosing vendors with robust security practices.

Summary By: Imtiaz Karamat

E-TIPS® ISSUE

23 08 09

Disclaimer: This Newsletter is intended to provide readers with general information on legal developments in the areas of e-commerce, information technology and intellectual property. It is not intended to be a complete statement of the law, nor is it intended to provide legal advice. No person should act or rely upon the information contained in this newsletter without seeking legal advice.

E-TIPS is a registered trade-mark of Deeth Williams Wall LLP.