Artificial intelligence (AI) tools, like ChatGPT, have made a dramatic entrance into the public realm. Now, virtually anyone can use AI for a wide range of tasks, from translating languages to writing code. Because of its vast capabilities, AI can change the way we work. Before diving into this exciting opportunity and using AI at work in Canada, there are some important factors to consider when deciding whether AI is the right fit for your organization. Let’s look at eight of these in more detail.
Table of Contents
- Bias, Stereotyping, and Discrimination
- Malicious Use
- Intellectual Property
1. Bias, Stereotyping, and Discrimination
AI identifies patterns and makes predictions or classifications based on those patterns. Generative AI is a specific type of AI that draws on the data it was trained on and the information provided in a prompt (such as a question the user asks it) to produce an output. The data it draws on depends on the program: some tools are trained on information from the Internet, while others may be trained on a specific set of information.
Humans also make assumptions (called biases) based on the patterns we see in the world. Biases are like shortcuts in the brain that lead us to a conclusion based on a prejudice we have in favour of or against a person, group, idea, or thing. Like AI, we take in information about the world around us and make predictions drawn from the patterns we identify. When these predictions are based on bias and not fact, they can lead to stereotyping and discriminatory behaviour.
Because humans have biases, a person can introduce their bias into an AI tool through the data used to train it. The AI’s output will reflect any bias present in that training data. For example, if a facial-recognition AI tool was trained mostly on white male faces, it may struggle to correctly identify racialized persons.
Someone can also introduce their bias in the prompt they input to the AI tool. A common example is when HR uses AI to assist in hiring. If the input instructions for screening résumés include biased criteria, the output will reflect that bias, which could lead to unfair hiring decisions.
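To make this concrete, here is a minimal, hypothetical sketch of automated résumé screening. All names, fields, and criteria below are invented for illustration; they are not from any real AI tool or hiring system.

```python
# Hypothetical illustration: a résumé-screening filter whose criteria
# quietly encode bias. All data and rules are invented for this sketch.

def screen_resumes(resumes, criteria):
    """Return only the résumés that satisfy every criterion (predicate)."""
    return [r for r in resumes if all(check(r) for check in criteria)]

resumes = [
    {"name": "A", "years_experience": 6, "employment_gap_months": 0},
    {"name": "B", "years_experience": 8, "employment_gap_months": 14},  # e.g. parental leave
]

# A seemingly neutral rule, such as rejecting any employment gap, can act
# as a proxy for protected characteristics and screen out candidate B
# despite stronger experience.
biased_criteria = [
    lambda r: r["years_experience"] >= 5,
    lambda r: r["employment_gap_months"] == 0,  # biased proxy criterion
]

shortlist = screen_resumes(resumes, biased_criteria)
print([r["name"] for r in shortlist])  # only candidate "A" survives the filter
```

The point is not the code itself but that the bias lives in the instructions given to the system: dropping the employment-gap rule changes the outcome, which is why reviewing screening criteria for bias matters.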
Generative AI has been criticized for providing factually incorrect information in its outputs. Part of the problem is that not all AI tools have been trained on current information. For example, as of early 2023, ChatGPT had only been trained on information up to 2021, meaning it could not produce output reflecting changes or events since that date. Even more concerning is that AI tools can’t evaluate the accuracy of their output, and nothing prevents them from providing false information, even when the training data is up to date. AI predicts the likelihood of a particular arrangement of words; it does not grasp the contextual meaning or implications of its output or how accurately it reflects reality.
Another factor is the prompt itself. Generally, more specific prompts produce results closer to what the user wants, but not always: even specific prompts don’t guarantee better quality or more accurate outputs. And even when the output is correct, it may be hard to follow, or it may lack the detail and nuance needed to fulfil its intended purpose.
A lack of accuracy and the presence of bias could lead to unfair employment-related decisions, such as decisions about hiring, promotions, or end of employment. In all decision-making, employees should be given impartial treatment without favouritism or discrimination. Organizations need standards on how to use AI in decision-making to support fairness, for example, by eliminating bias from initial prompts.
4. Malicious Use
Much of the controversy surrounding AI involves its potential to both help and harm people. Companies often use AI with the intention of achieving positive outcomes, like improving efficiency or generating new ideas. But AI also has the potential for malicious use, like creating false, misleading, or defamatory content.
At work, employees could use AI to harm others. For example, an employee could create a fake image of a fabricated conversation between them and another employee that makes it appear as if they were being harassed. With this in mind, workplace investigations must be conducted by someone who knows how to detect AI-generated content. Where the investigator cannot, they should seek assistance from a third party who can.
Before introducing a new tool, practice, or protocol at work, employers should assess it for inherent and potential risks.
5. Intellectual Property
Who really owns the output produced by generative AI tools? The answer isn’t always clear, and it may depend on several considerations, like whether the generative AI tool:
- Draws on third-party data (this data could be subject to copyright claims); or
- Lists the sources used to compile the information, and whether those sources carry ownership and copyright implications.
A lack of clear expectations for using generative AI at work can create a risk of liability for an organization if it infringes on others’ intellectual property. An organization also risks having its own intellectual property infringed upon if it provides that property in an input.
6. Privacy
Using personal information in generative AI may violate privacy legislation in Canada. Existing privacy legislation requires that consent be given to collect, use, and disclose personal information (like someone’s name, age, police record checks, medical records, and employee records), except in specific circumstances allowed by the applicable legislation. But once information is put into a generative AI tool, employers have little to no control over how the tool uses or discloses it. Even where information is de-identified, it may still be associated with the individual, depending on the other information in the prompt.
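The re-identification risk can be illustrated with a minimal, hypothetical sketch: even after removing a name, a combination of quasi-identifiers (here, age and postal-code prefix, both invented for this example) can still single out one person.

```python
# Hypothetical sketch: "de-identified" data can still point to one person
# when quasi-identifiers are combined. All records below are invented.

employees = [
    {"name": "C. Tremblay", "age": 52, "postal_prefix": "K1A", "title": "Director"},
    {"name": "J. Singh",    "age": 34, "postal_prefix": "K1A", "title": "Analyst"},
    {"name": "M. Roy",      "age": 34, "postal_prefix": "M5V", "title": "Analyst"},
]

def matches(record, **quasi_ids):
    """True if the record has every supplied quasi-identifier value."""
    return all(record[k] == v for k, v in quasi_ids.items())

# A prompt that omits the name but mentions age and postal prefix still
# matches exactly one employee, effectively re-identifying them.
hits = [e for e in employees if matches(e, age=34, postal_prefix="K1A")]
print(len(hits))  # 1
```

This is why stripping names alone is not enough: the other details included in a prompt determine whether the information stays de-identified.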
7. Confidential Information
The same concerns about using personal information in AI apply to inputting confidential information. Certain personal and business-related information (such as compensation, client data, and the personal information of third parties) may be protected by confidentiality agreements or policies. Generally, a company can use and disclose confidential information for purposes reasonably related to the completion of job duties and responsibilities, which could include using it in AI tools. However, entering confidential information into AI tools increases the possibility of this information becoming public knowledge, which could breach confidentiality agreements or policies.
8. Evolving Legislation
Because of AI’s vast capabilities and wide-ranging implications, legislation in Canada specifically regulating its use is still evolving. The following legislative activities may place rules on how AI can be used:
- Quebec: Amendments were made to the Act Respecting the Protection of Personal Information in the Private Sector. The changes include transparency requirements where organizations use personal information to make decisions about individuals based exclusively on automated processing, such as AI tools. These requirements come into force September 22, 2023.
- Federal: The federal government has proposed the Artificial Intelligence and Data Act (AIDA), as part of Bill C-27, which seeks to regulate the development and use of high-impact AI tools and address adverse effects associated with their use. This legislation will build on existing human rights and consumer protection laws to guard against these adverse effects, such as biased outputs. This bill is currently going through the legislative process.
These legislative developments reflect the interest in regulating the rapidly growing field of AI. Employers interested in using AI should stay informed about legislative changes.
The Bottom Line: Using AI in Canada
HR professionals in Canada need to determine whether and how AI can be used in their workplace and take steps to mitigate the risks associated with its use. When it comes to implementing changes in your workplace, start a project with our team of HR consultants, who can help you achieve your goal while staying compliant.