
What measures will be in place to safeguard Canadians from the potentially unsettling implications of artificial intelligence?

Canada has taken its first steps to rein in the extensive capabilities of artificial intelligence, unveiling guidelines that place responsibility on developers to protect against potential risks arising from this emerging technology.

On Wednesday, Innovation Minister François-Philippe Champagne introduced a voluntary code of ethics designed to promote responsible development and utilization of generative AI systems. This code is expected to be endorsed by 12 prominent AI companies, developers, and researchers in the country.

Generative AI, exemplified by OpenAI’s widely acclaimed ChatGPT, represents a swiftly evolving and sophisticated branch of artificial intelligence. It relies on data inputs to generate content such as text, images, and sounds.

While the technology has received praise for its capacity to optimize and simplify operations across various sectors, it has also drawn global scrutiny due to several inherent risks. These include the production of intentionally misleading content, potential breaches of privacy, and the risk of biases contaminating the datasets upon which these systems rely.

"After meeting with experts … we realized that while we are developing a law here in Canada, it will take time. And I think that if you ask people in this industry, they want us to take action now to make sure that we have specific measures that companies can take now to build trust in their AI products," Champagne said at the All In artificial intelligence conference in Montreal.


Following a series of consultations, the guidelines unveiled on Wednesday assign key responsibilities to developers. They must take accountability for the risks associated with their systems, conduct safety assessments before deploying their tools, and address potential discriminatory outputs.

The code additionally calls for transparency with the public to enable informed engagement with the technology, human oversight during system use (as opposed to relying solely on computers), and robust cybersecurity measures to safeguard AI tools from cyberattacks.

"Signatories also commit to support the ongoing development of a robust, responsible AI ecosystem in Canada. This includes contributing to the development and application of standards, sharing information and best practices with other members of the AI ecosystem, collaborating with researchers working to advance responsible AI, and collaborating with other actors, including governments, to support public awareness and education on AI," the document notes.

Ottawa’s code of conduct is voluntary and not legally binding; however, Champagne’s office emphasizes that signatories, including BlackBerry and OpenText, have expressed a commitment to collaborate with the government in a spirit of goodwill.

Because OpenAI is an American company, ChatGPT is not bound by Canada’s guidelines. Instead, the company has already agreed to adhere to a comparable code established by the United States back in July.