Montreal on the Front Line of the Battle for Information Security
Artificial intelligence, or AI, is at an important juncture in its history. Technological minds such as Elon Musk have called the regulation of AI one of the greatest challenges humanity faces. While our ability to automate decisions through complex algorithms has opened a vast range of possibilities, AI also poses serious difficulties. Musk specifically warns that AI could progress beyond human oversight and act entirely independently of human control. While this doomsday projection seems somewhat alarmist, his concern reflects both the opportunity AI presents and the baggage that accompanies it. Montreal has been thrown to the forefront of the worldwide effort to answer the questions AI poses. In 2017, companies such as Facebook, Google, and Microsoft established a wave of AI labs in the city, each intent on developing AI with safety at the crux of its innovation. Given how quickly the technology is progressing, and how threatening it has become, the question of how we can advance AI while still protecting our own freedoms is as pressing as ever. The labs now open in Montreal may be the first step toward some answers.
Montreal’s rise in the AI world is somewhat unexpected. We typically cite temperate Silicon Valley as the heartland of technological innovation in North America: a suburban paradise serving as the idyllic backdrop to some of the most profitable and cutting-edge companies in the world. Montreal, by contrast, is a frigid tundra, yet it is now attracting a wealth of funding and talent from across the globe. This is in large part due to the diligent academic work that has been quietly churning away in the city. Université de Montréal’s Professor Yoshua Bengio has been studying artificial intelligence since the 1980s, when the field was seeing little investment. Since then, with the backing of UdeM, Bengio has carved out the university’s position as a leader in deep learning by devoting resources to research in a growing field. In an interview with Forbes this past November, Bengio explained the reason for Montreal’s appeal to companies looking to use AI:
“It started with the University of Montreal trusting our vision and allowing us to recruit more professors, in a field that was not yet popular. Then there was a snowball effect. Better researchers brought in better students and better postdocs. This meant we were able to publish better papers, so we attracted more international visibility, which meant we could recruit even better people.”
The reward for all of this academic graft is best represented by the Facebook AI lab. The new lab, launched in September 2017, is headed by McGill professor Joelle Pineau. The AI that Professor Pineau will help develop is crucial to the future of information. As the internet’s largest social media platform, Facebook possesses one of the largest databases in the world, processing over 500 terabytes of data per day and forming detailed profiles on every one of its 2.2 billion users. Facebook owns all of this data, and only very loose guidelines dictate how it can be used or shared. This data played an instrumental part in the Russian breach of the American public’s information security.
The lead-up to the 2016 US presidential election was the first instance in which AI took a front seat in the national consciousness. After the DNC emails were hacked, Russian intelligence created a series of automated Facebook and Twitter accounts. The accounts were programmed to generate and distribute misinformation in a coordinated manner. Using Facebook user data, the bots posed as radical groups and used targeting algorithms to reach the widest and most susceptible audience, further hardening its political stances. The result was a torrent of misinformation that spread widely and saturated countless conversations with content that was simply falsified or exaggerated. The breach was so pervasive that a number of Trump campaign advisers retweeted or shared content originally posted by AI bots. The threat AI poses thus stretches far beyond cybersecurity and into the realm of information security: the practice of preventing the misuse, alteration, or corruption of information, and arguably one of the most important matters of this era.
Information security is an issue Facebook’s AI lab will surely be working to address, given the public calamity this most recent breach caused. It is a crucial part of democratic integrity, and as technology progresses it faces further threats. Charlie Warzel interviewed information security expert Aviv Ovadya for his article “The Terrifying Future of Fake News”, in which Ovadya paints a bleak picture of the future of artificial intelligence. The more sophisticated AI technology becomes, Ovadya claims, the harder it becomes to differentiate fact from fiction. He describes a dystopian reality where AI can learn an individual’s facial features, mannerisms, and tone and create an entirely faked video of that person talking. He also believes AI could manipulate political movements through fake grassroots campaigns run entirely from within social media, a scenario he dubs “polity simulation”. Collectively, Ovadya believes, these factors could lead to serious consequences, ranging from what he labels “reality apathy” (an environment so inundated with misinformation that consumers simply give up on distinguishing real from fake) to all-out war. While Ovadya’s concerns are certainly alarmist, they are also legitimate worries that highlight the capabilities of artificial intelligence, and Facebook will likely need to meet them with serious research, some of which has already started.
Last year, Facebook took measures to flag posts that might be misleading or propagandistic as a way to quell the spread of misinformation. However, it scrapped this tool only months after launch in favour of one that prioritizes personal posts from friends and family in the News Feed over shared articles, reasoning that the subjective distinction between propaganda and legitimate content was far harder to moderate than letting an algorithm sort the feed. The change has already drawn critics. Benjamin Fung, an associate professor at McGill, believes these new measures are insufficient to combat misinformation. Fung does, however, believe that AI is the solution: “Social media companies can utilize AI technology to identify those bot accounts based on their activity patterns, social networks, and contents of their posts/tweets. Then these bot-like accounts can be suspended, or flagged as potential bots for other users”. This is the kind of technology we can expect social media labs to build in the coming years in the race to match the hackers.
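The approach Fung describes maps onto a fairly standard supervised-learning pipeline. The sketch below is purely illustrative, not Facebook’s actual system: the account features, training examples, and flagging threshold are all assumptions invented for the example, which scores accounts on activity patterns and flags the most bot-like ones for review.

```python
# Illustrative only: a toy version of the bot-detection idea Fung
# describes. The features, training data, and threshold are invented
# for this sketch and are not Facebook's actual pipeline.
from dataclasses import dataclass

from sklearn.ensemble import RandomForestClassifier


@dataclass
class Account:
    posts_per_day: float        # activity pattern: posting frequency
    share_ratio: float          # shares vs. original posts
    follower_following: float   # shape of the account's social network
    duplicate_text_rate: float  # how often post text repeats verbatim


def features(a: Account) -> list:
    return [a.posts_per_day, a.share_ratio,
            a.follower_following, a.duplicate_text_rate]


# In practice, labels would come from accounts already confirmed as
# bots or humans; these few examples stand in for that training set.
training = [
    (Account(240.0, 0.97, 0.01, 0.88), 1),  # confirmed bot
    (Account(180.0, 0.92, 0.05, 0.71), 1),  # confirmed bot
    (Account(3.5, 0.35, 1.10, 0.02), 0),    # confirmed human
    (Account(1.2, 0.50, 0.90, 0.00), 0),    # confirmed human
]
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit([features(a) for a, _ in training],
        [label for _, label in training])


def flag_if_bot_like(account: Account, threshold: float = 0.8) -> bool:
    """Flag an account for suspension or review when its estimated
    bot probability exceeds the threshold."""
    prob_bot = clf.predict_proba([features(account)])[0][1]
    return prob_bot >= threshold


print(flag_if_bot_like(Account(300.0, 0.99, 0.02, 0.95)))  # likely True
```

In a real system the feature set would be far richer, drawing on the social graph and the text of posts themselves, and, as Fung suggests, flagged accounts would be surfaced for human review or marked as potential bots for other users rather than suspended automatically.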
As AI becomes more capable, the potential for it to be weaponized grows even more frightening. One of the scariest components of Ovadya’s prediction is that the technology cannot be regulated: it is an unstoppable force, almost certain to keep progressing. The work of the Facebook lab is crucial as a result. What is done in Montreal will likely help fight the next threat to national security, on a battlefield increasingly entrenched in social media. We cannot assume, however, that these innovations will protect us absolutely. Professor Fung believes the solution to information security may lie in other constraints: “The regulation should be on the data collection and sharing, not on the AI technology”. Without data, Fung suggests, AI is essentially useless as a weapon against information security; yet consumers remain extremely susceptible to data breaches. As he notes, “Suppose a hacker wants to steal a target victim’s data. It is fairly easy to trick the victim into clicking on some malicious link”. While AI technology is crucial, a more practical long-term solution may be found in securing data effectively. AI is at a troubling moment, and the solutions to many of our problems are far from obvious. But the moment is also exciting, with unbelievable innovation on our doorstep and Montreal taking the first stride toward that future.
Edited by Francesca Wallace