Meta’s new ‘system cards’ make Instagram’s AI algorithm a little less mysterious
The company’s latest attempt to make internal operations more transparent follows months of regulatory scrutiny over its algorithms.
On Wednesday, Facebook and Instagram’s parent company, Meta, announced during a virtual event that it was applying artificial intelligence capabilities to power a range of tasks, such as universal translation and a new generation of AI assistants, across its future metaverse platform and existing family of apps. An early digital assistant that can make recommendations and set reminders has already started rolling out on Portal, Meta’s video-chat device.
Along with its introduction of new AI-powered technology projects, Meta also made a concerted effort to illustrate how AI works behind the scenes with a new explanatory tool called “system cards.”
“AI powers back-end services like personalization, recommendation, and ranking,” Meta said in a blog post yesterday. “But understanding how and why AI operates can be difficult for everyday users and others. We’re aiming to change that.”
[Related: A look inside TikTok’s seemingly all-knowing algorithm]
Artificial intelligence models may be deployed for a wide range of tasks. For example, Meta’s image classification models are designed to predict the contents of an image, but they can also detect and flag harmful content or power a recommender system that shows posts a particular user might find interesting. On top of this, different models can interact and work together on tasks in any given system.
The new system cards lay out how an AI system uses information such as an individual’s history of interactions on the app, preferences, and account settings to build a model that informs the order in which posts are presented to that user as they scroll through the app. Each card is “designed to provide insight into an AI system’s underlying architecture and help better explain how the AI operates,” Meta explained.
Meta’s pilot system card showcases how Instagram ranks posts in a user’s feed. In a slideshow animation on its blog, Meta runs through the different steps the AI model goes through to order posts on a user’s feed. First, all unseen posts from accounts that a person follows go through a pre-rank filtering system that removes content in violation of Instagram’s community guidelines.
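In code, that pre-ranking step amounts to a simple filter pass over the candidate posts. The sketch below is purely illustrative; the field names (`seen`, `violates_guidelines`) are assumptions, and Meta has not published how its real filter works:

```python
def prefilter(posts: list[dict]) -> list[dict]:
    # Illustrative pre-ranking filter: keep only posts the user hasn't seen
    # and that aren't flagged as violating community guidelines.
    # Field names are hypothetical, not Meta's actual schema.
    return [
        p for p in posts
        if not p["seen"] and not p["violates_guidelines"]
    ]
```

Only the posts that survive this pass move on to the scoring stage described next.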
[Related: Instagram now lets you manage ‘sensitive content.’ Here’s how to use it.]
Then, a model collects attributes from each post and combines them with information like how often you interact with that account to predict how likely you are to like, save, tap, share, or comment on it. Based on these predictions, it assigns each post a numerical probability score. Posts with higher scores appear earlier than those with lower scores, and more recent posts get a boost toward the top of the timeline. The process applies to regular photo posts, videos, reels, shopping posts, and followed hashtags. Posts that third-party fact-checkers flag as containing “misinformation” are demoted.
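Meta doesn’t disclose its actual model, weights, or features, but the ranking step it describes, scoring each post on predicted interactions and then sorting with a recency boost, can be sketched in simplified form. Every weight, field name, and the boost formula below are illustrative assumptions, not Meta’s real values:

```python
from dataclasses import dataclass


@dataclass
class Post:
    account: str
    age_hours: float   # how long ago the post was published
    p_like: float      # predicted probability the user likes it
    p_comment: float   # predicted probability the user comments on it
    p_share: float     # predicted probability the user shares it


def score(post: Post) -> float:
    # Hypothetical weighted sum of predicted interaction probabilities.
    base = 1.0 * post.p_like + 2.0 * post.p_comment + 1.5 * post.p_share
    # Illustrative recency boost: newer posts rank higher, decaying with age.
    recency = 1.0 / (1.0 + post.age_hours / 24.0)
    return base * recency


def rank_feed(posts: list[Post]) -> list[Post]:
    # Higher-scoring posts appear earlier in the feed.
    return sorted(posts, key=score, reverse=True)
```

In this toy version, a day-old post the user is 90 percent likely to like would outrank a two-day-old post they’d almost certainly scroll past, matching the behavior Meta describes.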
The system will also apply general rules to promote a large variety of posts across media types, authors, and content in the feed. These rules, for example, tell the system to “show no more than three posts in a row from the same account.” On the system card tool site, members of the public can run through an interactive exercise with a hypothetical user profile to see how the various components are applied in practice. The tool appears to build on a longer explanatory post that Instagram executive Adam Mosseri wrote last June, which tried to shed light on how the Instagram algorithm worked.
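Meta gives one concrete example of these diversity rules: no more than three posts in a row from the same account. A minimal sketch of how such a rule might be enforced on an already-ranked list follows; this greedy pull-forward approach is an assumption for illustration, not Meta’s implementation:

```python
def break_runs(accounts: list[str], max_run: int = 3) -> list[str]:
    """Reorder a ranked feed (represented by account names) so no more
    than `max_run` consecutive posts come from the same account.

    Hypothetical interpretation of Meta's stated rule: when a run gets
    too long, the next post from a different account is pulled forward.
    """
    out = list(accounts)
    run = 1
    i = 1
    while i < len(out):
        run = run + 1 if out[i] == out[i - 1] else 1
        if run > max_run:
            # Find the next post from a different account and move it here.
            j = i + 1
            while j < len(out) and out[j] == out[i]:
                j += 1
            if j == len(out):
                break  # nothing left to break up the run with
            out.insert(i, out.pop(j))
            run = 1
        i += 1
    return out
```

For example, a ranked feed of four posts from one account followed by one from another would have the second account’s post pulled up to fourth position, capping the run at three.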
Meta’s new push for transparency around the algorithms that power its apps comes after months of regulatory scrutiny.
[Related: Here are all the changes Instagram promised Congress it would make]
Lawmakers have introduced a series of legislative proposals that would require internet platforms to adequately inform users about how their services use personal data. Last November, House lawmakers introduced the Filter Bubble Transparency Act, which would require platforms to offer a version of their service that doesn’t select content based on personal data, Axios reported. A bipartisan Senate bill introduced in early February seeks to spur more research on the impact of various platform designs. Another Senate bill introduced this month would press social media companies to report their algorithm practices to consumer protection agencies like the Federal Trade Commission.
Meta’s flagship app, Facebook, has recently rebranded its feed, which, according to The Verge, could be in response to mounting criticism over the harmful effects its algorithmically ranked content could have.
But even Meta acknowledges that there’s only so much that system cards can do when it comes to explaining how AI systems function to a general audience. “A single system card may not be relevant in the same way to each person that sees it because we continue to test new experiences for our users,” Meta noted in the blog. Additionally, the company argues that “too much information in some of our system cards could give malicious actors enough knowledge about a system or model to reverse-engineer it,” which could compromise the security of its products and potentially harm the users.