Perceptual Ad Blockers Have Security Flaws, Too
Blocking ads is about more than stopping annoying pop-ups; there's a security component as well. However, a new crop of perceptual ad blockers built on machine learning has flaws and shortcomings of its own.
Users want to block the ads that show up on websites for more reasons than simple annoyance: they also wish to protect themselves from the malicious content that may be embedded within those ads.
Typical ad blockers currently in use are based on "filter lists": curated rules that tell the blocker exactly which content to block.
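To see what that means in practice, here is a toy sketch of filter-list matching. The rule syntax loosely follows the EasyList convention of "||domain^" for domain blocking; the rules and domains are hypothetical, for illustration only.

```python
# Toy sketch of filter-list blocking (not a real EasyList parser):
# a rule like "||ads.example.com^" blocks any request to that domain.
from urllib.parse import urlparse

FILTER_LIST = [
    "||ads.example.com^",      # hypothetical rules for illustration
    "||tracker.example.net^",
]

def is_blocked(request_url: str) -> bool:
    host = urlparse(request_url).hostname or ""
    for rule in FILTER_LIST:
        if rule.startswith("||") and rule.endswith("^"):
            domain = rule[2:-1]
            # Match the domain itself or any of its subdomains.
            if host == domain or host.endswith("." + domain):
                return True
    return False

print(is_blocked("https://ads.example.com/banner.js"))    # True
print(is_blocked("https://news.example.org/story.html"))  # False
```

The weakness is plain from the sketch: the blocker only catches what the list anticipates, so the list must be updated constantly.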
However, there are problems with this approach, notably keeping that list current and relevant. This led researchers at Princeton University to spend 18 months developing what they called a perceptual ad blocker: software that looks at the rendered web page and deletes ads based on what it sees.
It was hoped that this approach would end the "arms race" in which ad generators make minor tweaks to their ads to evade blocking, ad blockers respond with updated lists, and the ad generators tweak again.
Perceptual blockers are designed to pick up on structural ad cues, such as a "Sponsored" link, a close button inside a pop-up ad, or any legally required signifiers an ad must display. They use these structural elements to make the blocking decision.
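As a rough illustration of how such structural cues might be detected, here is a minimal sketch that scans an HTML fragment for disclosure text such as "Sponsored." The cue list and markup are assumptions for illustration; the real extensions operate on the rendered page, not raw HTML.

```python
# Minimal sketch of structural-cue detection, in the spirit of a
# perceptual ad blocker: flag page text that matches a mandated ad
# disclosure. The cues and sample markup are illustrative assumptions.
from html.parser import HTMLParser

AD_CUES = {"sponsored", "adchoices", "advertisement"}

class AdCueScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_data(self, data):
        # Compare each run of visible text against the cue list.
        text = data.strip().lower()
        if text in AD_CUES:
            self.flagged.append(text)

page = '<div><span>Sponsored</span><p>Buy now!</p></div>'
scanner = AdCueScanner()
scanner.feed(page)
print(scanner.flagged)  # ['sponsored']
```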
But security researchers from Stanford University and the CISPA Helmholtz Center for Information Security have found conceptual and practical problems with this approach. Their research paper shows that such a blocker can not only be definitively defeated but may itself pose security problems.
As the researchers note:
"We show that perceptual ad-blocking engenders a new arms race that likely disfavors ad-blockers. Unexpectedly, perceptual ad-blocking can also introduce new vulnerabilities that let an attacker bypass web security boundaries and mount DDoS attacks."
The researchers looked at two perceptual ad blockers.
The first is named Perceptual Ad Highlighter and the other is called Sentinel. Both train neural networks on examples of ads, producing a model of what an ad looks like; that model then drives the "block or leave alone" decision.
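The decision logic can be pictured as follows. This is a minimal sketch assuming a binary image classifier that scores a screenshot of a page element; the tiny architecture and the threshold are stand-ins, not the actual networks used by either tool.

```python
# Minimal sketch of the "block or leave alone" decision, assuming a
# binary CNN classifier trained on screenshots of page elements.
import torch
import torch.nn as nn

classifier = nn.Sequential(          # stand-in for a trained network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1),                # one logit: "is this an ad?"
)

def should_block(element_image: torch.Tensor, threshold: float = 0.5) -> bool:
    """Return True if the model thinks this page element is an ad."""
    with torch.no_grad():
        p_ad = torch.sigmoid(classifier(element_image.unsqueeze(0)))
    return p_ad.item() > threshold

screenshot = torch.rand(3, 64, 64)   # placeholder for a rendered element
print(should_block(screenshot))
```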
To attack the perceptual blockers, the researchers crafted ads designed to fool both tools. They made changes to the ads' appearance that were imperceptible to a human but capable of fooling a machine learning classifier; such inputs are known as adversarial examples.
One such change altered the AdChoices logo commonly found in ads. They also found that a transparent mask laid over an entire website could do the trick, and they examined other possible attacks as well.
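One standard way to craft such imperceptible perturbations is the fast gradient sign method (FGSM), sketched below. The paper evaluates a range of attacks; this particular recipe and the toy model are illustrative assumptions, not the researchers' exact method.

```python
# Sketch of how an imperceptible perturbation can shift a classifier's
# verdict, using the fast gradient sign method (FGSM). The model here
# is a toy stand-in for an ad classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
loss_fn = nn.BCEWithLogitsLoss()

ad_image = torch.rand(1, 3, 64, 64, requires_grad=True)
label = torch.ones(1, 1)             # ground truth: this is an ad

# Take one gradient step that *increases* the loss on the true label,
# nudging every pixel by at most epsilon (invisible to a human eye).
loss = loss_fn(model(ad_image), label)
loss.backward()
epsilon = 2 / 255
adversarial = (ad_image + epsilon * ad_image.grad.sign()).clamp(0, 1)

print("original logit:   ", model(ad_image).item())
print("adversarial logit:", model(adversarial).item())
```

The key point is that the perturbation budget (epsilon) is small enough that the ad looks unchanged to a person, yet the classifier's output moves away from the "ad" verdict.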
In the end, though, they found that current approaches to building perceptual ad blockers will not survive a determined adversary: the methods now used to train such blockers are too easily hoodwinked.
As the researchers summarized:
"The overarching goal of this analysis has been to highlight and raise awareness of the fundamental vulnerabilities that perceptual ad-blockers inherit from existing vision classifiers. As long as robust defenses to adversarial examples elude us, perceptual ad-blockers will be dragged into a new arms race in which they start from a precariously disadvantaged position -- given the stringent threat model that they must survive."
— Larry Loeb has written for many of the last century's major "dead tree" computer magazines, having been, among other things, a consulting editor for BYTE magazine and senior editor for the launch of WebWeek.