Cybercriminals Change Tactics to Outwit Machine-Learning Defense

The rise in machine learning for security has forced criminals to rethink how to avoid detection.

Dark Reading Staff, Dark Reading

December 14, 2018

1 Min Read

Cybercriminals know that defenders have begun using machine learning to keep users safe. In response, they are changing their tactics to outwit these defenses, and machine-learning models that lean heavily on historical loss patterns are especially vulnerable.

According to the "Q3 2018 DataVisor Fraud Index Report," which is based on an analysis of sample data from more than 40 billion events, fraud actors have changed their tools so that they can quickly adapt to new defenses put in place by companies. In a statement accompanying the report, DataVisor puts numbers to the quick adaptation, saying, "Out of the fraud signals detected, 36% were active for less than one day, and 64% for less than one week."
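The report's headline statistic describes how long a fraud signal stays active before attackers rotate it out. As a minimal sketch of how such a metric might be computed, the snippet below takes a set of signal lifetimes (first seen to last seen) and measures the share that fall under one day and under one week. The lifetimes here are invented for illustration; the report's underlying dataset is not public.

```python
from datetime import timedelta

# Hypothetical fraud-signal active windows (first seen -> last seen).
# These values are illustrative only, not from the DataVisor dataset.
signal_lifetimes = [
    timedelta(hours=3), timedelta(hours=18), timedelta(days=2),
    timedelta(days=5), timedelta(days=12), timedelta(hours=6),
    timedelta(days=30), timedelta(minutes=45),
]

def share_active_less_than(lifetimes, cutoff):
    """Fraction of signals whose active window is shorter than `cutoff`."""
    return sum(1 for t in lifetimes if t < cutoff) / len(lifetimes)

under_one_day = share_active_less_than(signal_lifetimes, timedelta(days=1))
under_one_week = share_active_less_than(signal_lifetimes, timedelta(weeks=1))

print(f"active < 1 day:  {under_one_day:.0%}")   # 50% on this sample
print(f"active < 1 week: {under_one_week:.0%}")  # 75% on this sample
```

Short active windows like these are what make signature-style blocklists decay quickly: by the time a signal is learned from historical data, the attacker may have already retired it.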

In addition, sophisticated fraud actors are now more likely to use private domains; conduct multilayer, staged attacks; and wait days or weeks between those stages.

Read more details here.

About the Author


Dark Reading is a leading cybersecurity media site.

