10 Years Of Human Hacking: How ‘The USB Way’ Evolved

After a decade of clicking without consequence, users still haven’t gotten the message about the dangers of rogue USB devices with malware hidden inside.

Steve Stasiukonis, Contributor

May 10, 2016


When my piece "Social Engineering the USB Way" ran in 2006, the intent was to create awareness about endpoint security and how plugging something into your computer could result in severe consequences. As a penetration tester, I hoped those who fell victim to the exploit would learn a valuable lesson. Since then, my company has performed hundreds of similar tests, tempting users with USB devices in numerous ways. Needless to say, the results were always interesting.

As users started to become educated about rogue USB drives, we changed the rules by purchasing memory sticks branded with their company name and logo. Sometimes we attached them with a lanyard also printed with the corporate insignia. In some cases, we placed them on the desks of individual users, and in other instances, we physically mailed them to the individual. In all scenarios, users still plugged the devices in and ran whatever exploit we stored on the drive. 

Read how it all started when Steve Stasiukonis, in 2006, turned a socially-engineered thumb drive giveaway into a serious internal threat. The piece was one of the most popular reads in Dark Reading history.

When our ability to place devices in the hands of users became a problem, we focused on different delivery methods. Rather than trying to drop them on desks or mail them to individual users, we shipped bulk packages. In some cases, we sent hundreds of corporate-branded USB memory sticks to targeted departments within a company. We would often place a note in the package instructing the recipient that the devices had been approved by the information technology department and should be distributed to all employees. In almost every instance, the recipients complied with our request.

One particular test was very interesting. Our client had been monitoring two packages that had been unopened for several weeks. We shipped one parcel to his West Coast operation and another to his office in the east. 

Our client’s frustration would soon be replaced by panic. I called him to get the status of the West Coast parcel, and he explained that he could not find it. While scouring the building, he entered the commissary to find almost every employee wearing our poisoned USB devices and customized lanyards around their necks. Like a madman, he started going from person to person, tearing them off.

When I asked about the East Coast package, our client indicated the situation was worse. His East Coast counterpart was watching that package and noticed it had gone missing. They investigated the disappearance, and learned the marketing group had taken our poisoned parcel of USB devices to a well-known industry trade show as free swag for their booth. Fortunately, everything was collected before the attendees of the show were given any of the infected trinkets.

Our social engineering testing began to morph into a psychological experiment. As users clicked on our poisoned thumb drive content, we began displaying a command prompt that read, “You should not have done that.” Within the prompt, we would also display their IP address, machine name, and user name, along with our signature logo of a skull and crossed keyboards. The same details were also sent to the IT department. Our goal was to see whether users would call IT and admit they had done something that could be a problem for their employer.
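For readers curious what a benign awareness payload of this sort might look like, here is a minimal illustrative sketch in Python. It is not the tool we used; it simply gathers the hostname, user name, and local IP, displays the warning, and reports the same details to a hypothetical IT-notification endpoint (REPORT_URL is a placeholder, not a real service).

```python
# Minimal sketch of a benign awareness payload (illustrative only).
# REPORT_URL is a hypothetical placeholder for an IT-notification endpoint.
import getpass
import socket
import urllib.parse
import urllib.request

REPORT_URL = "https://example.internal/it-notify"  # hypothetical endpoint


def machine_details():
    """Collect the details shown to the user: IP address, machine name, user name."""
    hostname = socket.gethostname()
    try:
        ip_address = socket.gethostbyname(hostname)
    except socket.gaierror:
        ip_address = "unknown"
    return {"ip": ip_address, "machine": hostname, "user": getpass.getuser()}


def main():
    details = machine_details()

    # Display the warning the way a command prompt would.
    print("You should not have done that.")
    for key, value in details.items():
        print(f"{key}: {value}")

    # Best-effort report of the same details to the IT department.
    data = urllib.parse.urlencode(details).encode()
    try:
        urllib.request.urlopen(REPORT_URL, data=data, timeout=5)
    except OSError:
        pass  # In this sketch, a failed report is silently ignored.


if __name__ == "__main__":
    main()
```

In a real engagement, the message, the reporting endpoint, and the rules of engagement would of course be agreed upon with the client before any devices were distributed.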

Our results were surprising. We learned that users in a variety of company positions rarely called IT about our message. We also found they would repeat their actions. In one case, a VP at a major company triggered our exploit multiple times, displaying our message each time, yet he never called IT! When the opportunity arose, we asked users why they didn’t alert IT. Only one answer seemed genuine. The user responded by saying, “As long as the machine seemed to be functioning fine, why should I bother calling IT?"

Our user’s honest comment was frightening, but it made sense. Why get a tongue-lashing from IT for not following policy if everything was OK? Users have clearly learned to click without consequence. With the plague of CryptoLocker malware becoming more and more prevalent, though, I suspect this behavior will change quickly. Unfortunately, the poor IT guys will be forced to deal with the burden of restoring users’ machines from backups, or converting dollars to Bitcoin to meet the ransom demands.


About the Author

Steve Stasiukonis

Contributor

Steve serves as president of Secure Network, focusing on penetration testing, information security risk assessments, incident response and digital investigations. Steve has worked in the field of information security since 1997. As a part of that experience, Steve is an expert in social engineering and has demonstrated actual social engineering efforts involving pretexting, phishing and physically penetrating financial institutions, data centers and other highly secure operations and facilities. Steve has contributed to Dark Reading since 2006.
