Flying Phish Hooks Schools of Employees

Penetration test proves many workers can still be easily fooled

Steve Stasiukonis, Contributor

September 11, 2008

5 Min Read

My company, Secure Network Technologies, makes its living by testing the physical security of corporate networks. We use all sorts of social engineering attacks to show our clients their vulnerabilities, frequently disguising ourselves as copier repairmen or air conditioning technicians. Recently, however, my partner, Doug Shields, suggested we should try to “phish” our way into a client's network.

We proposed the phishing idea to some of our customers, and two of them accepted the penetration challenge. The goal was to see how many users would respond to a bogus email by clicking on a link. Permission for the pen test was granted by our clients, and the appropriate agreements were put into place.

The two customers were from very different industries with numerous users of varying computer skills, so we weren't sure how many people would read our phishing email, much less respond to it. We also were concerned that many of our messages would be caught by spam filters, which were employed by both clients.

Like many phishing scams, our plan involved crafting an email to the employees, asking them to respond by clicking on a link. We tasked our summer intern, Karl Bitz, with putting something together.

Within an hour, Karl came back with an elaborate email, crafted in HTML, that posed as a message from one of their company's benefits providers. A link in the email brought users to a counterfeit site that requested a username and password via an online form. Employees who filled out the form were promised a $50 gift credit from a well-known online retailer (bogus, of course).
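For readers curious about the mechanics, here is a minimal sketch of how an HTML lure along these lines might be assembled with Python's standard email library. The sender address, link, and wording are hypothetical stand-ins for illustration only, not the message Karl actually built.

```python
# Minimal sketch of an HTML lure email (hypothetical sender, link, and copy).
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart("alternative")
msg["Subject"] = "Action required: confirm your benefits enrollment"
msg["From"] = "benefits.team@freemail.example"   # free webmail-style sender
msg["To"] = "employee@company.example"

html = """\
<html><body>
  <p>Dear employee,</p>
  <p>Confirm your benefits enrollment and receive a $50 gift credit.</p>
  <p><a href="http://benefits-portal.example/login">Click here to confirm</a></p>
</body></html>
"""
msg.attach(MIMEText(html, "html"))

# Delivery would go through smtplib (e.g., SMTP.send_message(msg)); omitted here.
print(msg.as_string())
```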

After reviewing the proposed phish, we realized it was too good. We spoke with legal counsel at both companies and decided that its realistic appearance was an unfair advantage and that we needed to dumb it down. We were also concerned about objections that might be raised by the benefits provider. We decided to make some changes to make the phish less dicey – but it's worth noting that a real criminal wouldn't have been concerned about any of these issues.

In an effort to make the phish a bit less convincing, we decided not to register a domain name that looked close to the company's real one (a tactic sometimes called typosquatting). We also didn't use an official-looking source address – we sent the messages from a free email service. We debated purchasing an SSL certificate, but abandoned the idea when we decided not to collect any user information.

Instead of fooling the users, we decided to teach them a lesson. If they clicked on the dangerous link, we wouldn't collect their data – we'd simply route them to a Wikipedia page that explained what phishing was (a rough sketch of that landing page follows the note below). As a kicker, we added a note at the bottom of the email:

  • Please note that clicking embedded links can often lead to stolen information via phishing scams. Feel free to navigate to our login page by typing the address into your browser window. The link is provided merely for your convenience. User information will never be shared with anyone outside of XYZ [the user/client's company name] or any third-party customers and providers. (C) 2008 XYZ Company.
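The landing page behind that link can be equally simple. Below is a minimal sketch, assuming a small Flask application and a hypothetical /login route (neither is specified in the original test), that sends every clicker straight to Wikipedia's phishing article instead of collecting anything.

```python
# Minimal sketch of the "teachable moment" landing page (assumes Flask;
# the route path is a hypothetical stand-in for whatever the lure linked to).
from flask import Flask, redirect

app = Flask(__name__)

@app.route("/login")
def phishing_lesson():
    # No form, no credential capture – just a redirect to the explanation.
    return redirect("https://en.wikipedia.org/wiki/Phishing", code=302)

if __name__ == "__main__":
    app.run(port=8080)
```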

In both of our phishing attacks, we gathered email addresses from public sources – this was more realistic than if we had used the clients' own email lists. The day we launched the phish, we anticipated minimal response, if any at all. We were wrong.

Of the approximately 350 emails we sent to employees at Client A, 55 drew a response – a hit rate of roughly 16 percent. We believe the actual number of respondents might have been even higher, but the client's IT department informed us that several employees could receive mail but had no access to the Internet.

Client B's attack results were even more interesting. We sent approximately 600 emails, 450 of which were delivered (150 were rejected). Of the 450 recipients of delivered emails, 185 clicked on the link – more than 40 percent. Six people wrote back and requested additional assistance because they kept being routed to a Wikipedia page on "phishing" and not to the online retailer's gift card page.

The most amusing result was the email we received the following day. One of our "phish" sent a note to our free email account – apparently assuming that we were the IT department – and not only complained about the misdirected link, but also asked us to come to his location to install a common software application.

We tried to reply to confirm, but the organization's IT department had caught on to our phishing scam and blocked our email. Not willing to give up, we phoned our desperate user and scheduled an appointment with one of our people. The next day, Secure Network rookie Griffin Reid showed up at the client's building to do the software installation.

When Griffin left for our client's location, I waited nervously for the phone to ring with news that he had been detained by security and that our plan had been foiled. But later that morning, I was pleased to see him walk into our office.

Within minutes of his introduction, Griffin had been left alone to resolve the laundry list of problems our user was encountering. After spending a considerable amount of time on the client’s computer and internal network – during which he could have accessed any number of files or other data – Griffin called it quits and headed back to our office.

Our simple phishing attack had turned into a potential security disaster for our client. The thought of end users responding to questionable emails or clicking on forbidden links pales in comparison to a physical attack by someone pretending to be from the company's own IT department. It appears that our client's security people have a lot of work to do.

— Steve Stasiukonis is VP and founder of Secure Network Technologies Inc. Special to Dark Reading

About the Author

Steve Stasiukonis

Contributor

Steve serves as president of Secure Network, focusing on penetration testing, information security risk assessments, incident response and digital investigations. Steve has worked in the field of information security since 1997. As part of that experience, Steve is an expert in social engineering and has demonstrated actual social engineering efforts involving pretexting, phishing and physically penetrating financial institutions, data centers and other highly secure operations and facilities. Steve has contributed to Dark Reading since 2006.

