Archive

Archive for January, 2020

Cybersecurity: Train Your Employees!

January 15, 2020 Leave a comment

Cyber risk and risk mitigation are topics at the forefront for everyone who manages a network and computer infrastructure. We are in a constant battle of wits with the “bad actors” of the cyber world who look to cause harm and mayhem for others. Sometimes the goal is to cause destruction and chaos. Think old-school malware and viruses. Sometimes the goal is to take victims for as much money as possible by promising to resolve the situation for a steep recovery fee. Think of the original CryptoLocker attacks, as well as all the variants since.

Many businesses are willing to pay these fees to avoid loss of data and loss of face with their clients. At times it appears more palatable to pay several thousand dollars to the bad actor than to admit there were holes in their security. Doing this gets expensive quickly as more computers within the organization become locked and have their data corrupted. I have examples I can share from various organizations I have worked with over the years. You can read about one such incident in my article, “System Down: The Anatomy of an ‘Oopsie!’”

As organizations’ networks and related systems become more complex, the job of the systems administrators and cybersecurity teams continues to get more challenging. With more organizations allowing remote workers, and with international business travel becoming more common even for small businesses, the threats grow exponentially. The constantly evolving threat environment underscores why it is imperative for IT teams to continually educate themselves on the latest threats, as well as the mitigations for them. Businesses need to ensure that all of their security services, from firewalls to desktop solutions and everything in between, have their subscriptions maintained and all signatures up to date.

But these threats don’t just impact businesses. People in their homes are becoming targets as well. Some of the threats are the same as those facing a business, but others are unique to home users. I’ve witnessed and listened to countless stories of individuals receiving phone calls from someone claiming to be tech support from one company or another, saying that the person’s computer has reported a threat. For a “small fee,” they will remote in and clean up the problem. The people who fall for this usually have to get a new credit card due to fraudulent charges. Also, the computer they allowed this actor to access, which had no problems before the “tech support” call, has now been infected with some form of malware.

Part of the solution for businesses is cybersecurity awareness education for their staff. Employees need to understand the threats that exist in the cyber landscape. They must be made aware of how the attacks come in, what they can do, and the potential consequences to the business. Patterns in the sender’s writing style, as well as types of attachments, need to be discussed. The training does not need to be a deeply detailed, down-in-the-weeds cybersecurity course. It must be presented in a way that individuals of all understanding levels can comprehend. Management needs to stress to the attendees that this training is critical to the business, and that it will help protect the business as well as the individuals who receive it. By educating the employees in the workforce, and by providing updates to this education, businesses will reduce the likelihood that a cyber attack will be successful. Firewalls, content filters, and anti-malware applications must be in place, with the subscription services actively maintained and monitored. The education of the employees provides another layer of protection for the business infrastructure.
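To make the attachment-awareness point concrete, here is a minimal sketch of the kind of rule-of-thumb check the training can teach people to do in their heads. The extension list is a hypothetical example, not an authoritative blocklist:

```python
import os

# Example extensions commonly abused in phishing campaigns.
# This list is illustrative only, not exhaustive.
SUSPICIOUS_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".docm", ".zip"}

def is_suspicious_attachment(filename: str) -> bool:
    """Return True if the attachment's extension is on the watch list."""
    _, ext = os.path.splitext(filename.lower())
    return ext in SUSPICIOUS_EXTENSIONS

print(is_suspicious_attachment("invoice.docm"))  # True
print(is_suspicious_attachment("photo.jpg"))     # False
```

A real mail gateway does far more (content inspection, sandboxing, sender reputation), but the mental model for employees is the same: unexpected attachment types deserve suspicion.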

Educating a home user on cybersecurity can be more challenging. Here, the training that employees receive at the office can provide others with a basic understanding of cybersecurity. Individuals in the workforce will go home and take their training with them, providing increased awareness of cybersecurity in the home, but it shouldn’t stop there. Many security providers offer basic, free online security training. By sharing these resources with your employees, you make it easy for them to share the resources with others, once again spreading the knowledge. Here are a couple of sources for free online cybersecurity awareness training:

ESET Cybersecurity Awareness Training

Cybrary End User Awareness

Keep in mind that security awareness is every person’s responsibility. By providing even a basic education on the threats and ways to avoid them, businesses and their employees will be better prepared to manage and mitigate the threats that inevitably make it through the defensive measures their IT teams have in place.

As always, I welcome comments and questions. Got a topic you want to see covered? Let me know in the comments!

System Down: The Anatomy of an “Oopsie!”

January 8, 2020 1 comment

You have all heard the horror stories of large organizations, and even governments, that have fallen prey to malicious actors. The massive data breaches, crippled businesses and governments, and the theft of medical records are some examples of these debilitating and embarrassing threats.

Having worked in IT since 1993, I have seen a lot. I have worked with businesses and governments ranging in size from 3 employees to over 1000 employees. I have worked with businesses that were hit by viruses that were just a nuisance, and I have resolved situations where data was encrypted by a CryptoLocker variant. All of these situations occurred in businesses that were ill-prepared to deal with an imminent threat. One organization in particular leaps to mind when I think about this. The impact for them was especially painful in recovery cost, lost productivity, and delayed order fulfillment.

For confidentiality, I will not be mentioning the company name or location. All I will say is that they operate in the Pacific Northwest.

This organization was a newer client at the time and had not yet agreed to implement all of the recommendations that I had proposed to them. Like most businesses, they were cost-conscious and had not budgeted for some of the changes. Their servers had been virtualized using Hyper-V on older hardware, so I was supporting one physical server and three virtualized servers.

This episode started when one of their employees disabled the anti-malware software on their computer because they thought it was causing performance issues with their virtual load testing solution. After it was disabled, this person mistyped the name of a well-known web site. The site they landed on was able to plant a RAT (Remote Access Trojan) on the computer. One more important detail: this person happened to be a local administrator on every computer in the company. After business hours, a bad actor located in another part of the world accessed this employee’s computer via the RAT. They then proceeded to disable the security solutions on every other computer in the organization. Once they accomplished this, they uploaded a file to every workstation and server in the organization. This file proceeded to encrypt all the data stored on the local drives. It then damaged the operating system in such a way that if the user rebooted to see if the problem went away, the system became unrecoverable. Since they were able to attack every computer in the organization, every bit of data on all the servers was encrypted.

By now you are probably thinking something like “Yikes! Thank goodness for disaster recovery solutions!” That is the same thing I thought on my way in to resolve this situation. And yes, thank goodness for the backups. The biggest problem we ran into with the restoration of data was performance. Their entire backup solution was cloud-based. Their internet connection was 50-megabit, so you’re thinking, “No problem!” That’s what I thought too. We’ll circle back to that shortly.

The recovery for this client started immediately. The biggest blessing on this dark day was that I had just started an infrastructure refresh. I had just delivered a new physical server that was destined to be the new Hyper-V host. It was replacing hardware that was almost seven years old. Because I had the basic groundwork laid, I had all the new servers built and fully updated within 5 hours. This is the point where I started running into issues.

Something you may already know, but I’ll say it anyway: not all cloud-based backup solutions are equal. This client had about 12 terabytes of data backed up to the cloud. Most of it was enormous CAD or other modeling files. As the data started restoring to the server, we quickly maxed out the 50-megabit connection. I got the go-ahead from the owner to increase the speed to “whatever I thought was appropriate.” I called the ISP and had the bandwidth bumped to 200-megabit in less than 45 minutes. Now the frustration began in earnest. The backup solution that was in place did not list any speed limits on upstream or downstream data, but there had to be a limit somewhere, given the poor restoration performance. The speed never went above 56-megabit. After testing and verifying the performance of the ISP, I called the backup vendor. When I finally got through 30 minutes later, they informed me that there wasn’t a speed limit, but that they had algorithms distributing bandwidth so that no one customer could consume the entire connection. They either had a lot of customers, or they had very limited bandwidth. Of course, they would not admit to either, and I was stuck with the miserable performance.
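A quick back-of-the-envelope calculation shows why that 56-megabit ceiling was such a problem, and why a full restore was never going to finish quickly:

```python
# Rough restore-time estimate: ~12 TB of data over a link that
# effectively topped out at 56 megabits per second.
data_bits = 12 * 1e12 * 8   # 12 terabytes (decimal) -> bits
link_bps = 56e6             # 56 megabit/s effective throughput

seconds = data_bits / link_bps
days = seconds / 86400
print(f"Full restore would take roughly {days:.1f} days")
```

At that rate, pulling everything down would have taken close to three weeks, which is exactly why we had to triage what got restored first.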

I ended up working with the various department heads to determine which files were critical RIGHT NOW and selectively restored those files first. They then specified a secondary level of important files. Everything left was restored last. The largest downside to this was that restoration was extremely tedious due to complex directory structures.

While the data was restoring, I started rebuilding all the computers in the organization. After the first 24 hours, I had the servers rebuilt, updated, and secured, the domain and AD restored, all the workstations rebuilt, and data restoring to the shares.

All told, this project took the better part of 5 days. The majority of that was restoring the data files and fixing glitches with the permissions on shares and files. In total, there were over 90 billable hours spent on this project. The total cost in billable hours worked out to $16,650. All because one person decided to disable their security software. We worked with the client and lowered the bill to just over $11,000. They still complained, but they also realized the value of the work to their business, so they eventually paid.

Lessons learned from this experience:

  • Verify the performance and capabilities of cloud-based backup solutions before signing up for them
  • Have a local copy of the backup data
    • Their backup solution had an unused option to back up to a local NAS
  • Don’t just list the security recommendations; make them a key part of the presentation, repeatedly highlighting the potential issues and driving the security concerns home
  • When there is push-back on remediation suggestions, push back yourself so the point is made abundantly clear. Be prepared when you go into your meeting with the following:
    • Actual data and examples to back up your assertions
    • Potential disaster remediation times and costs
    • Hidden costs, such as damage to the business’s reputation, loss of productivity, and loss of product production

This story could have had a much worse ending than it did. At the time, this was an organization of 12 people, with seven computers and three servers. Imagine the impact on a larger organization that was ill-prepared for such an event. The results could be catastrophic to the business!

As always, I welcome feedback and comments.

Zero Trust: What exactly is it?

January 5, 2020 Leave a comment

You’ve probably heard about the principle of Zero Trust, but what exactly is it? At its most basic, Zero Trust is a strategy that involves technologies, processes, and the individuals who make use of them. Zero Trust requires strict identification of every person and device trying to access resources on a network. The principle does not differentiate between devices or people that are inside or outside the network perimeter.

The traditional paradigm for network security is the castle-and-moat approach. This defense made it difficult to gain access to the network from outside, but people and devices that were inside the network were automatically trusted. This approach was OK before the advent of the Cloud. As companies realized the flexibility and power of cloud services, the security paradigm had to change. Businesses no longer have data stored only within the walls of their “castle,” but increasingly have data stored in the Cloud as well. Most often, this data is a mixture of on-premises (in the castle) and in the Cloud.

With this change, businesses needed to be able to authenticate individuals as well as devices before access was granted to any of the data, no matter where it was stored. This additional security has been proven to reduce data breaches. IBM sponsored a study showing that the average cost of a data breach was over $3 million. With these results, it is not a surprise that organizations are rapidly adopting a Zero Trust policy.

Another aspect of Zero Trust is the principle of least-privileged access. This means each person and device only has the access needed to perform their function, and no more. You can think of this as “need-to-know” access, like in a military or spy movie. This minimizes each person’s and device’s access and, in so doing, protects the sensitive parts of the network from people and devices that have no business even knowing those resources exist.
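In code, least privilege boils down to deny-by-default: access is granted only when a role explicitly needs a resource. Here is a minimal sketch; the role names and resources are hypothetical examples, not part of any particular product:

```python
# Deny-by-default access check: a role can reach only the resources
# it explicitly needs. Roles and resources are illustrative.
ROLE_PERMISSIONS = {
    "engineer": {"source-repo", "build-server"},
    "accountant": {"finance-share"},
}

def can_access(role: str, resource: str) -> bool:
    """Grant access only if the role explicitly needs the resource."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("engineer", "source-repo"))    # True
print(can_access("engineer", "finance-share"))  # False, deny by default
```

The important design choice is that an unknown role or an unlisted resource falls through to “deny,” rather than requiring someone to remember to block it.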

Another critical component of Zero Trust is having a mechanism in place to monitor and report on activities. As Zero Trust continues to evolve, these monitoring solutions have become increasingly automated. This is especially important for larger organizations that can have thousands of employees, devices, and access requests in play at any given moment. For smaller organizations, the alerting can be as simple as an email informing staff of a potential issue. For larger or more complex organizations, the best solutions typically combine an always-visible display that alerts key staff to an incident in progress with an email or SMS message to the incident response team, a much improved alerting mechanism compared to the traditional method of log review. The most complex environments deploy monitoring and alerting solutions that use a combination of machine learning and AI to provide a complete monitoring and alerting picture.
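For the simplest tier, email alerting, the moving parts are small. This sketch builds the alert message; the addresses and the SMTP relay are placeholder assumptions, and the actual send is left commented out:

```python
from email.message import EmailMessage

def build_alert(host: str, event: str) -> EmailMessage:
    """Compose a plain-text incident alert email."""
    msg = EmailMessage()
    msg["Subject"] = f"[SECURITY ALERT] {event} on {host}"
    msg["From"] = "monitor@example.com"          # placeholder sender
    msg["To"] = "incident-response@example.com"  # placeholder recipient
    msg.set_content(f"Potential incident detected on {host}: {event}")
    return msg

alert = build_alert("fileserver01", "repeated failed logins")
print(alert["Subject"])

# To actually send, assuming a local SMTP relay is available:
# import smtplib
# with smtplib.SMTP("localhost") as s:
#     s.send_message(alert)
```

Even at this level, pairing the email with a second channel (SMS, chat webhook) guards against the alert sitting unread in a mailbox.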

For more information on Zero Trust, I highly recommend this article provided by Guardicore.

As always, I value comments and feedback on the articles I write.