Archive
Cybersecurity: Train Your Employees!
Cyber risk and risk mitigation are topics at the forefront for everyone who manages networks and computer infrastructure. We are in a constant battle of wits with the “bad actors” of the cyber world who look to cause harm and mayhem for others. Sometimes the goal is to cause destruction and chaos. Think old-school malware and viruses. Sometimes the goal is to take their victims for as much money as they can by promising to resolve the situation for a steep recovery fee. Think of the original cryptolocker attacks, as well as all the variants since.
Many businesses are willing to pay these fees to avoid loss of data and loss of face with their clients. At times it appears more palatable to pay several thousand dollars to the bad actor than to admit there were holes in their security. Doing this gets expensive quickly as more computers within the organization become locked and have their data corrupted. I have examples I can share from various organizations I have worked with over the years. You can read about one such incident in my article, “System Down: The Anatomy of an ‘Oopsie!’”.
As organizations’ networks and related systems become more complex, the jobs of systems administrators and cybersecurity teams continue to get more challenging. With more organizations allowing remote workers, and with international business travel becoming more common even for small businesses, the threats grow exponentially. The constantly evolving threat environment underscores why it is imperative for IT teams to continually educate themselves on the latest threats, as well as the mitigations for those threats. Businesses need to ensure that all of their security services, from firewalls to desktop solutions and everything in between, have their subscriptions maintained and all signatures up to date.
But these threats don’t just impact businesses. People in their homes are becoming targets as well. Some of the threats are the same as those facing a business, but there are others. I’ve witnessed and listened to countless stories of individuals receiving phone calls from someone claiming to be tech support for one company or another, saying their computer has reported a threat. For a “small fee,” they will remote in and clean up the problem. The people who fall for this usually have to get a new credit card due to fraudulent charges. Worse, the computer they allowed this actor to access, which had no problems before the tech support call, has now been infected with some form of malware.
Part of the solution for businesses is cybersecurity awareness education for their staff. Employees need to understand the threats that exist in the cyber landscape. They must be made aware of how the attacks come in, what the attacks can do, and the potential consequences to the business. Patterns in a sender’s writing style, as well as types of attachments, need to be discussed. The training does not need to be super-detailed, down-in-the-weeds cybersecurity education. It must be presented in a way that individuals of all understanding levels can comprehend. Management needs to stress to the attendees that this training is critical to the business and will help protect both the business and the individuals who receive the training. By educating the employees in the workforce, and by providing updates to this education, businesses will reduce the likelihood that a cyber attack will be successful. Firewalls, content filters, and anti-malware applications must be in place, with the subscription services actively maintained and monitored. The education of the employees provides another level of protection to the business infrastructure.
Educating a home user on cybersecurity can be more challenging. Here, the training that employees receive at the office can provide others with a basic understanding of cybersecurity. Individuals in the workforce will go home and take their training with them, providing increased awareness of cybersecurity in the home, but it shouldn’t stop there. Many security providers offer basic, free online security training. By sharing these resources with your employees, you make it easy for them to share the resources with others, once again spreading the knowledge. Here are a couple of sources for free online cybersecurity awareness training:
ESET Cybersecurity Awareness Training
Keep in mind that security awareness is every person’s responsibility. By providing even a basic education on the threats and ways to avoid them, businesses and their employees will be better prepared to manage and mitigate the threats that inevitably make it through the defensive measures their IT teams have in place.
As always, I welcome comments and questions. Got a topic you want to see covered? Let me know in the comments!
System Down: The Anatomy of an “Oopsie!”
You have all heard the horror stories of large organizations, and even governments, that have fallen prey to malicious actors. The massive data breaches, crippled businesses and governments, and the theft of medical records are some examples of these debilitating and embarrassing threats.
Having worked in IT since 1993, I have seen a lot. I have worked with businesses and governments ranging in size from 3 employees to over 1000 employees. I have worked with businesses that were hit by viruses that were just a nuisance, and I have resolved situations where data got encrypted with a cryptolocker variant. All of these situations occurred in businesses that were ill-prepared to deal with an imminent threat. One organization in particular leaps to mind when I think about this situation. The impact for them was especially painful in cost to recover, loss of productivity, and delay of order fulfillment.
For confidentiality, I will not be mentioning the company name or location. All I will say is that they operate in the Pacific Northwest.
This organization was a newer client at the time and had not yet agreed to implement all of the recommendations that I had proposed. Like most businesses, they are cost-conscious and had not budgeted for some of the changes. Their servers had been virtualized using Hyper-V on older hardware, so I was supporting one physical server and three virtualized servers.
This episode started when one of their employees disabled the anti-malware software on their computer because they thought it was causing performance issues with their virtual load testing solution. After it was disabled, this person mistyped the name of a well-known website. The site they landed on was able to plant a RAT (Remote Access Trojan) on the computer. One more important detail: this person happened to be a local administrator on every computer in the company. After business hours, a bad actor located in another part of the world accessed this employee’s computer via the RAT. They then proceeded to disable the security solutions on every other computer in the organization. Once they accomplished this, they uploaded a file to every workstation and server in the organization. This file encrypted all the data stored on the local drives. It then damaged the operating system in such a way that if the user rebooted to see if the problem went away, the operating system was left broken beyond repair. Since they were able to attack every computer in the organization, every bit of data on all the servers was encrypted.
By now you are probably thinking something like “Yikes! Thank goodness for disaster recovery solutions!”. That is the same thing I thought on my way in to resolve this situation. And yes, thank goodness for the backups. The biggest problem we ran into with the restoration of data was performance. Their entire backup solution was cloud-based. Their internet was 50-megabit, so you’re thinking “no problem!”. That’s what I thought too. We’ll circle back to that in a few minutes.
The recovery for this client started immediately. The biggest blessing on this dark day was that I had just started an infrastructure refresh. I had just delivered a new physical server that was destined to be the new Hyper-V host. It was replacing hardware that was almost seven years old. Because I had the basic groundwork laid, I had all the new servers built and fully updated within 5 hours. This is the point where I started running into issues.
Something you may already know, but I’ll say it anyway: not all cloud-based backup solutions are equal. This client had about 12 terabytes of data backed up to the cloud. Most of it was enormous CAD or other modeling files. As the data started restoring to the server, we quickly maxed out the 50-megabit connection. I got the go-ahead from the owner to increase the speed to “whatever I thought was appropriate.” I called the ISP and had the bandwidth bumped to 200-megabit in less than 45 minutes. Now the frustration began in earnest. The backup solution that was in place did not list any speed limits on upstream or downstream data, but there had to be a limit somewhere, given the poor restoration performance. The speed never went above 56-megabit. After testing and verifying the performance of the ISP, I called the backup vendor. When I finally got through 30 minutes later, they informed me that there wasn’t a speed limit, but they had algorithms that distributed the bandwidth so that one customer could not consume the entire connection. They either had a lot of customers, or they had very limited bandwidth. Of course, they would not admit to either, and I was stuck with the miserable performance.
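To put rough numbers on why that ceiling hurt (back-of-the-envelope math, ignoring protocol overhead and assuming sustained rates): 12 terabytes is roughly 96 million megabits. At 56 megabits per second, a full restore would take on the order of 475 hours, close to three weeks of continuous downloading. Even at the full 200-megabit line rate, it would still be more than five days. That arithmetic is why the prioritized restore described next was the only practical option.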
I ended up working with the various department heads to determine which files were critical RIGHT NOW and selectively restored those files first. They then specified a secondary level of important files. Everything left was restored last. The largest downside to this was that restoration was extremely tedious due to complex directory structures.
While the data was restoring, I started rebuilding all the computers in the organization. After the first 24 hours, I had the servers rebuilt, updated, and secured, the domain and AD restored, all the workstations rebuilt, and data restoring to the shares.
All told, this project took the better part of 5 days. The majority of that was restoring the data files and fixing glitches with the permissions on shares and files. In total, there were over 90 billable hours spent on this project. The total cost in billable hours worked out to $16,650. All because one person decided to disable their security software. We worked with the client and lowered the bill to just over $11,000. They still complained, but they also realized the value of the work to their business, so they eventually paid.
Lessons learned from this experience:
- Verify performance and capabilities of cloud-based backup solutions before signing up for them
- Have a local copy of the backup data
- Their backup solution had an unused option to back up to a local NAS
- Don’t just list the security recommendations, but make them a key part of the presentation, repeatedly highlighting the potential issues and driving the security concerns home
- When there is push-back on remediation suggestions, you also need to push back, so the point is made abundantly clear. Be prepared when you go into your meeting with the following information.
- Be able to back up your assertions with actual data and examples
- Include potential disaster remediation times and costs
- Include the hidden costs, such as damage to the business reputation, loss of productivity, and loss of product production
This story could have had a much worse ending than it did. At the time, this was an organization of 12 people, with seven computers and three servers. Imagine the impact on a larger organization that was ill-prepared for such an event. The results could be catastrophic to the business!
As always, I welcome feedback and comments.
Zero Trust: What exactly is it?
You’ve probably heard about the principle of Zero Trust, but what exactly is it? At its most basic, Zero Trust is a strategy that involves technologies, processes, and the individuals that make use of them. Zero Trust requires strict identification of every person and device trying to access resources on a network. The principle does not differentiate between devices or people that are inside or outside the network perimeter.
The traditional paradigm for network security is the castle-and-moat approach. This defense made it difficult to gain access to the network from outside, but people and devices that were inside the network were automatically trusted. This approach was OK before the advent of the Cloud. As companies realized the flexibility and power of cloud services, the security paradigm had to change. Businesses no longer have data stored only within the walls of their “castle”, but increasingly have data stored in the Cloud as well. Most often, this data is a mixture of on-premises (in the castle) and in the Cloud.
With this change, businesses needed to be able to authenticate individuals as well as devices before access was granted to any of the data, no matter where it was stored. This additional security has been proven to reduce data breaches. IBM sponsored a study that demonstrated that the average cost of a data breach was over $3 million. With these results, it is not a surprise that organizations are rapidly adopting a Zero Trust policy.
Another aspect of Zero Trust is the principle of least-privileged access. This means each person and device only has the access needed to perform their function, and no more. You can think of this as “need-to-know” access, like in a military or spy movie. This minimizes each person’s and device’s access and, in so doing, protects the sensitive parts of the network from people and devices that have no business even knowing the resources are there.
Another critical component of Zero Trust is having a mechanism in place to monitor and report on activities. As Zero Trust continues to evolve, these monitoring solutions have become increasingly automated. This is especially important for larger organizations that can have thousands of employees, devices, and access requests in play at any given moment. For smaller organizations, the alerting can be as simple as an email informing them of a potential issue. For larger or more complex organizations, the best solutions typically involve an active display, visible to key staff at all times, that visually alerts them to an incident in progress. This visual alert, in conjunction with an email or SMS message to the incident response team, offers a much-improved alerting mechanism compared to the traditional method of log review. The most complex environments deploy monitoring and alerting solutions that use a combination of machine learning and AI to provide complete coverage.
For more information on Zero Trust, I highly recommend this article provided by Guardicore.
As always, I value comments and feedback on the articles I write.
DISM upgrade from Server 2008 Standard to Enterprise Caused Havoc
Don’t want to read the story? Jump to the Solution!
So, I had a client that needed to increase the amount of RAM beyond 32GB on a SQL Server. I started researching ways to make that migration and ran across the Microsoft-endorsed method of using DISM to do an in-place upgrade from Standard to Enterprise or Datacenter. Well, my client is licensed for Standard and Enterprise, so this method sounded like a great way to resolve the RAM limitations of Standard edition.
It’s a fairly straightforward process. The instructions can be found here, and directly from Microsoft here. I will mention a small caveat… If your licenses are volume licenses, then you need to use the Microsoft public KMS client setup key to change editions. After the upgrade, you put your key back in. The other qualifier is that the target server CANNOT be a DC.
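For reference, the DISM sequence looks something like this (a sketch from memory, so verify against the linked instructions; the edition ID shown assumes you are moving to 2008 R2 Enterprise, and the Xs stand in for the public KMS client setup key for that edition):
DISM /online /Get-CurrentEdition (Shows the edition the server is currently running)
DISM /online /Get-TargetEditions (Lists the editions you can upgrade to)
DISM /online /Set-Edition:ServerEnterprise /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula (Performs the in-place edition change)
Once the edition change completes and the server reboots, you swap your own volume key back in, as mentioned above.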
On with the story. All went well with the upgrade, including the insertion of the proper key and subsequent activation of Server 2008 R2 Enterprise using the client’s key. Everything looked fine and seemed to be running properly. No errors in the event log, and the areas I checked out looked good.
The next morning, I get a call from my client telling me they cannot print from remote sessions (this is also an RDS server). I connect to the server and get the exact same results: an error message that ends with “Could not create print job”. A few minutes later I get a call from their controller, who is trying to do payroll and cannot get the application to open properly. I suggest they start by contacting the vendor of the app and we’ll go from there. I continue troubleshooting the printing issue and discover that the Print Management snap-in cannot be opened in MMC.
Then I get a call from the accounting vendor. We troubleshoot his app for a bit and find that it accesses some of its components via IIS, which looks to be running but is not serving data. We decide to reboot. As soon as that reboot is complete, we no longer have even the basic RDS services. I’m down to using an alternate method to connect to and administer the server. I check all the basics. Firewall is off. UAC is disabled. IP address and network settings have not changed. No events in Event Viewer. The server appears to be alive and completely healthy.
We troubleshoot a bit longer and determine that there must be some sort of connectivity or communication issue internal to the OS. I work at it a bit more, but ultimately decide to engage MS Support. I also decide to work on this from the comfort of home, as it would likely be a long night.
I create the MS Support request once I get home. I know I have a while to wait (supposedly no more than two hours), so I decide to find some dinner and let my brain veg out on some TV. Two hours pass with no phone call, but I start thinking of other things to look for. A brain break will do the IT guy good sometimes.
I remember from searches earlier in the day that there was at least one person who gave up and reloaded his server from scratch. For me, that is not an option. I start making simpler queries. These lead me down the path of discovery. I see several people with this issue who found temporary fixes, as well as some apparently untested suggestions. I start researching each of these and find more useful information. With each of these threads, strings, and nuggets of information, I start to formulate a solution.
It turns out that the component that was broken was licensing. Windows was reporting that it was “Genuine” and activated, but in reality the license management module thought it had an invalid key and had told all the vital components to cease their function until they were licensed again.
I want to save others from hours of time and turmoil (not to mention a hefty MS Support bill) and make sure that the complete solution gets out to those that may need it. Here it is, as concise as possible.
First I removed any and all traces of the license keys:
slmgr /ckms (This clears any KMS entries)
slmgr /upk (This removes installed product keys)
After running these, the desktop background will go BLACK and tell you that your version of Windows might not be genuine. DO NOT REBOOT YET!
Next, navigate to the Microsoft Windows Validation site. This process will reinstall/repair your damaged licensing components. For me, it reinserted a generic key and validated my Windows Server 2008 R2 as Genuine.
Reboot!
After the reboot, if you look at system properties, you will likely see that Windows only has 4GB available of however much you have installed. In my case, I had 4GB of 28GB available. At this point, I clicked “Change Product Key” on the properties page and pasted in my proper key for Server 2008 R2 Enterprise.
This completed, activated, and told me to reboot to enable all the features.
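If you would rather stay at the command line for that step, the same re-keying can be done with slmgr (a sketch; the GUI route above works just as well):
slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX (Installs your proper product key; replace the Xs with your own key)
slmgr /ato (Activates Windows against Microsoft online)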
After the reboot, all components, applications and sub-systems were working exactly as they should.
At that moment, I finally received my phone call from MS Support, 3.5 hours after opening the support request with the promise of a call within two hours. I thanked them for the call, and informed them that the problem was already resolved without their assistance.
I don’t believe Microsoft would have come up with a solution to this problem. This took too much research and was such an odd problem that I believe they would have eventually told me to format and reload my server. I found instances of others who faced this problem and ultimately did completely reload their servers. Hopefully I can save someone that fate with this information.
Defense in Depth: What is it? (Part 1)
Defense in Depth is a term that gets thrown around in the IT community a considerable amount these days. It is not always clearly defined or explained, and that may leave you with questions like those that follow. What is it? What does it cost? What impact would greater security have on my business? In this post, I will try to define what Defense in Depth is. I will tackle the other questions in additional posts.
Defined:
So, what is Defense in Depth? In a nutshell, it is a combination of security devices, software, and education of end users (you and your employees). At its most basic, Defense in Depth involves a high-grade firewall at the edge of your network, good anti-virus software (preferably centrally managed) on your servers and workstations, and education of the end user in safe email, internet, and social networking practices. In a larger environment, there may be multiple firewalls that protect various branches of your network, as well as specialized Intrusion Prevention devices that monitor potential points of access and cut off an attack before it begins in earnest.
Layer One:
The firewall that you place on your internet connection should have features such as Anti-Virus, Anti-Spyware, Intrusion Detection, and Email Filtering built into it. You should also be able to define “zones” for additional security. For example, with Sonicwall and Watchguard firewalls, you can define a separate zone for your servers and place the desktops in another zone. This approach allows all data that passes from your workstations to your servers to be scanned for potentially damaging viruses and spyware. On the Sonicwall, you can also enable Intrusion Detection and Content Filtering on these zones, as well as applying those filters BETWEEN the zones, again enhancing your security. As these devices mature, you are also able to block certain kinds of traffic that you may not want crossing your workplace network, such as BitTorrent and video/music streaming.
Layer Two:
Next, you need a good anti-virus package installed on your servers and desktops. Ideally, this software will be centrally managed and able to alert you automatically if there is an infection on a system, virus definitions get too far out of date, or any other criteria that you specify are met. Central management alleviates the load on your internet connection for updating the client systems. Updates are downloaded to one server and then pushed to the clients, versus having every computer in your company going online to download updates and potentially choking your internet connection for a period of time. This central console also lets you define policies that you can push to the clients. You can manage exceptions, as well as block unwanted applications. A couple of examples of good anti-virus software are Symantec Endpoint Protection and Avast Professional with the server component.
Layer Three:
Finally, there is the aspect of end user education. This involves learning and teaching the safe use of email systems, internet habits, and social networking. In my experience, especially the last few years, the majority of viruses and spyware that I end up cleaning start with the user saying "I was just looking at something on Facebook, when…" or "I was just on Twitter and all of a sudden…". It may sound like I am advocating for the blocking of social networking sites, but the reality is that these sites are here to stay and are becoming more a part of our businesses each and every day. What I advocate is education of our users. Teach them to recognize invalid links. Teach them to think about posting patterns. If the post from their friend "Bob" doesn’t look like it was written by him, it is possible that "Bob" lost control of his account and a prankster created the post. The links in these kinds of posts can be anything from a benign advertisement to something that makes your account "like" the post, then automatically posts to all your friends’ walls, sending them to sites with "questionable material", and in turn causing their computers to become infected with malware of one variety or another.
Additional Layers:
In some instances, additional layers of security and traffic inspection may be employed. This may be as simple as a packet sniffer looking for errant traffic, or a traffic recording device that records every bit of data that enters or exits the network. This traffic can be reviewed manually, or sent through a program that can report any traffic anomalies.
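As one illustrative option on Windows (my own suggestion here, not a requirement for any particular product), the built-in netsh trace facility can record traffic to a file that you can review later in the analysis tool of your choice:
netsh trace start capture=yes tracefile=C:\Temp\capture.etl (Begins recording network traffic to an ETL file)
netsh trace stop (Stops the capture and finalizes the trace file)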
Conclusion Part 1:
Defense in Depth, implemented even at its most basic levels, will help to ensure a consistent computing experience for your organization and your users. Whether it is the basic three layers described above or a more in-depth implementation, Defense in Depth should be implemented in every organization. The hardware, software, and training may be different in each organization, but the end result is a more secure environment for everyone.
Mac owners, beware!
Just hours after Apple released a security update to protect Mac users against a rash of scareware attacks, a new variant began circulating that completely bypasses the malware-blocking measure.
The trojan arrives in a file called mdinstall.pkg and installs MacGuard, a malicious application that masquerades as security software the user needs to clean a Mac of some nasty…