Cybersecurity: Train Your Employees!

January 15, 2020

Cyber risk and risk mitigation are topics at the forefront for everyone who manages a network and computer infrastructure. We are in a constant battle of wits with the “bad actors” of the cyber world who look to cause harm and mayhem for others. Sometimes the goal is destruction and chaos. Think old-school malware and viruses. Sometimes the goal is to take victims for as much money as possible by promising to resolve the situation for a steep recovery fee. Think of the original CryptoLocker attacks, as well as all the variants since.

Many businesses are willing to pay these fees to avoid loss of data and loss of face with their clients. At times it appears more palatable to pay several thousand dollars to the bad actor than to admit there were holes in their security. Doing this gets expensive quickly as more computers within the organization become locked and have their data corrupted. I have examples I can share from various organizations I have worked with over the years. You can read about one such incident in my article, “System Down: The Anatomy of an ‘Oopsie!’”

As organizations’ networks and related systems become more complex, the jobs of systems administrators and cybersecurity teams continue to get more challenging. With more organizations allowing remote workers, and with international business travel becoming more common even for small businesses, the threats grow exponentially. The constantly evolving threat environment underscores why it is imperative for IT teams to continually educate themselves on the latest threats, as well as the mitigations for those threats. Businesses need to ensure that all of their security services, from firewalls to desktop solutions and everything in between, have their subscriptions maintained and all signatures up to date.

But these threats don’t just impact businesses. People in their homes are becoming targets as well. Some of the threats are the same as for a business, but there are others. I’ve witnessed, and listened to countless stories about, individuals receiving phone calls from someone claiming to be tech support for one company or another, insisting that the victim’s computer reported a threat. For a “small fee,” they will remote in and clean up the problem. The people who fall for this usually have to get a new credit card due to fraudulent charges. Worse, the computer they allowed this actor to access, which had no problems before the call, is now infected with some form of malware.

Part of the solution for businesses is cybersecurity awareness education for their staff. Employees need to understand the threats that exist in the cyber landscape. They must be made aware of how the attacks come in, what those attacks can do, and the potential consequences to the business. Patterns in a sender’s writing style, as well as types of attachments, need to be discussed. The training does not need to be detailed, down-in-the-weeds cybersecurity education; it must be presented in a way that individuals of all understanding levels can comprehend. Management needs to stress to the attendees that this training is critical to the business and will help protect the business as well as the individuals who receive it. By educating the employees in the workforce, and by providing updates to this education, businesses will reduce the likelihood that a cyber attack will be successful. Firewalls, content filters, and anti-malware applications must still be in place, with the subscription services actively maintained and monitored. The education of the employees provides another level of protection on top of that infrastructure.

Educating a home user on cybersecurity can be more challenging. Here, the training that employees receive at the office can give others a basic understanding of cybersecurity. Individuals in the workforce will go home and take their training with them, raising awareness of cybersecurity in the home, but it shouldn’t stop there. Many security providers offer basic, free online security training. If you share these resources with your employees, they can in turn share them with others, once again spreading the knowledge. Here are a couple of sources for free online cybersecurity awareness training:

ESET Cybersecurity Awareness Training

Cybrary End User Awareness

Keep in mind that security awareness is every person’s responsibility. By providing even a basic education on the threats and ways to avoid them, businesses and their employees will be better prepared to manage and mitigate the threats that inevitably make it through the defensive measures their IT teams have in place.

As always, I welcome comments and questions. Got a topic you want to see covered? Let me know in the comments!

System Down: The Anatomy of an “Oopsie!”

January 8, 2020

You have all heard the horror stories of large organizations, and even governments, that have fallen prey to malicious actors. Massive data breaches, crippled operations, and stolen medical records are just a few examples of these debilitating and embarrassing attacks.

Having worked in IT since 1993, I have seen a lot. I have worked with businesses and governments ranging in size from 3 employees to over 1000. I have worked with businesses hit by viruses that were just a nuisance, and I have resolved situations where data was encrypted by a CryptoLocker variant. All of these situations occurred in businesses that were ill-prepared to deal with an imminent threat. One organization in particular leaps to mind. The impact for them was especially painful in recovery cost, lost productivity, and delayed order fulfillment.

For confidentiality, I will not be mentioning the company name or location. All I will say is that they operate in the Pacific Northwest.

This organization was a newer client at the time and had not yet agreed to implement all of the recommendations I had proposed. Like most businesses, they are cost-conscious and had not budgeted for some of the changes. Their servers had been virtualized using Hyper-V on older hardware, so I was supporting one physical server and three virtualized servers.

This episode started when one of their employees disabled the anti-malware software on their computer because they thought it was causing performance issues with their virtual load-testing solution. After it was disabled, this person mistyped the name of a well-known web site, and the typosquatted site they landed on planted a RAT (Remote Access Trojan) on the computer. One more important detail: this person happened to be a local administrator on every computer in the company. After business hours, a bad actor located in another part of the world accessed this employee’s computer via the RAT. They then proceeded to disable the security solutions on every other computer in the organization. Once they accomplished this, they uploaded a file to every workstation and server in the organization. This file encrypted all the data stored on the local drives, then damaged the operating system in such a way that if the user rebooted to see if the problem went away, the OS was rendered unrepairable. Since they were able to attack every computer in the organization, every bit of data on all the servers was encrypted.

By now you are probably thinking something like “Yikes! Thank goodness for disaster recovery solutions!” That is the same thing I thought on my way in to resolve this situation. And yes, thank goodness for the backups. The biggest problem we ran into with the restoration of data was performance. Their entire backup solution was cloud-based. Their internet was 50-megabit, so you’re thinking “no problem!” That’s what I thought too. We’ll circle back to that in a few minutes.

The recovery for this client started immediately. The biggest blessing on this dark day was that I had just started an infrastructure refresh. I had just delivered a new physical server that was destined to be the new Hyper-V host, replacing hardware that was almost seven years old. Because the basic groundwork was laid, I had all the new servers built and fully updated within 5 hours. This is the point where I started running into issues.

Something you may already know, but I’ll say it anyway: not all cloud-based backup solutions are equal. This client had about 12 terabytes of data backed up to the cloud, most of it enormous CAD and other modeling files. As the data started restoring to the server, we quickly maxed out the 50-megabit connection. I got the go-ahead from the owner to increase the speed to “whatever I thought was appropriate.” I called the ISP and had the bandwidth bumped to 200-megabit in less than 45 minutes. Now the frustration began in earnest. The backup solution that was in place did not list any speed limits on upstream or downstream data, but there had to be a limit somewhere given the poor restoration performance: the speed never went above 56-megabit. After testing and verifying the performance of the ISP, I called the backup vendor. When I finally got through 30 minutes later, they informed me that there wasn’t a speed limit, but that they had algorithms that distributed the bandwidth so no one customer could consume the entire connection. They either had a lot of customers, or they had very limited bandwidth. Of course, they would not admit to either, and I was stuck with the miserable performance.
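
To put numbers on that frustration, here is some quick back-of-the-envelope math (a sketch in Python, assuming decimal terabytes and a perfectly sustained line rate, which real restores never achieve):

```python
def restore_days(terabytes: float, mbit_per_s: float) -> float:
    """Days required to move a data volume at a sustained line rate."""
    bits = terabytes * 8e12               # decimal terabytes -> bits
    seconds = bits / (mbit_per_s * 1e6)   # megabits/second -> bits/second
    return seconds / 86400

for speed in (50, 56, 200):
    print(f"{speed:>3} Mbit/s: {restore_days(12, speed):5.1f} days")
# 50 Mbit/s: ~22 days, 56 Mbit/s: ~20 days, 200 Mbit/s: ~5.6 days
```

At the throttled 56 megabits, a straight restore of all 12 terabytes would have taken the better part of three weeks, which is why the triage described next was the only workable approach.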

I ended up working with the various department heads to determine which files were critical RIGHT NOW and selectively restored those files first. They then specified a secondary level of important files, and everything left was restored last. The biggest downside was that the restoration became extremely tedious due to complex directory structures.

While the data was restoring, I started rebuilding all the computers in the organization. Within the first 24 hours, I had the servers rebuilt, updated, and secured; the domain and AD restored; all the workstations rebuilt; and data restoring to the shares.

All told, this project took the better part of 5 days, the majority of that spent restoring data files and fixing glitches with permissions on shares and files. In total, there were over 90 billable hours spent on this project, which worked out to $16,650. All because one person decided to disable their security software. We worked with the client and lowered the bill to just over $11,000. They still complained, but they also realized the value of the work to their business, and eventually paid.

Lessons learned from this experience:

  • Verify performance and capabilities of cloud-based backup solutions before signing up for them (see the sketch after this list)
  • Have a local copy of the backup data
    • Their backup solution had an unused option to back up to a local NAS
  • Don’t just list the security recommendations, but make them a key part of the presentation, repeatedly highlighting the potential issues and driving the security concerns home
  • When there is push-back on remediation suggestions, you also need to push back, so the point is made abundantly clear. Go into your meeting prepared with the following:
    • Be able to back up your assertions with actual data and examples
    • Include potential disaster remediation times and costs
    • Include the hidden costs, such as damage to the business reputation, loss of productivity, and loss of product production
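
On that first lesson: before trusting a cloud backup vendor with your recovery plan, measure the restore speed you actually get, not the speed they advertise. A minimal sketch in Python, with a hypothetical test URL standing in for whatever restore endpoint or API your vendor provides:

```python
import time
import urllib.request

# Hypothetical test object; substitute a real restore URL or API call
# for the solution you are evaluating.
TEST_URL = "https://backup.example.com/restore/1gb-test-file.bin"

start = time.monotonic()
size = 0
with urllib.request.urlopen(TEST_URL) as resp:
    while chunk := resp.read(1 << 20):    # pull 1 MB at a time
        size += len(chunk)
elapsed = time.monotonic() - start

print(f"Restored {size / 1e6:.0f} MB in {elapsed:.0f} s "
      f"-> {size * 8 / elapsed / 1e6:.1f} Mbit/s effective")
```

Run it at different times of day; a vendor that load-balances customers the way this one did will show it in the numbers.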

This story could have had a much worse ending than it did. At the time, this was an organization of 12 people, with seven computers and three servers. Imagine the impact on a larger organization that was ill-prepared for such an event. The results could be catastrophic to the business!

As always, I welcome feedback and comments.

Zero Trust: What exactly is it?

January 5, 2020

You’ve probably heard about the principle of Zero Trust, but what exactly is it? At its most basic, Zero Trust is a strategy that involves technologies, processes, and the individuals that make use of them. Zero Trust requires strict identification of every person and device trying to access resources on a network, and it does not differentiate between devices or people that are inside or outside the network perimeter.

The traditional paradigm for network security is the castle-and-moat approach. This defense made it difficult to gain access to the network from outside, but people and devices that were inside the network were automatically trusted. That approach was OK before the advent of the Cloud. As companies realized the flexibility and power of cloud services, the security paradigm had to change. Businesses no longer have data stored only within the walls of their “castle”, but increasingly have data stored in the Cloud as well. Most often, this data is a mixture of on-premises (in the castle) and in the Cloud.

With this change, businesses needed to be able to authenticate individuals as well as devices before granting access to any of the data, no matter where it was stored. This additional security has been shown to reduce data breaches. An IBM-sponsored study found that the average cost of a data breach was over $3 million. With results like that, it is no surprise that organizations are rapidly adopting a Zero Trust policy.

Another aspect of Zero Trust is the principle of least-privileged access. This means each person and device has only the access needed to perform their function, and no more. You can think of this as “need-to-know” access, like in a military or spy movie. Minimizing each person’s and device’s access protects the sensitive parts of the network from people and devices that have no business even knowing those resources are there.
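
To make the idea concrete, here is a toy default-deny lookup in Python; the roles and resources are hypothetical, and a real implementation would live in your directory service or identity provider rather than a script:

```python
# Hypothetical role table: each role lists the only resources it may touch.
ROLE_ACCESS = {
    "accounting":  {"erp", "payroll-share"},
    "engineering": {"cad-share", "build-server"},
    "reception":   {"calendar"},
}

def can_access(role: str, resource: str) -> bool:
    # Default deny: anything not explicitly granted is refused.
    return resource in ROLE_ACCESS.get(role, set())

print(can_access("accounting", "payroll-share"))  # True: explicit grant
print(can_access("reception", "payroll-share"))   # False: no need to know
```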

Another critical component of Zero Trust is having a mechanism in place to monitor and report on activities. As Zero Trust continues to evolve, these monitoring solutions have become increasingly automated. This is especially important for larger organizations, which can have thousands of employees, devices, and access requests in play at any given moment. For smaller organizations, the alerting can be as simple as an email informing of a potential issue. For larger or more complex organizations, the best solutions typically combine an always-visible status display that visually alerts key staff to an incident in progress with an email or SMS message to the incident response team, a much better alerting mechanism than the traditional method of log review. The most complex environments deploy monitoring and alerting solutions that use a combination of machine learning and AI.
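
For the small-organization end of that spectrum, the email alert really can be that simple. A minimal sketch in Python, with hypothetical addresses and mail relay:

```python
import smtplib
from email.message import EmailMessage

def send_alert(event: str) -> None:
    """Email a one-line security alert to the response team."""
    msg = EmailMessage()
    msg["Subject"] = f"[SECURITY ALERT] {event}"
    msg["From"] = "zt-monitor@example.com"          # hypothetical sender
    msg["To"] = "security-team@example.com"         # hypothetical recipient
    msg.set_content(f"Potential incident detected:\n\n{event}")
    with smtplib.SMTP("mail.example.com") as smtp:  # hypothetical relay
        smtp.send_message(msg)

send_alert("5 failed MFA attempts for jdoe from an unrecognized device")
```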

For more information on Zero Trust, I highly recommend this article provided by Guardicore.

As always, I value comments and feedback on the articles I write.

Success in the IT Industry: Whatever you do, DON’T PANIC!

July 27, 2017

I’m going to kick off a small series here about succeeding in the IT industry. These are lessons I have learned over 20+ years of working as an IT professional. I will do my best to make sure the topics and content cover consultants, such as myself, as well as those who work for a single entity. So, with that introduction, off we go!

If you have worked in this industry any length of time, I can guarantee you have had at least one person come running up to you, sure that their life is about to end due to a lost file, a jammed printer that contains their presentation to the board that’s due in 5 minutes, or their inability to access the internet on their smartphone while in the restroom. In any of those situations, it is pretty easy for us to remain calm, reassure that person, and help them quickly resolve their problem.

But what do you do when it’s your server or server farm that has suddenly dropped off the network, denying the CFO access to the data he needs for a meeting that started 5 minutes ago? How do you react when the worst happens in the systems you are responsible for, and all the upper management staff are standing over your shoulder, demanding an estimate of when the company will be back up and running, all the while reminding you of the expense of paying 50+ employees to sit around and drink coffee?

Hopefully, your answer doesn’t contain the words “panic”, “freak out”, or “I don’t know”.

If you work in the IT industry as a network or systems administrator, I can personally guarantee you that there will be times that this happens. Technology is not infallible and, in my personal opinion, subscribes to Murphy’s law: “Anything that can go wrong, will go wrong, and at the worst possible time.”

So, how do you prepare for that? Can you prepare for that? How do you deal with the ownership or management staff breathing down your neck?

Rule number one: KEEP CALM!

There is absolutely nothing gained by you panicking. In fact, if you panic, it will increase the panic level of everyone around you. Imagine, if you will, a herd of zebras on the plains of Africa. One of them notices a lion that appears to be stalking the herd. It follows its natural instinct to run away from the danger as fast as it can, making noise while doing so. This alerts the rest of the herd to the danger and causes them all to panic. The result is a stampede and ever-increasing panic as they lose sight of the lion in the dust cloud they create while running. Now imagine that same zebra, instead of panicking, watches the lion. After a few seconds, it sees that the lion is going to lie down in the shade because it is really hot. A sleeping lion is not a great threat, so it goes back to munching the plains grass. The herd doesn’t stampede, and the peace is kept. That doesn’t mean the zebra stops checking on the lion every so often, just to make sure it really is napping.

Same thing applies in IT. You will get people that come running into your office or calling you in a panic. You WILL have servers that go offline for mysterious reasons and cause all sorts of havoc around the office. You might even have equipment that quite literally goes up in smoke. I have been witness to that several times. Since these things are pretty much inevitable in this industry, you need to have a plan to deal with them. And you need to have the proper attitude to handle any situation that comes up. You need to appear to be calm, cool and collected in everything you do.

Some of this response comes from experience. The longer you do something, the more issues you see, and the better prepared you are to handle them as they come up. The hardest part is not dealing with the issues; it’s dealing with the people affected by them. When they approach you at a dead run, or panicked on the phone, you need to be able to reassure them: let them know you are aware of the problem and are working to resolve it as quickly as humanly possible. Easy to say, not always easy to do. And to my knowledge, there is no training program that can prepare you for the flood of varied responses you will get from the people in your organization. Some will decide it’s time for a coffee break. Some will call or visit, thinking their presence might in some way help you solve the problem faster. I have even seen people break down in tears over issues they have no control over.

All of these responses can be a major distraction and can cause you to feel more and more stress as you try to resolve the situation. Sometimes it becomes necessary to ask people to leave you alone so you can do your job. This needs to be stated nicely, but firmly. My best example is a CFO/VP at one of my clients. He came from a very large company that had a huge IT staff, and he was used to getting status updates and resolution estimates every 10 minutes during an outage or incident. His new company, my client, has two locations, about 100 employees overall, and one IT guy…me. With me being the only point of contact for IT issues, giving status updates every 10 minutes could be a real problem, since it distracts from the task at hand. During one particularly major issue involving Microsoft Exchange, I finally had to sit him down and explain that having to stop every 10 minutes, find him, update him on the problem, give him a resolution estimate, and then get back to the task was going to easily triple the amount of time (and thus the bill) for getting the issue resolved. Once he understood that, and realized I would let people know when there was something to actually report, he backed off on his requirement for such frequent updates. The net result was that problems got resolved much faster. If he was really curious, he would come find me, and if I did not look completely absorbed in the issue at hand, he would ask a simple “how’s it going?” and get a quick reply while I kept working. It was a win-win for everyone.

The bottom line is this: when everything around you is going crazy, and the employees and/or management are all panicked, it is your job to be the calm at the center of the storm. Let it swirl around you, maybe even ruffle your hair a little. But under no circumstances should you visibly panic. It will amplify the panic in other people, and could even cause some to lose a little faith in you and your abilities. As the person responsible for protecting their network and their data, and whom some will even see as protecting their livelihood, you need to be the bastion of calm during a real or perceived crisis.

Categories: Life, Success in IT

Too much protection?

July 24, 2017

So, I had someone make a comment the other day:

“My IT staff are all over us about security and viruses. They keep upgrading our security, and it feels like I really have to work to get anything done anymore. Do we have too much protection?”

There are many ways to look at security. I happen to be a believer in a layered approach. Each layer has a function and purpose. Sometimes those layers seem to replicate each other, but if implemented correctly they will not adversely affect the person sitting at their desk just trying to do their job.

Take anti-virus, for example. This is a security feature every computer should have, and it should ALWAYS be up to date. In a business environment, you will likely have this feature built into your firewall or another device that protects your network from the dangers on the internet. Does this mean you don’t need a good anti-virus solution on each desktop computer in the company? NO! You need the protection on the computers for the times a coworker or client (we’ll call him Bob) brings in a flash drive with a file they worked on at home. The anti-virus in your network firewall does not protect you from malware that could be on that flash drive. If Bob doesn’t have adequate protection on his home or office computer, you could be introducing a virus or other malware onto your office computer and possibly the entire office network. With anti-virus on your office computer, however, it would alert you if there was something bad on the flash drive.

You can have “too much” anti-virus if you have more than one anti-virus program installed on your computer. The programs are known to occasionally see each other as a threat and cause problems. They also all use some of the same “hooks” in the operating system to provide their security, and if two or more try to use the same hook at the same time, you have a major conflict. This has been known to cause crashes and, at the very least, extremely poor performance on the computer.

“What about add-on applications that look for other threats, not just viruses and malware?” you ask. I’ll give you my approach. It may not be the best for you and your situation, but as a consultant who goes to multiple business locations every day, sometimes locations known to have an active infection (the reason they called me), it is proven and works great for me.

Personally, I use a standalone anti-virus product, not a suite. I have found the suites to be…how to put it nicely…a little heavy-handed and sometimes extremely resource intensive. There are many good anti-virus products out there. Over the years I have used Avast!, Symantec Endpoint Protection, BitDefender, and a few others. My current weapon of choice is BitDefender. It gets frequent updates, is reasonably lightweight (meaning not resource intensive), and I have yet to see anything sneak past it, even in environments that are known to be actively infected.

To round out my personal protection, I have a subscription to Malwarebytes. This program does not look for viruses the way a traditional anti-virus does; it targets active malware. When I go into a client situation where I know they are actively infected and I need to clean them up, Malwarebytes is able to detect an incoming request from the source computer and actively block the activity, even before my anti-virus needs to get into the loop. This means my computer does not have to work as hard to protect itself (since the infections never reach it at all), and it sometimes makes it easier to identify the source computer on the network.

Additionally, I make sure I have a firewall on my laptop that keeps out connections I have not specifically authorized. A firewall on each computer in a business is not always a feasible approach; it can complicate the administration of the network in many ways. If you check your firewall and see that it is not on, don’t panic! Call or email your network administrator and ask if this is by design. Most of the time you will hear yes. Here is where I may get a little flak… I use third-party software for all of my protections…except the firewall. Here I simply use the Windows firewall. My experience shows that this is adequate protection, and in a business environment it is also easy for your network administrator to manage and maintain policies on. I have never been a fan of the built-in anti-virus protection, but as things stand right now I am comfortable with the Windows firewall.
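
If you would rather check from a script than click through the control panel, the state of the Windows firewall profiles can be read with the stock netsh command; here is a small Python wrapper around it (Windows only):

```python
import subprocess

# "netsh advfirewall show allprofiles state" is a built-in Windows command
# that reports ON/OFF for the Domain, Private, and Public profiles.
result = subprocess.run(
    ["netsh", "advfirewall", "show", "allprofiles", "state"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```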

The answer to the original question “do we have too much protection?” is a combination of yes and no. You really can’t ever have enough, if it is done right. You can have poorly configured, poorly managed, and poorly implemented solutions. You can have too many protection programs installed. But overall, if implemented in a layered fashion where each piece does not trample on the other, you can never have “too much” protection.

As a side note: I DO NOT recommend that you test your personal computer protection by connecting to networks that you know have problems. Sometimes there are threats that can get past even the best defenses. I do this because it is part of my job. I have many years of experience and the knowledge to deal with the threats, which is why I am a consultant with many happy clients.

Categories: Security, Solutions, Technology

DISM upgrade from Server 2008 Standard to Enterprise Caused Havoc

December 13, 2012

Don’t want to read the story? Jump to the Solution!

So, I had a client that needed to increase the amount of RAM beyond 32GB on a SQL Server. I started researching ways to make that change and ran across the Microsoft-endorsed method of using DISM to do an in-place upgrade from Standard to Enterprise or Datacenter. Well, my client is licensed for Standard and Enterprise, so this method sounded like a great way to resolve the RAM limitations of Standard edition.

It’s a fairly straightforward process. The instructions can be found here, and directly from Microsoft here. I will mention a small caveat… If your licenses are volume licenses, you need to use the Microsoft public KMS client key to change editions, then put your own key back in after the upgrade. The other qualifier is that the target server CANNOT be a domain controller.
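
For reference, the upgrade itself boils down to one elevated command, roughly like this (run DISM /online /Get-TargetEditions first to confirm the edition ID, and substitute the Microsoft public KMS client key for your target edition):

DISM /online /Set-Edition:ServerEnterprise /ProductKey:&lt;public KMS client key&gt;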

On with the story. All went well with the upgrade, including the insertion of the proper key and subsequent activation of Server 2008 R2 Enterprise using the client’s key. Everything looked fine and seemed to be running properly. No errors in the event log, and the areas I checked out looked good.

The next morning, I get a call from my client telling me they cannot print from remote sessions (this is also an RDS server). I connect to the server and get the exact same results: an error message that ends with “Could not create print job”. A few minutes later I get a call from their controller, who is trying to do payroll and cannot get the application to open properly. I suggest they start by contacting the vendor of the app, and we’ll go from there. I continue troubleshooting the printing issue and discover that the Print Management snap-in cannot open in MMC.

Then I get a call from the accounting vendor. We troubleshoot his app for a bit and find that it accesses some of its components via IIS, which looks to be running but is not serving data. We decide to reboot. As soon as that reboot is complete, we no longer have even the basic RDS services. I’m down to using an alternate method to connect to and administer the server. I check all the basics. Firewall is off. UAC is disabled. IP address and network settings have not changed. No events in Event Viewer. The server appears to be alive and completely healthy.

We troubleshoot a bit longer and determine that there must be some sort of connectivity or communication issue internal to the OS. I work at it a bit more, but ultimately decide to engage MS Support. I also decide to work on this from the comfort of home, as it would likely be a long night.

I create the MS Support request once I get home. I know I have a while to wait (supposedly no more than two hours), so I decide to find some dinner and let my brain veg out on some TV. Two hours pass with no phone call, but I start thinking of other things to look for. A brain break will do the IT guy good sometimes.

I remember from searches earlier in the day that there was at least one person who gave up and reloaded his server from scratch. For me, that is not an option. I start making simpler queries. These lead me down the path of discovery. I see several people with this issue who found temporary fixes, as well as some apparently untested suggestions. I start researching each of these and find more useful information. From these threads and nuggets of information, I start to formulate a solution.

It turns out that the component that was broken was licensing. Windows was reporting that it was “Genuine” and activated, but in reality the license management module thought it had an invalid key and had told all the vital components to cease their function until they were licensed again.

I want to save others hours of time and turmoil (not to mention a hefty MS Support bill) and make sure that the complete solution gets out to those who may need it. Here it is, as concise as possible.


First I removed any and all traces of the license keys:

slmgr /ckms (this clears any KMS entries)

slmgr /upk (this removes installed product keys)

After running these, the desktop will go BLACK and tell you that your version of Windows might not be genuine. DO NOT REBOOT YET!

Next, navigate to the Microsoft Windows Validation site. This process will reinstall/repair your damaged licensing components. For me, it reinserted a generic key and validated my Windows Server 2008 R2 as Genuine.

Reboot!

After the reboot, if you look at System Properties, you will likely see that Windows only has 4GB available of however much you have installed. In my case, I had 4GB of 28GB available. At this point, I clicked “Change Product Key” on the properties page and pasted in my proper key for Server 2008 R2 Enterprise.

This completed and activated, and told me to reboot to enable all the features.

After the reboot, all components, applications and sub-systems were working exactly as they should.


At that moment, I finally received my phone call from MS Support, 3.5 hours after opening the support request with the promise of a call within two hours. I thanked them for the call, and informed them that the problem was already resolved without their assistance.

I don’t believe Microsoft would have come up with a solution to this problem. It took too much research, and it was such an odd problem that I believe they would have eventually told me to format and reload my server. I found instances of others who faced this problem and ultimately did completely reload their servers. Hopefully I can save someone that fate with this information.

Cloud Computing Introduction

October 4, 2012

Business Information Technologies is hosting a get-together on October 25th. There will be a couple of brief presentations, including one introducing the concept of Cloud Computing and discussing its pros and cons in a small business environment.

Categories: Uncategorized

Defense in Depth: What is it? (Part 1)

August 11, 2011

Defense in Depth is a term that gets thrown around in the IT community a considerable amount these days. It is not always clearly defined or explained, and may leave you with questions like these: What is it? What does it cost? What impact would greater security have on my business? In this post, I will try to define what Defense in Depth is. I will tackle the other questions in additional posts.

Defined:

So, what is Defense in Depth? In a nutshell, it is a combination of security devices, software, and education of end users (you and your employees). At its most basic, Defense in Depth involves a high-grade firewall at the edge of your network, good anti-virus software (preferably centrally managed) on your servers and workstations, and education of the end user in safe email, internet, and social networking practices. In a larger environment, there may be multiple firewalls that protect various branches of your network, as well as specialized Intrusion Prevention devices that monitor potential points of access and cut off an attack before it begins in earnest.

Layer One:

The firewall that you place on your internet connection should have features such as Anti-Virus, Anti-Spyware, Intrusion Detection, and Email Filtering built into it. You should also be able to define “zones” for additional security. For example, with SonicWall and WatchGuard firewalls, you can define a separate zone for your servers and place the desktops in another zone. This approach allows all data that passes from your workstations to your servers to be scanned for potentially damaging viruses and spyware. On the SonicWall, you can also enable Intrusion Detection and Content Filtering on these zones, as well as applying those filters BETWEEN the zones, again enhancing your security. As these devices mature, you are also able to block certain kinds of traffic that you may not want crossing your workplace network, such as BitTorrent and video/music streaming.

Layer Two:

Next, you need a good anti-virus package installed on your servers and desktops. Ideally, this software will be centrally managed and able to alert you automatically if there is an infection on a system, if virus definitions get too far out of date, or on any other criteria you specify. Central management also alleviates the load on your internet connection for updating the client systems: updates are downloaded to one server and then pushed to the clients, versus having every computer in your company going online to download updates and potentially choking your internet connection for a period of time. The central console also lets you define policies that you can push to the clients, manage exceptions, and block unwanted applications. A couple of examples of good anti-virus software are Symantec Endpoint Protection and Avast Professional with the server component.

Layer Three:

Finally, there is the aspect of end-user education. This involves learning and teaching the safe use of email systems, internet habits, and social networking. In my experience, especially the last few years, the majority of virus and spyware infections I end up cleaning start with the user saying "I was just looking at something on Facebook, when…" or "I was just on Twitter and all of a sudden…". It may sound like I am advocating for the blocking of social networking sites, but the reality is that these sites are here to stay and are becoming more a part of our businesses every day. What I advocate is education of our users. Teach them to recognize invalid links. Teach them to think about posting patterns. If the post from their friend "Bob" doesn’t look like it was written by him, it is possible that "Bob" lost control of his account and a prankster created the post. The links in these kinds of posts can be anything from a benign advertisement to something that makes your account "like" the post and then automatically post to all your friends’ walls, sending them to sites with "questionable material" and in turn causing their computers to become infected with malware of one variety or another.

Additional Layers:

In some instances, additional layers of security and traffic inspection may be employed. This may be as simple as a packet sniffer looking for errant traffic, or a traffic recording device that records every bit of data that enters or exits the network. This traffic can be reviewed manually, or sent through a program that can report any traffic anomalies.
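
To give a flavor of the simplest end of that spectrum, here is a bare-bones sniffer sketch in Python (Linux only, requires root); a real anomaly detector would parse headers and compare traffic against a learned baseline rather than just counting frames:

```python
import socket
from collections import Counter

# ETH_P_ALL (0x0003) asks the kernel for every frame on every interface.
# AF_PACKET raw sockets are Linux-specific and need root privileges.
sniffer = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))

counts = Counter()
for _ in range(1000):                 # sample 1000 frames, then report
    frame, meta = sniffer.recvfrom(65535)
    counts[meta[0]] += 1              # meta[0] is the interface name

print(counts.most_common())
```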

Conclusion Part 1:

Defense in Depth, implemented even at its most basic levels, will help ensure a consistent computing experience for your organization and your users. Whether it is the basic three layers described above or a more in-depth implementation, Defense in Depth should be implemented in every organization. The hardware, software, and training may be different in each organization, but the end result is a more secure environment for everyone.

Trojans attack Android

Fraudsters have cranked up production of malware targeting Android devices with a rash of Trojans, many of which apply tricks long used against Windows PCs.

F-Secure reports that a rogue developer has modified a harmless app that displays pictures of bikini-clad babes into a tool that secretly establishes a rudimentary mobile botnet. “The added code will connect to a server and send details about the infected handset to the malware authors,” F-Secure reports. The malware waits…


Categories: Phones, Security

Mac owners, beware!

Just hours after Apple released a security update to protect Mac users against a rash of scareware attacks, a new variant began circulating that completely bypasses the malware-blocking measure.

The trojan arrives in a file called mdinstall.pkg and installs MacGuard, a malicious application that masquerades as security software the user needs to clean a Mac of some nasty…


Categories: Mac, Security