CompTIA CySA+ CS0-002 – Vulnerability Scanning Part 3
6. Scheduling and Constraints (OBJ 1.3)
Scheduling and constraints. In this lesson, we’re going to talk about scheduling and constraints. So the first question I have for you is, how often should you scan? Well, this is going to be determined by your organization’s internal risk management decisions. If you have a larger risk appetite, you’re going to have more time in between scans, leaning toward scanning rarely or only sometimes. If you have a very low risk appetite, you may scan often or very often. This is going to depend on you and your organization. Remember, a scan is a point-in-time assessment. One organization I worked with did scans every six months. Now, that seems like an awfully long time to me to wait between scans, and they were assuming a ton of risk because of that. Another organization I worked with did it every month, yet others do it every week.
Again, it really depends on you and how quickly you want to get those results and see what vulnerabilities exist so you can start mitigating them. Remember, vulnerability scans are really up to your organization, but my personal recommendation is that they should be done at least weekly. Now, this isn’t something that comes from your book. This is just from my professional experience. I think if you don’t do it at least weekly, you are going to be missing a lot. There’s just way too much that goes on in a seven-day cycle for you not to be scanning at least once a week. Now, when we start talking about scans, there are lots of different things we have to consider when we think about when we should schedule them. Some of them include things like when we deploy a new or updated system.
If I’m going to install some new piece of gear onto my network, I want to make sure it’s been scanned and I know what vulnerabilities exist. And so that’s the first point at which I think about scanning something. Another time I think about it is when new vulnerabilities have been identified. Now, these can be identified in your network or just identified at large. For instance, you might be reading the newspaper and find out about this new thing called WannaCry. If this was 2017, that’s when it came out. It hit the newspapers, and everybody went, oh my goodness, what is WannaCry? And you could actually find there was a vulnerability associated with it, and you could scan for that. If I see something like that in the news, I immediately want to make sure I’m scanning my network to see if I’m vulnerable.
The next thing you want to think about is whenever there’s a security breach. If you’ve had a security breach in your network, you want to scan your network and make sure you find all your vulnerabilities. Just because they got in one way doesn’t mean they’re going to come back in that same way. So you can’t just patch the one way they got in. This isn’t a penetration test. If they got in, you need to start making sure you lock down everything, because generally what we’ve seen is when an attacker gets in one way, they come back to reattack you again and again. So if you had a security breach, you need to make sure you follow up and do another scan. Another reason to do it is when you have regulatory or oversight requirements. For instance, if you deal with PCI DSS and credit card data, you have to do a scan once a quarter.
That’s required to be compliant with PCI DSS requirements. So if you have a regulatory or oversight requirement, you’ll do it based on their scan schedule. And then the other time you’re going to do it is anytime it’s regularly scheduled. Now, what I mean by that is, in my organization, we do weekly scans. So every week we make sure we do a full scan of our servers, and we make sure that everything is good. If you do that, you’ll know exactly where you are every single week and what your vulnerability posture is. So why doesn’t an organization just scan continuously all day, every day? Why not just run the scanners over and over and over again to check? Well, because vulnerabilities don’t show up that way. That’s not the way things work. There are a lot of technical constraints here that would preclude you from being able to do scans continuously over and over again.
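To make the scheduling idea concrete, here is a minimal sketch of a scan scheduler in Python. The frequency names and day counts are illustrative assumptions (the 91-day "quarterly" entry roughly matching a PCI DSS-style quarterly cadence), not values from any standard or from this course:

```python
from datetime import date, timedelta

# Hypothetical policy table: frequency name -> interval between scans.
# The names and day counts are assumptions for illustration only.
FREQUENCIES = {
    "weekly": timedelta(days=7),      # the instructor's recommended minimum
    "monthly": timedelta(days=30),
    "quarterly": timedelta(days=91),  # roughly a PCI DSS-style quarterly cycle
}

def next_scan_due(last_scan: date, frequency: str) -> date:
    """Return the date the next regularly scheduled scan should run."""
    return last_scan + FREQUENCIES[frequency]

print(next_scan_due(date(2023, 1, 2), "weekly"))  # 2023-01-09
```

In practice this logic lives inside the vulnerability management tool itself; the sketch just shows that a scheduled cadence is nothing more than "last scan plus interval."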
One of the big things is that your feeds are only updated so often. Think about it like antivirus signatures. If you haven’t gotten new antivirus signatures, then you’re not going to be able to detect anything new, right? That’s the same concept here. But really, the biggest limitation is your technical constraints. Technical constraints can limit your ability to conduct scans more frequently than you’d like. When you start scanning your network, you’re going to start driving up processor utilization and memory utilization. For instance, here you can see exactly when the scan started and when the scan stopped. Notice the processor usage was at like 5% to 6%, and then it shot up to almost 100%. That’s because this computer was in the middle of doing a scan.
Now, this consumes a lot of network bandwidth as well as significant processing and memory usage on the target and on the scanning system. And if you’re using agent-based scans, like this particular system was, you’re going to see on that system how the CPU can spike up during that time. So you need to make sure you’re timing your scans right so they don’t have an effect on your end users. And you need to make sure that you’re not overburdening these systems by trying to do continuous scans. Because if you do that, your systems can become useless because they won’t get any work done. Another constraint to consider here is cost. Each time you scan, that has a cost associated with it. Now, your corporate policy is going to dictate how much risk you’re willing to assume.
And again, this comes down to a cost situation, because the more scans I do, the more person-hours I have to use to run those scans, read those scans, and analyze those scans, let alone the processing power and the network bandwidth and everything else that goes into it. So all of that has additional cost. And so there’s going to be a risk appetite question of, hey, if I can do this once a week versus once a day, is that still good enough? If I do this once an hour versus once a day, which one is better? And you’re going to weigh that based on how much it’s going to cost you and what benefit you’re really going to get. As I said, I tend to lean toward about once a week as a good scan frequency. But again, you get to choose this in your own organization.
As you start looking at this and your overall threat intelligence, you’re going to see how much of a threat there is against your organization and how often new code and exploits are coming out. Generally, we see Microsoft release patches on a predictable cadence, right? We get them on Patch Tuesday, the second Tuesday of each month. And with patches coming out on a predictable schedule like that, a weekly cycle gives us a good rhythm to scan, patch, and scan again, and make sure that we’re up to date with the latest patches. Now, your scanning frequency and technique will also be affected by the data type that’s being processed by the target. Now, what I mean by this is that what’s being processed by the server will help you determine how often you need to scan this thing. For example, if you have a server that contains confidential or top secret or sensitive information, it should be scanned more frequently than a computer that’s being used by the mailroom, because that’s maybe not as important. And so you’re going to have to make those decisions. Conversely, you’re going to want to make sure the system is scanned using a credentialed scan instead of a non-credentialed scan if you’re dealing with something like secret or top secret or confidential data, right? Because this is important stuff. We want to find all the vulnerabilities.
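The "sensitivity drives frequency and scan type" decision can be sketched as a simple policy lookup. The classification tiers, day counts, and the strict-by-default fallback below are my own illustrative assumptions, not values from the course or any framework:

```python
# Hypothetical policy table: data classification on the target drives both
# how often it gets scanned and whether a credentialed scan is required.
# All tiers and numbers here are assumptions for illustration.
SCAN_POLICY = {
    "top secret":   {"scan_every_days": 1,  "credentialed": True},
    "confidential": {"scan_every_days": 7,  "credentialed": True},
    "internal":     {"scan_every_days": 7,  "credentialed": False},
    "public":       {"scan_every_days": 30, "credentialed": False},
}

def scan_settings(classification: str) -> dict:
    """Look up scan settings; unknown labels fall back to the strictest tier."""
    return SCAN_POLICY.get(classification, SCAN_POLICY["top secret"])

print(scan_settings("confidential"))  # {'scan_every_days': 7, 'credentialed': True}
```

Defaulting unknown labels to the strictest tier is one reasonable risk posture; your organization might instead flag unlabeled assets for review.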
Now, again, this becomes tricky, though, because as I said, if the scan administrator has to do a credentialed scan, that means they need to have the administrative credentials on that sensitive target, which again, could lead to an insider threat. So all these things have to be balanced, and you have to weigh that risk. This is why CySA+ is a much harder exam than some of the earlier exams you may have taken, because there’s not a clear-cut answer all of the time. A lot of these things are risk decisions that we have to weigh, and based on the different circumstances, we are going to choose different answers. It’s not always going to be A; sometimes it’s going to be B or C or D based on the circumstances.
And so you have to think these things through and put on your manager hat and your risk management hat as you start thinking about these things and what you’re going to do. Now, one way to mitigate the risk of giving out administrator credentials and still being able to do a credentialed scan is to use what’s called a PAM, a Privileged Access Management solution. Now, a PAM allows you to mitigate this risk of the insider threat. Essentially, this is the technology that you can use. So the scanning software goes to the PAM server, and it will actually get the privileged credentials from there. These are actually one-time-use credentials. So what happens is the PAM server will actually go to the target that you’re trying to scan.
It will change the password to some random thing and give that random thing over to the scanning server. The scanning server will finish its scan. Once it’s done, it tells the PAM server, hey, I’m done. And the PAM server goes and changes that password to something else entirely. So you only have the password, the administrative credentials, for the time of the scan, and then they’re taken away from you again. That’s how these PAM solutions, these Privileged Access Management solutions, work. So if you are in a big organization and you really want to do these credentialed scans, which I do recommend, then you really do want to get yourself a Privileged Access Management solution because it’ll help you mitigate this insider threat.
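The rotate-scan-rotate workflow described above can be simulated in a few lines. This is a toy model, not real PAM software: the class and method names are invented, and a real product would talk to the managed target over the network rather than mutating a dictionary:

```python
import secrets

# Toy simulation of the PAM workflow: rotate the target's admin password,
# hand the one-time credential to the scanner, then rotate again when the
# scanner checks the credential back in. All names are made up.
class ToyPAMServer:
    def __init__(self, target: dict):
        self.target = target
        self._rotate()

    def _rotate(self):
        # Set a fresh random password on the managed target.
        self.target["admin_password"] = secrets.token_urlsafe(16)

    def check_out_credential(self) -> str:
        # Rotate first so the scanner receives a brand-new secret.
        self._rotate()
        return self.target["admin_password"]

    def check_in_credential(self):
        # Scan finished: rotate again so the old credential is useless.
        self._rotate()

target = {"admin_password": None}
pam = ToyPAMServer(target)
cred = pam.check_out_credential()
assert cred == target["admin_password"]   # scanner can authenticate now
pam.check_in_credential()
assert cred != target["admin_password"]   # credential revoked after the scan
```

The key property the assertions capture is the one from the lesson: the scanner holds a valid credential only for the duration of the scan.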
Now, unfortunately, these PAM solutions do cost money, and sometimes you don’t have the budget for it. So what can you do if you don’t have the budget for a PAM solution? Well, you can do the next best thing. You can create restricted logon hours for a specific period of time that only allows scanning during that time using those administrative credentials. So we might say we’re only going to allow scanning from midnight to 2:00 a.m. So if somebody tries to log on at three in the afternoon when the administrator is actually there, then we would know that’s an insider threat, and we’d have to look into that and see what’s going on.
This way, that privileged account can only be used to do the scanning between midnight and 2:00 a.m., which is the time the server will do it on its own without the scan administrator there, so they don’t have access to those credentials either. So this is just another mitigation you can use. But again, the Privileged Access Management solution is a much better option and one I highly recommend.
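The restricted logon-hours check boils down to a time-window comparison. Here is a minimal sketch using the midnight-to-2:00 a.m. window from the lesson; in a real environment this enforcement would live in the directory service (for example, account logon-hours settings), not in application code:

```python
from datetime import time

# Example window from the lesson: the scanning account may only
# authenticate between midnight and 2:00 a.m.
SCAN_WINDOW_START = time(0, 0)
SCAN_WINDOW_END = time(2, 0)

def logon_allowed(attempt: time) -> bool:
    """Return True only if the logon attempt falls inside the scan window."""
    return SCAN_WINDOW_START <= attempt < SCAN_WINDOW_END

print(logon_allowed(time(1, 30)))   # True  — the scheduled scan
print(logon_allowed(time(15, 0)))   # False — 3 p.m. attempt: investigate it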
7. Vulnerability Feeds (OBJ 3.4)
Vulnerability feeds. Just like your antivirus software needs to be updated with the latest definitions, so do your vulnerability scanning tools. The way they do this is through a vulnerability management feed. Now, these vulnerability feeds are also known by other names as well. When we talk about a vulnerability feed, these are synchronized lists of data and scripts that are used to check for vulnerabilities. If you’re using Nessus, they like to call these plugins. If you’re using OpenVAS, they call them NVTs, or Network Vulnerability Tests. Either way you slice it, they all fall into this category of vulnerability feeds, and I like to equate them to antivirus signatures because that really is what they are. But the difference here is that they’re not just looking for a signature snippet; these are actually scripts that can check for those vulnerabilities.
So when you run this scan and you’re testing for some kind of an exploit, it’s actually running that exploit against your system to see if it will be successful. And if it is, it reports back to you that there’s a vulnerability there because it was able to exploit it. So keep that in mind when you start using these tools. Now, many of these commercial vulnerability scanners require an ongoing paid subscription to access the feeds. For example, if you use Nessus, which is made by Tenable, that is a commercial tool, and it requires an ongoing paid subscription for you to access the feeds. If you as a company don’t pay for that, you can still use the tool, but you’re going to be using old definitions that are out of date. If you’re using old, out-of-date definitions, you’re not going to find the latest vulnerabilities, and you’re going to be vulnerable.
Now, when we start looking at these tools, they all use a common format known as SCAP, and we’ve mentioned this one before. The Security Content Automation Protocol is a NIST framework that outlines various accepted practices for automating vulnerability scanning by adhering to standards for scanning processes, results reporting and scoring, and vulnerability prioritization. This is what SCAP is, and SCAP is used to uphold internal and external compliance requirements. Because everybody is using the same language, it makes it easy for us to transfer information from one tool to another, because we’re all speaking the same language: SCAP. Now, SCAP has two main components. They are known as OVAL and XCCDF.
Now, OVAL is the Open Vulnerability and Assessment Language. It’s an XML schema for describing system security states and querying vulnerability reports and information. On the other hand, we have XCCDF. This is the eXtensible Configuration Checklist Description Format. This is an XML schema for developing and auditing best practice configuration checklists and rules. Now, previously, best practice guides were almost a big, long written essay with step-by-step guidance that told you what to do. An administrator would print out this thing, it was 30, 40, 50 pages, and they would go through the checklist to make sure they did everything.
Now, the problem with that is there is no automated way to check against that. But XCCDF provides you with a machine-readable format that can be applied and validated using compatible software. So if there were 30 or 40 steps that were all dealing with configurations, you could run them inside of a compatible tool, and it will run those for you. So the administrator doesn’t have to do it all themselves. Instead, the software can do it for you and check it. So these are great tools, and one of the great things about using things like OVAL and XCCDF is that they are compatible across all the different vulnerability tools. And so as you start using these tools, they’re going to help you be more and more compatible with the different tools and get more done.
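The whole point of "machine readable" is that a tool can walk the checklist instead of an administrator working through a 40-page printout. Here is a deliberately simplified, made-up fragment in the spirit of an XCCDF checklist (real XCCDF documents use namespaced schemas and far more structure), parsed with Python's standard XML library:

```python
import xml.etree.ElementTree as ET

# Invented, heavily simplified checklist fragment in the spirit of XCCDF.
# Real XCCDF uses XML namespaces, profiles, checks, and much richer rules.
checklist = """
<Benchmark id="example-baseline">
  <Rule id="rule-password-length" severity="high">
    <title>Minimum password length is 14</title>
  </Rule>
  <Rule id="rule-disable-telnet" severity="medium">
    <title>Telnet service is disabled</title>
  </Rule>
</Benchmark>
"""

root = ET.fromstring(checklist)
# Because the checklist is structured data, a tool can enumerate every rule.
rules = [(r.get("id"), r.get("severity")) for r in root.findall("Rule")]
print(rules)  # [('rule-password-length', 'high'), ('rule-disable-telnet', 'medium')]
```

A compliance scanner does essentially this at much larger scale: enumerate the rules, run the check each rule defines, and score the results.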
8. Scan Sensitivity (OBJ 1.3)
Scan sensitivity. In this lesson, we’re going to talk about scan sensitivity. And when I talk about scan sensitivity, this is the amount and intensity of vulnerabilities to test against a target. As I mentioned before, when you start scanning your network for these vulnerabilities, the vulnerability tool is actually going to try to exploit these different vulnerabilities for you. And you can actually have these settings called safe scan or not-safe scan. If you’re doing a safe scan, it won’t actually try any vulnerabilities that it thinks might corrupt your system or crash your system. If you do a not-safe scan, it will throw that out the window, and it will try everything it has to find every vulnerability possible. Now, again, this is one of those things you have to decide when you’re setting up your profiles.
One of these profiles is known as a scan template. A scan template is going to define the settings used for each vulnerability scan. Now, there are lots of different types of vulnerability scans that we’re going to talk about. We’re going to talk about four basic ones here in this lesson: a discovery scan, a fast or basic assessment scan, a full or deep assessment scan, and a compliance scan. When we talk about a discovery scan, this is used to create and update an inventory of your assets. And it does this by conducting an enumeration of your network, mapping it out and finding all the different targets without scanning for any vulnerabilities. Think about this like an Nmap scan.
Essentially, your vulnerability management tool is going to go out and do a ping sweep of the entire network and find out who’s up and who’s down, and which ones have which ports open. That’s the idea of using a discovery scan. These tend to be very fast, at least in comparison to the other types of scans, but they are not very in-depth. They are mostly used for enumeration. The next one we’re going to talk about is a fast or basic assessment scan. This is a scan that contains options for analyzing hosts for unpatched software vulnerabilities and configuration issues. Now, when we talk about this fast or basic assessment scan, it looks something like this. When you go to set it up, you’re going to give it a name, give it a description, give it a place to save, list out the targets that you want to use, all the different IP addresses, and then you’re going to go ahead and save it, and off you go.
You’ll pick out which plugins you want and which credentials, whether you want to use a credentialed or non-credentialed scan, and it will go out and do its scan. It’s going to find anything that has a minor vulnerability and any kind of minor configuration issues. Now, these aren’t really in-depth, but they are going to do a basic scan, meaning it’s going to look at a couple of plugins. When you do this, under the plugins tab, you’ll actually select which ones you want to use. If you’re doing a more intense scan, you’ll select every single plugin. But for a fast or basic scan, it’s going to use the information it got from the discovery scan. It will know that this is a Windows system versus a Linux system, for instance, and it will select the right plugins for that host.
Because if you’re going to be scanning a Linux system, there’s no reason to have the Windows plugins enabled. So you’ll disable those, and that will save you a lot of time. That’s the idea of a fast or basic scan. Now, the next one we can do is what’s known as a full or deep scan. When you deal with a full or deep assessment scan, this is a comprehensive scan that forces the use of more plugin types. It’s going to take you a lot longer to conduct this host scanning, and it has more risk of causing a service disruption. For instance, you might turn off that safe scan function and test every single plugin you have, even though some may crash the system. Now, full and deep assessment scans will ignore your previous scan results, and they are going to fully rescan every single host.
This is actually one of the nice things about some of the fast or basic scans: if you’ve done a scan on that host before and it already knows that a vulnerability doesn’t exist because you’ve patched it, it’ll skip that and save you some time. Now, a full or deep scan won’t do that. It will ignore any previous scan you did, and it scans everything as if it was the first time, really intrusively and really in-depth. The last type of scan we have is what’s known as a compliance scan. Now, a compliance scan is a scan that’s based on a compliance template or a checklist. And this is going to ensure the controls and configuration settings are properly applied to a given target or host. Now, what’s a great example of a compliance scan? Well, the one I love to think about is PCI DSS.
Here, for example, you can see a template for a PCI DSS quarterly external scan that you can run inside of Nessus. There’s really no configuration you have to do except give it a name, give it a description, and tell it which target you want to hit, and it will go out and do that scan for you. It’s going to know exactly which plugins to look for and exactly which configurations to look for, because this is a scan that anybody who processes credit cards has to run across their networks once a quarter. And so this is something that is very well known, and so they build it right into the tool. This is the idea of using a compliance scan with a template like this.
Now, some external compliance organizations require scanning at a certain frequency. I’ve already mentioned PCI DSS is one of them. This is actually one that they like to ask about on the exam. Sometimes if you see PCI DSS on the exam, the answer is a quarterly scan. If they ask which of these regulatory regimes requires a quarterly scan, the answer on the exam is PCI DSS. There are no other regulatory requirements that they require you to memorize. But for some reason they love to ask about PCI DSS. And it’s probably because almost every organization deals with it these days, because if you accept credit cards, you fall subject to PCI DSS.
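The discovery scan described earlier in this lesson can be sketched in its simplest form as a TCP connect sweep: try to connect to each port and record which ones accept. Real tools like Nmap also do ping sweeps, UDP probes, and service fingerprinting; this minimal sketch only shows the core enumeration idea, and the host and port list are example values:

```python
import socket

# Minimal sketch of discovery-scan enumeration: attempt a TCP connection
# to each port and record the ones that accept. Real discovery scans add
# host discovery (ping sweeps) and service fingerprinting on top of this.
def discover_open_ports(host: str, ports, timeout: float = 0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful connection.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: sweep a few common service ports on the local machine.
print(discover_open_ports("127.0.0.1", [22, 80, 443]))
```

Note that no vulnerability checking happens here at all; this is exactly the "who’s up and which ports are open" inventory step, which the later assessment scans then build on.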
9. Scanning Risks (OBJ 1.3)
Scanning risks. In this lesson, we’re going to talk about some of the risks associated with scanning. Now, I’ve mentioned this before, but when you scan things, you run a risk that you can actually do harm. For example, when you’re scanning some things, you can actually crash the system, because some of those tests are exploiting vulnerabilities that could trigger a system reboot or a system reset. Now, where is this going to be most common? Well, when you’re dealing with printers, VoIP phones, or embedded systems. These components can react unpredictably to any type of scanning, and so you have a much higher chance of resetting those things or causing a crash when you’re doing vulnerability scans.
This, again, is why it is so important for you to really scope your scans properly. And it’s one of the reasons why I put all my VoIP phones in one scope, all my embedded systems in another, and my printers in yet another. That way I can really tailor what vulnerabilities I’m going to scan against on those devices, and I can minimize my risk of crashing those systems. Now, another thing you need to think about when you start dealing with your scans is all these scan results that come back to you. They have a lot of great information, and we’re going to start looking at those in the next section of the course. But these scan results need to be protected, because they hold the keys to the kingdom.
When you have those scan results, you know exactly what vulnerabilities you have. And if you let an attacker get those, that gives them a blueprint of exactly how to attack you. So you want to make sure these are protected. Anytime you finish with a scan, you should take those results and encrypt them before storing them. You should also place them behind a restrictive access control list to make sure nobody can get to them except the personnel who need them. Now, when you’re doing your scanning, we talked a lot about credentialed versus non-credentialed scans. Well, if you’re going to use a credentialed scan, you need to have administrator privileges, right? Well, not necessarily.
Instead of giving somebody local administrator privileges, you should use a service account. This way you can conduct your credentialed scans with a service account on those machines across the network. And if you need to change the password, it’s just one service account that goes across the entire domain, so it’s much easier for you to change those credentials as needed. Another thing you have to think about is ports. Opening ports for scanning is sometimes needed so you can actually do your scans, but you’re also increasing your network’s attack surface, because the more ports that are open, the more bad things that can happen.
So you want to make sure you’re thinking about that and weighing those risks. Another way that we can mitigate this is by configuring static IP addresses. If we configure static IP addresses for my scanning servers, this allows me to set up the right ACLs through the firewall and open up ports directly for those servers and those IPs only, instead of opening it up for everybody. This can help you minimize the network attack surface. So, again, this is all about risk versus reward and risk mitigation. And by taking these things into account, you can make sure that you’re going to be scanning more effectively, more efficiently, and more securely.
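The "protect your scan results" advice from this lesson has a concrete file-level version. Here is a minimal, POSIX-only sketch of locking a report down to owner-only read/write after writing it; encryption would be layered on top with a proper library (the standard library has none, so only the permission step is shown), and the filename and report contents are made-up examples:

```python
import os
import stat
import tempfile

# Sketch: after writing a scan report, restrict it to owner read/write
# only (mode 0o600) so other local users cannot read the findings.
# Encryption at rest would be added on top with a dedicated library.
def store_scan_results(path: str, report: str):
    with open(path, "w") as f:
        f.write(report)
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0o600: owner rw, no group/world

path = os.path.join(tempfile.mkdtemp(), "scan_results.txt")
store_scan_results(path, "host 10.0.0.5: CVE-2017-0144 (EternalBlue) detected")
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o600 on POSIX systems
```

On Windows, or for the access-control-list requirement the lesson mentions, you would use the platform's ACL mechanisms instead of simple mode bits.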