Amazon AWS Certified Security - Specialty SCS-C02 Exam Dumps & Practice Test Questions

Question 1:

A security engineer is setting up a new website called example.com and wants to enforce secure connections by using HTTPS. 

Which of the following is the proper method for storing SSL/TLS certificates to enable this secure communication?

A. Custom SSL certificate stored in AWS Key Management Service (AWS KMS)
B. Default SSL certificate stored in Amazon CloudFront
C. Custom SSL certificate stored in AWS Certificate Manager (ACM)
D. Default SSL certificate stored in Amazon S3

Answer: C

Explanation:

To ensure secure communication between a website and its users, SSL/TLS certificates are essential because they enable HTTPS, which encrypts data during transit and verifies the website’s identity. Among the given options, the best practice is to use AWS Certificate Manager (ACM) to store and manage SSL/TLS certificates, especially when working with AWS services.

Option C is the correct choice because ACM is specifically designed to provision, manage, and deploy SSL/TLS certificates for AWS resources. It offers seamless integration with services such as Amazon CloudFront, Elastic Load Balancers (ELB), and API Gateway. ACM can automatically renew certificates, reducing operational overhead and minimizing the risk of expired certificates causing outages.
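As a rough illustration of this workflow, the sketch below requests a DNS-validated public certificate with boto3. The domain comes from the question; the region (us-east-1, which CloudFront requires for custom-domain certificates) and the extra www name are assumptions.

```python
# Sketch: request a public ACM certificate for example.com.
# Assumptions: DNS validation and the us-east-1 region (required by CloudFront).
import boto3

acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="example.com",
    SubjectAlternativeNames=["www.example.com"],  # hypothetical extra name
    ValidationMethod="DNS",  # DNS-validated certificates renew automatically
)
print("Certificate ARN:", response["CertificateArn"])
```

Once the DNS validation record is in place, ACM issues the certificate and handles renewal without further intervention.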

Option A, AWS KMS, is not intended for storing SSL/TLS certificates. KMS creates and manages encryption keys, such as the keys used to encrypt data at rest, but it does not store, renew, or deploy SSL/TLS certificates.

Option B refers to a default certificate in CloudFront, but CloudFront's default certificate covers only the *.cloudfront.net domain name. Serving HTTPS for a custom domain such as example.com requires a certificate provisioned in, or imported into, ACM.

Option D incorrectly suggests storing certificates in Amazon S3, which is primarily an object storage service and lacks any certificate management capabilities.

Therefore, ACM (Option C) is the most appropriate and secure choice for storing and managing SSL/TLS certificates, enabling HTTPS for example.com efficiently and according to AWS best practices.

Question 2:

A security engineer must create a process to investigate and respond to potential security incidents on Amazon EC2 instances backed by Amazon EBS. The company uses AWS Systems Manager with SSM Agent installed on all instances. The process must: preserve volatile and non-volatile data for forensics, update instance metadata with incident info, keep the instance online but isolated to prevent malware spread, and capture investigative activities. 

Which combination of actions should the engineer take with the least operational effort? (Choose three.)

A. Collect metadata, enable termination protection, restrict access by modifying security groups, detach from Auto Scaling groups, and deregister from load balancers.
B. Collect metadata, enable termination protection, move the instance to an isolation subnet, associate a network ACL with the subnet that denies all traffic, detach from Auto Scaling groups, and deregister from load balancers.
C. Use Systems Manager Run Command to run scripts that collect volatile data.
D. Log in via SSH or RDP to run scripts that gather volatile data.
E. Create an EBS volume snapshot and tag the instance with incident metadata.
F. Use Systems Manager State Manager to create an EBS snapshot and tag the instance.

Answer: A, C, E

Explanation:

When responding to a compromised EC2 instance, the priority is to preserve evidence and contain the threat with minimal disruption and operational effort. The ideal approach involves isolating the instance, collecting volatile and non-volatile forensic data, and maintaining the instance's metadata for tracking.

Option A addresses isolation by modifying security groups to restrict network access, which effectively isolates the instance without requiring subnet changes, thus reducing complexity. Enabling termination protection prevents accidental instance termination, and detaching from Auto Scaling groups and load balancers ensures the instance is no longer part of dynamic scaling or traffic distribution, preventing infection spread.

Option C leverages Systems Manager Run Command, allowing remote execution of scripts to gather volatile data such as memory dumps or running processes. This approach is safer and more scalable than manual SSH or RDP access, avoids exposing investigator machines to potential malware, and reduces operational overhead.
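A minimal sketch of this pattern is shown below; the instance ID, output bucket, and the specific Linux commands are all illustrative.

```python
# Sketch: collect volatile data from a suspect instance via SSM Run Command.
import boto3

ssm = boto3.client("ssm")

response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],  # hypothetical instance ID
    DocumentName="AWS-RunShellScript",    # AWS-managed command document
    Parameters={
        "commands": [
            "ps aux",          # running processes
            "netstat -antup",  # open network connections
            "w",               # logged-in users
        ]
    },
    OutputS3BucketName="forensics-evidence-bucket",  # hypothetical bucket
)
print("Command ID:", response["Command"]["CommandId"])
```

Writing the command output to S3 also helps satisfy the requirement to capture investigative activities.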

Option E involves creating an EBS snapshot, which captures non-volatile data (disk contents) for detailed post-incident forensic analysis without affecting the live instance. Tagging the instance with metadata and incident details helps track the investigation status and integrate with incident management workflows.
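A sketch of the snapshot-and-tag step, using hypothetical volume, instance, and incident identifiers:

```python
# Sketch: preserve non-volatile evidence and record incident metadata.
import boto3

ec2 = boto3.client("ec2")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # hypothetical volume ID
    Description="Forensic snapshot for incident INC-1234",
)

ec2.create_tags(
    Resources=["i-0123456789abcdef0", snapshot["SnapshotId"]],
    Tags=[
        {"Key": "IncidentId", "Value": "INC-1234"},
        {"Key": "Status", "Value": "under-investigation"},
    ],
)
```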

Options B and F introduce unnecessary complexity: Option B requires subnet reconfiguration, and Option F misuses State Manager, which is built to maintain a desired configuration state over time rather than to take one-time forensic snapshots. Option D involves manual access via SSH/RDP, increasing both the risk of evidence contamination and the operational burden.

In summary, combining A, C, and E provides a balanced, secure, and efficient way to isolate, preserve, and investigate compromised EC2 instances while complying with AWS best practices and minimizing operational overhead.

Question 3:

A company uses AWS Organizations and wants to deploy infrastructure components like EC2 instances, ELBs, RDS, and EKS/ECS clusters across accounts using AWS CloudFormation StackSets. Developers currently create their own stacks, which accelerates deployment. The security team wants to enforce internal resource configuration standards and receive notifications about non-compliant resources without slowing down developers. 

Which approach best balances operational efficiency with these requirements?

A. Create an SNS topic for security notifications. Use a custom Lambda function in the CI/CD pipeline to run AWS CLI CloudFormation template validation and notify SNS if issues are found.
B. Create an SNS topic. Use CloudFormation Guard with custom rules in a Docker container during the CI/CD pipeline to validate templates. Notify SNS of compliance issues.
C. Create an SNS topic and SQS queue. Use an S3 bucket where developers upload templates. Set up EC2 instances that scale with SQS to validate templates using CloudFormation Guard and notify SNS of problems.
D. Create a centralized CloudFormation StackSet with a standard resource set. Update templates for new resources, have security review them, and then add approved templates to the repository for developers.

Answer: B

Explanation:

The question aims to find a solution that enforces security compliance checks on CloudFormation templates without hindering the speed at which developers can deploy infrastructure. Automation, integration, and notification are key factors.

Option B is the best choice because it leverages CloudFormation Guard—a dedicated tool for validating CloudFormation templates against custom organizational policies. By integrating CloudFormation Guard directly into the CI/CD pipeline through a Docker image, the company can automatically validate templates before deployment. Any compliance issues trigger notifications to an SNS topic subscribed to by the security team. This process maintains developer velocity by automating compliance checks and providing real-time alerts without manual intervention.
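As a rough sketch of such a pipeline step, the snippet below shells out to the cfn-guard CLI and publishes any violations to SNS. The rules file, template path, and topic ARN are placeholders, and cfn-guard is assumed to be installed in the build container.

```python
# Sketch: CI/CD compliance gate using CloudFormation Guard.
import subprocess

import boto3

result = subprocess.run(
    ["cfn-guard", "validate",
     "--rules", "org-rules.guard",  # hypothetical organizational rules
     "--data", "template.yaml"],    # template under review
    capture_output=True,
    text=True,
)

if result.returncode != 0:  # non-zero exit indicates rule violations
    boto3.client("sns").publish(
        TopicArn="arn:aws:sns:us-east-1:111122223333:security-alerts",
        Subject="CloudFormation Guard violations detected",
        Message=result.stdout,
    )
    raise SystemExit("Template failed compliance checks")
```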

Option A involves a custom Lambda function running AWS CLI validation commands. While functional, aws cloudformation validate-template checks only template syntax, not conformance with internal standards, and the custom function adds maintenance overhead. It's not as specialized or seamless as CloudFormation Guard, which is purpose-built for policy validation.

Option C adds unnecessary infrastructure with SQS, EC2 autoscaling, and manual template uploads to S3. This complex architecture increases operational burden and slows the process since developers must change their workflow to upload templates to S3 instead of integrating directly into the pipeline.

Option D proposes a manual, centralized approach where templates are reviewed before being shared with developers. This method lacks automation and delays deployments, negatively impacting developer agility.

Therefore, Option B offers the most efficient, scalable, and developer-friendly way to enforce compliance policies and notify the security team, preserving speed and reducing operational overhead.

Question 4:

A company is migrating an application server to AWS while keeping its database on-premises due to compliance. The database requires low network latency and all data transfer between AWS and on-premises must use IPsec encryption. 

Which two AWS services would best satisfy these conditions? (Select two.)

A. AWS Site-to-Site VPN
B. AWS Direct Connect
C. AWS VPN CloudHub
D. VPC peering
E. NAT gateway

Answer: A, B

Explanation:

When migrating parts of an application to AWS while retaining the database on-premises, it’s critical to ensure secure, low-latency communication that complies with encryption standards. In this scenario, two AWS solutions best address these needs.

AWS Site-to-Site VPN (Option A) provides a secure IPsec-encrypted tunnel between the on-premises network and AWS VPC. This encryption fulfills the compliance requirement for data protection in transit. However, because the VPN traverses the public internet, latency might be variable, which can be a limitation for latency-sensitive applications. Despite this, the security benefit of IPsec encryption and ease of setup make it a core component of the solution.

AWS Direct Connect (Option B) establishes a dedicated private network link from on-premises to AWS, bypassing the public internet. This reduces latency significantly and offers consistent network performance, which is essential for a sensitive database. While Direct Connect itself does not inherently provide encryption, it can be paired with Site-to-Site VPN over the Direct Connect link to meet the IPsec encryption requirement. This hybrid approach offers the best balance between performance and security.
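For illustration, the VPN half of this design takes a single API call; the gateway IDs below are hypothetical, and in the hybrid design the IPsec tunnel would ride over the Direct Connect link rather than the public internet.

```python
# Sketch: create an IPsec Site-to-Site VPN connection.
import boto3

ec2 = boto3.client("ec2")

vpn = ec2.create_vpn_connection(
    Type="ipsec.1",  # the only supported VPN connection type
    CustomerGatewayId="cgw-0123456789abcdef0",  # hypothetical on-premises side
    VpnGatewayId="vgw-0123456789abcdef0",       # hypothetical AWS side
    Options={"StaticRoutesOnly": True},
)
print("VPN connection:", vpn["VpnConnection"]["VpnConnectionId"])
```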

The other options are less suitable: AWS VPN CloudHub (Option C) is designed to connect multiple remote sites to one another in a hub-and-spoke VPN topology and adds nothing for a single site-to-AWS link. VPC peering (Option D) only connects AWS VPCs and doesn't extend to on-premises environments. NAT gateway (Option E) handles outbound internet traffic from private subnets and doesn't provide encrypted links to on-premises data centers.

In summary, combining AWS Site-to-Site VPN with AWS Direct Connect offers a secure, low-latency, and compliant connection between the on-premises database and AWS-hosted application server.

Question 5:

A company must back up several Amazon DynamoDB tables twice a month—on the 15th and 25th at midnight—and keep those backups for three months to comply with data retention policies. 

Which two approaches should a security engineer implement to meet these backup and retention requirements? (Choose two.)

A. Use DynamoDB’s on-demand backup feature and set a lifecycle policy to expire backups after three months.
B. Use AWS DataSync with a backup rule specifying a three-month retention.
C. Use AWS Backup to create a backup plan with a three-month retention rule.
D. Use a cron schedule expression to trigger backups and assign DynamoDB tables to the backup plan.
E. Use a rate schedule expression to trigger backups and assign DynamoDB tables to the backup plan.

Answer: A, C

Explanation:

To comply with the company’s policy of backing up DynamoDB tables twice monthly and retaining those backups for three months, the solution must allow precise scheduling and automated retention management.

AWS Backup (Option C) is a fully managed service that simplifies centralized backup management for AWS resources, including DynamoDB. It allows the creation of backup plans with specific schedules and retention rules. You can define backup windows to match the 15th and 25th of each month and set retention to automatically delete backups after three months. This automation reduces manual effort and improves compliance with the company’s data policies.
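A minimal sketch of such a plan, assuming a hypothetical plan name and the default vault:

```python
# Sketch: AWS Backup plan that runs at midnight UTC on the 15th and 25th
# and deletes backups after roughly three months.
import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "dynamodb-twice-monthly",  # hypothetical name
        "Rules": [
            {
                "RuleName": "midnight-15th-and-25th",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 0 15,25 * ? *)",
                "Lifecycle": {"DeleteAfterDays": 90},  # ~3-month retention
            }
        ],
    }
)
print("Backup plan ID:", plan["BackupPlanId"])
```

The DynamoDB tables would then be attached to the plan with a backup selection, typically by tag or table ARN.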

DynamoDB On-Demand Backup (Option A) lets you create backups manually or programmatically via automation tools such as AWS Lambda combined with Amazon EventBridge for scheduling. Because on-demand backups are retained until explicitly deleted, the three-month expiry must itself be automated, for example with a scheduled cleanup that deletes backups older than 90 days. While this method requires more setup than AWS Backup, it provides flexibility and precise control over backup timing.
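A sketch of the scheduled on-demand backup, written as a Lambda handler that an EventBridge rule would invoke on the 15th and 25th; the table name is hypothetical, and expiry is handled separately as noted above.

```python
# Sketch: Lambda handler that takes an on-demand DynamoDB backup.
import datetime

import boto3

dynamodb = boto3.client("dynamodb")

def handler(event, context):
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d")
    return dynamodb.create_backup(
        TableName="Orders",  # hypothetical table name
        BackupName=f"Orders-scheduled-{stamp}",
    )
```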

The other options are less appropriate. AWS DataSync (Option B) is intended for data transfers between on-premises storage and AWS, not for managing DynamoDB backups. Cron schedule expressions (Option D) can target specific calendar dates but, used on their own, carry more manual overhead than AWS Backup's integrated scheduling and lifecycle management; rate schedule expressions (Option E) fire at fixed intervals and cannot target specific days of the month such as the 15th and 25th at all.

Overall, combining AWS Backup’s centralized scheduling and retention capabilities with DynamoDB’s on-demand backup feature effectively meets the company’s backup frequency and retention requirements while maintaining compliance and operational simplicity.

Question 6:

A company wants to implement scalable multi-account authentication and authorization using AWS native tools, minimizing additional user-managed components. They have AWS Organizations fully enabled and AWS IAM Identity Center (SSO) activated. 

What is the best next step for the security engineer to complete the setup?

A. Use AD Connector to create users and groups linked to IAM roles, accessed through AWS Directory Service.
B. Use IAM Identity Center default directory to create users and groups, assign them permission sets, and have users access via the IAM Identity Center portal.
C. Use IAM Identity Center default directory and link groups to IAM users in each account, with users accessing via the IAM Identity Center portal.
D. Use AWS Directory Service for Microsoft AD to create users and enable console access, linking it with IAM Identity Center for permissions.

Answer: B

Explanation:

For scalable multi-account authentication and authorization in AWS, the best approach is to leverage AWS IAM Identity Center (formerly AWS Single Sign-On) with minimal complexity and maximum use of native AWS features.

Using the IAM Identity Center default directory (Option B) allows the security engineer to create and manage users and groups directly within AWS without needing to maintain an external directory service. This centralized directory supports assigning groups to multiple AWS accounts and linking those groups to permission sets, which define fine-grained access controls based on job roles. This approach is highly scalable and eliminates the need for managing IAM users individually across accounts.
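For illustration, assigning a group to a member account through a permission set takes a single API call; every ARN and ID below is a hypothetical placeholder.

```python
# Sketch: grant an Identity Center group access to an account
# through a permission set.
import boto3

sso_admin = boto3.client("sso-admin")

sso_admin.create_account_assignment(
    InstanceArn="arn:aws:sso:::instance/ssoins-EXAMPLE",
    TargetId="111122223333",  # member account ID
    TargetType="AWS_ACCOUNT",
    PermissionSetArn="arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",
    PrincipalType="GROUP",
    PrincipalId="group-guid-EXAMPLE",  # group in the default directory
)
```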

Users access AWS accounts via the IAM Identity Center user portal, which provides a seamless single sign-on experience to all linked AWS accounts. This method leverages AWS-native services fully and avoids the overhead of external user directory management.

Option A (AD Connector) involves integrating on-premises Active Directory with AWS. While this is valid in hybrid environments, it adds complexity with external infrastructure to maintain, conflicting with the requirement to minimize user-managed components.

Option C suggests linking IAM Identity Center groups to IAM users within each account. This is inefficient as it defeats the purpose of centralized user management; it would require managing individual IAM users and permissions per account, complicating the setup.

Option D involves using AWS Directory Service for Microsoft AD, which requires deploying and managing a directory infrastructure in AWS, again adding complexity and maintenance burden.

In conclusion, Option B, using IAM Identity Center’s default directory, is the simplest, most scalable, and AWS-native method to enable multi-account authentication and authorization, meeting the company’s goals without unnecessary user-managed components.

Question 7:

A company has implemented Amazon GuardDuty and wants to automate the response to potential threats. They plan to focus first on blocking RDP brute-force attacks originating from EC2 instances within their AWS environment. 

What is the best solution to automatically block traffic from suspicious EC2 instances until the security team can investigate and remediate?

A. Configure GuardDuty to send events to an Amazon Kinesis data stream. Use Amazon Kinesis Data Analytics (Apache Flink) to process the events, send notifications via Amazon SNS, and update network ACLs to block traffic from suspicious instances.
B. Configure GuardDuty to send events to Amazon EventBridge. Deploy an AWS WAF web ACL. Use a Lambda function to send SNS notifications and add a web ACL rule blocking traffic to suspicious instances.
C. Enable AWS Security Hub to receive GuardDuty findings and forward them to Amazon EventBridge. Deploy AWS Network Firewall and use a Lambda function to add firewall rules blocking traffic from suspicious instances.
D. Enable AWS Security Hub to ingest GuardDuty findings. Configure Amazon Kinesis as an event destination for Security Hub. Use a Lambda function to replace the security group of the suspicious instance with one that blocks all traffic.

Answer: D

Explanation:

In scenarios where automation is needed to swiftly block suspicious EC2 instances detected by GuardDuty—such as RDP brute-force attacks—using a combination of AWS Security Hub, Kinesis, and Lambda is highly effective. AWS Security Hub consolidates security findings, including those from GuardDuty, into a central dashboard, simplifying event management.

When Security Hub findings are routed (through Amazon EventBridge) to an Amazon Kinesis data stream, security events arrive in near real time and can be processed immediately. A Lambda function triggered by the Kinesis stream can dynamically modify the security groups associated with the suspicious EC2 instances, replacing each instance's existing security groups with one that permits no inbound or outbound traffic. This action effectively isolates the suspicious instance without impacting other resources or requiring manual intervention.
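A minimal sketch of the remediation function, assuming the finding's instance ID has already been extracted upstream and that a quarantine security group with no rules exists:

```python
# Sketch: Lambda handler that quarantines a flagged EC2 instance by
# replacing its security groups with a no-rule quarantine group.
import boto3

ec2 = boto3.client("ec2")
QUARANTINE_SG = "sg-0123456789abcdef0"  # hypothetical quarantine group

def handler(event, context):
    instance_id = event["instance_id"]  # assumed upstream extraction
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        Groups=[QUARANTINE_SG],  # replaces all attached security groups
    )
```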

Other options are less efficient or more complex. For example, modifying network ACLs or deploying AWS WAF to block traffic can be cumbersome and may not provide the granular control needed for instance-specific blocking. AWS Network Firewall adds complexity and cost, and it might not provide the quickest automated response for this particular use case.

This solution leverages the scalability, real-time processing, and automation features of AWS services to ensure rapid containment of threats, which is critical for minimizing exposure and preventing lateral movement within the environment. It also maintains normal operations for unaffected instances, ensuring minimal disruption to the overall infrastructure.

Question 8:

A company receives a GuardDuty alert indicating anomalous behavior by an IAM user in their AWS account. A security engineer must investigate the incident while ensuring the production application remains unaffected. 

What is the most efficient approach to gather and analyze information for this security investigation?

A. Log in with read-only credentials, review the GuardDuty alert for details about the IAM user, and immediately attach a DenyAll policy to the IAM user.
B. Log in with read-only credentials, analyze the GuardDuty alert to identify the API calls involved, and use Amazon Detective to examine the activity in context.
C. Log in with administrator credentials, review the GuardDuty alert, and add a DenyAll policy to the IAM user immediately.
D. Log in with read-only credentials, review the GuardDuty alert for relevant API calls, and analyze these with AWS CloudTrail Insights and AWS CloudTrail Lake.

Answer: B

Explanation:

When responding to a GuardDuty finding related to anomalous IAM user activity, the goal is to investigate the incident thoroughly and quickly while minimizing any risk to the production environment. The best approach involves using Amazon Detective, a service designed specifically for security investigations.

Amazon Detective integrates with GuardDuty and AWS CloudTrail to provide an interactive, visual investigation experience. It helps correlate events and API calls, giving investigators a clear understanding of what happened, when, and how. Using read-only credentials ensures the security engineer does not inadvertently disrupt the production environment during the investigation, preserving system integrity.

Directly applying a DenyAll policy (options A and C) without investigation could lead to unnecessary service disruptions if the activity turns out to be benign or a false positive. Additionally, using administrator credentials unnecessarily increases risk if credentials are compromised.

While AWS CloudTrail Insights and CloudTrail Lake (option D) offer powerful analysis capabilities, Amazon Detective provides a more user-friendly and faster investigation workflow by automatically organizing relevant data and linking events to provide context. This accelerates the process of identifying the root cause and scope of the anomaly.

Thus, option B strikes the optimal balance between thoroughness, speed, and operational safety, making it the preferred choice for incident investigation in a live production environment.

Question 9:

A company needs to protect sensitive data stored in Amazon S3 buckets. The security team wants to enforce that all objects uploaded to these buckets are encrypted at rest. Which of the following is the most effective way to ensure that objects are always encrypted upon upload?

A. Use a bucket policy that denies any PutObject request if the x-amz-server-side-encryption header is not included.
B. Enable default encryption on the S3 bucket using AWS Key Management Service (AWS KMS) keys.
C. Require users to upload encrypted objects by configuring client-side encryption on all client devices.
D. Enable versioning on the S3 bucket and require encryption for previous versions.

Answer: B

Explanation:

This question tests your knowledge about enforcing encryption for Amazon S3 data at rest, a common security best practice.

  • Option A describes using a bucket policy to deny PutObject requests that do not include the x-amz-server-side-encryption header. While this can work, it relies on every client to set the header correctly; non-compliant uploads are rejected rather than transparently encrypted, which can break existing upload workflows.

  • Option B is the most effective solution. By enabling default encryption on the S3 bucket, AWS automatically encrypts every object uploaded to the bucket using the specified AWS KMS key (SSE-KMS) or Amazon S3-managed keys (SSE-S3). This approach guarantees encryption without relying on the client to include the encryption header. This is a scalable and seamless way to enforce encryption at rest.

  • Option C requires clients to implement client-side encryption before uploading objects. This is not centrally enforceable and adds complexity for client management. It also doesn't prevent unencrypted uploads if a client fails to comply.

  • Option D relates to versioning, which keeps previous versions of objects but does not enforce encryption by itself. Enabling versioning alone does not guarantee that new or existing objects are encrypted.

In summary, default encryption on the S3 bucket is the most reliable and operationally simple way to ensure that all objects are encrypted at rest.
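For illustration, default encryption can be enabled with one API call; the bucket name and KMS key alias below are hypothetical.

```python
# Sketch: enable default SSE-KMS encryption on an S3 bucket.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="sensitive-data-bucket",  # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/s3-default-key",  # hypothetical
                },
                "BucketKeyEnabled": True,  # reduces KMS request costs
            }
        ]
    },
)
```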

Question 10:

A security engineer needs to monitor AWS CloudTrail logs to detect unauthorized API calls and policy changes in a multi-account AWS Organization environment. 

Which approach would provide the most centralized and automated alerting solution with minimal operational overhead?

A. Configure CloudTrail in each member account to send logs to their respective Amazon S3 buckets and manually review logs regularly.
B. Create a single organization trail in the management account with log file validation enabled, aggregate logs into a centralized S3 bucket, and use Amazon GuardDuty to detect suspicious activity.
C. Enable AWS Config rules in each account to monitor for changes and send alerts to the security team’s email via Amazon SNS.
D. Use AWS CloudWatch Events (EventBridge) in each account to detect API calls and invoke Lambda functions that send notifications to the security team.

Answer: B

Explanation:

This question assesses your understanding of centralized logging and automated detection of security events across AWS Organizations.

  • Option A involves setting up CloudTrail separately in each account with logs in individual S3 buckets. While this provides logs, manual review is inefficient and slow, especially in multi-account setups. This approach does not scale well and increases operational overhead.

  • Option B is the best solution. Creating a single organization trail from the management account enables you to collect CloudTrail logs from all member accounts into a centralized S3 bucket. Enabling log file validation improves integrity checks of the logs. Pairing this with Amazon GuardDuty, a threat detection service that analyzes CloudTrail logs for suspicious activities like unauthorized API calls, automates the detection and alerting process. This approach reduces manual intervention and centralizes monitoring effectively.

  • Option C suggests enabling AWS Config rules, which can monitor configuration changes but does not cover all API calls or broader security anomalies. While useful, it is complementary and not sufficient alone for comprehensive API call monitoring.

  • Option D requires setting up CloudWatch Events (EventBridge) and Lambda functions in every account, which increases operational overhead. This decentralized approach complicates management and scaling compared to the centralized organization trail with GuardDuty.

To summarize, Option B offers the most centralized, scalable, and automated method to monitor and alert on unauthorized API activity and policy changes in a multi-account AWS environment.
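As a rough sketch, the organization trail could be created from the management account as follows; the bucket name is hypothetical and must already carry a bucket policy that allows CloudTrail writes.

```python
# Sketch: organization-wide CloudTrail trail with log file validation.
import boto3

cloudtrail = boto3.client("cloudtrail")

trail = cloudtrail.create_trail(
    Name="org-security-trail",
    S3BucketName="central-cloudtrail-logs",  # hypothetical central bucket
    IsOrganizationTrail=True,      # collect events from all member accounts
    IsMultiRegionTrail=True,       # capture activity in every region
    EnableLogFileValidation=True,  # integrity digests for the log files
)
cloudtrail.start_logging(Name=trail["TrailARN"])
```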

