Download AWS Certified SAP on AWS - Specialty.PAS-C01.Actual4Test.2026-04-02.130q.tqb

Vendor: Amazon
Exam Code: PAS-C01
Exam Name: AWS Certified SAP on AWS - Specialty
Date: Apr 02, 2026
File Size: 889 KB

How to open TQB files?

Files with the TQB (Taurus Question Bank) extension can be opened with Taurus Exam Studio.

Demo Questions

Question 1
A company is running its on-premises SAP ERP Central Component (SAP ECC) workload on SAP HANA. The company wants to perform SAP S/4HANA conversion of the on-premises SAP ECC on SAP HANA landscape and migrate to AWS.
Which solutions can the company use to meet these requirements? (Choose two.)
  A. Perform SAP S/4HANA conversion of the SAP ECC on SAP HANA system by using SAP Software Update Manager (SUM). Migrate to AWS by using SAP Software Provisioning Manager.
  B. Perform SAP S/4HANA conversion and migration of the SAP ECC on SAP HANA system to AWS by using SAP Software Update Manager (SUM) Database Migration Option (DMO) with System Move.
  C. Perform migration of the SAP ECC on SAP HANA system to AWS by using SAP HANA system replication for database migration and AWS Application Migration Service for migration of the SAP ECC application instances. Perform SAP S/4HANA conversion by using SAP Software Update Manager (SUM).
  D. Perform SAP S/4HANA conversion of the SAP ECC on SAP HANA system by using SAP Software Provisioning Manager. Migrate to AWS by using AWS Application Migration Service.
  E. Perform SAP S/4HANA conversion of the SAP ECC on SAP HANA system by using SAP Software Update Manager (SUM). Migrate the database to AWS by using AWS Database Migration Service (AWS DMS). Deploy SAP S/4HANA application instances.
Correct answer: B, C
Question 2
A company is deploying SAP landscapes in a single AWS account. The company must use separate VPCs to host its production environment and non-production environment. The company is using an Amazon Elastic File System (Amazon EFS) file system to host the SAP transport file systems.
An SAP engineer attempts to use AWS Launch Wizard for SAP to perform an automated SAP deployment of the production environment. A deployment failure occurs when the SAP engineer attempts to reuse the SAP transport directory share from the non-production environment. This failure did not occur in previous non-production deployments.
The SAP engineer needs to complete the deployment and ensure that no additional costs are incurred for SAP transport directories.
What should the SAP engineer do to meet these requirements?
  A. Perform a manual deployment.
  B. Set up a new SAP transport directory for the production environment. Copy all files from the non-production transport host into the production transport directory by using rsync. Continue to use separate SAP transport directories for the systems.
  C. Set up a transit gateway or direct VPC peering to make communication possible between the production VPC and the non-production VPC.
  D. Skip the SAP transport directories step to complete the deployment.
Correct answer: C
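Answer C hinges on making the two VPCs routable to each other so the existing EFS-hosted transport directory can be reused. One prerequisite for VPC peering is that the two VPCs' IPv4 CIDR blocks must not overlap. A minimal sketch of that check using Python's standard ipaddress module (the CIDR values are hypothetical placeholders, not taken from the question):

```python
import ipaddress

def can_peer(cidr_a: str, cidr_b: str) -> bool:
    """VPC peering requires non-overlapping IPv4 CIDR blocks."""
    return not ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

# Hypothetical CIDRs for the production and non-production VPCs.
print(can_peer("10.0.0.0/16", "10.1.0.0/16"))   # True: peering is possible
print(can_peer("10.0.0.0/16", "10.0.128.0/17")) # False: the ranges overlap
```

If the ranges did overlap, a transit gateway would face the same routing constraint, which is why non-overlapping addressing is typically planned up front in multi-VPC SAP landscapes.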
Question 3
A company is deploying SAP Business Suite on SAP HANA by using two Amazon EC2 bare metal instances. The company has set up a Pacemaker cluster for SAP HANA. The cluster is set up between the two instances, which are configured to use SAP HANA system replication.
An SAP engineer notices that the overlay IP address is not reachable from the application servers. The overlay IP address is only reachable locally on the database cluster. Which actions should the SAP engineer take to resolve this issue? (Choose three.)
  A. Turn off the source/destination check on each bare metal instance.
  B. Modify the security groups to ensure that the minimal ports for connectivity between the application server and the database are opened.
  C. Add a route table entry to the route tables for the subnets of both bare metal instances for the overlay IP address.
  D. Ensure that both bare metal instances are in the same subnet.
  E. Perform a failover and failback by using the Pacemaker cluster. Check whether the overlay IP address routing is functioning correctly.
  F. Move the Pacemaker cluster to EC2 VM instances instead of bare metal instances.
Correct answer: A, B, C
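The route-table fix works because an overlay IP address is deliberately chosen from outside the VPC CIDR; the route-table entry then directs that address to whichever cluster node is active. A quick sanity check of that property, sketched with Python's standard ipaddress module (the IP and CIDR values are hypothetical):

```python
import ipaddress

def is_valid_overlay_ip(overlay_ip: str, vpc_cidr: str) -> bool:
    """An overlay IP must fall OUTSIDE the VPC CIDR so a route-table
    entry can steer it to the active SAP HANA cluster node."""
    return ipaddress.ip_address(overlay_ip) not in ipaddress.ip_network(vpc_cidr)

# Hypothetical VPC CIDR and candidate overlay IPs.
print(is_valid_overlay_ip("192.168.10.10", "10.0.0.0/16"))  # True: usable as overlay IP
print(is_valid_overlay_ip("10.0.5.10", "10.0.0.0/16"))      # False: inside the VPC range
```

An address inside the VPC CIDR would be resolved by the VPC's local route and never hit the overlay route entry, which is why the address lives outside the CIDR in the first place.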
Question 4
A company is planning to implement a new SAP workload on SUSE Linux Enterprise Server on AWS. The company needs to use AWS Key Management Service (AWS KMS) to encrypt every file at rest. The company also requires that its production SAP workloads and non-production SAP workloads are separated into different AWS accounts.
The production account and the non-production account share a common SAP transport directory, /usr/sap/trans. The two accounts are connected by VPC peering.
What should the company do to achieve the data encryption at rest for the new SAP workload?
  A. Create an asymmetric KMS customer managed key in the production account. Create Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) storage for all root and SAP data. Implement encryption that uses the KMS key. Share the EFS file system from the production account with the non-production account. Import the KMS key into the non-production account to allow the production systems to access the SAP transport directory.
  B. Create a symmetric KMS customer managed key in the production account. Create Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) storage for all root and SAP data. Implement encryption that uses the KMS key. Share the EFS file system from the production account with the non-production account. Create an IAM role in the non-production account and a key policy for the KMS key in the production account to allow the non-production systems to access the SAP transport directory.
  C. Create a symmetric KMS customer managed key in the production account. Create Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) storage for all root and SAP data. Implement encryption that uses the KMS key. Share the EFS file system from the production account with the non-production account. Create an IAM role in the production account and a key policy for the KMS key in the production account to allow the non-production systems to access the SAP transport directory.
  D. Create an asymmetric KMS customer managed key in the production account. Create Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) storage for all root and SAP data. Implement encryption that uses the KMS key. Share the EFS file system from the production account with the non-production account. Create an IAM role in the non-production account and a key policy for the KMS key in the production account to allow the non-production systems to access the SAP transport directory.
Correct answer: B
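The cross-account part of the correct answer comes down to a key-policy statement on the production account's symmetric KMS key that names an IAM role in the non-production account as principal. A minimal sketch of such a statement built as plain JSON; the account ID, role name, and statement Sid are hypothetical placeholders, and a real policy would likely scope the actions and add conditions:

```python
import json

def transport_key_policy_statement(nonprod_account_id: str, role_name: str) -> dict:
    """Key-policy statement letting a role in the non-production account
    use the production KMS key for the shared EFS transport directory."""
    return {
        "Sid": "AllowNonProdTransportAccess",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{nonprod_account_id}:role/{role_name}"},
        "Action": ["kms:Decrypt", "kms:GenerateDataKey*", "kms:DescribeKey"],
        "Resource": "*",
    }

# Hypothetical non-production account ID and role name.
stmt = transport_key_policy_statement("222222222222", "NonProdSapTransportRole")
print(json.dumps(stmt, indent=2))
```

Note that in a key policy, "Resource": "*" refers to the key the policy is attached to, not to every key, which is why key policies are written this way.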
Question 5
A company is running its on-premises SAP ERP Central Component (SAP ECC) production workload on SUSE Linux Enterprise Server. The SAP ECC workload uses an Oracle database that has 20 TB of data.
The company needs to migrate the SAP ECC workload to AWS with no change in database technology. The company must minimize production system downtime.
Which solution will meet these requirements?
  A. Migrate the SAP ECC workload to AWS by using AWS Application Migration Service.
  B. Install SAP ECC application instances on SUSE Linux Enterprise Server. Use AWS Database Migration Service (AWS DMS) to migrate the Oracle database to Amazon RDS for Oracle.
  C. Migrate the SAP ECC workload to AWS by using SAP Software Provisioning Manager on Oracle Enterprise Linux.
  D. Install SAP ECC with an Oracle database on Oracle Enterprise Linux. Perform the migration by using Oracle Cross-Platform Transportable Tablespace (XTTS).
Correct answer: D
Question 6
A company plans to migrate its SAP NetWeaver environment from its on-premises data center to AWS. An SAP solutions architect needs to deploy the AWS resources for an SAP S/4HANA-based system in a Multi-AZ configuration without manually identifying and provisioning individual AWS resources. The SAP solutions architect's task includes the sizing, configuration, and deployment of the SAP S/4HANA system.
What is the QUICKEST way to provision the SAP S/4HANA landscape on AWS to meet these requirements?
  A. Use the SAP HANA Quick Start reference deployment.
  B. Use AWS Launch Wizard for SAP.
  C. Create AWS CloudFormation templates to automate the deployment.
  D. Manually deploy SAP HANA on AWS.
Correct answer: B
Question 7
A company is running an SAP Commerce application in a development environment. The company is ready to deploy the application to a production environment on AWS.
The company expects the production application to receive a large increase in transactions during sales and promotions. The application's database must automatically scale the storage, CPU, and memory to minimize costs during periods of low demand and maintain high availability and performance during periods of high demand.
Which solution will meet these requirements?
  A. Use an SAP HANA single-node deployment that runs on burstable performance Amazon EC2 instances.
  B. Use an Amazon Aurora MySQL database that runs on serverless DB instance types.
  C. Use a HyperSQL database that runs on Amazon Elastic Container Service (Amazon ECS) containers with ECS Service Auto Scaling.
  D. Use an Amazon RDS for MySQL DB cluster that consists of high memory DB instance types.
Correct answer: B
Question 8
A company is running SAP ERP Central Component (SAP ECC) on SAP HANA on premises. The current landscape runs on four application servers that use an SAP HANA database. The company is migrating this environment to the AWS Cloud. The cloud environment must minimize downtime during business operations and must not allow inbound access from the internet.
Which solution will meet these requirements?
  A. Design a Multi-AZ solution. In each Availability Zone, create a private subnet where Amazon EC2 instances that host the SAP HANA database and the application servers will reside. Use EC2 instances that are the same size to host the primary database and the secondary database. Use SAP HANA system replication in synchronous replication mode.
  B. Design a Single-AZ solution. Create a private subnet where a single SAP HANA database and application servers will run on Amazon EC2 instances.
  C. Design a Multi-AZ solution. In each Availability Zone, create a private subnet where Amazon EC2 instances that host the SAP HANA database and the application servers will reside. Shut down the EC2 instance that runs the secondary database node. Turn on this EC2 instance only when the primary database node or the primary database node's underlying EC2 instance is unavailable.
  D. Design a Single-AZ solution. Create two public subnets where Amazon EC2 instances that host the SAP HANA database and the application servers will reside as two separate instances. Use EC2 instances that are the same size to host the primary database and the secondary database. Use SAP HANA system replication in synchronous replication mode.
Correct answer: A
Question 9
A company is planning to migrate its SAP Content Server from on premises to Amazon EC2 instances. The SAP Content Server stores data in a MaxDB database. The on-premises servers run the SUSE Linux Enterprise Server operating system.
The company wants to assess the benefits of cloud deployment by performing a proof of concept. An SAP solutions architect needs to perform a rehosting of the SAP Content Server on AWS to provide highly available and resilient storage.
Which solutions will meet these requirements? (Choose two.)
  A. Configure Amazon Elastic File System (Amazon EFS) file systems for the MaxDB permanent storage. Install the nfs-utils package on the EC2 instances. Create the necessary mounts to attach the EFS file systems to the EC2 instances.
  B. Configure Amazon FSx for Lustre file systems for the MaxDB permanent storage. Create the necessary mounts to attach the FSx for Lustre file systems to the EC2 instances. Update the /etc/fstab file with the directory name, DNS name, and mount name.
  C. Configure General Purpose SSD (gp2 or gp3) or Provisioned IOPS SSD (io1 or io2) Amazon Elastic Block Store (Amazon EBS) volumes for the MaxDB permanent storage. Use the aws ec2 attach-volume AWS CLI command with device, volume ID, and instance ID to attach the volume to each EC2 instance.
  D. Configure Amazon S3 buckets for the MaxDB permanent storage. Create an IAM instance profile that specifies a role to grant access to Amazon S3. Attach the instance profile to the EC2 instances.
  E. Configure Amazon Elastic Container Service (Amazon ECS) volumes for the MaxDB permanent storage. Install the nfs-utils package on the EC2 instances. Create the necessary mounts to attach the ECS volumes to the EC2 instances.
Correct answer: A, C
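For the EFS half of the answer, the "necessary mounts" are typically persisted as an NFSv4.1 entry in /etc/fstab using the file system's DNS name. A small sketch that assembles such an entry with the NFS mount options AWS documents for EFS; the file system ID, Region, and mount point are hypothetical placeholders:

```python
def efs_fstab_entry(fs_id: str, region: str, mount_point: str) -> str:
    """Build an NFSv4.1 /etc/fstab line for an Amazon EFS file system,
    using the mount options AWS documents for EFS."""
    dns_name = f"{fs_id}.efs.{region}.amazonaws.com"
    options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
    return f"{dns_name}:/ {mount_point} nfs4 {options} 0 0"

# Hypothetical file system ID, Region, and mount point.
print(efs_fstab_entry("fs-0123456789abcdef0", "us-east-1", "/sapdb"))
```

The hard and noresvport options matter here: they keep retries going through transient network events and failovers, which is part of why EFS qualifies as the highly available, resilient storage the question asks for.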
Question 10
A company is planning to migrate its SAP Business Warehouse (SAP BW) 7.5 system on SAP HANA from on premises to AWS. The production database is 4 TB in size and has a scale-out architecture that consists of three nodes. Each node has 2 TB of memory. The company needs to keep the three SAP HANA nodes in the target architecture.
Which solution on AWS will provide the HIGHEST throughput for the SAP HANA database?
  A. Implement SAP HANA scale-out Amazon EC2 instances with default tenancy.
  B. Implement SAP HANA scale-out Amazon EC2 instances with Capacity Reservations in a cluster placement group.
  C. Implement SAP HANA scale-out Amazon EC2 instances in a spread placement group.
  D. Implement SAP HANA scale-out Amazon EC2 instances in a partition placement group.
Correct answer: B