The following architecture diagram shows the workflow for pulling SAP BW data using an Open Hub file destination to land the data in Amazon S3.
The Open Hub destination allows you to distribute data from a BW system to non-SAP data marts, analytical applications, and other applications, and to ensure controlled distribution across several systems.
Open Hub supports full and delta extraction modes, which makes it convenient to extract from InfoProviders such as InfoCubes, DSOs, ADSOs, or PSAs. Open Hub destination types include files, database tables, and third-party destinations. See the SAP Open Hub documentation for detailed information.
-> AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage.
-> Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. These include moving tape backups to the cloud, reducing on-premises storage with cloud-backed file shares, providing low latency access to data in AWS for on-premises applications, as well as various migration, archiving, processing, and disaster recovery use cases.
-> For this lab, we will use the file gateway component of AWS Storage Gateway. The file gateway presents a file share that is mapped to an S3 bucket.
-> The file share is NFS-mounted as a directory on the BW application server. The detailed steps to NFS-mount an S3 bucket on the SAP BW application server and transfer a cube to the S3 bucket are described below.
Note that you can use a similar flow to replicate from any InfoProvider, such as an ADSO.
To create a file gateway, visit the Storage Gateway console, click Get started, and choose File gateway. See the AWS Storage Gateway documentation for more setup instructions.
Then choose your host platform: Amazon EC2:
➡️ For inbound security rules, ensure that ports 443, 80, and 2049 are accessible from your desktop and from the SAP BW instance.
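If you prefer to script the security group rules, the following sketch opens the three ports with the AWS CLI. The security group ID and the two CIDR ranges (desktop IP and BW subnet) are placeholders — substitute your own values.

```shell
# Hypothetical security group ID and CIDRs -- replace with your own.
SG_ID=sg-0123456789abcdef0
for PORT in 443 80 2049; do
  # allow access from the desktop used for gateway activation
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$PORT" --cidr 203.0.113.10/32
  # allow access from the SAP BW instance's subnet
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$PORT" --cidr 10.0.0.0/24
done
```

Port 2049 is what the BW host needs for NFS; 443 and 80 are used during gateway activation and management.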
Once the instance is launched, copy the public IP address from the EC2 console and paste it in the next step.
The last step is to set the time zone and give your gateway a name (for example, BWgateway), then click Activate Gateway.
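The console activation above can also be scripted. A sketch using the AWS CLI, assuming the activation key has already been retrieved from the gateway appliance (see the Storage Gateway documentation); the region and name are example values.

```shell
# Sketch: activate the gateway from the CLI instead of the console.
# ACTIVATION_KEY comes from the gateway VM; region and name are examples.
aws storagegateway activate-gateway \
  --activation-key "$ACTIVATION_KEY" \
  --gateway-name BWgateway \
  --gateway-timezone "GMT" \
  --gateway-region us-east-1 \
  --gateway-type FILE_S3
```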
Once the gateway is activated, it detects the local disk on the volume added in the instance configuration. You can assign this disk to the cache in the next step.
➡️ Your gateway is now up and running and you can see it in the console.
Click Create file share.
➡️ Be sure to enter the name of the bucket that you created earlier. You can also configure allowed clients; for the sake of establishing the concept, we will allow all connections in this lab.
➡️ In a production environment, ensure that you only allow connections from the BW host in your AWS environment.
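The file share can also be created from the CLI, which makes the client restriction explicit. In this sketch the gateway ARN, IAM role, and client CIDR are placeholders — substitute your own values (the client list here allows only a single BW host address).

```shell
# Sketch: create the NFS file share via the AWS CLI, restricting clients
# to the BW host. ARNs, account ID, role name, and CIDR are placeholders.
aws storagegateway create-nfs-file-share \
  --client-token bw-demo-share-1 \
  --gateway-arn arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE \
  --role arn:aws:iam::111122223333:role/StorageGatewayS3Access \
  --location-arn arn:aws:s3:::sapbwdemo \
  --client-list 10.0.0.25/32
```

The `--role` is an IAM role that grants the gateway access to the bucket; `--location-arn` points at the S3 bucket backing the share.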
➡️ Take note of the Linux NFS mount command shown for the file share. We will use it on the BW host.
1) Log in to the Linux host in your SAP sandbox.
2) Create a directory: mkdir /gateway
3) Assign permissions: chmod 744 /gateway
4) Change the owner:
sudo chown -R <sid>adm:sapsys /gateway
5) Copy the NFS mount command for Linux, appending the directory name /gateway:
sudo mount -t nfs -o nolock,hard 18.xxx.xxx.xxx:/sapbwdemo /gateway
➡️ In this case, 18.xxx.xxx.xxx is the public IP of my storage gateway, sapbwdemo is my S3 bucket, and /gateway is the Linux directory that I map to the bucket.
6) Create a file or subdirectory and check that it appears in your S3 bucket.
sid-adm:/gateway # mkdir sap
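To confirm the round trip, you can check the mount and list the bucket from any machine with the AWS CLI configured (sapbwdemo is the example bucket name from the mount step). Note that the file gateway uploads asynchronously, so allow a few seconds before checking.

```shell
# Confirm the NFS mount is active on the BW host...
df -h /gateway
# ...then verify the new "sap" prefix shows up in the bucket.
aws s3 ls s3://sapbwdemo/ --recursive
```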
1) Log in to the SAP BW system through SAP GUI.
2) Enter transaction AL11.
3) Click the Configure User Directories icon as highlighted.
Enter DIR_AWS as the directory name, /gateway as the path, and All as the valid-for server name, then click Save.
Double-click the directory in AL11. It should be in sync with what you see in your S3 bucket. The next step is to create a logical file name.
➡️ Select Logical file name definition, cross-client, and create a new entry as below. Note that the logical file path we created earlier is assigned here.
➡️ A logical file ZBWAWSOUT is created and assigned to the NFS directory shared with AWS Storage Gateway.
1) Go to transaction RSA1.
2) Under Modeling, select Open Hub Destination.
3) Under the InfoArea NetWeaver Demo, right-click and select Create Open Hub Destination.
4) Provide a name, a description, and your source object (the cube) as indicated below. (This is just an example; you can choose any source.)
6) Choose File as the destination type, select Application server, and choose Logical file name as the type of file name, as indicated below.
8) Go to the field definition tab to verify the fields.
9) Check and activate the destination.
10) The destination is created.
11) Right-click the destination and select Create Data Transfer Process.
12) A pop-up as indicated below appears; confirm your selections.
14) Accept the default settings and choose Activate (feel free to change them if needed).
15) Click Activate Data Transfer Process.
16) Click the Execute tab and select Execute; a log should appear as below.
17) Verify the data in your S3 bucket. You should see two files: one with the header and one with the data. These extracts can be scheduled as a job as needed.
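Downstream consumers often want a single self-describing CSV rather than a separate header and data file. A minimal sketch of stitching the two together after downloading them (for example with aws s3 cp); the file names and columns below are stand-ins — the real names depend on your logical file definition.

```shell
#!/bin/sh
set -e
# Stand-ins for the two downloaded extract files; real names depend on
# the logical file definition (fetch them first, e.g. with aws s3 cp).
printf 'MATERIAL,PLANT,QUANTITY\n' > header.csv
printf 'M-01,1000,25\nM-02,1000,40\n' > data.csv
# Prepend the header row to the data rows to get one self-describing CSV.
cat header.csv data.csv > cube_extract.csv
cat cube_extract.csv
```

The same stitching step can be appended to the scheduled extraction job so each run leaves one ready-to-load file.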