DS2: SAP Data Services Setup

SAP Data Services => Data Source creation:

  1. Open the SAP Data Services Designer application.

  2. Right-click on your SAP Data Services project name in Project Explorer.

  3. Select New > Datastore.

  4. Fill in Datastore Name. For example, NPL.

  5. In the Datastore type field, select SAP Applications.

  6. In the Application server name field, provide the name of the SAP application server instance.

  7. Specify the access credentials. If possible, create a dedicated user in SAP for Data Services.

  8. Click OK.

The new datastore appears on the Datastores tab of the local object library in Designer.
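
If the datastore connection test fails, it can help to verify the application server host and the Data Services user outside of Designer. Below is a minimal sketch using the PyRFC library; the host name, system number, client, and user are placeholder values for illustration, not settings from this guide.

```python
from pyrfc import Connection

# Placeholder connection details; replace with your own application server
# host, instance number, client, and the dedicated Data Services user.
conn = Connection(
    ashost="sapnpl.example.com",
    sysnr="00",
    client="001",
    user="DS_USER",
    passwd="********",
)
conn.ping()  # raises an exception if the host or credentials are wrong
print("SAP application server reachable and credentials accepted")
conn.close()
```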

SAP Data Services => Target Data Store creation:

Configure a File Location object to point to the S3 Bucket

  1. Select File Formats. See the SAP documentation to learn about the File Location object in Data Services.

  2. Select File Locations, right-click, and create a new file location.

  3. Choose Amazon Cloud Storage as the Protocol, and enter the access key and secret key credentials saved earlier.

  4. Save the credentials and test the connection to make sure it works (see the sketch after this list for an independent check of the keys).
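
If the connection test in Data Services fails, the same access key and secret key can be checked independently with a short script. Below is a minimal sketch using boto3; the bucket name, region, and key values are placeholders rather than values from this guide.

```python
import boto3
from botocore.exceptions import ClientError

# Placeholder credentials and bucket; use the access key and secret key
# saved earlier and the bucket that the file location points to.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIAEXAMPLE",
    aws_secret_access_key="EXAMPLE-SECRET-KEY",
    region_name="us-east-1",
)
try:
    # Write and remove a small marker object to prove the keys allow writes.
    s3.put_object(Bucket="my-sap-extracts", Key="ds-connectivity-check.txt", Body=b"ok")
    s3.delete_object(Bucket="my-sap-extracts", Key="ds-connectivity-check.txt")
    print("S3 write access confirmed")
except ClientError as err:
    print(f"S3 access check failed: {err}")
```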

SAP Data Services => Importing ODP Objects:

The following steps import ODP objects from the source datastore for the initial and delta loads and make them available in SAP Data Services.

  1. Open the SAP Data Services Designer application.

  2. Expand the source datastore for the replication load in Project Explorer, and double-click the ODP object.

  3. Select the External Metadata option in the upper portion of the right panel. The list of nodes with available tables and ODP objects appears.
  4. Click the ODP objects node to retrieve the list of available ODP objects. The list might take a long time to display.
  5. Click the Search button.
  6. In the Search dialog, select External data in the Look in menu and ODP object in the Object type menu.
  7. Enter the search criteria to filter the list of source ODP objects.
  8. Select the ODP object to import from the list.
  9. Right-click and select the Import option.
  10. Fill in the Name of Consumer.
  11. Fill in the Name of Project.
  12. Select the Changed-data capture (CDC) option in Extraction mode.
  13. Click Import. This starts the import of the ODP object into Data Services. The ODP object is now available in the object library under the NPLT node.

➡️ For more information, see the Importing ODP source metadata section in the SAP Data Services documentation.

Now that the source and target connections are created, it is time to create the data flow.

➡️ If you are using Data Services for the first time, review this video for step-by-step instructions on how to create a data flow and load to a file.

SAP Data Services => Creating Data Flow:

  1. Open the SAP Data Services Designer application.

  2. Right-click on your SAP Data Services project name in Project Explorer.

  3. Select Project > New > Data flow.

  4. Fill in the Name field. For example, DF_S3.

  5. Click on Finish.

Create a Batch job

  1. Right-click the project and select Create new batch job.
  2. Drag and drop the data flow onto the canvas and double-click it.

Build your data flow

  1. Drag the ODP object onto the data flow workspace.

  2. Double-click the ODP object and set Initial load to No.

  3. Drag a Query object from the Transforms tab to the data flow workspace and connect it to the ODP object.

  4. Open the Query object, right-click the output schema (Schema Out), and create a file format.

  5. In the Location field, select the S3 file location saved earlier. In the directory name field, enter the folder name within your S3 bucket.

  6. A new file format object is created on the File Formats tab. Drag it onto the data flow workspace.

  7. Connect the output of the Query to the file format.

  8. Save your changes.

Execute your data flow

  1. Right-click the job name and execute the job. The initial set of records is loaded.

  2. Add more records to SNWD_PD via SEPM_PD.

  3. Run the job again; it should load the deltas (the sketch below shows one way to verify the loads in S3).
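
To confirm that the initial load and the delta run both landed in the bucket, you can list the objects under the target folder. Below is a minimal boto3 sketch; the bucket name and prefix are placeholders, and it assumes AWS credentials are already configured on the machine running the check.

```python
import boto3

# Placeholder bucket and folder; use the bucket from the file location object
# and the directory name entered in the file format.
s3 = boto3.client("s3")
response = s3.list_objects_v2(Bucket="my-sap-extracts", Prefix="snwd_pd/")

# Print the extracted files oldest first, so the initial load and each
# delta run can be identified by timestamp and size.
for obj in sorted(response.get("Contents", []), key=lambda o: o["LastModified"]):
    print(obj["LastModified"], obj["Size"], obj["Key"])
```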

➡️ The jobs can be scheduled in the background for large tables.