Reading directly from the tables of your SAP ERP system was historically impossible, not for technical reasons, but due to licensing restrictions. Today, I see more SAP customers who do have a license in place to access the database of their S/4HANA system. Still, one of the reasons SAP did not allow this in the past remains: at the database level, there are simply no datatypes available to you, because they reside in what SAP calls the Data Dictionary, or DDIC for short.
If you go through the application layer to read a table, you get the datatypes through the ABAP runtime, and one of your options for doing so is using SAP Remote Function Call (RFC) modules. That said, this blog is not about finding the right RFC-enabled function module to read tables, but about the vast potential of SnapLogic's expression language, which you can use in almost all input fields of any Snap, including on the output of SAP RFCs. So, let's dive right in and see what it can do for you.
Prerequisites
The SAP Execute Snap can only run on SnapLogic Groundplexes because it requires SAP's JCo and its native libraries. The Linux installation of our Groundplex ships with all required libraries and needs no additional configuration, while the Windows version does not. For RFC to work on a Windows-based Groundplex, you have to download JCo from SAP, install the Microsoft Visual C++ 2013 Redistributable Package, and add the folder that contains the SAP JCo native library to the Windows PATH variable, as described in our documentation.
Configuring an account for the SAP Snaps is straightforward: you can connect directly to an Application Server or via the Central Instance.
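To illustrate the difference between the two connection styles, here is a minimal sketch using JCo-style property names; all host names and values below are purely hypothetical, and the labels in the SnapLogic account dialog may be worded slightly differently:

Direct connection to an Application Server:
    jco.client.ashost = sapapp01.example.com   (application server host)
    jco.client.sysnr  = 00                     (system number)
    jco.client.client = 100                    (SAP client)
    jco.client.user   = SNAPLOGIC
    jco.client.lang   = EN

Connection via the Central Instance (Message Server):
    jco.client.mshost = sapmsg.example.com     (message server host)
    jco.client.r3name = PRD                    (system ID)
    jco.client.group  = PUBLIC                 (logon group)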
Calling the RFC Function Module
Reading an SAP table the RFC way requires an RFC-enabled function module on the SAP side and the SAP Execute Snap in SnapLogic. I am sure other function modules can directly read a specific table, but I would like to focus on RFC_READ_TABLE in this blog. This RFC-enabled function module lets you specify the table you want to read from the outside, including an optional WHERE clause. The function module has been around for decades and is used in countless projects at SAP customers. While it has restrictions that we will look at further down, I think it is an option you should consider before writing a custom function module yourself.
Using the SAP Execute Snap
The configuration of the SAP Execute Snap is pretty straightforward, as shown in the screenshot below; all you need to do is specify RFC_READ_TABLE as the function module to use, and you are done.
Ideally, you should place a Mapper Snap before and after the SAP Execute Snap to understand the function module's input and output behavior.
Once you validate this pipeline, you can use the Mapper Snap before the SAP Execute Snap to specify the table you want to read by mapping the table name as a string to the QUERY_TABLE input parameter. Optionally, the OPTIONS input array allows you to specify a WHERE clause, as shown in the screenshot below.
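As a minimal sketch, the two mappings in the Mapper could look like this, with the expression on the left and the target path on the right; the table KNA1 and the WHERE clause are hypothetical examples:

    "KNA1"                         ->  $QUERY_TABLE
    [{ "TEXT": "LAND1 = 'US'" }]   ->  $OPTIONS

Keep in mind that each TEXT line in the OPTIONS array holds at most 72 characters, so a longer WHERE clause has to be split across multiple array elements.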
Validating the pipeline a second time returns the data read from the specified table in the field DATA, with all fields packed into a single string. The restriction of the function module becomes apparent when you look at the definition of this field inside the SAP GUI in transaction SE37: the field is of type TAB512, with a length of 512 characters. This means that tables whose combined field length is greater than 512 characters will return only partial information or fail altogether.
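If you hit this limit and only need a subset of columns, one possible workaround is to restrict the returned columns through the FIELDS input table, again from the Mapper in front of the SAP Execute Snap; the column names below are hypothetical:

    [{ "FIELDNAME": "KUNNR" }, { "FIELDNAME": "NAME1" }]   ->  $FIELDS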
Transforming these strings into usable data with the correct types can then be achieved using the output table named FIELDS. This table contains the name, type, length, and offset information needed to split the string into individual fields, which we can do by leveraging the .map, .toObject, match, substr, parseInt, parseFloat, and Date.parse functions of SnapLogic's expression language. match is used to find the type in the FIELDS output table; substr returns the part of the DATA string that contains the field's data; parseInt, parseFloat, and Date.parse convert that string into the respective SnapLogic type; .toObject creates a new object with the field names as keys and the parsed content as values; and .map adds each row to an array in the SnapLogic document.
$DATA.map(d => $FIELDS
    .toObject(
        f => f.FIELDNAME,
        f => match f.TYPE {
            'I' => parseInt(d.WA.substr(f.OFFSET, f.LENGTH)),
            'N' => parseFloat(d.WA.substr(f.OFFSET, f.LENGTH)),
            'P' => parseFloat(d.WA.substr(f.OFFSET, f.LENGTH)),
            'D' => Date.parse(d.WA.substr(f.OFFSET, f.LENGTH), 'yyyyMMdd'),
            _ => d.WA.substr(f.OFFSET, f.LENGTH).trim()
        }
    )
)
Validating the pipeline a third time will present you with a perfectly structured output.
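To give you an idea, here is a purely hypothetical example of what a single output document could look like for a customer table; the field names and values depend entirely on the table you read:

    [
        { "KUNNR": "0000012345", "NAME1": "ACME Corp", "ERDAT": "2020-01-15T00:00:00.000Z" }
    ]

Character fields arrive trimmed as strings, while date fields such as ERDAT arrive as real Date objects courtesy of Date.parse.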