Where are we coming from
We have been a software partner of SAP for nearly 20 years, specializing in supply chain AddOns. Our portfolio contains different solutions: the older ones are mostly procedural code, while the newer ones are more object-oriented, with more unit testing, and so on. But the large majority of all user interfaces are ALV Grid, ALV Tree, and graphics (based on cl_gui_chart_engine). Over time we developed our own framework around these tools, so that it is easy to create new containers with ALVs, let the user decide whether to use docking containers or dialogbox containers, support multiple screens, and so on.
In essence, we had a user interface that our end users greatly appreciated for its ease of use. Although we have created additional UI5 applications in recent years, most of our users still work primarily within the SAP GUI. Our plan was therefore to transition gradually to more UI5-based applications, but the pressure to do so wasn't significant, at least not from our customer base.
The majority of our customers continue to run on-premise systems, but we also serve a significant number of private cloud customers; only one customer currently uses a public cloud system. As for software versions, most customers are still on ECC 6.0, although many are currently in S/4 conversion projects. We haven't observed a substantial shift toward public cloud scenarios, but some customers are transitioning from their on-premise systems to private cloud environments.
Steampunk and Embedded Steampunk
Upon learning about Steampunk in the context of public cloud scenarios, we decided to explore its potential. However, our assessment revealed more disadvantages than benefits for our specific situation.
Whitelisted APIs posed challenges: they were slow and not always designed for our specific needs. Data availability was also an issue, since critical data often remained inaccessible via the whitelisted APIs.
Our unique challenge lies in providing highly integrative solutions that primarily operate on SAP documents, MRP results, capacities, production orders, and more. Unlike scenarios where data is merely replicated from the ERP system, our solution often involves real-time interactions with orders, order conversions, and other dynamic processes. Unfortunately, this doesn't align well with the Steampunk approach.
In 2021, we came across the first information about Embedded Steampunk.
Direct access to (released) CDS views, which would give us the opportunity to read data quickly and without replicating it to other systems, sounded good. BAdI usage was also announced, RAP was set as the programming model, and the prospect of maybe one codeline for onPrem, private, and public cloud was also interesting.
Clean core project with SAP
And then in late 2022 we heard that SAP was looking for customers and partners that wanted to participate in a pilot project for clean core development on private cloud systems. This would give us the opportunity to gain more experience with clean core development, get a glimpse of new tools like the Landscape Portal, learn more about the changing software development process, and so on. That sounded great, so we picked a smaller part of our application, which shows details of purchase requisitions together with additional alerts, text functions, and functions for releasing, source determination, and conversion into purchase orders. That looked like a good first example with which to gather our first experiences.
So we began to discuss the approach with the SAP project team, started development, and the SAP colleagues always helped us when we were stuck or didn't know how particular parts had to be implemented to fit the clean core requirements.
Namespace and package structure
Our initial step involved creating a new namespace and package structure to house all our new code. As the project progressed and information about Tier 2 became more detailed, we also established an additional namespace and package structure specifically for Tier 2 development. In summary:
- Tier 1: We maintain a separate namespace and package structure for Tier 1 components. All packages within the Tier 1 software component use the “ABAP for Cloud Development” language version.
- Tier 2: A distinct namespace and package structure is dedicated to Tier 2 development. The language version for Tier 2 packages remains as “Standard ABAP.”
- Our old legacy AddOn continues to exist as Tier 3.
To change the ABAP language version of a software component to “ABAP for Cloud,” execute the report RSMAINTAIN_SWCOMPONENTS (see screenshot).
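To illustrate what the two language versions mean in practice, here is a minimal, hypothetical contrast of our own (MARA and I_Product serve only as examples; the variable names are invented):

```abap
DATA lv_matnr TYPE c LENGTH 40.

" Standard ABAP (Tier 2 package): both statements compile.
WRITE / 'classic list output'.     " classic UI technology, not in the cloud subset
SELECT SINGLE matnr                " direct access to a non-released table
  FROM mara
  WHERE matnr = @lv_matnr
  INTO @DATA(lv_matnr_db).

" ABAP for Cloud Development (Tier 1 package): both statements above are
" rejected by the syntax check. Data access has to go through released
" objects (for example the released CDS view I_Product instead of MARA),
" and user interfaces are built with RAP/Fiori instead of classic lists.
```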
Looking for the released APIs
The next step was to look for released APIs through which we could read our data. For this we normally start on api.sap.com, open the product "SAP S/4HANA Cloud Private Edition", and check the CDS Views tab. This often helps, but sometimes it can get complicated to find the correct object. For example, we were looking for the missing part indicator of reservations: we quickly found reservation data, but it didn't offer the missing part indicator.
So we created a Customer Influence request (CI request) to get this field added to the existing CDS view. But the reply was that we should use the CDS views of the planned order and the production order, because the information is available there. Additionally, we created a little helper report into which we can enter the old table and field name, and which then checks which CDS views contain this field and whether they are released.
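The helper report is essentially a where-used lookup in the ABAP Dictionary. Below is a minimal sketch of the idea; DD27S and DDLDEPENDENCY are classic dictionary tables, while the release-state lookup via ARS_W_API_STATE is an assumption that should be verified on your own release, and newer CDS view entities without a generated SQL view are not covered by this approach:

```abap
REPORT z_find_released_views.

PARAMETERS: p_tab TYPE tabname   OBLIGATORY,   " old table name, e.g. RESB
            p_fld TYPE fieldname OBLIGATORY.   " old field name, e.g. XFEHL

" 1) Find DDIC views that map the given table field to one of their fields.
SELECT viewname FROM dd27s
  WHERE tabname   = @p_tab
    AND fieldname = @p_fld
  INTO TABLE @DATA(lt_views).

LOOP AT lt_views INTO DATA(ls_view).
  " 2) Map the generated SQL view back to its DDL source (the CDS view).
  SELECT SINGLE ddlname FROM ddldependency
    WHERE objectname = @ls_view-viewname
      AND objecttype = 'VIEW'
    INTO @DATA(lv_ddl).
  IF sy-subrc <> 0.
    CONTINUE.
  ENDIF.

  " 3) Check for a release contract. Assumption: on our release the API
  "    states can be read from ARS_W_API_STATE; verify before relying on it.
  SELECT COUNT(*) FROM ars_w_api_state
    WHERE object_key = @lv_ddl.
  DATA(lv_state) = COND string( WHEN sy-dbcnt > 0
                                THEN `released`
                                ELSE `not released` ).
  WRITE: / lv_ddl, lv_state.
ENDLOOP.
```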
Once it became clearer how the process works with Tier 2, the Cloudification Repository Viewer also proved helpful, because it lets you easily look up whether an object is already released for Tier 1 or Tier 2.
We additionally decided to create an internal list to keep track of all our topics, the successor objects, the CI request numbers for public and private cloud, and so on.
In total, we will most likely have to create a few hundred CI requests until most of our application is transferred to the clean core, and we will have to see whether some parts of our application simply won't work in a clean core environment.
Time delays and non-strategic solutions
CI requests take time… The response times for our CI requests varied considerably: some got early feedback, some took several weeks, and the majority didn't get a positive result immediately. Many required additional information on why the data is important for us, and some were declined because the data belongs to solutions that are not strategic for SAP but are still used by our customers. This is of course problematic for us, because we cannot offer our customers a solution if we don't get the data, even if the SAP solution is not officially obsolete.
Additionally, it can take quite a long time even when SAP is willing to modify or create an API for us. Let's sketch an example:
- We want to offer a solution to private cloud and onPrem customers and an API is missing
- We request this API on May 1st, 2024
- SAP has some questions, but let's say we get positive feedback two months later. After that, it of course takes time to develop the API, and this can collide with the deadlines of specific SAP release cycles. But let's assume it becomes available directly in the next private cloud and the next onPrem release
- With the current two-year onPrem release cycle, this would be fall 2025
- To use it for our own developments, we would have to upgrade our own development system to this level, and would ideally ship the first product using the API in 2026
- This scenario is quite optimistic; depending on SAP's release cycles and our own, it could easily be later…
And last but not least, the customer also has to be on SAP release 2025 and have our latest release installed. So from today's point of view, it takes additional years until maybe half of our customers can use this function…
Tier 2
At least for some of these problems, Tier 2 can be a solution. Tier 2 basically means that SAP allows us to use functionality that was not designed for clean core but may still be used until SAP has a new API available. Old function modules, database tables, and so on can get a Tier 2 release, so instead of waiting as in the scenario above, we can use them directly via Tier 2, while preparing our Tier 2 code so that it can later easily be replaced with the new API (see the sketch after the following list).
Tier 2 will be a separate software component that contains only the necessary Tier 2 parts and releases its objects for the Tier 1 and Tier 3 software components. This basically results in the following installation sequence for customers:
- Tier 2
- Tier 1
- Tier 3 (old legacy AddOn, until we have transferred all of its functions and applications into Tier 1 and Tier 2)
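To make the wrapper idea concrete, here is a minimal sketch (hypothetical names, not our productive code) based on the missing part indicator mentioned above: a Tier 2 class reads RESB directly and is released for use by the Tier 1 software component; once a released API exists, only the implementation of this class has to change.

```abap
" Lives in the Tier 2 software component (language version: Standard ABAP)
" and is released for use by the Tier 1 software component.
CLASS zcl_t2_missing_parts DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    TYPES ty_rsnum TYPE n LENGTH 10.               " reservation number
    TYPES: BEGIN OF ty_item,
             rsnum TYPE ty_rsnum,
             rspos TYPE n LENGTH 4,                " reservation item
             xfehl TYPE abap_bool,                 " missing part indicator
           END OF ty_item,
           ty_items TYPE STANDARD TABLE OF ty_item WITH EMPTY KEY.
    METHODS read_missing_parts
      IMPORTING iv_rsnum        TYPE ty_rsnum
      RETURNING VALUE(rt_items) TYPE ty_items.
ENDCLASS.

CLASS zcl_t2_missing_parts IMPLEMENTATION.
  METHOD read_missing_parts.
    " Direct access to the non-released table RESB is allowed here,
    " because this package uses the Standard ABAP language version.
    SELECT rsnum, rspos, xfehl
      FROM resb
      WHERE rsnum = @iv_rsnum
      INTO CORRESPONDING FIELDS OF TABLE @rt_items.
    " When a released CDS view exposes XFEHL one day, only this SELECT
    " is exchanged; Tier 1 callers keep the same method signature.
  ENDMETHOD.
ENDCLASS.
```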
Success is a process
Since our AddOn is quite big and in many parts deeply integrated into the SAP system, this is not a fast and easy conversion; it will take years. Our current plan is to transfer functionality year by year into the new solution and certify it. In the first years, our customers will normally install both solutions, the old legacy AddOn and the clean core solution, but in a few years the old legacy AddOn may no longer be necessary. It is currently not completely clear to us how much data we will provide in the BTP layer; there might also be some additional dashboards and the like on that level. But since our applications are highly integrative, the data part will remain on-stack almost all of the time.
In the following graphic you can see the evolution of the feature scope we are offering, where the percentage of functions offered with a clean core certification increases over the years.
Migrating applications and migrating code
Migrating our solutions into the cloud is a challenge on multiple levels. For new applications it might be a bit easier, because you can design the concept directly around the available data and the UI5 possibilities.
It gets more complicated when we look at old applications that still run with ALVs. On the one hand, it is not uncommon that not all of the data is available through released APIs; on the other hand, even where the data is available, automatic checks of the current code are often difficult.
When I analyze the source code of an old application, all the data elements, structures, and so on that are connected to the old framework (for example the ALVs) are not really relevant for me, because in a new implementation there will be no ALV for the output. I am mainly interested in the selection and processing of the data. If I am lucky, data and framework are separated, but at least in our case they are not separated by packages or anything similar, so I cannot restrict my object list for ATC checks to the data level.
In our case the separation normally runs along methods and form routines, so we checked these objects manually to see which data we are reading, and looked for matching APIs afterwards. Since our solutions contain roughly 5.8 million lines of source code, we don't do this all at once. Each of our development teams picks a pilot application out of its product scope, which is transformed first. Since the first transformations will be slower due to the learning curve, this pilot application shouldn't be too large. A simple tokenizer-based scan can support these manual checks, as sketched below.
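The following is a rough sketch of such a scan (a heuristic, not a full parser; the report name is invented): it tokenizes a program with SCAN ABAP-SOURCE and collects the tables behind SELECT ... FROM and JOIN, producing a first worklist that can then be compared against the released objects or the cloudification repository.

```abap
REPORT z_scan_table_access.

PARAMETERS p_prog TYPE progname OBLIGATORY.    " old application to analyze

DATA: lt_source TYPE STANDARD TABLE OF string,
      lt_tokens TYPE STANDARD TABLE OF stokes,
      lt_stmts  TYPE STANDARD TABLE OF sstmnt,
      lt_tables TYPE SORTED TABLE OF string WITH UNIQUE KEY table_line.

READ REPORT p_prog INTO lt_source.
SCAN ABAP-SOURCE lt_source TOKENS INTO lt_tokens STATEMENTS INTO lt_stmts.

LOOP AT lt_stmts INTO DATA(ls_stmt).
  " Only look at statements that start with SELECT.
  READ TABLE lt_tokens INDEX ls_stmt-from INTO DATA(ls_first).
  IF sy-subrc <> 0 OR ls_first-str <> 'SELECT'.
    CONTINUE.
  ENDIF.
  " Heuristic: the token after FROM or JOIN is taken as a table name.
  DATA(lv_take_next) = abap_false.
  LOOP AT lt_tokens FROM ls_stmt-from TO ls_stmt-to INTO DATA(ls_token).
    IF lv_take_next = abap_true.
      INSERT CONV string( ls_token-str ) INTO TABLE lt_tables.
      lv_take_next = abap_false.
    ELSEIF ls_token-str = 'FROM' OR ls_token-str = 'JOIN'.
      lv_take_next = abap_true.
    ENDIF.
  ENDLOOP.
ENDLOOP.

LOOP AT lt_tables INTO DATA(lv_table).
  WRITE / lv_table.    " compare this worklist against the released APIs
ENDLOOP.
```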
It is problematic when we need, say, 20 unreleased objects for one of the pilot applications and don't get all the important parts released (or get them released at different times), so we also created CI requests covering multiple objects. Of course this only makes sense if all the objects belong to the same SAP module, because otherwise the handling on the SAP side would be difficult. But within one SAP module it can work out: we might get the objects released at the same time, and it is immediately clear that we have a problem if only some of them are successful.
Outlook
Looking at the clean core project so far, we still think that this is the way to go, at least for solutions that are tightly coupled with ERP processes and data. Otherwise it might make sense to check whether the solution should instead be created on the BTP.
Clean core on-stack development looks like a good replacement for classical on-stack development. Of course, the topic is still quite new and there are still some problems, but we can see that SAP is trying to build up a good development and delivery process for clean core.