Have you ever used the REDUCE keyword to solve a real task in your daily work? For me, not until today.
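For readers who have not met this constructor expression before, here is a minimal sketch of REDUCE (available since ABAP 7.40 SP08), folding an iteration into a single value; the variable names are mine:

```abap
" Sum the integers 1..10 with REDUCE: INIT declares the accumulator,
" NEXT is evaluated once per iteration step.
DATA(lv_sum) = REDUCE i( INIT s = 0
                         FOR  n = 1 UNTIL n > 10
                         NEXT s = s + n ).
" lv_sum is now 55
```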
There is a table CRM_JSTO which stores each object GUID together with the name of the status object it uses.
My colleagues asked me to generate statistics about how many records exist for each combination of object type and status profile. The result should be sorted by occurrence count in DESCENDING order.
In my system the table CRM_JSTO has 462325 entries.
Here is a peek at my result, which shows, for example, that object type COH ( CRM Order Header ) with status profile CRMACTIV is used 30489 times.
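For reference, the same statistic could also be pushed down to the database in a single aggregate SELECT (a sketch in 7.40+ Open SQL syntax; the target variable name is mine, and the point of this post is of course to compute it in ABAP to exercise REDUCE):

```abap
" Let the database count and sort: one result row per obtyp/stsma combination.
SELECT obtyp, stsma, COUNT( * ) AS cnt
  FROM crm_jsto
  GROUP BY obtyp, stsma
  ORDER BY cnt DESCENDING
  INTO TABLE @DATA(lt_statistics).
```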
As a first attempt I get the result via the following report, using the REDUCE approach:
REPORT zreduce1.

DATA: lt_status TYPE TABLE OF crm_jsto.

SELECT * INTO TABLE lt_status FROM crm_jsto.

DATA(lo_tool) = NEW zcl_status_calc_tool( ).

lo_tool = REDUCE #( INIT o = lo_tool
                         local_item = VALUE zcl_status_calc_tool=>ty_status_result( )
                    FOR GROUPS <group_key> OF <wa> IN lt_status
                    GROUP BY ( obtyp = <wa>-obtyp stsma = <wa>-stsma ) ASCENDING
                    NEXT local_item = VALUE #( obtyp = <group_key>-obtyp
                                               stsma = <group_key>-stsma
                                               count = REDUCE i( INIT sum = 0
                                                                 FOR m IN GROUP <group_key>
                                                                 NEXT sum = sum + 1 ) )
                         o = o->add_result( local_item ) ).

DATA(ls_result) = lo_tool->get_result( ).
The source code of ZCL_STATUS_CALC_TOOL can be found on my GitHub.
After finishing this implementation I realized that it is actually not necessary to calculate the size of each group manually in ABAP. So I have another solution: the count of each group is calculated directly by the kernel via the keyword GROUP SIZE.
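The GROUP SIZE idea can be sketched as follows. This is a standalone snippet, not the exact implementation inside ZCL_STATUS_CALC_TOOL: the line type ty_status_result and its fields are taken from the post's class, while the remaining names are mine:

```abap
" Group lt_status by object type / status profile; the SIZE component is
" filled by the kernel with the number of members in each group, so no
" inner REDUCE over the group members is needed.
DATA lt_result TYPE STANDARD TABLE OF zcl_status_calc_tool=>ty_status_result.

LOOP AT lt_status ASSIGNING FIELD-SYMBOL(<wa>)
     GROUP BY ( obtyp = <wa>-obtyp
                stsma = <wa>-stsma
                size  = GROUP SIZE )
     ASSIGNING FIELD-SYMBOL(<group>).
  APPEND VALUE #( obtyp = <group>-obtyp
                  stsma = <group>-stsma
                  count = <group>-size ) TO lt_result.
ENDLOOP.

SORT lt_result BY count DESCENDING.
```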
Finally I use the following report to compare the performance of the two solutions:
REPORT z.

DATA: lt_status TYPE zcl_status_calc_tool=>tt_raw_input.

SELECT * INTO TABLE lt_status FROM crm_jsto.

DATA(lo_tool) = NEW zcl_status_calc_tool( ).

zcl_abap_benchmark_tool=>start_timer( ).
DATA(lt_result1) = lo_tool->get_result_traditional_way( lt_status ).
zcl_abap_benchmark_tool=>stop_timer( ).

zcl_abap_benchmark_tool=>start_timer( ).
lo_tool = REDUCE #( INIT o = lo_tool
                         local_item = VALUE zcl_status_calc_tool=>ty_status_result( )
                    FOR GROUPS <group_key> OF <wa> IN lt_status
                    GROUP BY ( obtyp = <wa>-obtyp stsma = <wa>-stsma ) ASCENDING
                    NEXT local_item = VALUE #( obtyp = <group_key>-obtyp
                                               stsma = <group_key>-stsma
                                               count = REDUCE i( INIT sum = 0
                                                                 FOR m IN GROUP <group_key>
                                                                 NEXT sum = sum + 1 ) )
                         o = o->add_result( local_item ) ).
DATA(lt_result2) = lo_tool->get_result( ).
zcl_abap_benchmark_tool=>stop_timer( ).

ASSERT lt_result1 = lt_result2.
The comparison result shows that the second solution ( LOOP AT GROUP ) is much more efficient than the REDUCE version, which makes sense since it avoids the group size calculation in the ABAP layer.
Uah, not only demonstrating REDUCE (I wasn't aware functional-programming instructions were already in there), but also comparing it with LOOP AT GROUP. Too cool!