Database Tables


In ABAP Dictionary, the term "database table" (or table for short) is the database-independent definition of a database table. In Open SQL, only those database tables can be accessed that are defined in ABAP Dictionary.

The name of a database table in ABAP Dictionary can have a maximum of 16 characters, can consist of letters, numbers, and underscores, must start with a letter, and can be prefixed by a namespace prefix (/.../) from a prefix namespace. The name is in the namespace of the data types in ABAP Dictionary.

The definition of a database table consists of the following:
  • A flat, non-nested structure in ABAP Dictionary with its technical and semantic attributes
  • Additional technical attributes of the database table
  • Additional semantic attributes of the database table
The most common table category of database table is the transparent table. When the table is activated, the platform-specific SQL DDL statement CREATE TABLE generates a physical database table (together with all indexes not explicitly excluded) in the database schema SAPsid of the central AS ABAP database. Here, sid stands for the system ID (the name of the AS ABAP). The predefined types in ABAP Dictionary used to define the components of the structure are implemented as database-specific types. The definition of a table in the database can be displayed in ABAP Dictionary by choosing Utilities → Database Object. Alongside regular transparent tables, the following categories also exist:
  • Special global temporary tables (GTTs) for saving temporary data within a work step
  • Pooled tables and cluster tables that are grouped in other tables and do not exist as individual variants of database tables on the database
The order of the fields in ABAP Dictionary and on the database does not need to match. More specifically, the order of the table fields (with the exception of key fields) of existing database tables in the dictionary can be modified without modifying the database object. When using Open SQL, this is handled appropriately in the database interface and the order in ABAP Dictionary always applies. When using Native SQL, the order of the columns in the database system must be applied explicitly. The results are placed in the same order as in ABAP Dictionary (if possible) only if AMDP is used.

◐ Technical attributes of database tables

The technical attributes of a database table include the attributes of its structure and attributes specific to the table.

1. Technical attributes of the structure of a database table

The following technical attributes are mostly the same as those of a structure in ABAP Dictionary:

►Table fields (columns)

A table field is a component of the structure of the database table and describes the name, the data type, and any semantic attributes of a field of a database table. A table field is a component of the structured data type of the table and the associated rules apply to the component name and component type, with the following restrictions:
  • The name of a table field cannot be a reserved name. The reserved names that cannot be used are in the database table TRESE. 
  • A table field always has one elementary data type. Reference types, substructures, and table types are not possible. The elementary data type is specified directly for the table field or by using a data element and is always based on a predefined type in ABAP Dictionary. A table field has a platform-specific type in the physical database table. For each platform, there is a mapping between the elementary types in ABAP Dictionary and the platform-specific types. If only Open SQL is used, this mapping is transparent for ABAP programmers and only the predefined types in ABAP Dictionary and their ABAP types are relevant for the ABAP program.
  • A table can have a maximum of 749 fields. The total of all field lengths is restricted to 4030 bytes. In the case of deep fields such as STRING, RAWSTRING, LRAW, and LCHR, only the references are counted and not the actual content.
  • Fields with the type CHAR can have a maximum of 1333 characters.
  • Any fields with the types LRAW or LCHR must be located at the end of the table and only one field of this type is permitted in each table. Furthermore, a field with the type INT2 or INT4 (for the maximum length of the field) must be located directly in front of this field.
  • Other rules apply to the fields in pooled tables and cluster tables.
►Include structures

Structures (but not database tables or views) can be included in a database table as include structures. A structure can only be included in a table if its components meet the table field prerequisites above. In particular, an included structure cannot contain any substructures. Any naming conflicts must be bypassed by defining suffixes. As in regular structures, the include can be assigned the name of a group.

When a structure is included, all its fields can be flagged as key fields. The include structure must then be inserted in the existing key fields or directly after them. Individual fields of an included structure cannot themselves be key fields.

When an include structure is included, the flag for initial values can be set for the entire structure. In this case, those fields in the include structure for which this attribute is set also keep their attribute. If the flag for initial values is not set for the included include structure, none of its fields have this attribute.

The fields of an included include structure are still part of the structure. They can only be edited using the structure and any changes made to the structure are applied to all including database tables. The fields of an included include structure can also be transformed to direct fields of the database table that are no longer associated with the originally included structure.

►Enhancement category

The same applies to the enhancement category of database tables as to the enhancement category of structures.

Notes:
  • Database tables cannot have nested structures because these structures are not possible in every supported database.
  • If it meets the prerequisites above, an existing structure can be transformed to a database table. Conversely, a database table can also be transformed to a regular structure (this deletes the database object).
  • The length of the structure is restricted in such a way that the requirements of all supported database platforms are met. At SAP, tables can be flagged to allow this restriction to be exceeded on database platforms that provide the necessary features. When tables flagged in this way are accessed using Open SQL, a syntax check warning occurs. The warning indicates that a cross-platform replacement implementation must be made. The class CL_ABAP_DBFEATURES can be used to identify whether the current database system supports exceeding this restriction, as shown in the sketch below.
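
The following minimal sketch shows what such a check can look like with CL_ABAP_DBFEATURES. The feature constant CALL_DATABASE_PROCEDURE (AMDP support) is used only as a stand-in here; the constant relevant for the extended length restriction must be taken from the documentation of the class.

IF cl_abap_dbfeatures=>use_features(
     requested_features = VALUE #(
       ( cl_abap_dbfeatures=>call_database_procedure ) ) ).
  "Feature supported by the current database system
  cl_demo_output=>display( `Feature supported` ).
ELSE.
  "A cross-platform replacement implementation is needed here
  cl_demo_output=>display( `Feature not supported` ).
ENDIF.
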
2. Specific technical attributes of a database table

The following technical attributes are provided specifically for database tables. They define attributes of the physical database table and how the database interface is used for access. Apart from the activation type, these technical settings are a standalone object and can be activated and transported separately from the table.

➤ Table category

The default table category is the transparent table. In transparent tables, the database object has an identically named variant with the same columns as in the definition in ABAP Dictionary. The fields of transparent tables can be edited using Native SQL as well as using Open SQL. Outside AS ABAP, they can be edited using the programming interface of the database.

ABAP Dictionary makes it possible to transform database tables from one category to another. To create a database table of a different category, a transparent table must be created first and then transformed into one of the following table categories:
  • Global temporary table
A global temporary table (GTT) is a special transparent table used exclusively as a repository of temporary data during a database LUW.
  • Pooled tables and cluster tables
Pooled tables and cluster tables are stored in a table pool or table cluster and do not exist as individual database tables on the database.

➤ Data class

For the database platforms Oracle and Informix, the data class determines the physical area (tablespace) of the database in which a database table is created (otherwise it is ignored). The most important data classes for application data are:
  • APPL0 (master data)
Data that is rarely changed. One example of master data is data in a table of addresses, such as names, postal addresses, and telephone numbers.
  • APPL1 (transaction data)
Data that is changed a lot. One example of transaction data is warehouse stock that changes each time an order is placed.
  • APPL2 (organizational data)
Customizing data specified when the system is configured and then changed only rarely. One example is the table T005 that contains the country codes.
  • USR and USR1 (customer data)
Database tables for customer developments in a separate tablespace.

There are also other data classes for system tables used internally by the runtime environment, such as SDIC for ABAP Dictionary tables.

➤ Size category

The size category determines the size of the initial memory reserved for the table on the database. Values between 0 and 9 can be specified. Each value is assigned an expected number of table rows.
 Size Category   Expected Rows
 0               0 to 1,000
 1               1,000 to 4,200
 2               4,200 to 17,000
 3               17,000 to 68,000
 4               68,000 to 270,000
 5               270,000 to 540,000
 6               540,000 to 1,000,000
 7               1,000,000 to 2,100,000
 8               2,100,000 to 4,300,000
 9               4,300,000 to 170,000,000

If the initial space reserved is exceeded, a new memory area is added implicitly in accordance with the chosen size category.

Note: 

A size category should be chosen that creates neither too many small memory areas nor memory areas that are too big.

➤ Logging

If logging is activated, any changes to table entries are logged in the log table DBTABLOG. This also requires the profile parameter rec/client to be set accordingly. This profile parameter can have the following values:
  • ALL
Logging is cross-client.
  • clnt1, clnt2, ...
Changes are logged only in the specified clients clnt1, clnt2, ...
  • OFF
No logging.

The logs can be analyzed using the transaction "Table History" (transaction SCU3). Logging takes place independently of updates.

If there are more than 250 characters in the table key or if a full table row has more than 16000 bytes, the table cannot be logged. The lengths of fields with the predefined types STRING, RAWSTRING, LCHR, and LRAW are counted in full here.

Notes: 

1. If logging is activated, access to the database table slows down accordingly. If logging is activated for many database tables, lock situations can arise for the log table.
2. No data can be written to a database table with logging enabled using the statement INSERT with a subquery.
3. The documented method IS_LOGGING_ON of the system class CL_DBI_UTILITIES can be used to verify whether logging is currently switched on for a database table.
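
A minimal sketch of such a check follows. It assumes that IS_LOGGING_ON takes the table name as its only required argument and returns a boolean-like value; the exact signature should be taken from the class documentation.

"Assumption: table name as the only required argument, boolean-like result
IF cl_dbi_utilities=>is_logging_on( 'T005' ) = abap_true.
  cl_demo_output=>display( `Logging is switched on for T005` ).
ELSE.
  cl_demo_output=>display( `Logging is switched off for T005` ).
ENDIF.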

➤ SAP buffering

It is possible to specify whether SAP buffering is active for a database table and also how it is performed.
  • Buffering permission
  • Buffering type
Buffering permission

SAP buffering can be permitted for a database table as follows:
  • Buffering not allowed
SAP buffering is not performed for the table. Open SQL always accesses the current data of the database table directly. This setting also dictates that SAP buffering should not be activated for this table in any other systems due to the way it is used.
  • Buffering allowed, but switched off
SAP buffering is not performed for the table when delivered. This setting dictates, however, that buffering is possible in principle and can be activated in other systems depending on how the table is used.
  • Buffering switched on
SAP buffering is performed for the table. If possible, Open SQL statements access the SAP buffer in the shared memory instead of accessing the table directly.

Note:

Certain prerequisites, some dependent on the buffering type, must be met before Open SQL can access the SAP buffer instead of accessing the database table directly.

Buffering type

The buffering type determines which data is loaded from the table to the SAP buffer when a table entry is accessed and buffering is switched on. The possible buffering types are:
  • Single record buffering
  • Generic buffering
  • Full buffering
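
The following minimal sketch shows the effect from the program side, assuming that T005 (a customizing table, see the data class APPL2 above) is fully buffered in the system at hand. The Open SQL addition BYPASSING BUFFER forces direct database access instead of a read from the SAP buffer.

"Served from the SAP buffer if the buffering prerequisites are met
SELECT * FROM t005
         INTO TABLE @DATA(countries_buffered).

"Forces direct database access, bypassing the SAP buffer
SELECT * FROM t005 BYPASSING BUFFER
         INTO TABLE @DATA(countries_database).
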
➤ Storage type

The storage type only applies if the current database is an SAP HANA database. The specified storage type is ignored by other database systems. In this case, the platform-specific storage type is used. The following settings can be defined for the SAP HANA database:
  • Column store
This setting should be used in the following cases:
    • The database table is designed for application data analyzed in SAP HANA.
    • The database table contains a very large number of rows and the storage type column store provides better compression.
    • The database table has a full text index.
    • The database table contains table fields with HANA data types that are supported only by the storage type column store.
  • Row store
This setting should only be used in SAP HANA databases if the database table is used mainly for frequent single row accesses. This is the only possible setting for global temporary tables (GTTs).
  • Undefined
    • This setting cannot be used for transparent tables. Column store or row store must be specified for these tables instead.
    • This is the only setting possible for pooled tables and cluster tables. Pooled tables and cluster tables are not defined as database tables on the database, which means there is no point in specifying a fixed storage type. The fixed storage type is set when pooled tables and cluster tables are transformed to transparent tables. Column store is usually the recommended type here.
➤ Activation type

The activation type is not relevant for database tables in application programming. In these tables, the default value 00 can be kept. In the default setting, a database table can be activated interactively in the ABAP Dictionary tool.

The following values for the activation type only apply to those database tables required by the ABAP runtime environment itself (and hence only to internal SAP developments):

1. Activation type 01

The runtime object of the database table must be created using an ABAP kernel program before the table is activated in ABAP Dictionary. This may make it impossible to modify and activate important system tables directly.

2. Activation type 02

The database table is used in the ABAP kernel, but the runtime object can be activated (and hence created) in ABAP Dictionary. It may be necessary to adjust the consumer program of the kernel. This is indicated by a note in the activation log.

3. Activation type 10

The database table is required before other objects are modified as part of a transport.

◉ Semantic Attributes of Database Tables 


The semantic attributes of a database table include the attributes of its structure and attributes specific to the table.
  • Semantic attributes of the structure of a database table
  • Specific semantic attributes of a database table
➤ Structure-Specific Semantic Attributes of Database Tables

The following semantic attributes are the same as those of a structure in ABAP Dictionary:
  • Short text
  • Documentation
  • Short text of table fields
  • Output style of table fields
  • Reference field for components of the predefined data types CURR and QUAN. If a field in a database table is used as a reference field in a different table, the field cannot be deleted.
  • Search help

➤ Table-Specific Semantic Attributes of Database Tables

The following semantic attributes are provided specifically for database tables.

1. Key Fields of Database Tables


At least one table field must be flagged as a key field in the structure of a database table. The table fields flagged as the key field become the primary key of the table. The key fields of a table must be located together at the beginning of the table, which means that no non-key fields are allowed between two key fields. The primary key comprising the key fields is a unique identification for a table row. If other combinations of table fields identify a table row uniquely, they are called key candidates. The primary index is created for the primary key of the table. This index provides optimized key access to the table rows when values are specified for the key fields.

A maximum of 16 key fields are allowed per table and they can contain a maximum of 900 bytes. If a key contains more than 120 bytes, the following restrictions apply:
  • Table contents cannot be transported by specifying the complete key values. Instead, the generic key values must be specified with a maximum length of 120 bytes.
  • The table cannot be used as the basis table of a lock object.
The following predefined data types are not allowed for key fields: DF16_SCL and DF34_SCL (both obsolete) plus FLTP, STRING, RAWSTRING, LCHR, and LRAW.

Note: The key fields are restricted in such a way that the requirements of all supported database platforms are met. At SAP, tables can be flagged to allow these restrictions to be exceeded on database platforms that provide the necessary features. When tables flagged in this way are accessed using Open SQL, a syntax check warning occurs. The warning indicates that a cross-platform replacement implementation must be made. The class CL_ABAP_DBFEATURES can be used to identify which restrictions are supported by the current database system.

2. Client-Dependence of Database Tables


If the first column of a database table is a key field of the predefined type CLNT, the table is client-specific. In this case, the column contains the client ID evaluated in client copies and in automatic client handling in Open SQL.

Note: Generally, tables in the delivery class A are client-specific, whereas system tables in the delivery classes E, S, and W are cross-client.
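
A minimal sketch of the automatic client handling, again using the client-specific table T005. The addition USING CLIENT replaces the implicit restriction to the current client with an explicitly specified client (in older releases, the addition CLIENT SPECIFIED serves a similar purpose).

"Implicitly restricted to the current client
SELECT land1 FROM t005
       INTO TABLE @DATA(countries_current_client).

"Reads client 000 instead of the current client
SELECT land1 FROM t005 USING CLIENT '000'
       INTO TABLE @DATA(countries_client_000).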

3. Flag for Initial Values in Database Tables


Table fields of database tables can be given a flag for initial values. On the database, a field with this flag is created as NOT NULL. When a new field is inserted into an existing database table, this flag can be used to assign the type-friendly initial value to this field in all rows. This is always the case in key fields.

If this flag is not set when a new field is inserted into an existing table, the field is given the null value in all rows. This value does not have a counterpart in ABAP and, in Open SQL, can only be queried using the special WHERE condition IS [NOT] NULL.
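
A minimal sketch of such a query, assuming a hypothetical table ZDEMO_TAB with a character-like column ZNEW_COL that was added later without the initial value flag:

"Rows in which the column added later still contains the null value
SELECT * FROM zdemo_tab
         WHERE znew_col IS NULL
         INTO TABLE @DATA(rows_with_null_values).

"Assigning a value so that the rows can be found with regular comparisons again
UPDATE zdemo_tab SET znew_col = ' ' WHERE znew_col IS NULL.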

This flag is only needed when new fields are inserted, and the initial value is then set. This is of particular importance when tables are converted.

If the flag is set for an entire include structure, it only applies to those structure components where the flag is also set and not to all components. If the flag is not set for an include structure, all flags set in the included structure are ignored.

Notes:
  • The flag cannot be set for data types without an initial value, namely LCHR, LRAW, RAW, and the obsolete VARC plus NUMC with a length of 70 or greater.
  • In tables with many rows, it can take a long time to set the type-friendly initial value and the initial value flag should only be used if absolutely necessary or if the table has a small number of entries.
  • Generally, table fields on the database are created as NOT NULL even if the flag for the initial value is not set in ABAP Dictionary:
    • When a new table is created, all fields of the table are created as NOT NULL.
    • When a table is converted, all fields are given the attribute NOT NULL.
Only when new fields are appended or inserted and the flag is not set in ABAP Dictionary are they created without the attribute NOT NULL (key fields are excepted). The NOT NULL definition of a database field can be seen in ABAP Dictionary by displaying the database object.

4. Foreign Key Dependencies


A foreign key dependency is a semantic dependency between two database tables, a foreign key table and a check table.

A foreign key dependency is defined in the foreign key table. Check tables are assigned to table fields and these fields become foreign keys.

➢ Check table

Table fields of a database table whose data type is determined using a data element can be assigned a check table. Here, the value table of the domain of the data element (if one exists) is used as a default value. A table field to which a check table is assigned is called a foreign key field and becomes a part of the foreign key of the table. The table itself becomes a foreign key table. The primary key of the check table is the foreign key of the foreign key table. Each key field of the check table corresponds to a field in the foreign key table. Here, the corresponding field pairs must have the same data type.

Notes: 
  • Fields whose data type is specified directly cannot be assigned a check table.
  • As long as a database table is a check table, none of its key fields can be deleted and the order of the key fields cannot be modified.
  • As long as a database table is a check table, its primary key can be expanded later only by appending new key fields to its end. The existing foreign keys then automatically become generic with respect to the new key fields.
  • A client field cannot be inserted later in an existing check table.
➢ Foreign key

A foreign key consists of one or more foreign key fields of a foreign key table that together correspond to the primary key of a check table. A database table can have more than one foreign key. A foreign key dependency joins a foreign key table and its check tables. A foreign key table usually only contains entries where the content of the foreign key also occurs precisely once as content of the primary key in the check table. This must also be respected in writes performed using Open SQL.

Foreign keys can have the following attributes:
  • Input check
In the definition of the foreign key, it is possible to activate an input check for dynpros or Web Dynpros using a foreign key dependency. Furthermore, the standard message here can be replaced by a custom message. Any placeholders in the message are filled as follows when the message is sent: The first three placeholders are filled with the content of the foreign key fields that are assigned to the first three key fields of the check table after the client field. The fourth placeholder is filled with the name of the check table.
  • Generic foreign key fields
Foreign key fields can be flagged as generic in the definition of a foreign key. Generic foreign key fields are not checked against their check table in dynpro input checks.
  • Constant values
Foreign key fields can be given constant values in the definition of a foreign key. In dynpro input checks, the field of the check table assigned to a foreign key field of this type must have this value in the row determined by the remaining foreign key fields.
  • Inheritance
The fields of included include structures can be foreign key fields and are part of the foreign key of the database table. An inheritance relationship with the included structure exists by default and any changes to the definition of the foreign key in the included structure are applied to the foreign key of the database table in question. This inheritance relationship can be broken, and any changes to the definition of the foreign key in the included structure are not then applied to the database table.
  • Cardinality
An n:m cardinality can be defined for each foreign key. Except in the definition of maintenance views, this cardinality is for documentation purposes only.
Possible values for n:
    • 1
Precisely one assigned row of the check table must exist for each row of the foreign key table.
    • C
A row of the foreign key table does not need to have an assigned row in the check table.

Possible values for m:
    • 1
Precisely one assigned row in the foreign key table must exist for each row of the check table.
    • C
There can be no more than one assigned row in the foreign key table for each row of the check table.
    • N
There must be at least one assigned row in the foreign key table for each row of the check table.
    • CN
There can be any number of assigned rows in the foreign key table for each row of the check table.
Defining a generic foreign key turns the cardinalities 1:m or C:m into CN:m. There can be multiple rows in the check table for each dependent row. This cardinality cannot, however, be specified explicitly.
  • Type of the foreign key fields
The type of the foreign key fields can be defined to describe the meaning of the foreign key fields in the foreign key table. The possible values are as follows (with the first two being for documentation purposes only):
    • No key fields/key candidates 
The foreign key fields are not primary key fields of the foreign key table and do not identify a record of the foreign key table uniquely.
    • Key fields/key candidates 
The foreign key fields are either primary key fields of the foreign key table or they identify a record of the foreign key table uniquely as a key candidate.
    • Key fields of a text table 
If this type of foreign key fields is defined, the foreign key table is handled as a text table of the check table. The primary key of the foreign key table must consist of the primary key of the check table plus an additional language key field with the type LANG. There can be only one text table for each check table (otherwise an activation warning occurs). The texts in a text table are used, for example, to explain input helps on dynpros and Web Dynpros. A sketch of a typical read on a text table follows this list.
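
A minimal sketch of such a read, assuming a hypothetical check table ZCHECK with the key field ID and a text table ZCHECK_T with the additional language key field SPRAS and a text field TEXT:

"Values with their texts in the logon language (if a text exists)
SELECT c~id, t~text
       FROM zcheck AS c
       LEFT OUTER JOIN zcheck_t AS t
         ON t~id = c~id AND t~spras = @sy-langu
       INTO TABLE @DATA(values_with_texts).
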

5. Delivery Class for Database Tables


The delivery class of a database table controls the transport of table data in installations, upgrades, or client copies, and in transports between customer systems. It is also applied in extended table maintenance (transaction code SM30).
  • Delivery class A
Application table for master data and transaction data. The data is written by application programs.
    • An application table is delivered as an empty table or using a pattern for real data in client 000.
    • In client copies, the data in client-specific tables is copied only if the appropriate parameter is set. Any cross-client data is not copied.
    • In installations, updates, and language imports, the data in client-specific tables is only imported into the system client with the client ID "000", overwriting any existing data. Any cross-client data is not imported.
    • In transports between customer systems, all data is transported. In client-specific tables, only the data in the specified target client is respected.
  • Delivery class C
Customer table for data entered only by the customer.
    • Client-specific customer tables can be delivered with patterns for real data.
    • In client copies, the data of client-specific tables is copied. Any cross-client data is not copied.
    • In installations, updates, and language imports, the data in client-specific tables is only imported into the system client with the client ID "000", overwriting any existing data. Any cross-client data is not imported.
    • In transports between customer systems, all data is transported. In client-specific tables, only the data in the specified target client is respected.
  • Delivery class L
Table used as a repository for temporary data.
    • These tables are delivered as empty tables.
    • No data is copied in client copies.
    • No data is imported in installations, updates, and language imports.
    • No data is transported in transports between customer systems.
    • Data entered using extended table maintenance (transaction code SM30) cannot be transported here.
  • Delivery class G
Customer table where SAP can add data but not modify or delete it. A customer table in the delivery class G must be assigned a customer namespace in the database table TRESC using the program RDDKOR54.
    • Customer tables can be delivered with patterns for real data.
    • In client copies, the data of client-specific tables is copied. Any cross-client data is not copied.
    • The data in client-specific tables is imported as follows in installations, updates, and language imports: In system clients with the client ID "000", all data is imported and existing data is overwritten. In other clients, only new data is imported and no existing data is overwritten. Any cross-client data is inserted without overwriting existing data.
    • In transports between customer systems, all data is transported. In client-specific tables, only the data in the specified target client is respected.
    • Any data entered using extended table maintenance (transaction code SM30) is checked here for any violations of the namespace assigned in the database table TRESC.
  • Delivery class E
System table in which customers can make entries. A system table in the delivery class E must be assigned a customer namespace in the database table TRESC using the program RDDKOR54.
    • These system tables are delivered with prefilled entries.
    • In client copies, the data of client-specific tables is copied. Any cross-client data is not copied.
    • In installations, updates, and language imports, all data is imported and existing data is overwritten.
    • In transports between customer systems, all data is transported. In client-specific tables, only the data in the specified target client is respected.
    • Any data entered using extended table maintenance (transaction code SM30) is checked here for any violations of the namespace assigned in the database table TRESC.
  • Delivery class S
System table delivered by SAP with predefined data as part of the system, such as ABAPDOCU_TREE containing the read-only tree of the ABAP keyword documentation. When transported, the data is handled like repository objects, which means that the data has an original system and any changes in other systems are modifications or repairs.
    • These system tables are delivered with prefilled entries.
    • In client copies, the data of client-specific tables is copied. Any cross-client data is not copied.
    • In installations, updates, and language imports, all data is imported and existing data is overwritten.
    • In transports between customer systems, all data is transported. In client-specific tables, only the data in the specified target client is respected.
  • Delivery class W
System table for system administration data, such as TFDIR as a directory of all function modules. A system table in the delivery class W is usually delivered with prefilled entries by SAP and can be affected by customer modifications. The content is transported using the transport objects assigned to the objects in question, such as R3TR FUNC for function modules.
    • These system tables are delivered with prefilled entries.
    • No data is copied in client copies.
    • In installations, updates, and language imports, all data is imported and existing data is overwritten.
    • In transports between customer systems, all data is transported. In client-specific tables, only the data in the specified target client is respected.
    • Data entered using extended table maintenance (transaction code SM30) cannot be transported here.

6. Indexes in Database Tables


An index in a database table helps to speed up the selection of rows. An index is a sorted copy of selected database table fields. An additional field contains a pointer to the actual table rows. Sorting enables faster access to the rows in the table, for example in binary searches. A database table has at least one primary index defined by its key fields. It can also have one or more optional secondary indexes.

Primary Index

The primary index is a unique index constructed from the key fields of the primary key. It is always created automatically in AS ABAP. A maximum of one record exists in the table for each combination of index fields. If the primary index cannot be used to identify the results set, for example because no field from the primary index was selected, the table is scanned in full or an attempt is made to use a suitable secondary index (if one exists).

Secondary Indexes

Alongside the primary index defined using the primary key, both unique and non-unique secondary indexes can be created for a database table. Creating secondary indexes usually improves the performance of database reads that evaluate the indexes of the database.

The secondary indexes of a database table consist of a series of table fields and are identified by an alphanumeric index ID with a maximum of three characters (letters or digits). The ID 0 is reserved for the primary index. Table fields with the predefined data types STRING and RAWSTRING must not be index fields. It is recommended that table fields with the data type FLTP are not index fields.

The secondary indexes defined for a database table are created when the table itself is created in the database system. Furthermore, new secondary indexes can be added later in the same system. When further secondary indexes are added in other systems without making modifications, they are created as extension indexes. The following are recommended as namespaces for indexes added at a later time:
  • The IDs of indexes added to delivered tables by customers start with "Y" or "Z".
  • The IDs of indexes added to delivered tables by partners start with "J". There can be conflicts between the index names of different partners in follow-on systems.
  • The IDs of indexes added to other tables can have any names, but cannot start with "Y", "Z", or "J".
The name of an index on the database is usually DBTAB~ID, where DBTAB is the name of the database table and ID is the three-character ID. Other names can occur, however, and blanks can be padded using underscores.

A secondary index can be unique, but (unlike the primary index) does not have to be. In the case of unique indexes, the database table cannot contain multiple rows with the same values in the index fields. Any attempts to insert a row like this cancel processing in the database and raise an appropriate exception in ABAP. A unique index of a client-specific table must always contain the client field.
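
A minimal sketch of handling such a situation, assuming a hypothetical table ZDEMO_TAB with a unique secondary index on the column UNIQ; the exception caught here is assumed to be the general Open SQL database exception CX_SY_OPEN_SQL_DB:

TRY.
    "Row duplicates an existing value in the unique index column UNIQ
    INSERT zdemo_tab FROM @( VALUE #( key1 = '2' uniq = 'A' ) ).
  CATCH cx_sy_open_sql_db INTO DATA(index_violation).
    "The row was rejected by the database; handle the error here
    cl_demo_output=>display( index_violation->get_text( ) ).
ENDTRY.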

When the database is accessed, the optimizer of the database system checks whether a suitable index exists and uses it if appropriate. The index selected depends on the platform, which means it is possible in ABAP Dictionary to define for which database systems a non-unique secondary index is created:
  • Index in all database systems
    • The index is created on every database.
  • In selected database systems
    • The database systems can be defined using a selection list or an exclusion list with up to four entries each.
  • No database index
    • The index is not created on any database. This setting makes it possible to delete existing secondary indexes from the database.
Unique secondary indexes are always created and can then no longer be deleted from the database. The SQL Trace function in the Performance Trace tool (transaction ST05) can be used to determine which index is being used by the database system to access data.

The value of an index for selecting data from a table depends on how well the data set selectable using the index represents the final set selected. Only those fields are useful in an index that make a significant restriction on the results set of a selection. If an index is constructed from multiple fields, it can also be used if only a few of these fields are specified in a selection condition. Here, the order of the fields in the index is an important factor in how quickly they can be accessed. The first fields must be those filled with constant values in a large number of selections. In selections, an index is useful only until the first field not specified in the selection condition. In other words, an index field is generally used only when all index fields located in front of it are specified in the selection condition. The speed at which a field is accessed is not affected by whether or not an index is defined as unique.

Creating secondary indexes is beneficial in the following cases:
  • If table entries are to be selected based on fields that are not contained in an index, and the response times are very slow, a suitable secondary index should be created.
  • The field or fields of a secondary index are so selective that each index entry corresponds to a maximum of 5% of the total number of table entries.
  • The database table is accessed mainly to read entries. When accessing a table to modify entries, each additional index must also be updated.
  • If only those fields are read that also exist in the index, the data does not need to be accessed a second time after the index access. If only a very small number of fields are selected, there can be significant efficiency gains if these fields are included in an index in their entirety.
Secondary indexes can also place a load on the system, since they need to be adjusted each time the table content is modified. Each additional index slows down the insertion of rows in the table. Tables where new rows are often created should only have a small number of indexes. Too many indexes can also cause the database system optimizer to select the wrong index. To prevent this, the indexes in a table must be as disjoint as possible (meaning that they share as few fields as possible).

An index should only consist of a few fields; as a rule, no more than four. This is because the index has to be updated each time its fields are updated in a database operation. Fields that are suitable for indexes are:
  • Fields that are selected often and that have a high level of selectivity. The most selective fields should be placed at the beginning of the index.
  • A field should not be included in an index if its value is initial for most of the table entries.
  • If more than one index is used for a database table, they should not overlap.
No more than five indexes should be created for any one table because:
  • Each index produces additional update costs.
  • The amount of data increases.
  • The optimizer of the database system is given too many selection options and becomes more error-prone.
An index can only support those selection conditions that describe the search value positively, such as = or LIKE. The response times of conditions including <>, for example, are not improved by an index. The optimizer generally stops if the selection condition contains an OR. In other words, it does not evaluate the fields checked by OR when selecting and applying the index. An exception to this are OR relationships standing on their own. Therefore, conditions containing an OR join for one of the indexed fields should be reformulated if necessary.

Note: The null value in some database systems is ignored by the indexes, meaning that no index can be used when selecting by null values.

Example

The optimizer stops working when it encounters OR in the following SELECT statement.

SELECT * FROM spfli
         WHERE carrid = 'LH' AND
               ( cityfrom = 'FRANKFURT' OR cityfrom = 'NEW YORK' ).
When replaced by the equivalent statement (below), the entire condition can be optimized with respect to the existing indexes.

SELECT *
       FROM spfli
       WHERE ( carrid = 'LH' AND cityfrom = 'FRANKFURT' ) OR
             ( carrid = 'LH' AND cityfrom = 'NEW YORK' ).

Full Text Index

The SAP HANA database supports a full text index as a secondary table index. A full text index is created as an additional invisible column on the database. The content of the column for which the full text index is created is saved to this additional column with appropriate formatting and is evaluated when the relevant data is accessed.

The following conditions must be met:
  • A full text index can only be created for the SAP HANA database and for tables with the storage type column store.
  • A full text index can only be created for precisely one column in a database table whose predefined data type is CHAR, SHORTSTRING, STRING, or RAWSTRING.
  • The database table must have a column for the text language.
A full text index is always non-unique. Accesses that make use of the full text index are based on the SQL language element WHERE CONTAINS .... This is not currently supported by Open SQL and Native SQL must be used instead.
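
A minimal sketch of such an access using ADBC, assuming an SAP HANA database and a hypothetical column store table ZDEMO_TEXTS with a full text index on its column TXT:

DATA texts TYPE STANDARD TABLE OF string WITH EMPTY KEY.

TRY.
    "Native SQL via ADBC with the HANA-specific CONTAINS predicate
    DATA(result_set) = NEW cl_sql_statement( )->execute_query(
      `SELECT txt FROM zdemo_texts WHERE CONTAINS( txt, 'flight' )` ).
    result_set->set_param_table( REF #( texts ) ).
    result_set->next_package( ).
    result_set->close( ).
  CATCH cx_sql_exception INTO DATA(sql_error).
    cl_demo_output=>display( sql_error->get_text( ) ).
ENDTRY.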

7. Displaying and Editing Database Tables


In ABAP Dictionary, it is possible to define whether database tables are displayed or edited using the tools Data Browser (transaction SE16) and "Table View Maintenance" (transactions SM30 and SM31).

The following settings are possible:
  • Display/maintenance not allowed
    • Tables cannot be displayed or edited using Data Browser SE16.
    • No maintenance dialogs can be created using transaction SE54, which means that transactions SM30 and SM31 cannot be used.
  • Display/maintenance allowed to limited extent
    • Tables can be displayed but not edited using Data Browser SE16.
    • Maintenance dialogs can be created using transaction SE54.
    • Tables cannot be displayed and edited with transactions SM30 and SM31. Maintenance dialogs can, however, be used in "View Cluster Maintenance" (transaction SM34).
  • Display/maintenance allowed
    • Tables can be displayed and edited using Data Browser SE16.
    • Maintenance dialogs can be created using transaction SE54.
    • Tables can be displayed and edited using transactions SM30 and SM31. Maintenance dialogs can be used in "View Cluster Maintenance" (transaction SM34).
◐ Global Temporary Tables 

Global temporary tables (GTTs) are special transparent tables used as repositories for temporary data. This data exists only within a database LUW and can be accessed only by one consumer. GTTs are used to save temporary subtotals, thereby splitting complicated database processes into multiple steps. GTTs are designed for this purpose only and hence incur far lower administration costs than regular transparent tables.

The GTT concept specifies that a GTT is always empty at the start of a database LUW and hence always has to be cleared at the end of each database LUW. Generally, database systems provide native support for this concept by defining data in a GTT database table as transaction-specific data implicitly. This data is deleted automatically at the end of a database LUW (in a database commit or database rollback).

In ABAP Dictionary, the table category global temporary table can be specified for a database table. Tables in this category have the variant GTT in the database. The following additional rules apply in Open SQL access to GTTs in ABAP Dictionary to avoid platform-dependent behavior and unexpected responses when handling GTTs:
  • If an ABAP Dictionary GTT was filled by a modifying Open SQL statement, it must be cleared explicitly before the end of the current database LUW. The following can be used here:
    • The Open SQL statement DELETE FROM dbtab without specifying a WHERE condition.
    • All explicit database commits and database rollbacks, such as the Open SQL statements COMMIT WORK, COMMIT CONNECTION, ROLLBACK WORK, ROLLBACK CONNECTION plus all associated Native SQL statements and calls.
  • If an ABAP Dictionary GTT filled using Open SQL was not cleared explicitly using one of these methods before an implicit database commit, the runtime error COMMIT_GTT_ERROR occurs independently of its content.
  • The use of the additions USING CLIENT and CLIENT SPECIFIED is not allowed in any Open SQL statement.
Alongside these rules, GTTs can be used like regular transparent tables. With respect to their definition in ABAP Dictionary, the following settings have predefined values that cannot be modified:
  • The data class and size category are ignored and undefined.
  • SAP buffering is not allowed.
  • Logging is switched off.
  • The storage type is row store.
  • The delivery class is L.
  • No replacement objects can be used.
Notes:
  • When accessed using Open SQL, explicit clears of ABAP Dictionary GTTs are forced before implicit database commits for the following reasons:
    • It makes the program easier to understand. If an implicit database commit occurs within a programming module, for example due to an RFC, a developer may be surprised to find that the table is empty afterwards, since the database system deleted it implicitly at the end of the database LUW.
    • It prevents errors caused by any platform dependencies. It cannot be guaranteed that every database platform deletes the data of a GTT in an implicit database commit. This is guaranteed, however, when the GTT is cleared explicitly.
  • The statements COMMIT WORK and ROLLBACK WORK clear the GTTs of all currently open database connections, whereas COMMIT CONNECTION and ROLLBACK CONNECTION only delete the GTTs of the specified connection.
  • Only the variant DELETE FROM dbtab without a WHERE condition specified prevents the runtime error COMMIT_GTT_ERROR. Other variants of the DELETE statement do not prevent the runtime error, even if they clear the entire table.
  • These additional rules apply only to writes using Open SQL. If a GTT is filled using only Native SQL methods, no exceptions are raised in the case of implicit database commits. The GTT is then generally cleared by the database system. Conversely, using Native SQL to clear a table filled using Open SQL does not prevent the runtime error COMMIT_GTT_ERROR.
  • It is advisable to only use Open SQL to access ABAP Dictionary GTTs.
  • INSERT statements with a subquery are particularly well suited for filling GTTs, since the operation is then performed only on the database and no data transports are required between the database and the application server.
  • When an ABAP Dictionary GTT is accessed using Open SQL, the syntax check is performed in a strict mode, which handles the statement more strictly than the regular syntax check.
Examples

1. Global Temporary Tables, Access


This example demonstrates how global temporary tables in ABAP Dictionary are accessed.

Source Code

REPORT demo_gtt.

CLASS demo DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS main.
ENDCLASS.

CLASS demo IMPLEMENTATION.
  METHOD main.
    DATA(o) = cl_demo_output=>new( ).

    DATA delete TYPE abap_bool.
    cl_demo_input=>request(
      EXPORTING text         = `Delete lines before implicit commit`
                as_checkbox = abap_true
      CHANGING  field       = delete ).

    "Fill GTT with Open SQL
    INSERT demo_gtt FROM @( VALUE #( id = 'X' col = 111 ) ).
    SELECT SINGLE * FROM demo_gtt INTO @DATA(wa).
    o->write( COND #( WHEN sy-subrc = 0
                      THEN `Line found after open insert`
                      ELSE `No line found after open insert` ) ).
    IF delete = abap_true.
      DELETE FROM demo_gtt.
    ENDIF.
    WAIT UP TO 1 SECONDS.
    SELECT SINGLE * FROM demo_gtt INTO @wa.
    o->write( COND #( WHEN sy-subrc = 0
                      THEN `Line found after implicit commit`
                      ELSE `No line found after implicit commit` ) ).

    "Fill GTT with Native SQL (for demonstration only!)
    wa = VALUE demo_gtt( id = 'X' col = 111 ).
    EXEC SQL.
      INSERT INTO DEMO_GTT VALUES ( :wa-id, :wa-col )
    ENDEXEC.
    SELECT SINGLE * FROM demo_gtt INTO @wa.
    o->write( COND #( WHEN sy-subrc = 0
                      THEN `Line found after native insert`
                      ELSE `No line found after native insert` ) ).
    WAIT UP TO 1 SECONDS.
    SELECT SINGLE * FROM demo_gtt INTO @wa.
    o->write( COND #( WHEN sy-subrc = 0
                      THEN `Line found after implicit commit`
                      ELSE `No line found after implicit commit` ) ).

    o->display( ).
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  demo=>main( ).

Description

Open SQL and Native SQL are used to access a global temporary table (GTT) DEMO_GTT:
  • After the GTT is filled using the Open SQL statement INSERT, the table is cleared again (after user input) using the Open SQL statement DELETE FROM without a WHERE condition. If the DELETE statement is not executed, the statement WAIT UP TO (which performs an implicit database commit) produces the runtime error COMMIT_GTT_ERROR, which can be found in the ABAP dump analysis (ST22).
  • After the GTT has been filled using a static native INSERT statement between EXEC SQL - ENDEXEC, the table is not cleared explicitly. Instead it is cleared in most database platforms using an implicit database commit raised by the statement WAIT UP TO.
Note: Native SQL is used to fill the GTT here for demonstration purposes only. Only Open SQL should be used to access GTTs.

2. Union with Global Temporary Table


This example demonstrates a union across a table and a global temporary table (GTT).

Source Code

REPORT demo_select_union_sum_gtt.

CLASS demo DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS main.
ENDCLASS.

CLASS demo IMPLEMENTATION.
  METHOD main.
    DATA carrid TYPE sflight-carrid VALUE 'AA'.
    cl_demo_input=>request( CHANGING field = carrid ).

    INSERT demo_sflight_agg FROM (
    SELECT carrid,
           connid,
           CAST( '00000000' AS DATS ) AS fldate,
           SUM( seatsocc ) AS seatsocc
           FROM sflight
           WHERE carrid = @( to_upper( carrid ) )
           GROUP BY carrid, connid ).

    SELECT ' ' AS mark, carrid, connid, fldate, seatsocc
           FROM sflight
           WHERE carrid = @( to_upper( carrid ) )
           UNION SELECT 'X' AS mark,
                        carrid, connid, fldate, seatsocc
                        FROM demo_sflight_agg
           ORDER BY carrid, connid, mark, fldate, seatsocc
           INTO TABLE @DATA(result).

    DELETE FROM demo_sflight_agg.
    cl_demo_output=>display( result ).
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  demo=>main( ).

Description

This example has the same result as the example for a union with aggregate expression. Here, however, the SELECT statement is split into two Open SQL statements for demonstration purposes:
  • The aggregation is performed in the subquery of an INSERT statement whose results set is written to a global temporary table (GTT).
  • UNION is used to create the union of the results set of a SELECT statement on the table SFLIGHT and a SELECT statement on the GTT.
The GTT must be cleared explicitly before the end of the program to prevent the runtime error COMMIT_GTT_ERROR in the next implicit database commit.

◑ Pooled Tables and Cluster Tables

Alongside transparent tables, pooled tables and cluster tables are an SAP-specific category of database tables that can be defined in ABAP Dictionary. Unlike transparent tables, pooled tables and cluster tables do not exist with this definition on the database and are instead part of the content of a table pool or table cluster. Only the table pool or table cluster is defined as a special table on the database. It is still possible, however, to access pooled or cluster tables using Open SQL, just like a transparent table. The database interface transforms such accesses into the relevant accesses to the table pool or table cluster. Some restrictions apply, however, which are also specified under the statements in question. Direct access to pooled tables and cluster tables using Native SQL is not possible.
  • Pooled tables in table pools
  • Cluster tables in table clusters
Note: It is not usually recommended that new tables are created as pooled tables or cluster tables. It was also never recommended that business data be saved in pooled tables or cluster tables. Pooled tables and cluster tables were only ever designed for internal control information, temporary data, or continuous texts such as documentation. It is generally recommended that any existing pooled tables and cluster tables are transformed to transparent tables.

1. Pooled Tables in Table Pools


Pooled tables are a special category of database table in ABAP Dictionary. They are defined by being saved in table pools rather than being defined as pooled tables on the database.
  • Table Pools
A table pool is defined as a database table on the database. It contains all rows of the pooled tables assigned to it. A table pool has the following table fields for this:
  1. Key field TABNAME with the type CHAR of length 10 for the name of a pooled table
  2. Key field VARKEY with the type CHAR with a maximum length of 110 for the values (chained as a character string) of the key fields of a row of the pooled table specified in TABNAME
  3. Field DATALN with the type INT2 for the length of the string in VARDATA
  4. Field VARDATA with the type RAW for the non-key fields (compressed in a byte chain) of the row specified by the key in VARKEY of the pooled table specified in TABNAME
The name of a table pool can have a maximum of ten characters. The configurable technical attributes of a database table are the size category and the activation type. All other technical attributes are fixed.

Once it is activated in ABAP Dictionary, a table pool must be created explicitly on the database using the Database Utility tool. A table pool that already contains data can no longer be modified or deleted.

Example

In systems where no transformation to transparent tables has taken place, the table pool ATAB contains pooled tables from the ABAP runtime environment.

  • Pooled Tables

All key fields of a pooled table must have a flat character-like data type. The total length of all key fields or the total length of all non-key fields of a pooled table cannot be greater than the length of the VARKEY or VARDATA field of the associated table pool. A pooled table cannot have any fields of the obsolete predefined types DF16_SCL and DF34_SCL. The name of a table field can have a maximum of 10 characters, which is also important when including structures as include structures.

Before a pooled table is created, a suitable transparent table with a suitable delivery class must be created and transformed to a pooled table. This assigns the pooled table to a table pool.

No secondary indexes can be created for a pooled table. The same technical settings can be made as in a transparent table, but these are ignored (they are defaults for the transformation to a transparent table).

Note: A pooled table can be flagged as transparent in its technical settings, which is a simple way of transforming it to a transparent table.

Note: In systems where no transformation to transparent tables has taken place, the table PVERI_CLNT is a pooled table in the table pool ATAB.

2. Cluster Tables in Table Clusters


Cluster tables are a special category of database table in ABAP Dictionary. They are defined by being saved in table clusters rather than being defined as cluster tables on the database.
  • Table Cluster
A table cluster is defined as a database table on the database and contains all rows of the cluster tables assigned to it. Here, multiple rows from various cluster tables are grouped in a byte string. There is one row and (potentially) multiple continuation rows for this byte string. A table cluster has the following table fields for this:
  1. Custom key fields CLKEY1, CLKEY2, ... with any name and any type permitted for key fields except decimal floating point numbers.
  2. Key field PAGENO with the type INT2 for indicating continuation rows.
  3. TIMESTMP with the type CHAR of the length 14.
  4. PAGELG with the type INT2 for the length of the byte string in VARDATA.
  5. VARDATA with the type RAW for the rows (compressed in a byte string) of the assigned cluster tables determined using the key fields CLKEY1, CLKEY2, ... Alongside the actual values, the byte string contains information about the structure of the data and from which cluster table it comes. If the byte string is longer than the maximum length of the VARDATA field, a continuation row is written using the same key values. The continuation rows of a key are distinguished by their values in the field PAGENO. The actual length of the byte string is saved in the field PAGELG.
The name of a table cluster can have a maximum of ten characters. The configurable technical attributes of a database table are the size category and the activation type. All other technical attributes are fixed.

Once it is activated in ABAP Dictionary, a table cluster must be created explicitly on the database using the Database Utility tool. A table cluster that already contains data can no longer be modified or deleted.

Note: Table clusters and cluster tables should not be confused with the INDX-like tables used to store data clusters.

Example

In systems where no transformation to transparent tables has taken place, the table cluster DOKCLU contains the cluster table DOKTL. The key fields ID, OBJECT, LANGU, TYP, and DOKVERSION cover a part of the table key of DOKTL.
  • Cluster Tables
The intersection of the key fields of the cluster tables forms the primary key of the table cluster. The table key of a cluster table must contain the key of the assigned table cluster as its initial part. The key fields of a cluster table cannot have any predefined types for decimal floating point numbers. A cluster table cannot have any data fields of the obsolete predefined types DF16_SCL and DF34_SCL. The name of a table field can have a maximum of 10 characters; this also applies to fields added by including structures as include structures.
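
The following declaration is a sketch of this rule using the field names of the DOKCLU/DOKTL example discussed in this section; the field lengths and the additional key field LINE are assumptions made only for illustration.

" Sketch: the table key of a cluster table (here DOKTL) must contain the key of
" its table cluster (here DOKCLU) as its initial part. Lengths are hypothetical.
TYPES:
  BEGIN OF ty_dokclu_key,            " key of the table cluster
    id         TYPE c LENGTH 2,
    object     TYPE c LENGTH 60,
    langu      TYPE c LENGTH 1,
    typ        TYPE c LENGTH 1,
    dokversion TYPE n LENGTH 4,
  END OF ty_dokclu_key,
  BEGIN OF ty_doktl_key,             " key of the cluster table: cluster key plus own key fields
    id         TYPE c LENGTH 2,
    object     TYPE c LENGTH 60,
    langu      TYPE c LENGTH 1,
    typ        TYPE c LENGTH 1,
    dokversion TYPE n LENGTH 4,
    line       TYPE n LENGTH 4,      " additional key field (hypothetical)
  END OF ty_doktl_key.
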

Before a cluster table is created, a suitable transparent table with a suitable delivery class must be created and transformed to a cluster table. This assigns the cluster table to a table cluster.

No secondary indexes can be created for a cluster table. The same technical settings can be made as for a transparent table, but they are ignored (they serve only as defaults for a transformation to a transparent table).

Note: Table clusters and cluster tables should not be confused with the INDX-like tables used to store data clusters.

Example

The cluster table DOKTL, which contains all documentation created in the transaction SE61, is a part of the table cluster DOKCLU in systems where no transformation to transparent tables has yet taken place.

3. Restrictions in Pooled Tables and Cluster Tables


The following general restrictions apply to pooled tables and cluster tables:
  • Access to pooled tables and cluster tables using Native SQL is not possible.
  • Pooled tables and cluster tables cannot be data sources of CDS views. Views like this can be activated but not used.
  • Cluster tables cannot be buffered.
  • Pooled tables and cluster tables cannot be global temporary tables (GTTs).
  • No replacement objects can be defined for pooled tables and cluster tables.
The following restrictions apply when accessing pooled tables and cluster tables using Open SQL (a code sketch follows the list):
  • CDS views that contain pooled tables or cluster tables as a data source cannot be used as data sources of the statement SELECT.
  • The addition DISTINCT of the statement SELECT cannot be used when accessing pooled tables or cluster tables if single columns are specified in the SELECT list.
  • When accessing pooled tables or cluster tables, not all columns of a data source can be specified in the statement SELECT using data_source~* in the SELECT list.
  • When accessing pooled tables or cluster tables, no aggregate expressions except COUNT( * ) can be used in the statement SELECT. COUNT( * ) is emulated in this case.
  • When accessing pooled tables or cluster tables, no SQL expressions can be used.
  • When accessing pooled tables or cluster tables, no joins can be created in the statement SELECT.
  • When accessing pooled tables or cluster tables, no columns of the data source can be specified on the right side in comparisons in a WHERE condition.
  • In pattern matches using LIKE in a WHERE condition, the addition ESCAPE cannot be specified when a pooled table is accessed.
  • The additions GROUP BY and HAVING of the statement SELECT cannot be specified when accessing pooled tables or cluster tables.
  • When accessing pooled tables or cluster tables, it is not possible to use ORDER BY to sort by single columns in the statement SELECT.
  • When accessing pooled tables or cluster tables, no subqueries can be used, either as a WHERE condition or as a data source of the statement INSERT.
  • The primary key fields of pooled tables or cluster tables cannot be overwritten using the addition SET of the statement UPDATE.
  • When accessing pooled tables or cluster tables, the addition CONNECTION of the statement SELECT cannot be specified.
  • The addition CONNECTION cannot be used in writes to pooled tables or cluster tables.
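
The following sketch contrasts an access that remains possible with some of the constructs excluded above. It assumes that the cluster table DOKTL has not yet been transformed to a transparent table in the system; the literal values are hypothetical.

" Allowed: a simple SELECT on the cluster table with a WHERE condition on key fields.
" COUNT( * ) is the only aggregate expression that can be used; it is emulated.
SELECT COUNT( * )
  FROM doktl
  WHERE id    = 'TX'
    AND langu = 'E'
  INTO @DATA(lv_lines).

" Not possible here (see the list above), for example:
" - joins, subqueries, GROUP BY/HAVING, SQL expressions
" - aggregate expressions other than COUNT( * ), such as MAX( dokversion )
" - ORDER BY with single columns, DISTINCT with single columns in the SELECT list
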

4. Transforming Pooled Tables and Cluster Tables


Any existing pooled tables and cluster tables should be transformed to transparent tables. In particular, if an SAP HANA database is used as the central database of AS ABAP, only transparent tables should be used, since the architecture of SAP HANA databases does not provide optimum support for pooled tables and cluster tables.

The tables can be transformed in ABAP Dictionary for the current system. By default, the table is then also transformed when migrated to another database in follow-on systems. If, in exceptional cases, the transformation in follow-on systems is to be restricted to specific database systems, this can be specified using the Transparent on following DBs flag in the original system. In this case, pooled tables or cluster tables transformed to transparent tables in one system are transformed in follow-on systems only if they are migrated to one of the specified databases; for all other databases, they are kept as pooled tables or cluster tables. Currently, only SAP HANA databases can be specified as database systems where this transformation takes place.

Once a table is transformed to a transparent database table, the Transparent on following DBs flag is kept and controls the transformation in further follow-on systems. If a transformed table is migrated to a follow-on system, it is transformed to a transparent table there only if the flag is not set, or if the flag is set and the target database system matches one of the specified databases.

In addition, this flag specifies that a table is no longer transformed back to a pooled table or cluster table on any database where it has once been made transparent, even if a transport with the corresponding table definition is imported or an attempt is made to perform the transformation in ABAP Dictionary. Before such a reverse transformation can take place, the flag must be removed explicitly.

◐ Converting Database Tables


When a modified existing table is activated, its existing content must be converted on the database in the following cases:
  • When a client column is inserted. Here, the data of the table is copied to all clients specified in the client table T000.
  • When fields of existing database tables are deleted. Rows can be lost when key fields are deleted.
  • When the technical attributes of table fields are changed.
  • When a pooled table or cluster table is changed to a transparent table or back.
  • When new fields are inserted between existing fields or when the field order is changed in a pooled table or cluster table.
  • When a pooled table or cluster table is assigned to a different table pool or table cluster.
Note
  • The order of table fields in transparent tables in ABAP Dictionary may be different from the order of the columns of the physical database table on the database, which means that not every change to the field order requires a conversion.
  • In pooled tables or cluster tables, the conversion is made for the table pool or table cluster.
  • Pooled tables or cluster tables that contain more than 749 fields cannot be converted. This means that it is not possible to make changes to these tables that require a conversion. New fields can only be appended to the end of these tables.
«« Previous
Next »»
