Additional Features Guide
Additional SAP Data Intelligence features including monitoring, cloud storage services, scenario templates, data types, and Git integration.
Table of Contents
- Monitoring Application
- Cloud Storage Services
- Scenario Templates
- Custom Data Types
- Git Terminal Integration
- Graph Snippets
Monitoring Application
SAP Data Intelligence includes a stand-alone monitoring application for operational oversight.
Accessing the Monitor
Options:
- SAP Data Intelligence Launchpad tile
- Direct stable URL access
Capabilities
| Feature | Description |
|---|---|
| Graph Status | View execution status, timing, type, source |
| Scheduling | Schedule graph executions |
| Termination | Terminate running processes |
| Navigation | Open graphs directly in Modeler |
| Configuration | Review graph configurations |
| Replication Flows | Monitor flows and associated tasks |
Access Permissions
With sap.dh.monitoring policy:
- View analytics and instances for all tenant users
- Does not include schedule access
Without policy:
- Monitor only your own graphs
What's Displayed
For each graph instance:
- Execution status (Running, Completed, Failed, Dead)
- Run timing (start, end, duration)
- Graph classification
- Source origin
Cloud Storage Services
SAP Data Intelligence supports multiple cloud storage platforms.
Supported Services
| Service | Description | Protocol |
|---|---|---|
| Amazon S3 | AWS object storage | S3 API |
| Azure Blob Storage (WASB) | Microsoft cloud storage | WASB protocol |
| Azure Data Lake (ADL/ADLS Gen2) | Microsoft data lake | ADLS API |
| Google Cloud Storage (GCS) | Google object storage | GCS API |
| Alibaba Cloud OSS | Alibaba object storage | OSS API |
| HDFS | Hadoop distributed file system | HDFS protocol |
| WebHDFS | HDFS via REST API | HTTP/REST |
| Local File System | Local storage | File system |
Connection Configuration
Each service requires specific connection parameters in Connection Management.
Common Parameters:
- Connection ID
- Root path/bucket
- Authentication credentials
Service-Specific Examples:
Amazon S3:
Connection Type: S3
Region: us-east-1
Access Key: <access-key>
Secret Key: <secret-key>
Bucket: my-bucket
Azure Blob Storage:
Connection Type: WASB
Account Name: <storage-account>
Account Key: <account-key>
Container: my-container
Google Cloud Storage:
Connection Type: GCS
Project ID: <project-id>
Service Account Key: <json-key>
Bucket: my-bucket
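The parameter sets above can be captured as plain dictionaries and checked before a connection is registered. A minimal sketch; the key names come from the examples above and are illustrative, not an SAP Data Intelligence API:

```python
# Required parameters per connection type, mirroring the examples above.
# These key names are illustrative, not part of any SAP API.
REQUIRED_PARAMS = {
    "S3": {"Region", "Access Key", "Secret Key", "Bucket"},
    "WASB": {"Account Name", "Account Key", "Container"},
    "GCS": {"Project ID", "Service Account Key", "Bucket"},
}

def missing_params(connection_type: str, params: dict) -> set:
    """Return the set of required parameters absent from `params`."""
    return REQUIRED_PARAMS[connection_type] - params.keys()

s3_conn = {"Region": "us-east-1", "Access Key": "AKIA...", "Bucket": "my-bucket"}
print(missing_params("S3", s3_conn))  # {'Secret Key'}
```

A check like this catches incomplete connection definitions before they fail at graph runtime.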
Usage in Operators
File operators use connection IDs to access storage:
- Structured File Consumer/Producer
- Binary File Consumer/Producer
- Cloud-specific operators (S3 Consumer, etc.)
Scenario Templates
Pre-built graph scenarios for common use cases.
Finding Templates
- Open Modeler application
- Navigate to Graphs tab
- Enable "Scenario Templates" in visible categories
- Or search for the package com.sap.scenarioTemplates
Template Categories
1. ABAP with Data Lakes
Ingest ABAP data into cloud storage.
Use Cases:
- Extract ABAP tables to S3/Azure/GCS
- Replicate CDS views to data lake
- S/4HANA data extraction
Key Operators:
- ABAP CDS Reader
- Read Data From SAP System
- Structured File Producer
2. Data Processing with Scripting Languages
Manipulate data using scripts.
Use Cases:
- Custom transformations with Python
- JavaScript data processing
- R statistical analysis
Key Operators:
- Python3 Operator
- JavaScript Operator
- R Operator
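A common pattern with the scripting operators is to keep the transformation itself in a plain function and wire it to the operator ports separately. A sketch of that pattern; the `api` object shown in the comments exists only inside the Python3 Operator runtime, so the wiring is hedged as comments:

```python
def enrich(record: dict) -> dict:
    """Pure transformation: add a derived field to one input record."""
    out = dict(record)
    out["total"] = out["quantity"] * out["unit_price"]
    return out

# Inside a Python3 Operator, the function would be attached to the ports,
# roughly like this (the `api` object is provided by the operator runtime):
#
#   def on_input(msg):
#       api.send("output", enrich(msg.body))
#   api.set_port_callback("input", on_input)

print(enrich({"quantity": 3, "unit_price": 2.5}))
# {'quantity': 3, 'unit_price': 2.5, 'total': 7.5}
```

Keeping the logic in a pure function makes it testable outside the graph.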
3. ETL from Database
Extract, transform, and load database data.
Use Cases:
- Database to file storage
- Database to database transfer
- SQL-based transformations
Key Operators:
- SQL Consumer
- Structured SQL Consumer
- Table Producer
4. Loading Data from Data Lake to SAP HANA
Batch and stream data to HANA.
Use Cases:
- Load files to HANA tables
- Stream data to HANA
- Data lake integration
Key Operators:
- Structured File Consumer
- HANA Client
- Write HANA Table
Using Templates
- Find template in Graphs tab
- Copy template to your workspace
- Customize connections and parameters
- Test with sample data
- Deploy for production use
Custom Data Types
Extend the type system with custom data types.
Data Type Categories
| Category | Description | Customizable |
|---|---|---|
| Scalar | Basic types (string, int, etc.) | No |
| Structure | Composite with named properties | Yes |
| Table | Column-based with keys | Yes |
Creating Global Data Types
1. Access Editor:
   - Open Modeler
   - Navigate to Data Types tab
   - Click plus icon
2. Configure Type:
   - Enter name (two or more identifiers separated by periods)
   - Select type: Structure or Table
   - Click OK
3. Define Properties:
   - Add properties with plus icon
   - For structures: property name + scalar type
   - For tables: property name + scalar type + optional Key flag
4. Save:
   - Click save icon
   - Use "Save As" for variants
Naming Convention
namespace.typename
Examples:
com.mycompany.CustomerRecord
com.mycompany.SalesData
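The naming rule (two or more identifiers separated by periods) can be checked with a short regular expression. A hedged sketch; the identifier grammar assumed here (letters, digits, underscores, starting with a letter) may differ from what the editor actually enforces:

```python
import re

# Two or more period-separated identifiers, e.g. com.mycompany.CustomerRecord.
# The identifier grammar is an assumption, not an SAP-documented rule.
TYPE_NAME = re.compile(r"^[A-Za-z]\w*(\.[A-Za-z]\w*)+$")

def is_valid_type_name(name: str) -> bool:
    return bool(TYPE_NAME.match(name))

print(is_valid_type_name("com.mycompany.CustomerRecord"))  # True
print(is_valid_type_name("CustomerRecord"))                # False: no namespace
```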
Structure Type Example
Type: com.company.Address
Properties:
- street: string
- city: string
- country: string
- postalCode: string
Table Type Example
Type: com.company.OrderItems
Properties:
- orderId: string (Key)
- lineNumber: int64 (Key)
- productId: string
- quantity: int32
- unitPrice: float64
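In application code, records conforming to the table type above might be mirrored as a typed structure. A sketch using a Python dataclass; the mapping of int32/int64/float64 to Python int/float is the natural one, not an SAP-defined binding:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderItem:
    """Mirror of com.company.OrderItems; (orderId, lineNumber) form the key."""
    orderId: str       # Key
    lineNumber: int    # Key (int64 in the data type)
    productId: str
    quantity: int      # int32
    unitPrice: float   # float64

    @property
    def key(self) -> tuple:
        return (self.orderId, self.lineNumber)

item = OrderItem("SO-1001", 1, "P-42", 5, 19.99)
print(item.key)  # ('SO-1001', 1)
```

Marking the key columns in one place keeps deduplication and lookups consistent with the data type definition.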
Managing Data Types
| Action | How |
|---|---|
| Edit | Double-click in tree view |
| Delete | Right-click > Delete |
| Copy | Save As with new name |
Git Terminal Integration
Version control integration for SAP Data Intelligence artifacts.
Purpose
Integrate file-based content with Git servers:
- Graphs
- Operators
- Dockerfiles
- Script code
Accessing Git Terminal
- Open Modeler
- Navigate to Git Terminal option
- Terminal opens with Git capabilities
Available Commands
| Command | Function |
|---|---|
| git init | Initialize local repository |
| git clone <url> | Clone remote repository |
| git add | Stage changes |
| git commit | Commit changes |
| git push | Push to remote |
| git pull | Pull from remote |
| git branch | Create/list branches |
| git merge | Merge branches |
| git rebase | Rebase commits |
| git status | View status |
| git log | View history |
Credential Handling
Configure Git credentials using standard Git Credential Helper:
git config --global credential.helper store
git config --global user.name "Your Name"
git config --global user.email "your.email@company.com"
Creating Local Repository
cd /workspace/my-project
git init
git add .
git commit -m "Initial commit"
Cloning Remote Repository
git clone https://github.com/your-org/your-repo.git
cd your-repo
.gitignore Configuration
Control what gets tracked:
# Ignore all except specific files
*
!graph.json
!operator.json
!*.py
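As a rough illustration of how those patterns interact (patterns are evaluated in order, the last match wins, and `!` re-includes a file), here is a simplified matcher for flat file names. Real gitignore semantics add directory handling and other rules, so treat this only as a model:

```python
from fnmatch import fnmatch

# The pattern list from the .gitignore example above.
PATTERNS = ["*", "!graph.json", "!operator.json", "!*.py"]

def is_ignored(name: str) -> bool:
    """Simplified .gitignore check: the last matching pattern decides."""
    ignored = False
    for pat in PATTERNS:
        negated = pat.startswith("!")
        if fnmatch(name, pat.lstrip("!")):
            ignored = not negated
    return ignored

print([n for n in ["graph.json", "script.py", "notes.txt"] if is_ignored(n)])
# ['notes.txt']
```

With this pattern set, graph definitions, operator metadata, and Python scripts are tracked while everything else stays out of the repository.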
Best Practices
- Commit Often: Small, focused commits
- Use Branches: Feature branches for development
- Pull Before Push: Avoid conflicts
- Meaningful Messages: Descriptive commit messages
- Review Changes: Check status before commit
Graph Snippets
Reusable graph fragments for common patterns.
Creating Snippets
- Build working graph pattern
- Select operators to include
- Right-click > Save as Snippet
- Name and describe snippet
Using Snippets
- Open Graphs tab
- Find snippet in repository
- Drag snippet to canvas
- Configure connections
- Customize parameters
Snippet Best Practices
- Document Well: Clear descriptions
- Parameterize: Use substitution variables
- Test Thoroughly: Verify before sharing
- Version: Track snippet versions
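The "Parameterize" advice amounts to leaving placeholders in snippet configurations and resolving them per use. A sketch using Python's string.Template with `${...}` placeholders; the placeholder syntax and variable names here are illustrative, since SAP Data Intelligence has its own substitution-variable mechanism:

```python
from string import Template

# A fragment of a snippet configuration with placeholders; CONNECTION_ID and
# ROOT_PATH are example names, not SAP-defined substitution variables.
snippet_config = Template('{"connectionId": "${CONNECTION_ID}", "path": "${ROOT_PATH}"}')

resolved = snippet_config.substitute(CONNECTION_ID="S3_PROD", ROOT_PATH="/data/in")
print(resolved)  # {"connectionId": "S3_PROD", "path": "/data/in"}
```

Resolving placeholders at insertion time lets one snippet serve development and production graphs without edits to its structure.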
Documentation Links
- Monitoring: https://github.com/SAP-docs/sap-hana-cloud-data-intelligence/tree/main/docs/modelingguide/dataintelligence-monitoring
- Service-Specific: https://github.com/SAP-docs/sap-hana-cloud-data-intelligence/tree/main/docs/modelingguide/service-specific-information
- Scenario Templates: https://github.com/SAP-docs/sap-hana-cloud-data-intelligence/tree/main/docs/modelingguide/using-scenario-templates
- Data Types: https://github.com/SAP-docs/sap-hana-cloud-data-intelligence/tree/main/docs/modelingguide/creating-data-types
- Git Terminal: https://github.com/SAP-docs/sap-hana-cloud-data-intelligence/tree/main/docs/modelingguide/using-git-terminal
- Graph Snippets: https://github.com/SAP-docs/sap-hana-cloud-data-intelligence/tree/main/docs/modelingguide/using-graph-snippets
Last Updated: 2025-11-22