Initial commit

This commit is contained in:
Zhongwei Li
2025-11-29 18:02:57 +08:00
commit daf63b8e96
9 changed files with 1543 additions and 0 deletions

skills/erd-skill/DBML.md
# DBML - Full Syntax Docs
DBML (Database Markup Language) is a simple, readable DSL designed to define database structures. This page
outlines the full syntax documentation of DBML.
- [DBML - Full Syntax Docs](#dbml---full-syntax-docs)
- [Example](#example)
- [Project Definition](#project-definition)
- [Schema Definition](#schema-definition)
- [Public Schema](#public-schema)
- [Table Definition](#table-definition)
- [Table Alias](#table-alias)
- [Table Notes](#table-notes)
- [Table Settings](#table-settings)
- [Column Definition](#column-definition)
- [Column Settings](#column-settings)
- [Default Value](#default-value)
- [Constraint Definition](#constraint-definition)
- [Constraint Settings](#constraint-settings)
- [Index Definition](#index-definition)
- [Index Settings](#index-settings)
- [Relationships \& Foreign Key Definitions](#relationships--foreign-key-definitions)
- [Relationship settings](#relationship-settings)
- [Many-to-many relationship](#many-to-many-relationship)
- [Enum Definition](#enum-definition)
- [Note Definition](#note-definition)
- [Project Notes](#project-notes)
- [Table Notes](#table-notes-1)
- [Column Notes](#column-notes)
- [TableGroup Notes](#tablegroup-notes)
- [Sticky Notes](#sticky-notes)
- [TableGroup](#tablegroup)
- [TableGroup Notes](#tablegroup-notes-1)
- [TableGroup Settings](#tablegroup-settings)
- [TablePartial](#tablepartial)
- [Multi-line String](#multi-line-string)
- [Comments](#comments)
- [Syntax Consistency](#syntax-consistency)
## Example
```text
Table users {
id integer
username varchar
role varchar
created_at timestamp
}
Table posts {
id integer [primary key]
title varchar
body text [note: 'Content of the post']
user_id integer
created_at timestamp
}
Ref: posts.user_id > users.id // many-to-one
```
## Project Definition
You can give an overall description of the project.
```text
Project project_name {
database_type: 'PostgreSQL'
Note: 'Description of the project'
}
```
## Schema Definition
A new schema is defined implicitly whenever a table or enum is placed inside it.
For example, the following code defines a new schema `core` along with a table `user` placed inside it:
```text
Table core.user {
...
}
```
### Public Schema
By default, any **table**, **relationship**, or **enum** definition that omits `schema_name` will be considered to belong to the `public` schema.
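For example, the following two definitions are equivalent (using an illustrative `users` table; a real file would contain only one of them):

```text
Table users {
    ...
}
// is equivalent to
Table public.users {
    ...
}
```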
## Table Definition
```text
// table belonging to the default "public" schema
Table table_name {
column_name column_type [column_settings]
}
// table belonging to a schema
Table schema_name.table_name {
column_name column_type [column_settings]
}
```
- (Optional) the database schema name is listed as `schema_name`. If omitted, `schema_name` defaults to `public`
- the database table name is listed as `table_name`
- the column name is listed as `column_name`
- the column data type is listed as `column_type`
- all data types are supported, as long as the type is a single word (remove all spaces in the data type). Examples: JSON, JSONB, decimal(1,2), etc.
- lists are wrapped in `curly brackets {}`, for indexes, constraints and table definitions
- settings are wrapped in `square brackets []`
- string values are wrapped in a `single quote as 'string'`
- `column_name` can be stated as plain text, or wrapped in a `double quote as "column name"`
:::tip
Use [TablePartial](#tablepartial) to reuse common fields, settings and indexes across multiple tables. Inject partials into a table using the `~partial_name` syntax.
:::
### Table Alias
You can alias a table and use the alias in references later on.
```text
Table very_long_user_table as U {
...
}
Ref: U.id < posts.user_id
```
### Table Notes
You can add notes to a table and refer to them in the diagram canvas.
```text
Table users {
id integer
status varchar [note: 'status']
Note: 'Stores user data'
}
```
### Table Settings
Settings are all defined within square brackets: `[setting1: value1, setting2: value2, setting3, setting4]`
Each setting item can take one of two forms: `Key: Value` or `keyword`, similar to Python function parameters.
- `headercolor: <color_code>`: change the table header color.
Example,
```text
Table users [headercolor: #3498DB] {
id integer [primary key]
username varchar(255) [not null, unique]
}
```
## Column Definition
### Column Settings
Each column can have optional settings, defined in square brackets like:
```text
Table buildings {
...
address varchar(255) [unique, not null, note: 'to include unit number']
id integer [ pk, unique, default: 123, note: 'Number' ]
}
```
The list of column settings you can use:
- `note: 'string to add notes'`: add a metadata note to this column
- `primary key` or `pk`: mark a column as primary key. For composite primary key, refer to the 'Indexes' section
- `null` or `not null`: mark a column as null or not null. If you omit this setting, the column will be nullable by default
- `unique`: mark the column unique
- `default: some_value`: set a default value for the column; refer to the 'Default Value' section below
- `increment`: mark the column as auto-increment
- ``constraint: `check expression` ``: add a check expression to this column. Multiple constraints can be defined on a column. For constraints involving multiple columns, refer to the 'Constraints' section
**Note:** You can use a workaround for un-supported settings by adding the setting name into the column type name, such as `id "bigint unsigned" [pk]`
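As a sketch, column-level check constraints on an illustrative `products` table might look like:

```text
Table products {
  id integer [pk]
  price decimal [not null, constraint: `price >= 0`]
  discount decimal [constraint: `discount >= 0`, constraint: `discount <= price`]
}
```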
### Default Value
You can set default values as follows:
- number values are written directly: `default: 123` or `default: 123.456`
- string values are wrapped in single quotes: `default: 'some string value'`
- expression values are wrapped in backticks: ``default: `now() - interval '5 days'` ``
- boolean/null values are written directly: `default: false` or `default: null`
Example,
```text
Table users {
id integer [primary key]
username varchar(255) [not null, unique]
full_name varchar(255) [not null]
gender varchar(1) [not null]
source varchar(255) [default: 'direct']
created_at timestamp [default: `now()`]
rating integer [default: 10]
}
```
## Constraint Definition
Constraints allow users to specify custom checks on one or more columns, enforcing restrictions on the possible values that would otherwise be impossible to express.
```text
Table users {
id integer
wealth integer
debt integer
constraints {
`debt + wealth >= 0` [name: 'chk_positive_money']
}
}
```
### Constraint Settings
- `name`: name of constraint
## Index Definition
Indexes allow users to quickly locate and access the data. Users can define single or multi-column indexes.
```text
Table bookings {
id integer
country varchar
booking_date date
created_at timestamp
indexes {
(id, country) [pk] // composite primary key
created_at [name: 'created_at_index', note: 'Date']
booking_date
(country, booking_date) [unique]
booking_date [type: hash]
(`id*2`)
(`id*3`,`getdate()`)
(`id*3`,id)
}
}
```
There are 3 types of index definitions:
- Index with single column (with index name): `CREATE INDEX created_at_index on users (created_at)`
- Index with multiple columns (composite index): `CREATE INDEX on users (created_at, country)`
- Index with an expression: `CREATE INDEX ON films ( first_name + last_name )`
- (bonus) Composite index with expression: `CREATE INDEX ON users ( country, (lower(name)) )`
### Index Settings
- `type`: type of index (btree, gin, gist, hash depending on DB). For now, only type btree and hash are accepted.
- `name`: name of index
- `unique`: unique index
- `pk`: primary key
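A sketch combining these settings on an illustrative `products` table:

```text
Table products {
  id integer
  merchant_id integer
  name varchar
  indexes {
    (id) [pk]
    (merchant_id, name) [unique, name: 'products_merchant_name_idx']
    name [type: hash]
  }
}
```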
## Relationships & Foreign Key Definitions
Relationships are used to define foreign key constraints between tables across schemas.
```text
Table posts {
id integer [primary key]
user_id integer [ref: > users.id] // many-to-one
}
// or this
Table users {
id integer [ref: < posts.user_id, ref: < reviews.user_id] // one to many
}
// The space after '<' is optional
```
There are 4 types of relationships: **one-to-one**, **one-to-many**, **many-to-one** and **many-to-many**
- `<`: one-to-many. E.g: `users.id < posts.user_id`
- `>`: many-to-one. E.g: `posts.user_id > users.id`
- `-`: one-to-one. E.g: `users.id - user_infos.user_id`
- `<>`: many-to-many. E.g: `authors.id <> books.id`
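Written as standalone `Ref` statements (table names illustrative; the first two express the same relationship from either side):

```text
Ref: users.id < posts.user_id       // one-to-many
Ref: posts.user_id > users.id       // many-to-one
Ref: users.id - user_infos.user_id  // one-to-one
Ref: authors.id <> books.id         // many-to-many
```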
**Zero-to-(one/many)** or **(one/many)-to-zero** relationships are detected automatically when you combine a relationship with the foreign key's nullability, as in this example:
```text
Table follows {
following_user_id int [ref: > users.id] // many-to-zero
followed_user_id int [ref: > users.id, null] // many-to-zero
}
Table posts {
id int [pk]
user_id int [ref: > users.id, not null] // many-to-one
}
```
In DBML, there are 3 syntaxes to define relationships:
```text
// Long form
Ref name_optional {
schema1.table1.column1 < schema2.table2.column2
}
// Short form
Ref name_optional: schema1.table1.column1 < schema2.table2.column2
// Inline form
Table schema2.table2 {
id integer
column2 integer [ref: > schema1.table1.column1]
}
```
:::note
* When defining one-to-one relationships, ensure the columns are listed in the correct order:
* With the long & short forms, the second column is treated as the foreign key.
E.g: in `users.id - user_infos.user_id`, *user_infos.user_id* will be the foreign key.
* With the inline form, the column that has the `ref` definition is treated as the foreign key.
E.g:
```text
Table user_infos {
user_id integer [ref: - users.id]
}
```
*user_infos.user_id* will be the foreign key.
* If the `schema_name` prefix is omitted, it defaults to the `public` schema.
:::
**Composite foreign keys:**
```text
Ref: merchant_periods.(merchant_id, country_code) > merchants.(id, country_code)
```
**Cross-schema relationship:**
```text
Table core.users {
id integer [pk]
}
Table blogging.posts {
id integer [pk]
user_id integer [ref: > core.users.id]
}
// or this
Ref: blogging.posts.user_id > core.users.id
```
### Relationship settings
```text
// short form
Ref: products.merchant_id > merchants.id [delete: cascade, update: no action, color: #79AD51]
// long form
Ref {
products.merchant_id > merchants.id [delete: cascade, update: no action, color: #79AD51]
}
```
- `delete / update: cascade | restrict | set null | set default | no action`
Define referential actions. Similar to `ON DELETE/UPDATE CASCADE/...` in SQL.
- `color: <color_code>`: change the relationship color.
*Relationship settings and names are not supported for inline form ref.*
### Many-to-many relationship
There are two ways to represent a many-to-many relationship:
- Using a single many-to-many relationship (`<>`).
- Using 2 many-to-one relationships (`>` and `<`). For more information, please refer to [https://community.dbdiagram.io/t/tutorial-many-to-many-relationships/412](https://community.dbdiagram.io/t/tutorial-many-to-many-relationships/412)
Besides the presentation aspect, the main difference between these two approaches is how the relationship is mapped into the physical design when exporting to SQL.
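As a sketch (with an illustrative `author_books` junction table), the two approaches look like:

```text
// 1. Single many-to-many relationship
Ref: authors.id <> books.id

// 2. Two many-to-one relationships through an explicit junction table
Table author_books {
  author_id integer [ref: > authors.id]
  book_id integer [ref: > books.id]
}
```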
## Enum Definition
`Enum` allows users to define the set of allowed values for a particular column.
When hovering over the column in the canvas, the enum values will be displayed.
```text
// enum belonging to the default "public" schema
enum job_status {
created [note: 'Waiting to be processed']
running
done
failure
}
// enum belonging to a schema
enum v2.job_status {
...
}
Table jobs {
id integer
status job_status
status_v2 v2.job_status
}
```
**Note:** if the `schema_name` prefix is omitted, it defaults to the `public` schema.
If your enum values contain spaces or other special characters, wrap them in double quotes.
```text
enum grade {
"A+"
"A"
"A-"
"Not Yet Set"
}
```
## Note Definition
Note allows users to give a description for a particular DBML element.
```text
Table users {
id int [pk]
name varchar
Note: 'This is a note of this table'
// or
Note {
'This is a note of this table'
}
}
```
A note's value is a string. If your note spans multiple lines, you can use a [multi-line string](#multi-line-string) to define it.
### Project Notes
```text
Project DBML {
Note: '''
# DBML - Database Markup Language
DBML (database markup language) is a simple, readable DSL language designed to define database structures.
## Benefits
* It is simple, flexible and highly human-readable
* It is database agnostic, focusing on the essential database structure definition without worrying about the detailed syntaxes of each database
* Comes with a free, simple database visualiser at [dbdiagram.io](http://dbdiagram.io)
'''
}
```
### Table Notes
```text
Table users {
id int [pk]
name varchar
Note: 'Stores user data'
}
```
### Column Notes
You can add notes to your columns so you can easily refer to them when hovering over the column in the diagram canvas.
```text
column_name column_type [note: 'replace text here']
```
Example,
```text
Table orders {
status varchar [
note: '''
💸 1 = processing,
✔️ 2 = shipped,
❌ 3 = cancelled,
😔 4 = refunded
''']
}
```
### TableGroup Notes
```text
TableGroup e_commerce [note: 'Contains tables that are related to e-commerce system'] {
merchants
countries
// or
Note: 'Contains tables that are related to e-commerce system'
}
```
## Sticky Notes
You can add sticky notes to the diagram canvas to serve as a quick reminder or to elaborate on a complex idea.
Example,
```text
Table jobs {
...
}
Note single_line_note {
'This is a single line note'
}
Note multiple_lines_note {
'''
This is a multiple lines note
This string can span multiple lines.
'''
}
```
## TableGroup
`TableGroup` allows users to group the related or associated tables together.
```text
TableGroup tablegroup_name { // tablegroup is case-insensitive.
table1
table2
table3
}
// example
TableGroup e_commerce1 {
merchants
countries
}
```
### TableGroup Notes
Table groups can be annotated with notes that describe their meaning and purpose.
```text
TableGroup e_commerce [note: 'Contains tables that are related to e-commerce system'] {
merchants
countries
// or
Note: 'Contains tables that are related to e-commerce system'
}
```
### TableGroup Settings
Each table group can take optional settings, defined within square brackets: `[setting1: value1, setting2: value2, setting3, setting4]`
The list of table group settings you can use:
- `note: 'string to add notes'`: add a note to this table group.
- `color: <color_code>`: change the table group color.
Example,
```text
TableGroup e_commerce [color: #345] {
merchants
countries
}
```
## TablePartial
`TablePartial` allows you to define reusable sets of fields, settings, and indexes. You can then inject these partials into multiple table definitions to promote consistency and reduce repetition.
**Syntax**
To define a table partial:
```text
TablePartial partial_name [table_settings] {
field_name field_type [field_settings]
indexes {
(column_name) [index_settings]
}
}
```
To use a table partial, reference it (this is also called injection) in the table definition using the `~` prefix:
```text
Table table_name {
~partial_name
field_name field_type
~another_partial
}
```
**Example**
```text
TablePartial base_template [headerColor: #ff0000] {
id int [pk, not null]
created_at timestamp [default: `now()`]
updated_at timestamp [default: `now()`]
}
TablePartial soft_delete_template {
delete_status boolean [not null]
deleted_at timestamp [default: `now()`]
}
TablePartial email_index {
email varchar [unique]
indexes {
email [unique]
}
}
Table users {
~base_template
~email_index
name varchar
~soft_delete_template
}
```
Final result:
```text
Table users [headerColor: #ff0000] {
id int [pk, not null]
created_at timestamp [default: `now()`]
updated_at timestamp [default: `now()`]
email varchar [unique]
name varchar
delete_status boolean [not null]
deleted_at timestamp [default: `now()`]
indexes {
email [unique]
}
}
```
**Conflict Resolution**
When multiple partials define the same field, setting or index, DBML resolves conflicts based on the following priority:
1. Local Table Definition: Fields, settings and indexes defined directly in the table override those from partials.
2. Last Injected Partial: If a conflict exists between partials, the definition from the last-injected partial (in source order) takes precedence.
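For example (hypothetical partials `p1` and `p2`), the last-injected partial wins between partials, while a local definition wins over any partial:

```text
TablePartial p1 {
  created_at timestamp
}
TablePartial p2 {
  created_at timestamptz
}
Table events {
  ~p1
  ~p2 // created_at resolves to timestamptz (last-injected partial)
}
Table logs {
  ~p1
  created_at datetime // local definition overrides the partial
}
```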
## Multi-line String
Multi-line strings are defined between triple single quotes `'''`
```text
Note: '''
This is a block string
This string can span multiple lines.
'''
```
- Line breaks: \<enter\> key
- Line continuation: `\` backslash
- Escaping characters:
  - `\`: use a double backslash `\\`
  - `'`: use `\'`
- The indentation of a block string is the minimum number of leading spaces among all its lines. The parser automatically removes that many leading spaces from every line in the final output. The result of the above example will be:
```text
This is a block string
This string can span multiple lines.
```
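For instance, a sketch combining line continuation and character escapes (illustrative note text):

```text
Note: '''
A long sentence that \
continues on the same output line.
Escaped quote: \' and escaped backslash: \\
'''
```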
## Comments
**Single-line Comments**
You can comment your code using `//`, making it easier to review later.
Example,
```text
// order_items refer to items from that order
```
**Multi-line Comments**
You can also write comments spanning multiple lines by enclosing them in `/*` and `*/`.
Example,
```text
/*
This is a
Multi-lines
comment
*/
```
## Syntax Consistency
DBML keeps its syntax consistent across features:
- curly brackets `{}`: grouping for indexes, constraints and table definitions
- square brackets `[]`: settings
- forward slashes `//`: comments
- `column_name` is stated in just plain text
- single quote as `'string'`: string value
- double quote as `"column name"`: quoting variable
- triple quote as `'''multi-line string'''`: multi-line string value
- backtick `` ` ``: function expression

skills/erd-skill/SKILL.md
---
name: erd-skill
description: Comprehensive database design and ERD (Entity-Relationship Diagram) toolkit using DBML format. This skill should be used when creating database schemas from requirements, analyzing existing DBML files for improvements, designing database architecture, or providing guidance on database modeling, normalization, indexing, and relationships.
---
# ERD Design Skill
## Overview
This skill helps you design, analyze, and manage database schemas using DBML (Database Markup Language). DBML is a simple, readable DSL for defining database structures that can be converted to SQL and visualized as ERDs. Use this skill for creating new schemas, analyzing existing designs, and converting between DBML and SQL formats.
## When to Use This Skill
Trigger this skill when users request:
- "Create an ERD for [system description]"
- "Design a database schema for [application]"
- "Analyze this DBML file and suggest improvements"
- "Help me design a database for [use case]"
- "Review my database schema"
- "Convert DBML to SQL" or "Convert SQL to DBML"
- Working with `.dbml` files
- Database normalization, indexing, or relationship guidance
## Resource Guide
### For DBML Syntax Questions
**Read `DBML.md` when:**
- Users ask about specific DBML syntax (tables, columns, relationships)
- You need to understand index or constraint syntax
- Working with enums, notes, or TableGroups
- Learning about TablePartials for reusable field sets
- Questions about DBML capabilities and advanced features
### For Design Best Practices
**Read `best-practices.md` when:**
- Analyzing existing schemas for improvements
- Deciding on normalization levels (1NF, 2NF, 3NF)
- Planning indexing strategies
- Choosing naming conventions
- Understanding relationship patterns (one-to-many, many-to-many, etc.)
- Implementing common patterns (timestamps, soft deletes, audit trails, versioning)
- Making performance vs normalization trade-offs
### For Schema Templates
**Read `templates/` when:**
- Starting a new schema and need a reference pattern
- `templates/basic.dbml` - Simple user-following system example
- `templates/advanced.dbml` - Complex e-commerce schema with products, orders, and users
Use templates as starting points or adapt patterns for similar use cases.
### For CLI Operations
**Read `cli.md` when:**
- Converting DBML to SQL (various database types)
- Converting SQL to DBML
- Generating DBML directly from a live database connection
- Need examples of command-line usage
## Workflows
### Reading and Analyzing Schemas
**For understanding schema structure:**
1. Read the `.dbml` file directly
2. Analyze table relationships, indexes, and constraints
3. Reference `DBML.md` for syntax clarification if needed
**For schema review and improvements:**
1. Read the existing `.dbml` file
2. **MANDATORY - READ ENTIRE FILE**: Read `best-practices.md` completely from start to finish. **NEVER set any range limits when reading this file.**
3. Check against standards:
- Naming conventions
- Normalization level (1NF, 2NF, 3NF)
- Index strategy
- Missing constraints or timestamps
- Relationship integrity
4. Prioritize recommendations (critical, important, nice-to-have)
5. Provide concrete DBML improvements
### Creating New Schemas
1. **Understand requirements**: Clarify entities, relationships, and business rules
2. **Choose a template**: Read relevant template from `templates/` for similar patterns
- `templates/basic.dbml` for simple applications
- `templates/advanced.dbml` for complex e-commerce-like systems
3. **MANDATORY - READ FILES**: Read both `best-practices.md` and `DBML.md` completely from start to finish for design principles and syntax
4. **Design the schema**:
- Define tables with appropriate data types
- Apply naming conventions (plural tables, singular columns, snake_case)
- Add primary keys and indexes
- Define relationships between tables
- Apply normalization guidelines (typically target 3NF)
5. **Add documentation**:
- Include table and column notes
- Group related tables with TableGroups
- Document constraints and business rules
6. **Use TablePartials** for reusable patterns (timestamps, soft deletes, etc.)
### Modifying Existing Schemas
1. Read the existing `.dbml` file
2. Identify the required changes
3. Reference `DBML.md` for syntax when adding new features
4. Reference `best-practices.md` for design decisions
5. Apply changes while maintaining consistency with existing patterns
6. Update documentation (notes, comments)
### CLI Operations (DBML ↔ SQL Conversion)
**Converting DBML to SQL:**
1. **Read `cli.md`** for command syntax and examples
2. Use `dbml2sql` command:
```bash
dbml2sql schema.dbml --mysql -o schema.sql
```
3. Specify target database: `--mysql`, `--postgres`, `--mssql`, `--oracle`
**Converting SQL to DBML:**
1. **Read `cli.md`** for command syntax and examples
2. Use `sql2dbml` command:
```bash
sql2dbml dump.sql --postgres -o schema.dbml
```
3. Specify source database type
**Generating from live database:**
1. **Read `cli.md`** for connection string examples
2. Use `db2dbml` command with appropriate connection string
3. Support for: PostgreSQL, MySQL, MSSQL, Snowflake, BigQuery
## Code Style Guidelines
**IMPORTANT**: When generating DBML schemas:
- Write concise, readable DBML code
- Use consistent naming conventions (snake_case, plural tables, singular columns)
- Add comments only for complex business logic
- Use table and column notes for documentation instead of excessive comments
- Leverage TablePartials for reusable patterns (timestamps, audit fields)
- Group related tables with TableGroups for better organization
- Keep indexes close to their table definitions
## Dependencies
Required dependencies (install if not available):
- **@dbml/cli**: `npm install -g @dbml/cli` (for DBML ↔ SQL conversion)
- Includes: `dbml2sql`, `sql2dbml`, `db2dbml` commands

# Database Design Best Practices
## Normalization
### First Normal Form (1NF)
- Each column contains atomic (indivisible) values
- Each column contains values of a single type
- Each column has a unique name
- The order of rows doesn't matter
### Second Normal Form (2NF)
- Meets all requirements of 1NF
- No partial dependencies (all non-key attributes fully depend on the primary key)
- Relevant for composite primary keys
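For example, with a composite key `(order_id, product_id)`, a column that depends on only part of the key violates 2NF (illustrative tables):

```dbml
// Violates 2NF: product_name depends only on product_id
Table order_items_bad {
  order_id integer
  product_id integer
  product_name varchar
  quantity integer
  indexes {
    (order_id, product_id) [pk]
  }
}
// 2NF: move product_name into its own table
Table products {
  id integer [primary key]
  name varchar
}
Table order_items {
  order_id integer
  product_id integer [ref: > products.id]
  quantity integer
  indexes {
    (order_id, product_id) [pk]
  }
}
```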
### Third Normal Form (3NF)
- Meets all requirements of 2NF
- No transitive dependencies (non-key attributes don't depend on other non-key attributes)
- Most common target normalization level
### Denormalization
Consider denormalization for:
- Read-heavy workloads where query performance is critical
- Aggregated data that's expensive to compute
- Historical snapshots that shouldn't change
## Naming Conventions
### Tables
- Use plural nouns (e.g., `users`, `posts`, `orders`)
- Use snake_case for multi-word names (e.g., `user_profiles`, `order_items`)
- Keep names descriptive but concise
### Columns
- Use singular nouns (e.g., `id`, `name`, `email`)
- Use snake_case for multi-word names (e.g., `created_at`, `user_id`)
- Boolean columns should use prefixes like `is_`, `has_`, `can_` (e.g., `is_active`, `has_verified_email`)
### Foreign Keys
- Use format: `{referenced_table_singular}_id` (e.g., `user_id`, `post_id`)
- Be consistent across the entire schema
### Indexes
- Name format: `idx_{table}_{columns}` or `{table}_{columns}_idx`
- Include purpose when relevant: `idx_users_email_unique`, `idx_posts_author_created`
## Primary Keys
### Auto-incrementing Integers
```dbml
Table users {
id integer [primary key, increment]
}
```
**Pros:** Simple, compact, sequential, human-readable
**Cons:** Exposes record count, potential security concern, not globally unique
### UUIDs
```dbml
Table users {
id uuid [primary key]
}
```
**Pros:** Globally unique, can be generated client-side, harder to enumerate
**Cons:** Larger storage, less human-readable, non-sequential (worse for indexing)
### Composite Keys
```dbml
Table user_roles {
user_id integer
role_id integer
indexes {
(user_id, role_id) [primary key]
}
}
```
**Use when:** Natural composite identifier exists and makes sense
## Indexing Strategies
### Single-column Indexes
Create indexes on columns frequently used in:
- WHERE clauses
- JOIN conditions
- ORDER BY clauses
- Foreign keys
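As a sketch with an illustrative `orders` table, each case maps to an index like:

```dbml
Table orders {
  id integer [primary key]
  user_id integer
  status varchar
  created_at timestamp
  indexes {
    user_id [name: 'idx_orders_user_id']        // foreign key, frequent JOIN target
    created_at [name: 'idx_orders_created_at']  // frequent ORDER BY
  }
}
```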
### Composite Indexes
- Order matters: most selective column first
- Consider query patterns when designing
- Can satisfy multiple query types
```dbml
Table posts {
id integer [primary key]
author_id integer
status varchar
created_at timestamp
indexes {
(author_id, created_at) [name: 'idx_posts_author_created']
status [name: 'idx_posts_status']
}
}
```
### Unique Indexes
Use for:
- Ensuring data uniqueness (e.g., email, username)
- Natural keys
- Business constraints
```dbml
Table users {
id integer [primary key]
email varchar [unique]
username varchar [unique]
}
```
## Relationships
### One-to-Many
Most common relationship type. Use foreign key in the "many" table.
```dbml
Table users {
id integer [primary key]
}
Table posts {
id integer [primary key]
author_id integer [ref: > users.id]
}
```
### Many-to-Many
Requires junction/join table.
```dbml
Table users {
id integer [primary key]
}
Table roles {
id integer [primary key]
}
Table user_roles {
user_id integer [ref: > users.id]
role_id integer [ref: > roles.id]
indexes {
(user_id, role_id) [primary key]
}
}
```
### One-to-One
Less common. Can be modeled with unique foreign key or by merging tables.
```dbml
Table users {
id integer [primary key]
}
Table user_profiles {
id integer [primary key]
user_id integer [ref: - users.id, unique]
}
```
## Constraints and Validation
### NOT NULL
Use for required fields:
```dbml
Table users {
id integer [primary key]
email varchar [not null]
username varchar [not null]
}
```
### Default Values
Provide sensible defaults:
```dbml
Table posts {
id integer [primary key]
status varchar [default: 'draft']
created_at timestamp [default: `now()`]
}
```
### Check Constraints
Enforce business rules at the database level:
```dbml
Table products {
id integer [primary key]
price decimal [note: 'CHECK (price >= 0)']
stock integer [note: 'CHECK (stock >= 0)']
}
```
## Common Patterns
### Timestamps
If tracking creation and update times has business value:
```dbml
Table posts {
id integer [primary key]
// ... other fields
created_at timestamp [not null, default: `now()`]
updated_at timestamp [not null, default: `now()`]
}
```
### Soft Deletes
For preserving deleted records:
```dbml
Table users {
id integer [primary key]
// ... other fields
deleted_at timestamp [null]
}
```
### Polymorphic Associations
When a table can belong to multiple parent types:
```dbml
Table comments {
id integer [primary key]
commentable_type varchar // 'Post' or 'Product'
commentable_id integer // ID of Post or Product
content text
}
```
### Audit Trail
Track changes to important data:
```dbml
Table order_history {
id integer [primary key]
order_id integer [ref: > orders.id]
changed_by integer [ref: > users.id]
old_status varchar
new_status varchar
changed_at timestamp [default: `now()`]
}
```
### Versioning
For maintaining version history:
```dbml
Table documents {
id integer [primary key]
version integer [not null, default: 1]
// ... other fields
indexes {
(id, version) [unique]
}
}
```
## Performance Considerations
### Avoid Over-indexing
- Each index adds write overhead
- Maintain only indexes that are actually used
- Monitor query performance and adjust
### Use Appropriate Data Types
- VARCHAR vs TEXT: Use VARCHAR with appropriate length
- INT vs BIGINT: Choose based on expected range
- DECIMAL for money: Avoid floating point for currency
- TIMESTAMP vs DATE: Use appropriate precision
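A sketch applying these choices to an illustrative `invoices` table:

```dbml
Table invoices {
  id bigint [primary key]     // BIGINT: row count may exceed INT range
  customer_name varchar(255)  // bounded VARCHAR instead of TEXT
  total decimal(10,2)         // DECIMAL for currency, never floating point
  issued_on date              // DATE: no time-of-day needed
  paid_at timestamp           // TIMESTAMP: the exact moment matters
}
```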
## Schema Organization
### Use Table Groups
Organize related tables:
```dbml
TableGroup user_management {
users
user_profiles
user_roles
}
TableGroup content {
posts
comments
tags
}
```
### Use Schemas/Namespaces
For large applications:
```dbml
Table auth.users {
id integer [primary key]
}
Table cms.posts {
id integer [primary key]
author_id integer [ref: > auth.users.id]
}
```

skills/erd-skill/cli.md
# DBML CLI Usage Guide
Convert DBML files to SQL and vice versa.
## Prerequisites
This guide assumes the CLI is already installed globally. If not, install it with `npm install -g @dbml/cli`.
## Convert a DBML file to SQL
```bash
$ dbml2sql schema.dbml
CREATE TABLE "staff" (
"id" INT PRIMARY KEY,
"name" VARCHAR,
"age" INT,
"email" VARCHAR
);
...
```
By default it generates PostgreSQL SQL. To specify which database to generate for:
```bash
$ dbml2sql schema.dbml --mysql
CREATE TABLE `staff` (
`id` INT PRIMARY KEY,
`name` VARCHAR(255),
`age` INT,
`email` VARCHAR(255)
);
...
```
To **output to a file** you may use `--out-file` or `-o`:
```bash
$ dbml2sql schema.dbml -o schema.sql
✔ Generated SQL dump file (PostgreSQL): schema.sql
```
### Syntax Manual
```bash
$ dbml2sql <path-to-dbml-file>
[--mysql|--postgres|--mssql|--oracle]
[-o|--out-file <output-filepath>]
```
## Convert a SQL file to DBML
To convert SQL to DBML file:
```bash
$ sql2dbml dump.sql --postgres
Table staff {
id int [pk]
name varchar
age int
email varchar
}
...
```
**Output to a file:**
```bash
$ sql2dbml --mysql dump.sql -o mydatabase.dbml
✔ Generated DBML file from SQL file (MySQL): mydatabase.dbml
```
### Syntax Manual
```bash
$ sql2dbml <path-to-sql-file>
[--mysql|--postgres|--mssql|--postgres-legacy|--mysql-legacy|--mssql-legacy|--snowflake]
[-o|--out-file <output-filepath>]
```
Note: The `--postgres-legacy`, `--mysql-legacy` and `--mssql-legacy` options import PostgreSQL/MySQL/MSSQL to DBML using the old parsers. They're quicker but less accurate.
## Generate DBML directly from a database
```bash
$ db2dbml postgres 'postgresql://dbml_user:dbml_pass@localhost:5432/dbname?schemas=public'
Table "staff" {
"id" int4 [pk, not null]
"name" varchar
"age" int4
"email" varchar
}
...
```
**Output to a file:**
```bash
$ db2dbml postgres 'postgresql://dbml_user:dbml_pass@localhost:5432/dbname?schemas=public' -o schema.dbml
✔ Generated DBML file from database's connection: schema.dbml
```
### Syntax Manual
```bash
$ db2dbml postgres|mysql|mssql|snowflake|bigquery
<connection-string>
[-o|--out-file <output-filepath>]
```
Connection string examples:
- postgres: `'postgresql://user:password@localhost:5432/dbname?schemas=schema1,schema2,schema3'`
- mysql: `'mysql://user:password@localhost:3306/dbname'`
- mssql: `'Server=localhost,1433;Database=master;User Id=sa;Password=your_password;Encrypt=true;TrustServerCertificate=true;Schemas=schema1,schema2,schema3;'`
- snowflake: `'SERVER=<account_identifier>.<region>;UID=<your_username>;PWD=<your_password>;DATABASE=<your_database>;WAREHOUSE=<your_warehouse>;ROLE=<your_role>;SCHEMAS=schema1,schema2,schema3;'`
- bigquery: `/path_to_json_credential.json`
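The SQL-database connection strings above all decompose into the usual parts: driver, credentials, host, port, database name, and (where supported) a `schemas` query parameter. A minimal Python sketch, using only the standard library and a hypothetical postgres string, shows the pieces `db2dbml` needs to find in such a URL (this is an illustration of the string's anatomy, not the CLI's actual parsing code):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical connection string in the postgres format shown above.
conn = "postgresql://dbml_user:dbml_pass@localhost:5432/dbname?schemas=public,audit"

parts = urlparse(conn)
print(parts.scheme)                    # driver: postgresql
print(parts.username, parts.password)  # credentials: dbml_user dbml_pass
print(parts.hostname, parts.port)      # host and port: localhost 5432
print(parts.path.lstrip("/"))          # database name: dbname

# The schemas parameter is a comma-separated list of schemas to introspect.
schemas = parse_qs(parts.query)["schemas"][0].split(",")
print(schemas)                         # ['public', 'audit']
```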
For BigQuery, the credential file supports flexible authentication:
**1. Application Default Credentials (ADC):**
- Empty file: `{}` - uses environment authentication
- Override specific fields: `{"project_id": "my-project", "datasets": [...]}`
For more information about ADC, see [How Application Default Credentials works](https://cloud.google.com/docs/authentication/application-default-credentials)
**2. Explicit Service Account (bypasses ADC):**
```json
{
"project_id": "your-project-id",
"client_email": "your-client-email",
"private_key": "your-private-key",
"datasets": ["dataset_1", "dataset_2", ...]
}
```
:::note
- Both `client_email` and `private_key` must be provided together.
- If `datasets` is not specified or is empty, all accessible datasets will be fetched.
:::
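The two rules in the note above can be expressed as a small validation function. This is a sketch of the documented constraints in Python (the function name and error messages are illustrative, not the CLI's actual implementation):

```python
def check_bigquery_credential(cred: dict) -> list:
    """Validate a BigQuery credential dict per the documented rules.

    Returns a list of problems; an empty list means the credential is usable
    (an empty dict falls back to Application Default Credentials).
    """
    problems = []
    # Rule 1: client_email and private_key must be provided together.
    if ("client_email" in cred) != ("private_key" in cred):
        problems.append("client_email and private_key must be provided together")
    return problems


# Empty file `{}` -> environment (ADC) authentication, nothing to validate.
assert check_bigquery_credential({}) == []

# Rule 2: a missing or empty `datasets` list means "fetch all accessible datasets".
cred = {"project_id": "my-project"}
datasets = cred.get("datasets") or []   # [] -> introspect everything
assert datasets == []
```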


@@ -0,0 +1,112 @@
//// -- LEVEL 1
//// -- Schemas, Tables and References
// Creating tables
// You can define the tables with full schema names
Table ecommerce.merchants {
id int
~country_code
merchant_name varchar
"created at" varchar
admin_id int [ref: > U.id, not null]
Indexes {
(id, country_code) [pk]
}
}
// If schema name is omitted, it will default to "public" schema.
Table users as U {
id int [pk, increment] // auto-increment
full_name varchar
created_at timestamp
~country_code
}
Table countries {
code int [pk]
name varchar
continent_name varchar
}
TablePartial country_code {
country_code int [ref: > countries.code]
}
//----------------------------------------------//
//// -- LEVEL 2
//// -- Adding column settings
Table ecommerce.order_items {
order_id int [ref: > ecommerce.orders.id] // inline relationship (many-to-one)
product_id int
quantity int [default: 1] // default value
Indexes {
(order_id, product_id) [pk]
}
}
Ref: ecommerce.order_items.product_id > ecommerce.products.id
Table ecommerce.orders {
~auto_id
user_id int [not null, unique]
status varchar
created_at varchar [note: 'When order created'] // add column note
}
//----------------------------------------------//
//// -- Level 3
//// -- Enum, Indexes
// Enum for 'products' table below
Enum ecommerce.products_status {
out_of_stock
in_stock
  running_low [note: 'less than 20'] // add a note to an enum value
}
// Indexes: You can define a single or multi-column index
Table ecommerce.products {
~auto_id
name varchar
merchant_id int [not null]
price int
status ecommerce.products_status
created_at datetime [default: `now()`]
Indexes {
(merchant_id, status) [name:'product_status']
id [unique]
}
}
Table ecommerce.product_tags {
~auto_id
name varchar
}
Table ecommerce.merchant_periods {
~auto_id
merchant_id int
~country_code
start_date datetime
end_date datetime
}
TablePartial auto_id {
id int [pk]
}
// Creating references
// You can also define relationship separately
// > many-to-one; < one-to-many; - one-to-one; <> many-to-many
Ref: ecommerce.products.merchant_id > ecommerce.merchants.id // many-to-one
Ref: ecommerce.product_tags.id <> ecommerce.products.id // many-to-many
// Composite foreign key
Ref: ecommerce.merchant_periods.(merchant_id, country_code) > ecommerce.merchants.(id, country_code)
Ref user_orders: ecommerce.orders.user_id > public.users.id


@@ -0,0 +1,27 @@
Table follows {
following_user_id integer
followed_user_id integer
created_at timestamp
}
Table users {
id integer [primary key]
username varchar
role varchar
created_at timestamp
}
Table posts {
id integer [primary key]
title varchar
body text [note: 'Content of the post']
user_id integer [not null]
status varchar
created_at timestamp
}
Ref user_posts: posts.user_id > users.id // many-to-one
Ref: users.id < follows.following_user_id
Ref: users.id < follows.followed_user_id