Initial commit

Zhongwei Li
2025-11-30 08:51:26 +08:00
commit d0f8d956e7
6 changed files with 921 additions and 0 deletions

.claude-plugin/plugin.json Normal file

@@ -0,0 +1,14 @@
{
"name": "rebuy-go-sdk",
"description": "A plugin to assist with projects using rebuy-go-sdk",
"version": "0.0.1",
"author": {
"name": "rebuy"
},
"skills": [
"./skills"
],
"commands": [
"./commands"
]
}

README.md Normal file

@@ -0,0 +1,3 @@
# rebuy-go-sdk
A plugin to assist with projects using rebuy-go-sdk

commands/verify-server.md Normal file

@@ -0,0 +1,5 @@
---
description: Checks that the current project follows the optimal structure for rebuy-go-sdk servers.
---
Please verify that this project follows the structure of the rebuy-go-sdk as described below. This also includes file contents; the project should follow the example contents as closely as possible.

plugin.lock.json Normal file

@@ -0,0 +1,53 @@
{
"$schema": "internal://schemas/plugin.lock.v1.json",
"pluginId": "gh:rebuy-de/rebuy-go-sdk:claude/plugin",
"normalized": {
"repo": null,
"ref": "refs/tags/v20251128.0",
"commit": "b2f0943de33fb0ca8011a57bd0fbebe43bd94965",
"treeHash": "3b63008d0239b3f15ace0d660d03d5da89066e848cf4106468f74f73921ba0bb",
"generatedAt": "2025-11-28T10:27:56.334939Z",
"toolVersion": "publish_plugins.py@0.2.0"
},
"origin": {
"remote": "git@github.com:zhongweili/42plugin-data.git",
"branch": "master",
"commit": "aa1497ed0949fd50e99e70d6324a29c5b34f9390",
"repoRoot": "/Users/zhongweili/projects/openmind/42plugin-data"
},
"manifest": {
"name": "rebuy-go-sdk",
"description": "A plugin to assist with projects using rebuy-go-sdk",
"version": "0.0.1"
},
"content": {
"files": [
{
"path": "README.md",
"sha256": "27f9dcf185e910740e0364293bc77a08ee0f9d8db1feffb39022eef013228d95"
},
{
"path": ".claude-plugin/plugin.json",
"sha256": "07d7ac3e7cc750ee8cd3b81ff83ff6df008d030c3b9d70c561ff7bc029c1fc6e"
},
{
"path": "commands/verify-server.md",
"sha256": "9547ad72bbbb8ef2d64bdc06c2a875f09aca78232494c046248bfd1fb860758e"
},
{
"path": "skills/rebuy-go-sdk/SKILL.md",
"sha256": "c7fcd9d0367860da84f126253139a9d972e4c56e3a551658e8c7f42881f269fa"
},
{
"path": "skills/rebuy-go-sdk/docs.md",
"sha256": "e0ef672b859b80fbcf476fbd2cf0499fd91a4c93a052677a2551bdfbcb1c384b"
}
],
"dirSha256": "3b63008d0239b3f15ace0d660d03d5da89066e848cf4106468f74f73921ba0bb"
},
"security": {
"scannedAt": null,
"scannerVersion": null,
"flags": []
}
}

skills/rebuy-go-sdk/SKILL.md Normal file

@@ -0,0 +1,8 @@
---
name: rebuy-go-sdk
description: A plugin to assist with projects using rebuy-go-sdk
---
# rebuy-go-sdk
See @docs.md for full docs.

skills/rebuy-go-sdk/docs.md Normal file

@@ -0,0 +1,838 @@
---
description: Reads the documentation for rebuy-go-sdk into the LLM context.
---
# General Advice
- The examples below might use an import path that needs to be adjusted to the project's Go module.
- Always use `./buildutil` for compiling the project.
- Strings that are passed into dependency injection should have a dedicated type (`type FooParam string`) that gets
converted back into a plain `string` in the `New*` functions, as sketched below.
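A minimal sketch of this pattern, assuming a Redis address gets injected (the `RedisAddress` and `NewRedisClient` names are illustrative, not part of the SDK):
```go
package cmd

import "github.com/redis/go-redis/v9"

// RedisAddress is a dedicated type, so dig can tell this string apart
// from every other string in the container.
type RedisAddress string

// NewRedisClient converts the dedicated type back into a plain string
// where the underlying library expects one.
func NewRedisClient(addr RedisAddress) *redis.Client {
    return redis.NewClient(&redis.Options{
        Addr: string(addr),
    })
}
```
The value can then be provided with `digutil.ProvideValue[RedisAddress](c, "localhost:6379")` and the constructor registered via `c.Provide(NewRedisClient)`.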
# File main.go
The file `./main.go` should look exactly like in the example project:
```
package main
import (
"github.com/rebuy-de/rebuy-go-sdk/v9/pkg/cmdutil"
"github.com/sirupsen/logrus"
"github.com/rebuy-de/rebuy-go-sdk/v9/examples/full/cmd"
)
func main() {
defer cmdutil.HandleExit()
if err := cmd.NewRootCommand().Execute(); err != nil {
logrus.Fatal(err)
}
}
```
# File tools.go
The file `./tools.go` should contain blank imports for go generate tools, like in this example:
```
//go:build tools
// +build tools
package main
// https://github.com/golang/go/wiki/Modules#how-can-i-track-tool-dependencies-for-a-module
import (
_ "github.com/Khan/genqlient" // only when using graphql
_ "github.com/a-h/templ/cmd/templ" // only when using templ
_ "github.com/rebuy-de/rebuy-go-sdk/v9/cmd/buildutil" // always used
_ "github.com/rebuy-de/rebuy-go-sdk/v9/cmd/packageutil" // only when building packages during CI
_ "github.com/sqlc-dev/sqlc/cmd/sqlc" // only when using a database with sqlc
_ "honnef.co/go/tools/cmd/staticcheck" // always used
)
```
# Tool packageutil
The tool `packageutil` creates distribution packages from Go binaries built with buildutil. It should be used after building with buildutil.
## Usage
```bash
# Build binaries first
./buildutil -x linux/amd64 -x darwin/amd64 -x windows/amd64
# Create compressed archives for all platforms
./packageutil --compressed dist/myapp-v*
# Create system packages (only for Linux binaries)
./packageutil --rpm --deb dist/myapp-v*-linux-*
# Upload to S3
./packageutil --compressed --s3-url s3://bucket/releases/ dist/myapp-v*
```
Package formats:
- `--compressed`: Creates .tgz for POSIX and .zip for Windows
- `--rpm`: Creates .rpm packages (Linux only)
- `--deb`: Creates .deb packages (Linux only)
- `--s3-url`: Uploads artifacts to S3
See `cmd/packageutil/README.md` for detailed documentation.
# File cmd/root.go
The file `./cmd/root.go` defines all subcommands for the project. Mandatory ones are `daemon` and `dev`, which start the server either in production mode or in dev mode for local testing.
The entry point is always `NewRootCommand`, which looks like this:
```
func NewRootCommand() *cobra.Command {
return cmdutil.New(
"full-example", "A full example app for the rebuy-go-sdk.",
cmdutil.WithLogVerboseFlag(),
cmdutil.WithLogToGraylog(),
cmdutil.WithVersionCommand(),
cmdutil.WithVersionLog(logrus.DebugLevel),
cmdutil.WithSubCommand(
cmdutil.New(
"daemon", "Run the application as daemon",
cmdutil.WithRunner(new(DaemonRunner)),
)),
cmdutil.WithSubCommand(cmdutil.New(
"dev", "Run the application in local dev mode",
cmdutil.WithRunner(new(DevRunner)),
)),
)
}
```
It might contain additional commands, but `daemon` and `dev` are mandatory. The `cmdutil.With*` options are also mandatory.
A Runner looks like this:
```
type FooRunner struct {
    // contains fields that are targets for binding command line flags in `Bind()`.
    myParameter  string
    redisAddress string
}

func (r *FooRunner) Bind(cmd *cobra.Command) error {
    // binds flags
    cmd.PersistentFlags().StringVar(
        &r.myParameter, "my-parameter", "default",
        `This is an example flag to show how the binding works.`)
    cmd.PersistentFlags().StringVar(
        &r.redisAddress, "redis-address", "localhost:6379",
        `Example flag for the Redis address used in Run() below.`)
    return nil
}

func (r *FooRunner) Run(ctx context.Context, _ []string) error {
    c := dig.New() // dig always gets initialized in the beginning

    err := errors.Join(
        c.Provide(web.ProdFS),               // web.DevFS for dev command
        c.Provide(webutil.AssetDefaultProd), // webutil.AssetDefaultDev for dev command
        c.Provide(func() *redis.Client {
            return redis.NewClient(&redis.Options{
                Addr: r.redisAddress,
            })
        }),
        // more environment-specific dependencies might be provided
    )
    if err != nil {
        return err
    }

    return RunServer(ctx, c) // a Runner always calls RunServer in cmd/server.go
}
```
# File cmd/server.go
The file `./cmd/server.go` always contains the single function `RunServer`, which registers dependencies that are the same for all environments, registers HTTP handlers, registers workers, and finally runs all workers with `runutil.RunProvidedWorkers`.
It looks similar to the example below. It is useful to group all `webutil.ProvideHandler` calls and all `runutil.ProvideWorker` calls.
```
func RunServer(ctx context.Context, c *dig.Container) error {
err := errors.Join(
c.Provide(templates.New),
// Register HTTP handlers
webutil.ProvideHandler(c, handlers.NewIndexHandler),
webutil.ProvideHandler(c, handlers.NewHealthHandler),
webutil.ProvideHandler(c, handlers.NewUsersHandler),
c.Provide(func(
authMiddleware webutil.AuthMiddleware,
) webutil.Middlewares {
return webutil.Middlewares(append(
webutil.DefaultMiddlewares(),
authMiddleware,
))
}),
// Register background workers
runutil.ProvideWorker(c, func(redisClient *redis.Client) *workers.DataSyncWorker {
return workers.NewDataSyncWorker(redisClient)
}),
runutil.ProvideWorker(c, workers.NewPeriodicTaskWorker),
// Register the HTTP server itself
runutil.ProvideWorker(c, webutil.NewServer),
)
if err != nil {
return err
}
// Start all registered workers
return runutil.RunProvidedWorkers(ctx, c)
}
```
# Package pkg/bll
The package `./pkg/bll` contains isolated packages that do not need access to things like the network or the OS. They are also usually easy to test thoroughly.
* A good example is `xff`, which takes HTTP headers as input and outputs the real IP.
* Another good example is `humanize`, which takes an integer and returns a human-readable version with K, M or G suffixes (see the sketch below).
* A bad example is a Redis client.
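A minimal sketch of such an isolated package, loosely based on the `humanize` example above (the exact API of a real implementation may differ):
```go
package humanize

import "fmt"

// Int renders n with a K, M or G suffix. The function needs neither
// network nor OS access, which makes it trivial to unit test.
func Int(n int64) string {
    switch {
    case n >= 1_000_000_000:
        return fmt.Sprintf("%.1fG", float64(n)/1_000_000_000)
    case n >= 1_000_000:
        return fmt.Sprintf("%.1fM", float64(n)/1_000_000)
    case n >= 1_000:
        return fmt.Sprintf("%.1fK", float64(n)/1_000)
    default:
        return fmt.Sprintf("%d", n)
    }
}
```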
# Package pkg/dal
The package `./pkg/dal` contains wrapper packages that help with accessing data from outside of the program. Usually these are HTTP clients, as in the sketch below.
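A minimal sketch of such a wrapper package (all names here are hypothetical; a real client mirrors whatever upstream API it wraps):
```go
package statusapi

import (
    "context"
    "encoding/json"
    "fmt"
    "net/http"
)

// Client wraps an external HTTP API so the rest of the application
// never deals with raw HTTP details.
type Client struct {
    baseURL string
    http    *http.Client
}

func New(baseURL string) *Client {
    return &Client{baseURL: baseURL, http: http.DefaultClient}
}

// Status is an example response type of the wrapped API.
type Status struct {
    Healthy bool `json:"healthy"`
}

// GetStatus fetches and decodes a single endpoint of the wrapped API.
func (c *Client) GetStatus(ctx context.Context) (*Status, error) {
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, c.baseURL+"/status", nil)
    if err != nil {
        return nil, err
    }

    resp, err := c.http.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("unexpected status: %s", resp.Status)
    }

    var status Status
    err = json.NewDecoder(resp.Body).Decode(&status)
    if err != nil {
        return nil, err
    }
    return &status, nil
}
```
The constructor can be provided to the dig container in `cmd/root.go` or `cmd/server.go` like any other dependency.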
# Package pkg/app
The package `./pkg/app` contains sub-packages that define the actual project logic. The most common ones are:
- `pkg/app/handlers`: Contains all HTTP handlers.
- `pkg/app/templates`: Contains HTML templates.
- `pkg/app/workers`: Contains background workers.
There might be additional packages, but they need to be focused on a specific topic. For example something like `pkg/app/tasks`, which contains a bunch of different task implementations.
# Package pkg/app/workers
The package `./pkg/app/workers` contains all background workers. There is one worker per file, but each worker might contain subworkers.
The worker must be registered using `runutil.ProvideWorker` in `cmd/server.go`.
The worker must implement this interface:
```
type WorkerConfiger interface {
Workers() []Worker
}
```
All files should follow this example:
```
package workers
import (
"context"
"fmt"
"time"
"github.com/redis/go-redis/v9"
"github.com/rebuy-de/rebuy-go-sdk/v9/pkg/logutil"
"github.com/rebuy-de/rebuy-go-sdk/v9/pkg/runutil"
)
// DataSyncWorker is responsible for periodically syncing data
type DataSyncWorker struct {
redisClient *redis.Client // this is an example dependency
}
// NewDataSyncWorker creates a new data sync worker
func NewDataSyncWorker(redisClient *redis.Client) *DataSyncWorker {
return &DataSyncWorker{
redisClient: redisClient,
}
}
// Workers implements the runutil.WorkerConfiger interface
func (w *DataSyncWorker) Workers() []runutil.Worker {
return []runutil.Worker{
runutil.DeclarativeWorker{
Name: "DataSyncWorker",
Worker: runutil.Repeat(5*time.Minute, runutil.JobFunc(w.syncData)),
},
}
}
// syncData performs the actual data synchronization
func (w *DataSyncWorker) syncData(ctx context.Context) error {
logutil.Get(ctx).Info("Synchronizing data...")
// Record the current time in Redis as our last sync
_, err := w.redisClient.Set(ctx, "last_sync", time.Now().Format(time.RFC3339), 0).Result()
if err != nil {
return fmt.Errorf("failed to update last sync time: %w", err)
}
// Simulate some work
time.Sleep(500 * time.Millisecond)
// Update the counter in Redis
_, err = w.redisClient.Incr(ctx, "sync_count").Result()
if err != nil {
return fmt.Errorf("failed to update sync counter: %w", err)
}
logutil.Get(ctx).Info("Data synchronization completed")
return nil
}
```
## Distributed Repeating Workers
For multi-instance deployments, use `runutil.NewDistributedRepeat` to ensure only one instance executes a periodic task at a time. This uses a Redis-based lease with cooldown that acts as a distributed lock:
```
func (w *DataSyncWorker) Workers() []runutil.Worker {
return []runutil.Worker{
runutil.DeclarativeWorker{
Name: "DataSyncWorker",
Worker: runutil.NewDistributedRepeat(
w.redisClient,
"data-sync-lock",
5*time.Minute,
runutil.JobFunc(w.syncData),
),
},
}
}
```
The lease gets automatically refreshed during job execution to prevent lock expiry for long-running tasks.
# Package pkg/app/handlers
The package `./pkg/app/handlers` contains all HTTP handlers. There is one handler per file and one handler might handle multiple routes.
The handler must be registered using `webutil.ProvideHandler` in `cmd/server.go`.
The handler must implement this interface:
```
type Handler interface {
Register(chi.Router)
}
```
All files should follow this example:
```
package handlers
import (
"net/http"
"github.com/go-chi/chi/v5"
"github.com/rebuy-de/rebuy-go-sdk/v9/examples/full/pkg/app/templates"
"github.com/rebuy-de/rebuy-go-sdk/v9/pkg/webutil"
)
// IndexHandler handles the home page
type IndexHandler struct {
viewer *templates.Viewer
}
// NewIndexHandler creates a new index handler
func NewIndexHandler(
viewer *templates.Viewer,
) *IndexHandler {
return &IndexHandler{
viewer: viewer,
}
}
// Register registers the handler's routes
func (h *IndexHandler) Register(r chi.Router) {
r.Get("/", webutil.WrapView(h.handleIndex)) // the path is always the full path
// might contain additional routes
}
func (h *IndexHandler) handleIndex(r *http.Request) webutil.Response {
return templates.View(http.StatusOK, h.viewer.WithRequest(r).HomePage())
}
```
# Package pkg/app/templates
When using templ as the template engine, the package `./pkg/app/templates` is structured as described here.
The file `./pkg/app/templates/view.go` always looks like this:
```
package templates
import (
"fmt"
"net/http"
"github.com/a-h/templ"
"github.com/rebuy-de/rebuy-go-sdk/v9/pkg/logutil"
"github.com/rebuy-de/rebuy-go-sdk/v9/pkg/webutil"
)
//go:generate go run github.com/a-h/templ/cmd/templ generate
//go:generate go run github.com/a-h/templ/cmd/templ fmt .
type Viewer struct {
assetPathPrefix webutil.AssetPathPrefix
// All values that are needed by the templates and are provided by dig should go here.
}
type RequestAwareViewer struct {
*Viewer
request *http.Request
// Should only contain fields that change between requests. Everything else should be injected into the Viewer.
}
func New(
assetPathPrefix webutil.AssetPathPrefix,
) *Viewer {
return &Viewer{
assetPathPrefix: assetPathPrefix,
}
}
func (v *Viewer) assetPath(path string) string {
return fmt.Sprintf("/assets/%v%v", v.assetPathPrefix, path)
}
func (v *Viewer) WithRequest(r *http.Request) *RequestAwareViewer {
return &RequestAwareViewer{
Viewer: v,
request: r,
}
}
func View(status int, node templ.Component) webutil.Response {
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/html; charset=utf-8")
w.WriteHeader(status)
err := node.Render(r.Context(), w)
if err != nil {
logutil.Get(r.Context()).Error(err)
}
}
}
```
The `RequestAwareViewer` is only needed when a component accesses request data, such as auth information. If that is not the case, the `Viewer` is enough, but it is fine to always use the `RequestAwareViewer`.
The `RequestAwareViewer` can be called like this from a handler:
```
return templates.View(http.StatusOK,
h.viewer.WithRequest(r).APIKeyPage(apikeys))
```
An example component could look like this:
```
templ (v *RequestAwareViewer) APIKeyPage(apikeys []sqlc.Apikey) {
@v.page("API Keys") {
<ul>
for _, key := range apikeys {
<li>{ key }</li>
}
</ul>
}
}
```
It is advised to have a base layout in `./pkg/app/templates/page.templ` that looks like this:
```
templ (v *RequestAwareViewer) base(title string) {
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8"/>
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<title>{ title }</title>
<link rel="icon" type="image/svg+xml" href={ v.assetPath("/favicon.svg") }/>
<link rel="stylesheet" href={ v.assetPath("/index.css") }/>
<script src={ v.assetPath("/index.js") }></script>
<script src={ v.assetPath("/hyperscript.org/dist/_hyperscript.min.js") }></script>
<script src={ v.assetPath("/hyperscript.org/dist/template.js") }></script>
<script src={ v.assetPath("/htmx.org/dist/htmx.min.js") }></script>
<script src={ v.assetPath("/idiomorph/dist/idiomorph-ext.min.js") }></script>
</head>
<body hx-ext="morph">
<nav class="navbar" role="navigation" aria-label="main navigation">
<div class="navbar-brand">
<a class="navbar-item" href="/">
<img src={ v.assetPath("/favicon.svg") } width="28" height="28" class="mr-3"/>
<strong>LLM Gateway</strong>
</a>
</div>
<div class="navbar-menu">
<div class="navbar-start"></div>
<div class="navbar-end">
@v.authComponent()
<div class="navbar-item">
<button _="on click send ry:toggleTheme to <html/>">
<i class="fa-solid fa-circle-half-stroke"></i>
</button>
</div>
</div>
</div>
</nav>
<section class="section">
<div class="container-fluid">
{ children... }
</div>
</section>
</body>
</html>
}
templ (v *RequestAwareViewer) page(title string) {
// Store the title in the viewer
// v.currentTitle = title - done in WithRequestPage
@v.base(title) {
{ children... }
}
}
```
# Package pkg/pgutil
The package `./pkg/pgutil` provides utilities for PostgreSQL database operations and is the recommended way to handle database connections and migrations.
## Integration with Dependency Injection
The recommended way to use `pgutil` is through dependency injection in `cmd/root.go` and `cmd/server.go`:
In `cmd/root.go`, provide the database URI:
```go
func (r *DaemonRunner) Run(ctx context.Context, _ []string) error {
c := dig.New()
err := errors.Join(
digutil.ProvideValue[pgutil.URI](c, "postgres://postgres:postgres@localhost/postgres?sslmode=disable"),
digutil.ProvideValue[pgutil.EnableTracing](c, true), // optional: enable tracing
// ... other dependencies
)
if err != nil {
return err
}
return RunServer(ctx, c)
}
```
In `cmd/server.go`, configure the database pool and run migrations:
```go
func RunServer(ctx context.Context, c *dig.Container) error {
err := errors.Join(
// Provide the context to the container
digutil.ProvideValue[context.Context](c, ctx),
// Configure Database
digutil.ProvideValue[pgutil.Schema](c, "my_schema"),
digutil.ProvideValue[pgutil.MigrationFS](c, pgutil.MigrationFS(sqlc.MigrationsFS)),
c.Provide(pgutil.NewPool, dig.As(new(sqlc.DBTX))),
c.Provide(sqlc.New),
c.Invoke(pgutil.Migrate), // runs migrations on startup
// ... other dependencies
)
if err != nil {
return err
}
return runutil.RunProvidedWorkers(ctx, c)
}
```
## Migrations
The migration system supports two types of migration files:
1. **Versioned migrations**: `DDDD_$title.up.sql` - Run once in sequential order
2. **Repeatable migrations**: `R__$title.sql` - Run every time (for views, functions, demo data)
Repeatable migrations are useful for:
- Creating/updating database views
- Defining stored procedures and functions
- Loading reference/demo data
## Transactions
Use `pgutil.WithTransaction` for transaction handling:
```go
err := pgutil.WithTransaction(ctx, queries, func(tx *Queries) error {
// Your transactional operations here
return nil
})
```
# Package pkg/dal/sqlc
The package `./pkg/dal/sqlc` contains all SQL queries, when using SQLC.
SQL queries are stored in files with the name pattern `query_$table.sql`. SQLC reads those files and writes Go code in `query_$table.sql.go`. The command for this is `go run github.com/sqlc-dev/sqlc/cmd/sqlc generate`, which gets executed by `go generate`.
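As a hedged illustration of how the generated code is consumed (the `GetUserByID` query, the `User` model and the import path are hypothetical; the real names come from your own `-- name:` annotations and module path):
```go
package handlers

import (
    "context"

    "github.com/google/uuid"

    // adjust to the sqlc package of your own module
    "github.com/rebuy-de/rebuy-go-sdk/v9/examples/full/pkg/dal/sqlc"
)

// loadUser calls a method that sqlc generates from a query such as
//
//	-- name: GetUserByID :one
//	SELECT * FROM users WHERE id = $1;
//
// in query_users.sql. The *sqlc.Queries value is typically injected via dig.
func loadUser(ctx context.Context, queries *sqlc.Queries, id uuid.UUID) (sqlc.User, error) {
    return queries.GetUserByID(ctx, id)
}
```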
The file `./pkg/dal/sqlc/sqlc.go` should always look close to this:
```go
package sqlc
import (
"embed"
)
//go:generate go run github.com/sqlc-dev/sqlc/cmd/sqlc generate
//go:embed migrations/*.sql
var MigrationsFS embed.FS
```
Note: When using `pgutil` with dependency injection (as shown in the pkg/pgutil section), you don't need manual `NewQueries` or `Migrate` functions. The `pgutil.NewPool` and `pgutil.Migrate` functions handle this through the dig container.
The file `./pkg/dal/sqlc/tx.go` should always look close to this:
```
package sqlc
import (
"context"
"fmt"
"github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgxpool"
"github.com/rebuy-de/rebuy-go-sdk/v9/pkg/logutil"
)
type WithTxFunc func(*Queries) error
type beginner interface {
Begin(ctx context.Context) (pgx.Tx, error)
}
func (q *Queries) Tx(ctx context.Context, fn WithTxFunc) error {
db, ok := q.db.(beginner)
if !ok {
return fmt.Errorf("DB interface does not implement transactions: %T", q.db)
}
tx, err := db.Begin(ctx)
if err != nil {
return err
}
defer tx.Rollback(ctx)
qtx := q.WithTx(tx)
err = fn(qtx)
if err != nil {
return err
}
return tx.Commit(ctx)
}
func (q *Queries) Hijack(ctx context.Context) (*Queries, func(), error) {
pool := q.db.(*pgxpool.Pool)
pconn, err := pool.Acquire(ctx)
if err != nil {
return nil, nil, err
}
conn := pconn.Hijack()
closer := func() {
err := conn.Close(context.Background())
if err != nil {
logutil.Get(ctx).Error(err)
}
}
return New(conn), closer, nil
}
```
The sqlc configuration (by convention `./pkg/dal/sqlc/sqlc.yaml`) should look close to this:
```yaml
version: 2
sql:
  - engine: "postgresql"
    schema: "migrations/"
    queries: "."
    gen:
      go:
        package: "sqlc"
        out: "."
        sql_package: "pgx/v5"
        emit_json_tags: true
        emit_pointers_for_null_types: true
        output_db_file_name: gen_db.go
        output_models_file_name: gen_models.go
        json_tags_case_style: camel
        # rename contains a mapping from postgres identifiers to Go identifiers.
        # The mapping is done by sqlc and only needs an entry here if the auto-generated one is wrong. This is mostly the case for wrong initialisms.
        rename:
          my_example_uid: MyExampleUID # example
        # overrides specifies to which Go type a database entry gets deserialized.
        overrides:
          # UUIDs should always be deserialized using the Google UUID package.
          - db_type: "uuid"
            go_type:
              import: "github.com/google/uuid"
              package: "uuid"
              type: "UUID"
          - db_type: "uuid"
            nullable: true
            go_type:
              import: "github.com/google/uuid"
              package: "uuid"
              type: "NullUUID"
          # Timestamps should always be deserialized to native Go times.
          - db_type: "timestamptz"
            go_type:
              import: "time"
              type: "Time"
          - db_type: "timestamptz"
            nullable: true
            go_type:
              import: "time"
              type: "Time"
              pointer: true
          # there might be other project-specific entries
```
The directory `./pkg/dal/sqlc/migrations` contains two types of migration scripts:
1. **Versioned migrations**: `DDDD_$title.up.sql` - Run once in sequential order
- DDDD is a number with 0 padding
- $title is a short title of the migration step
- Example: `0001_initial_schema.up.sql`
2. **Repeatable migrations**: `R__$title.sql` - Run every time migrations are executed
- Useful for views, stored procedures, and demo data
- Example: `R__user_stats_view.sql`, `R__demo_data.sql`
# Package web
The package `./web` contains all web assets that get delivered to the browser. Dependencies are managed with Yarn.
The file `./web/web.go` is the interface to other Go packages and must look like this:
```
package web
import (
"embed"
"io/fs"
"os"
"github.com/rebuy-de/rebuy-go-sdk/v9/pkg/webutil"
)
//go:generate yarn install
//go:generate yarn build
//go:embed all:dist/*
var embedded embed.FS
func DevFS() webutil.AssetFS {
return os.DirFS("web/dist")
}
func ProdFS() webutil.AssetFS {
result, err := fs.Sub(embedded, "dist")
if err != nil {
panic(err)
}
return result
}
```
The file `./web/esbuild.config.mjs` contains the build script and looks like this:
```
import * as esbuild from 'esbuild'
import fs from 'node:fs'
await esbuild.build({
entryPoints: [
'src/index.js', 'src/index.css',
],
bundle: true,
minify: true,
sourcemap: true,
outdir: 'dist/',
format: 'esm',
loader: {
'.woff2': 'file',
'.ttf': 'file'
},
})
fs.cpSync('src/www', 'dist', {recursive: true});
// The HTMX-related libraries do not deal well with ESM bundling. Bundling is
// not needed for them though, so we copy the assets manually and link them
// directly in the <head>.
const scripts = [
'hyperscript.org/dist/_hyperscript.min.js',
'hyperscript.org/dist/template.js',
'htmx.org/dist/htmx.min.js',
'idiomorph/dist/idiomorph-ext.min.js',
];
scripts.forEach((file) => {
fs.cpSync(`node_modules/${file}`, `dist/${file}`, {recursive: true});
});
```
The `scripts` array only needs to contain files that are actually used in any HTML `<head>`. The remaining code above should follow the example closely.
The file `./web/package.json` describes the needed dependencies and looks like this (the actual dependencies might differ):
```
{
"name": "project-name",
"version": "1.0.0",
"packageManager": "yarn@4.7.0",
"private": true,
"dependencies": {
"@fortawesome/fontawesome-free": "^6.7.2",
"bulma": "^1.0.4",
"htmx.org": "^2.0.4",
"hyperscript.org": "^0.9.14",
"idiomorph": "^0.7.3"
},
"devDependencies": {
"esbuild": "^0.25.4",
"nodemon": "^3.1.10"
},
"scripts": {
"build": "node esbuild.config.mjs"
}
}
```